2302.14334
Design of an Adaptive Lightweight LiDAR to Decouple Robot-Camera Geometry
A fundamental challenge in robot perception is the coupling of the sensor pose and robot pose. This has led to research in active vision where robot pose is changed to reorient the sensor to areas of interest for perception. Further, egomotion such as jitter, and external effects such as wind, affect perception, requiring additional effort in software such as image stabilization. This effect is particularly pronounced in micro-air vehicles and micro-robots, which are typically lighter and subject to larger jitter but lack the computational capability to perform stabilization in real time. We present a novel microelectromechanical (MEMS) mirror LiDAR system to change the field of view of the LiDAR independent of the robot motion. Our design has the potential for use on small, low-power systems where the expensive components of the LiDAR can be placed external to the small robot. We show the utility of our approach in simulation and on prototype hardware mounted on a UAV. We believe that this LiDAR and its compact movable scanning design provide mechanisms to decouple robot and sensor geometry, allowing us to simplify robot perception. We also demonstrate examples of motion compensation using IMU and external odometry feedback in hardware.
Yuyang Chen, Dingkang Wang, Lenworth Thomas, Karthik Dantu, Sanjeev J. Koppal
2023-02-28T06:03:42Z
http://arxiv.org/abs/2302.14334v2
# Design of an Adaptive Lightweight LiDAR to Decouple Robot-Camera Geometry ###### Abstract A fundamental challenge in robot perception is the coupling of the sensor pose and robot pose. This has led to research in active vision where robot pose is changed to reorient the sensor to areas of interest for perception. Further, egomotion such as jitter, and external effects such as wind, affect perception, requiring additional effort in software such as image stabilization. This effect is particularly pronounced in micro-air vehicles and micro-robots, which are typically lighter and subject to larger jitter but lack the computational capability to perform stabilization in real time. We present a novel microelectromechanical (MEMS) mirror LiDAR system to change the field of view of the LiDAR independent of the robot motion. Our design has the potential for use on small, low-power systems where the expensive components of the LiDAR can be placed external to the small robot. We show the utility of our approach in simulation and on prototype hardware mounted on a UAV. We believe that this LiDAR and its compact movable scanning design provide mechanisms to decouple robot and sensor geometry, allowing us to simplify robot perception. We also demonstrate examples of motion compensation using IMU and external odometry feedback in hardware. ## I Introduction Modern autonomy is largely driven by vision and depth sensors for perception. Most such techniques make an implicit assumption that the relative pose of the sensor w.r.t. the robot is fixed and that changes in sensor viewpoint require a change in the robot pose. This implies that fast-moving robots must deal with motion compensation (i.e. camera-robot _stabilization_) and that robots need to reorient themselves to observe the relevant parts of the scene. Correspondingly, stabilization [32, 12, 43, 35] and active vision [7, 4, 60, 36] are well-studied problems. Let us consider the specific example of image stabilization. While successful, most such methods compensate through _post-capture_ processing of sensor data. _We contend that this is simply not feasible for the next generation of fast miniature robots_ such as robotic bees [51], crawling and walking robots [18], and other micro-air vehicles [31]. For example, flapping-wing robots such as the RoboBee exhibit a high-frequency rocking motion (at about 120 Hz in one design) due to the piezo-electric actuation [17]. Environmental factors such as wind affect micro-robots to a greater extent than larger robots. There might also be aerodynamic instability due to ornithopter-based shock absorption [53]. The egomotion of small robots (and onboard sensors) is quite extreme, making any sensing challenging. While there have been software methods to correct for such effects for cameras [6] and LiDARs [34], this is often difficult to perform in real time onboard due to the computational, energy, and latency constraints on the robot mentioned above. Without proper motion compensation for miniature devices, we will not be able to unlock the full potential of what is one of the ten grand challenges in robotics [54]. ### _Key Idea: Compensation_ during _Imaging_ Our idea is for motion correction to happen in sensor hardware during imaging such that measurements are already compensated without requiring onboard computing. Fig. 1: Our design is given above with the prototype motion-compensated LiDAR (top), and we also prepared a design for future work to integrate this onto smaller platforms. This paper
shows the motion-compensation advantage of decoupling robot-camera geometry; the ability to control camera properties independently of the robot pose could bring a new perspective to robot perception and simplify the autonomy pipeline. We demonstrate this through the design of a MEMS-driven LiDAR and perform compensation in two ways - (i) with an onboard IMU, and (ii) with external feedback of the robot pose at a high rate. We are inspired by animal eyes that have fast mechanical movements that compensate for motion, in real time and at high accuracy [1]. In Figure 2, we show frames \(V(t)\) from a video of a hawk (_Buteo jamaicensis_) being moved by a human trainer [15]. We also show the average of the video \(\sum_{t}\frac{V(t)}{T}\) over a time interval \(T\). Note that the averaged image shows motion blurring, except where the hawk mechanically compensates for the shifts. We envision biologically-inspired motion compensation that happens during sensing. These sensors need to adaptively change their orientation, in real time, and in concert with robot goals such as mapping or navigation. Effectively, the rotation matrix \(R\) must cancel out robot motion to provide a "stable" view of a scene. ### _MEMS Mirror-enabled Adaptive LIDAR_ The ability to reorient sensor pose could have many uses in robotics, particularly in image alignment during motion such as in SLAM. If the camera and robot are rigidly attached, then the camera experiences all the motion the robot experiences, including jitter and other potential disturbances that are detrimental to the visual SLAM task. This could result in spurious features, errors in localization, and incorrect feature associations leading to an inaccurate map. In this paper, we describe a sensor design that can perform image reorientation of a LiDAR in hardware without the need for any software processing for such compensation. Previously, pan-tilt-zoom (PTZ) cameras have attempted to address this problem. However, they use mechanical actuation that reacts at only a few Hz, making them unsuitable for tasks such as egomotion compensation in real time. This is evidenced by the limited use of PTZ cameras on robots - most robots just have sensors rigidly attached. Our designs break through these past difficulties by exploiting recently available microelectromechanical (MEMS) and opto-mechanical components for changing camera parameters. Opto-MEMS components are famously fast (many kHz), and they allow the LiDAR projection orientation to be changed during robot motion, such that the view of the LiDAR is effectively static. By changing LiDAR views two orders of magnitude (or more) faster than robot motion, we can effectively allow camera views to be independent of the robot view. In this work, we compensate the LiDAR point cloud using an onboard IMU or external feedback such as a motion-tracking setup. More generally, such compensation allows the robot to focus on the control task while the camera performs perception (which is required for the control task) independently; this greatly simplifies robot planning, as the planner does not need to account for perception and just needs to reason about the control task at hand. MEMS LiDAR optics have the advantages of small size and low power consumption [44, 23, 24]. Our algorithmic and system design contributions beyond this are: * We present the design of a novel LiDAR sensor adopting a MEMS mirror similar to the LiDAR MEMS scanner of [49].
This design enables wide non-resonant scanning angles for arbitrary orientations. We integrate this with two types of feedback (IMU and external sensors) to demonstrate quick and high-rate motion compensation. Figure 1 shows the design of our sensor. * We describe and geometrically characterize our sensor, showing that compensation in hardware can reduce the number of unknowns for proprioceptive and exteroceptive tasks. In simulation, we characterize the effects of compensation delay and compensation rate to identify benefits for robot perception. The quantitative and qualitative results of these simulations are shown in Sect. III. * We show UAV flight with a proof-of-concept hardware prototype combining external feedback with the MEMS mirror for egomotion compensation. We enable UAV flight by tethering the MEMS modulator to the other heavy necessary components, like the laser, photodetector, optics, the driver circuit, and the signal-processing circuitry. The frequencies of the mirror modulation and IMU measurement are much higher than typical robot egomotion. Our prototype MEMS compensated scan system can perform such compensation in under 10 ms. Please see the accompanying video for proper visualization, and see Fig. 18. * We provide an implementation of the sensor in the Gazebo simulator. Using this simulated sensor, we propose a framework to adapt modern LiDAR SLAM pipelines to incorporate motion compensation. We adapt LIO-SAM [39], a modern LiDAR SLAM pipeline, to use such a sensor and demonstrate the utility of such motion compensation. We will open-source the sensor implementation, the UAV simulation environment, as well as our LIO-SAM adaptations 1 on publication. Footnote 1: [https://github.com/yuyangch/Motion_Compensated_LIO-SAM](https://github.com/yuyangch/Motion_Compensated_LIO-SAM) Fig. 2: Biological motion compensation. The position and the angle of the head of the hawk remain stable despite body motion, providing the hawk with stabilized vision. [https://www.youtube.com/watch?v=aqgewVCC0k0](https://www.youtube.com/watch?v=aqgewVCC0k0) ## II Related Work **Small, compact LiDAR for small robotics:** MEMS mirrors have been studied to build compact LiDAR systems [44, 23, 24]. For instance, Kasturi et al. demonstrated a UAV-borne LiDAR with an electrostatic MEMS mirror scanner that could fit into a small volume of 70 mm \(\times\) 60 mm \(\times\) 60 mm and weighed only about 50 g [23]. Kimoto et al. developed a LiDAR with an electromagnetic resonant MEMS mirror for robotic vehicles [24]. **Comparison to software-based compensation:** Motion compensation and image stabilization techniques have been widely used in image capture. Similar to imaging devices, LiDAR point clouds exhibit blurring and motion artifacts caused by the motion of the LiDAR or of the target object. Some software-based LiDAR motion compensation methods use ICP (iterative closest point) [32] and feature matching [12] to find the translation and rotation between successive point clouds. Software-based compensation for robot motion has been studied in great detail in SLAM algorithms [43] and Expectation-Maximization (EM) methods [35]. Software-based motion compensation has a relatively high computational barrier for micro-robotics and may degrade if successive point clouds have large discrepancies. Some software-based motion compensation relies on the capture of a full point-cloud frame, so it cannot compensate motion at frequencies higher than the frame rate.
For most LiDARs (other than flash LiDAR), and especially single-scanning-beam MEMS LiDAR, the rolling-shutter effect caused by LiDAR motion jitter remains a problem. In contrast to these approaches, we wish to compensate the sensor in hardware, during image capture. Hardware LiDAR motion compensation has several benefits. First, the compensation can be applied to every LiDAR scanning pulse (for 2D MEMS-based LiDAR), which corrects the rolling-shutter effect and improves the motion response range. Second, the motion compensation algorithm is very simple and can be implemented on a low-power microcontroller or FPGA. Third, even if the hardware motion compensation still has residual errors, it provides a better initialization for subsequent software compensation. These ideas are closer to how PTZ cameras track dynamic objects [26, 19] and assist with background subtraction [40]. However, compared to these approaches, we can tackle egomotion of much higher frequencies, which is a unique challenge of micro-robots. We compensate signals much closer to those seen in adaptive optics and control for camera shake [2, 46, 3]. In addition, our system is on a free-moving robot, rather than at a fixed viewpoint. **Motorized gimbals:** Compared to motorized image stabilization systems [22], MEMS mirrors not only have smaller size and lighter weight, but their frequency-response bandwidth is also better than that of bulky, heavy camera stabilizers. The MEMS mirror response time can be less than 10 ms or even less than 1 ms. The servo motor of a camera stabilizer has a bandwidth of less than 30 Hz because it is bulky and heavily loaded [28, 37]. This results in a response time longer than 10 ms. **Motion compensation in displays and robotics:** Motion-compensated MEMS mirror scanners have been applied to projection [13], where hand shake is an issue. In contrast, we deal with vibrations of much higher frequencies, and our approach is closest to adaptive optics for robotics. For example, [42, 41] change the zoom and focal lengths of cameras for SLAM. We compensate using small mirrors, drawing on a rich tradition of compensation in device characterization [29] and for improving SNR [16]. Compared to all the previous methods, we are the first to show IMU-based LiDAR compensation with a MEMS mirror in hardware. **LiDAR SLAM**: Ever since the seminal work of [57], successive LiDAR SLAM designs have largely followed an architecture similar to Figure 10, where the front end consists of de-skew and feature-extraction stages, while the back end usually consists of ICP and pose-graph optimization (PGO) packages such as g2o [25] or GTSAM [8] that globally optimize the odometry estimated by LiDAR odometry. Successive efforts moved towards improvements in the following subareas: 1) tightly coupling LiDAR and IMU [56]; 2) updating the back end's PGO optimizer [39]; 3) updating the back end's ICP [33, 55]; 4) updating the front end's point-cloud data structure to do away with ICP's feature dependence [52]. _Nevertheless, to the best of our knowledge, all existing LiDAR SLAM systems are designed for LiDARs that are rigidly attached, via fixed joints, to robots and vehicles._ **Sensor reorientation in Active SLAM:** There has been a lot of work in the area of perception-aware path planning. A basic assumption of this line of work is that the sensor is rigidly attached to the robot, and therefore its field of view can be changed only by changing the pose of the robot.
[7, 36, 9] improve SLAM accuracy by actively changing the robot trajectory to improve the field of view. Our sensor can simplify these works by changing the FOV in hardware without requiring additional constraints on the path planning. ## III Understanding the Benefits of Compensated LiDAR in Simulation ### _Basic LiDAR geometry_ A MEMS-based LiDAR scanning system consists of a laser beam reflected off a small mirror. Voltages control the mirror by physically tilting it to different angles. This allows for LiDAR depth measurements in the direction corresponding to the mirror position. Let the function controlling the azimuth be \(\phi(V(t))\) and the function controlling the elevation be \(\theta(V(t))\), where \(V\) is the input voltage that varies with time step \(t\). To characterize our sensor, we use the structure-from-motion (SFM) framework with the LiDAR projection matrix \(\mathbf{P}\) and the robot's rotation \(\mathbf{R}\) and translation \(t\): \[\mathbf{P}=\mathbf{K}\begin{bmatrix}\mathbf{R}&t\end{bmatrix} \tag{1}\] where \(\mathbf{K}\) is the identity matrix. In our scenario, the 'pixels' \(\mathbf{x}\) relate to the mirror orientation vector \((\theta(V(t)),\phi(V(t)))\) on a plane at unit distance from the mirror along the z-axis, and are obtained by projections of 3D points \(\mathbf{X}\). Many robotics applications need point-cloud alignment across frames, which requires recovering the unknown rotation and translation that minimize the following objective: \[\min_{\mathbf{R},t}\|\mathbf{x}-\mathbf{P}\mathbf{X}\|. \tag{2}\] This optimization _usually happens in software, after LiDAR and IMU measurements_ [45]. Our key idea is that the MEMS mirror provides an opportunity to compensate or control two aspects of the projection matrix \(\mathbf{P}\) _before capture, in hardware_. In this paper, we propose to control a new aspect of the SFM equation in hardware: the rotation matrix \(\mathbf{R}\). Given the robot pose (from an onboard IMU or other sensing) and the intrinsic matrix, we can easily perform post-capture translation estimation: \[\min_{t}\|\mathbf{x}-\mathbf{P}\mathbf{X}\|. \tag{3}\] In other words, hardware compensation with MEMS mirrors simplifies post-capture LiDAR alignment to _just finding the translation_ \(t\), allowing lightweight, low-latency algorithms to be used with minimal computational effort (see the sketch below). ### _Benefits of IMU-compensated LiDAR in SLAM_ We demonstrate the benefits of motion-compensated LiDAR in simulation. Our setup is as follows - we use AirSim [38] running on Unreal Engine 4 for realistic perception and visualization. We tested two scenarios - a scene with geometric objects, called the _Blocks scene_, shown in Figure 3(a), and an outdoor scene with a bridge and mountains, called the _Mountains scene_, shown in Figure 3(e). In both scenes, the LiDAR is mounted on a prototype quadrotor UAV. We run LOAM [57], an open-source state-of-the-art LiDAR SLAM system, to map the environment and localize the UAV. As described earlier, motion compensation can be achieved through various means, such as a gimbal, active compensation of a pan-tilt-zoom camera, or MEMS-based hardware compensation like our system. The differences between these methods lie along two dimensions - (i) the latency of compensation, called _compensation delay_ from now on, and (ii) the number of times we can compensate in a second, called _compensation rate_. By varying these two parameters in simulation, we compare each method's performance.
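To see what Eq. (3) buys in practice: with rotation already cancelled in hardware (\(\mathbf{R}=\mathbf{I}\)), aligning two point clouds with known correspondences reduces to a closed-form translation estimate. A minimal sketch under these assumptions (the function name and the closed-form choice are ours, not the paper's):

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    """Least-squares minimizer of Eq. (3) over t alone: with R = I and
    known correspondences, it is the mean per-point displacement."""
    return np.mean(prev_pts - curr_pts, axis=0)

# Toy check: a cloud seen from two positions of a rotation-stabilized sensor.
rng = np.random.default_rng(0)
cloud = rng.uniform(-5.0, 5.0, size=(100, 3))
t_true = np.array([0.3, -0.1, 0.05])
assert np.allclose(estimate_translation(cloud, cloud - t_true), t_true)
```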
In order to systematically compensate based on IMU input, we perform some pre-processing of the IMU data. To smooth out high-angular-velocity body movements, an angular moving-average LiDAR stabilization algorithm is implemented (a sketch follows at the end of this section). This method stores the past UAV orientations in a sliding, fixed-length queue, and reorients the mounted LiDAR towards the average of the past orientations. The average of the orientations is calculated through linear interpolation (LERP) of the stored quaternions. We detail our calculations in the Appendix. The method is also known as the quaternion \(L_{2}\)-mean [14]. Given the relatively short duration of the sliding window, and the relatively small range of rotation covered during simulation flights, the prerequisites for using this method are met. It helps remove the impulsive, jerky movements that may be observed by the LiDAR, akin to a low-pass filter. Fig. 3: (a) Representative simulation scenario - Blocks scene (b) Mapping the Blocks scene with compensation at 55 Hz and no delay (c) Mapping the Blocks scene without compensation (d) Mapping the Blocks scene with compensation at 55 Hz and a delay of 150 ms. (e) Mountains scene (f) Mountains scene simulation with 55 Hz compensation and 0 ms delay (g) Mountains scene simulation without compensation. (h) Mountains scene simulation with 5 Hz compensation and 0 ms delay. In the experiment, the UAV performs three back-and-forth lateral flights between two waypoints. When alternating between waypoints, the UAV reaches 130 deg/s about the X body axis. The mounted LiDAR is configured with 16 channels, a 360-degree horizontal FOV, a 30-degree vertical FOV, and a 150,000 Hz sample rate, akin to commercially available LiDARs. To quantify performance, we calculate the _odometry error_, the difference between the ground-truth UAV positions and the positions estimated by LOAM. Figure 4 shows the results from our simulations for the Blocks scene and the Mountains scene. We set the compensation rate to five different values - uncompensated, 5 Hz, 10 Hz, 30 Hz, and 55 Hz. We set the compensation delay to four values - no delay (0 ms), 30 ms, 90 ms, and 150 ms. Both the position error and the angular error are high when the scan is uncompensated or compensated at only 5 Hz in the Blocks scene (Figure 4(a)). They are significantly lower for 10 Hz, 30 Hz, and 55 Hz. This shows that the lower compensation rates achievable with a mechanical gimbal or a PTZ camera (which operate at 5 Hz or lower) are far less effective than a faster compensation mechanism such as the one proposed by us. Similarly, the error in position as well as orientation is low when the compensation delay is either 0 ms or 30 ms (Figure 4(b)). For larger compensation delays such as 90 ms and 150 ms, the error is several times that of a 30 ms delay. This shows that when the compensation delay is higher, as it could be with software-based compensation on low-power embedded systems, compensation is far less effective and leads to greater error in trajectory estimation. This further argues for a system such as ours that is able to perform compensation in hardware, and therefore at a higher rate. The trends are similar, albeit less pronounced, in the Mountains scene, where features are much less distinct and feature matching is more challenging in general. This proof-of-concept set of simulations encouraged us to build our proposed system.
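To make the angular moving average concrete, here is a minimal sketch of the sliding-window quaternion \(L_{2}\)-mean referenced above, assuming scalar-first unit quaternions from the IMU; the window length, class name, and method names are our illustrative choices, not the paper's:

```python
import numpy as np
from collections import deque

class OrientationSmoother:
    """Sliding-window quaternion L2-mean (LERP average), valid for the
    small rotation ranges assumed in the text."""

    def __init__(self, window: int = 20):
        self.history = deque(maxlen=window)  # sliding, fixed-length queue

    def update(self, q: np.ndarray) -> np.ndarray:
        """Push the newest UAV orientation and return the smoothed
        orientation toward which the LiDAR is reoriented."""
        q = np.asarray(q, dtype=float)
        # q and -q encode the same rotation; keep the window sign-consistent.
        if self.history and np.dot(self.history[-1], q) < 0.0:
            q = -q
        self.history.append(q)
        mean = np.mean(np.asarray(self.history), axis=0)  # LERP of quaternions
        return mean / np.linalg.norm(mean)                # renormalize: L2-mean
```

Feeding each IMU orientation sample through `update` and commanding the mirror toward the returned average attenuates impulsive, jerky movements, acting like a low-pass filter.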
## IV Novel LiDAR Design We propose a simple and effective design, where the MEMS mirror and photodetector are placed on a movable head. For image stabilization, we are also able to place the IMU there. A LiDAR engine and accompanying electronics are tethered to this device, which can be light and small enough for micro-robots. To enable both LiDAR scanning and compensated scanning at a high rate, it is important to understand the characterization of the MEMS scanner. ### _The MEMS mirror_ All the compensation effects and size advantages described so far will be nullified if the MEMS mirror cannot survive the shock, vibration, and shake associated with real-world robots. Fig. 4: UAV odometry error while varying compensation rate and compensation delay in two scenes. Here we analyze the robustness of the MEMS mirror device for such platforms. Most MEMS mirrors rely on high-quality-factor resonant scanning to achieve a wide field of view (FoV), which leads to heavy ringing effects and overshoot with sudden changes of direction [30, 47]. A suitable MEMS mirror for motion-compensated scanning is expected to have a wide non-resonant scanning angle and smooth, fast step responses, to operate under common robotics vibration, and to survive shock. To achieve this goal, we adopt a popular electrothermal bimorph-actuated MEMS mirror design [21, 48] to build this MEMS mirror. The employed MEMS mirror is fabricated with the Al/SiO\({}_{2}\)-based inverted-series-connected (ISC) bimorph actuation structure reported in [21]. This type of MEMS mirror has the advantages of a simple and mature fabrication process [58, 50], a wide non-resonant scanning angle, a linear response, and good stiffness. A new electrothermal MEMS mirror is designed and fabricated, adapted to the motion-compensation application. We note that other previously reported MEMS mirrors with electrothermal, electrostatic, or electromagnetic actuators may also be applicable to motion-compensated LiDAR scanning [23, 20, 49]. ### _Compensation Algorithm_ In the previous sections, we saw the advantages of MEMS mirror-based compensation and its feasibility for use in a robotic LiDAR. Here we focus on the details of the hardware-based rotation compensation algorithm using the MEMS mirror scanning LiDAR and the sensing used for compensation. The MEMS mirror reflects a single ray of light towards a point in spherical coordinates \(\{\alpha,\beta,r\}\), where \(\{\alpha,\beta\}\) are the two angular control inputs to the mirror used to achieve this target. We will first establish the local (robot) and global (world) frames, then introduce the known helper conversions from spherical to Cartesian coordinates, and finally get into the details of compensation. #### IV-B1 Coordinate Systems Our LiDAR can compensate for rotation, but it cannot compensate for translation, so the discussion from here on drops translation from \(SE(3)\) and focuses only on \(SO(3)\). Let the robot have rotation \(R^{w}_{robot}\in SO(3)\) relative to the world frame. Here, the unmoving base of the LiDAR sensor has identity rotation relative to the robot, so its frame rotation \(R^{w}_{base}\in SO(3)\) is identical to the robot frame's \(SO(3)\) transformation. Let \(R^{w}_{desired}\in SO(3)\) be the desired rotation target in the world frame. \(R^{w}_{desired}\) can be chosen by the user. For example, it can remain a constant rotation matrix, to impose a stabilization control policy and keep the robot's FOV upright. Another possibility is aiming towards a specific world-frame target \(t\in\mathbb{R}^{3}\), which we will touch on later, in IV-B6.
#### IV-B2 Spherical-to-Cartesian Conversions It is important to outline the conversion from the spherical coordinates, which are the control coordinates, to normal Cartesian coordinates. Points in spherical coordinates \(\{\alpha,\beta,r\}\) can be converted to Cartesian coordinates via the known equations, \[p_{cartesian}=\begin{bmatrix}x\\ y\\ z\end{bmatrix}=\begin{bmatrix}r\cos\alpha\cos\beta\\ r\cos\alpha\sin\beta\\ r\sin\alpha\end{bmatrix} \tag{4}\] and vice versa: \[p_{spherical}=\begin{bmatrix}\alpha\\ \beta\\ r\end{bmatrix}=\begin{bmatrix}\arctan\frac{z}{\sqrt{x^{2}+y^{2}}}\\ \arctan\frac{y}{x}\\ \sqrt{x^{2}+y^{2}+z^{2}}\end{bmatrix} \tag{5}\] Note that both \(p_{cartesian}\) and \(p_{spherical}\) are points located in the robot's local coordinate frame, \(R^{w}_{robot}\). Other literature refers to this frame as the local frame or camera frame. #### IV-B3 Spatial Scanning A set of spherical control coordinates \(\{\alpha_{i},\beta_{i},r_{i}\}\), indexed by \(i\), defines the scanning pattern of the LiDAR; this is up to the user. For example, \(\{\alpha_{i},\beta_{i}\}\) can be restricted to a certain range to define a FOV limit. This range can span \((0,360)\) degrees like a commercially available Velodyne LiDAR, or it can be smaller. #### IV-B4 General Rotation Compensation Let \(R_{control}\) be the rotation from the robot rotation \(R^{w}_{robot}\) to the desired rotation \(R^{w}_{desired}\), so that \(R^{w}_{desired}=R_{control}R^{w}_{robot}\). We have \[R_{control}=R^{w}_{desired}(R^{w}_{robot})^{T} \tag{6}\] Now all points in the spatial scanning pattern \(\{\alpha_{i},\beta_{i},r_{i}\}\) of the robot frame \(R^{w}_{robot}\) can be re-projected to the desired frame \(R^{w}_{desired}\): we first convert each \(\{\alpha_{i},\beta_{i},r_{i}\}\) to Cartesian coordinates \(p_{cartesian_{i}}\) by Equation 4. Then: \[p_{desired\text{-}cartesian_{i}}=R_{control}\,p_{cartesian_{i}} \tag{7}\] Then we can convert the rotated points \(p_{desired\text{-}cartesian_{i}}\) back to spherical coordinates \(p_{desired\text{-}spherical_{i}}\) via Equation 5 to obtain point \(i\)'s control input. It is important to note that this full \(SO(3)\) compensation is only achievable because our LiDAR projects each individual point \(p_{i}\) independently of the other points in the set. In the case of a traditional camera or a commercially available LiDAR like a Velodyne, the entire set of \(p_{i}\) is projected as a group, with the points correlated to each other. In these other sensors, full \(SO(3)\) compensation is not achievable, even if the sensors are mounted to the robot by a universal joint with two degrees of freedom \(\alpha,\beta\). But we will also analyze this special case of grouped-point re-projection, since our LiDAR can also achieve such 2-axis-only compensation. #### IV-B5 Special Case: 2-axes only compensation Let \(R_{control}\) be limited to a 2-axes rotation only: \[R^{*}_{control}=\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{bmatrix} \tag{8}\] Our LiDAR can then use \(R^{*}_{control}\) to perform 2-axis-only compensation. Further, this compensation can be readily extended to commercially available cameras and LiDARs (such as Velodyne) mounted on a universal joint to the robot frame (see the sketch below).
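Putting IV-B2 through IV-B5 together, the per-point compensation amounts to a few lines. The sketch below is ours (function names, and the use of `arctan2` as a numerically robust form of Eq. (5), are our choices):

```python
import numpy as np

def spherical_to_cartesian(alpha, beta, r=1.0):
    """Eq. (4): elevation alpha, azimuth beta, range r -> xyz."""
    return np.array([r * np.cos(alpha) * np.cos(beta),
                     r * np.cos(alpha) * np.sin(beta),
                     r * np.sin(alpha)])

def cartesian_to_spherical(p):
    """Eq. (5): xyz -> (alpha, beta, r)."""
    x, y, z = p
    return np.arctan2(z, np.hypot(x, y)), np.arctan2(y, x), np.linalg.norm(p)

def compensate_scan(pattern, R_robot, R_desired=np.eye(3)):
    """Re-project every scan direction so the FOV tracks R_desired in the
    world frame despite the robot rotation R_robot (Eqs. (6)-(7))."""
    R_control = R_desired @ R_robot.T  # Eq. (6)
    out = []
    for alpha, beta, r in pattern:
        p = spherical_to_cartesian(alpha, beta, r)
        out.append(cartesian_to_spherical(R_control @ p))  # Eq. (7), then Eq. (5)
    return out
```

Each output triple is the mirror control input for the corresponding scan point; because every point is re-projected independently, this realizes the full \(SO(3)\) compensation discussed above.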
#### IV-B6 Target Aiming Let \(t_{target}^{w}\in\mathbb{R}^{3}\) be the target of interest in the world frame, and let \(t_{robot}^{w}\) be the robot's current world-frame translation. Then \[p_{aim}=t_{target}^{w}-t_{robot}^{w} \tag{9}\] defines the ray direction with which we want to align our "principal axis", i.e., the projection center point \(p_{cartesian}:\{x=1,y=0,z=0\}\). We can simply convert \(p_{aim}\) to spherical coordinates via Equation 5 to find \(\alpha,\beta\), then compose an \(R_{control}^{*}\) via Equation 8 for the entire scanning grid. #### IV-B7 MEMS-Related Details MEMS-related details, relating to the one-dimensional controls of each actuation axis \(\{\alpha,\beta\}\), including an analysis of robot motion shock on the MEMS as well as preliminary point-cloud stitching, are included in the appendix. ### _LiDAR Hardware Specifics_ Our prototype (Figure 5) uses an InGaAs avalanche photodiode module (Thorlabs, APD130C). A fiber with a length of 3 m delivers the laser from the laser source to the scanner head. The gain-switched laser (Leishen, LEP-1550-600) is collimated and reflected by the MEMS mirror. The X-axis of the IMU (VectorNav, VN-100) is parallel to the neutral scanning direction of the MEMS mirror. The in-run bias stability of the gyroscope is typically \(5\)-\(7^{\circ}\)/hr. The scanner head sits on a tripod so that it can be rotated in the yaw and pitch directions. In the LiDAR base, an Arduino microcontroller is used to process the ToF signals, sample the IMU signals, and control the MEMS mirror scanning direction. The data are sent to a PC for post-processing and visualization. Since our motivation was use with micro-robots, our maximum detection distance is 4 m with an \(80\%\)-albedo object, and the minimal resolvable distance is 5 cm. The maximum ToF measurement rate is 400 points/sec. Following the compensation algorithm described in the previous section, the MEMS mirror scanning direction is updated and compensated for motion at 400 Hz. We now describe our experiments. Please see the accompanying video for further clarification. ### _Compensation experiments with zero translation_ To demonstrate the effect of compensation, a visible laser is used instead of the LiDAR IR light to visualize the effect of tracking. We mount the LiDAR MEMS scanner on the UAV, as shown in Figure 5. The MEMS mirror's desired scanning angle is set to a single point on the target object (\(0^{\circ}\) by \(0^{\circ}\)) to make comparison easier. Here the entire scanning grid \(\{\alpha_{i},\beta_{i},1\}\) consists of a single point at the projection center, and we use the general compensation outlined in IV-B4. The UAV, together with the LiDAR scanner head, is held by hand and subjected to random rotational motion in the yaw/pitch directions. The upper laser trace comes from the laser rigidly connected to the UAV, which indicates the UAV's motion. The lower trace is reflected from the MEMS mirror, which shows the compensated/uncompensated scanning laser. The results are shown in Figure 6. The MEMS scanning laser trace area of the compensated scanning is significantly smaller than the uncompensated scanning trace under similar rotational motion disturbance. Videos of the real-time compensation results are available in the supplementary materials. Then the IR pulse laser is connected to run the LiDAR. An object of interest (in the shape of a +) is placed 2.4 m away from the LiDAR at the center of the field of view, with the background at 2.8 m, as shown in Figure 7(a). The MEMS mirror performs a raster scanning pattern with an initial field of view of \(-3.5^{\circ}\)\(\sim\)\(+3.5^{\circ}\) in both axes to leave room for compensation.
Each frame has 20 by 20 pixels, and the frame refresh rate is 1 fps. To mimic robot vibration, the tripod is rotated randomly in the yaw (Z-axis) and pitch (Y-axis) directions, and the point clouds are shown in Figure 7(d). Despite the motion of the LiDAR head, the point clouds are quite stable. The differences among all of the point clouds are generally less than 2 pixels in either axis, caused by measurement noise. Figure 7(c) shows the point clouds without compensated scanning, where the relative position of the target object in the point clouds keeps changing. The target object may even leave the MEMS scanning FoV without compensation. With a continuous rotation of 1.5 Hz in the Y-axis, the same structure may appear in multiple positions in the same frame of the point cloud, as shown in the third panel of Figure 7(c). Multiple frames of the point cloud are stacked together and shown in the last column of Figure 7. The object can still be identified in the compensated point cloud (Figure 7(f)), but becomes fuzzy due to motion jitter when not compensated (Figure 7(e)). Videos of the real-time compensated point-cloud results are available in the supplementary materials. ## V UAV Experiment Next, we demonstrated the motion-compensated LiDAR by flying it on a UAV. The robot pose comes from an external motion-capture system that tracks the UAV. We vary the robot-pose sampling rate and study its effect on the compensation. The UAV is controlled to hover at a designated position with yaw/pitch rotation as motion jitter. The motion-compensated LiDAR is set to compensate for all rotational motion, including the controlled rotation and the random motion disturbance. The compensated MEMS scanning laser uses visible light, and another visible laser is fixed at a relatively higher position on the UAV, as shown in the images in Figure 8(b). The target scanning direction is a fixed point on the target. Here, the entire scanning grid \(\{\alpha_{i},\beta_{i},1\}\) consists of a 20 by 20 grid of points, and we use the aiming compensation outlined in IV-B6. We trim about 12 s of video in each experiment while the UAV is flying, and the frames of each video are accumulated into an image to track the motion of the UAV and the errors of the compensated scanning. The robot-pose sampling rate is set from 1 Hz to 200 Hz to investigate its effect on the compensation results. The controlled UAV rotations are in the yaw and pitch directions. However, the actual flight introduces some additional random motion. Point clouds are also collected while the UAV is hovering, and we overlap several frames. As the robot-pose sampling frequency increases from 1 Hz and 2 Hz to 50 Hz, the width of the overlapping area shrinks from 10-11 points at 1 Hz (Fig. 9(f)) to 6 points at 50 Hz (Fig. 9(d)). The size of the target object in the point cloud settles down to a smaller area, and the location of the target in the point cloud becomes more certain. ## VI Rotation Compensated LiDAR-Inertial SLAM Design SLAM is a body of fundamental applications for visual sensors. All existing SLAM literature reasons about odometry in the sensor's local frame, sometimes called the camera frame. In this work this frame is the robot frame, with world-frame orientation \(R_{robot}^{w}\) (see IV-B1). The basic assumption of existing SLAM is that visual sensor readings use the robot frame with world rotation \(R_{robot}^{w}\) as their reference.
This assumption is untrue for our sensor, because our sensor readings use the frame with world orientation \(R_{control}R_{robot}^{w}\) as their reference. As described in sections IV-B4 to IV-B6, the additional non-zero rotation \(R_{control}\) orients the original scanning grid towards different directions. The existence of \(R_{control}\) breaks the basic assumption of existing SLAM. \(R_{control}\) must be compensated for in order for existing SLAM pipelines to work with our sensor. This can be done post-capture, using either IV-B4 or IV-B5 to compensate; we detail the compensation in VI-B. Most LiDAR odometry pipelines utilize Iterative Closest Point (ICP) to match consecutive scans and determine the rotation and translation between the poses. Any rotation of the LiDAR relative to the vehicle would cause errors in ICP's prior. This would directly impact the quality of ICP's point-cloud registration. Although ICP can tolerate certain levels of error in its prior, in Section VI-C2 we will show that this is far from enough when the magnitude of the \(R_{control}\) input increases. ### _Motion Compensation for LiDAR SLAM_ In this simulation, we simulate a 360-degree Velodyne LiDAR that can rotate relative to the vehicle it is mounted on via a universal joint. A universal joint has rotational DOF similar to a MEMS mirror, both limited to 2 DOF. This setup fits into the compensation framework introduced in the special case IV-B5. In this section, we will demonstrate in simulation that such rotation introduces error in an off-the-shelf LiDAR SLAM pipeline. Additionally, we propose a general method to take such rotation into consideration when performing LiDAR-related SLAM. We demonstrate the effectiveness of the framework in a Rotation Compensated LiDAR-Inertial Odometry and Mapping package, which is publicly available on GitHub. For ease of integration, our framework proposal does not make large edits to the existing paradigm. It only adds a "rotate" stage right after the de-skew stage in the front end, before the feature-extraction stage. This addition can be easily integrated with existing pipelines and future designs. Fig. 5: The movable LiDAR MEMS scanner head, which includes the MEMS mirror, an IMU, and a fiber laser collimator. (a) shows the top view and (b) shows the LiDAR scanner head mounted to the bottom of the UAV. Fig. 6: We use a visible laser to compare the effect of motion compensation of our sensor. The upper laser trace indicates UAV motion, and the lower laser trace indicates the compensated/uncompensated scanning laser reflected from the MEMS mirror. The compensated MEMS scanning (right) shows a much smaller laser trace area than the uncompensated MEMS scanning result. Fig. 8: A comparison of the compensation strength versus the robot-pose sampling frequencies. All the images are accumulations of 12 s of UAV hovering videos. The compensation target scanning direction is a fixed direction. Fig. 7: Motion-compensated LiDAR point-cloud result with hand-held motion disturbance. (a) The target object "+" placed 2.4 m from the LiDAR, along with (b) its initial point-cloud scan. (c) and (d) show uncompensated vs. compensated scanning. The uncompensated shake range was \((-0.2^{\circ},+1.4^{\circ}),(+1.0^{\circ},+1.4^{\circ}),(-1.5^{\circ},+1.7^{\circ})\) and the compensated shake range was \((-1.1^{\circ},+1.4^{\circ}),(+1.2^{\circ},-0.5^{\circ}),(-2.3^{\circ},+0.5^{\circ})\). (Please see the supplementary video.)
(e and f) The stacking of 5 frames of the compensated and uncompensated point clouds. The rotate stage does one single operation: it rotates the de-skewed point cloud according to the control rotation input to the LiDAR. Our workflow block diagram is shown in Figure 10. ### _The Rotation Stage_ The purpose of this stage of the pipeline is to rotate the captured LiDAR frame to its correct position relative to the LiDAR's base frame of reference. (In this work, the LiDAR's base frame is identical to the vehicle's body frame.) Let the LiDAR's base frame have world rotation \(R^{w}_{robot}\in SO(3)\). In a traditional LiDAR that doesn't rotate, all points received in a LiDAR frame are positioned relative to the LiDAR's base frame, with world rotation \(R^{w}_{robot}\). However, this assumption is incorrect for our device, where the LiDAR frame is positioned relative to the frame with rotation \(R_{control}R^{w}_{robot}\). The LiDAR's head can rotate by \(R_{control}\in SO(3)\) relative to its base. This rotation is restricted to the azimuth (\(\beta\)) and elevation (\(\alpha\)) directions. Note that here we analyze the special-case compensation of IV-B5, but it can easily be extended to the full \(SO(3)\) compensation of IV-B4. When a LiDAR frame is received, we take the most recently known rotation \(\alpha,\beta\) - in this case the most recent known command rotation - and convert it into a rotation matrix: \[R_{control}=\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix}\begin{bmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{bmatrix} \tag{10}\] and apply this rotation to each point \(p\in\mathbb{R}^{3}\) in the frame's point cloud: \[p_{rotated}=R_{control}p \tag{11}\] The rotated point cloud \(p_{rotated}\) is now located at the correct position relative to the LiDAR's base frame, with world rotation \(R^{w}_{robot}\). The basic assumption of traditional SLAM is now met (see the code sketch below). ### _Evaluation_ Now we evaluate the sensor in simulation to answer a few questions. First, we want to compare traditional LiDAR SLAM and our motion-compensated SLAM in terms of handling changes in the mirror/universal-joint orientation magnitude. Next, we investigate the effect of noise in the mirror's orientation (say, through a faulty IMU or other sensor) on the robustness of our pipeline. We also show the degree to which our pipeline can tolerate such noise. Fig. 10: Illustration of our rotating LiDAR SLAM augmentation pipeline. The existing structures are shown in grey. Fig. 9: A comparison of the compensation strength versus the IMU sampling frequencies. The images are accumulations of 20 s of point-cloud video during UAV hovering. We use a cuboidal object (as seen in Figure 8(a)) as the object of interest. The width of the target increases due to compensation inaccuracy as we reduce the compensation rate from 50 Hz to 1 Hz, demonstrating the utility of high compensation rates even in such static scenarios. The proposed SLAM framework should be expected to function even when the LiDAR user employs control policies that rotate its FOV significantly frame-to-frame. This is unlike the scenario of running the active stabilization control policy proposed in III-B, where frame-to-frame variation is minimal. Therefore, in this evaluation section, we use a control policy that samples random LiDAR rotation control inputs from Gaussian distributions at high frequency.
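As referenced above, the rotate stage itself is a single operation. A minimal sketch under our array conventions (the function name is ours; conceptually it sits between the de-skew and feature-extraction stages):

```python
import numpy as np

def rotate_stage(points, alpha, beta):
    """Map a de-skewed LiDAR frame back into the LiDAR base frame.

    points      : (N, 3) array, de-skewed point cloud in the rotated-head frame
    alpha, beta : most recent commanded elevation/azimuth rotations (radians)
    """
    Ry = np.array([[ np.cos(beta), 0.0, np.sin(beta)],
                   [ 0.0,          1.0, 0.0         ],
                   [-np.sin(beta), 0.0, np.cos(beta)]])
    Rz = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
                   [np.sin(alpha),  np.cos(alpha), 0.0],
                   [0.0,            0.0,           1.0]])
    R_control = Ry @ Rz          # Eq. (10)
    return points @ R_control.T  # Eq. (11), applied to every point
```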
We choose LIO-SAM as the traditional SLAM package to compare against, build our motion-compensation framework into it, and open-source it on GitHub. LIO-SAM has all the signature point-cloud processing stages shown in Figure 10. It is relatively new and has good SLAM accuracy versus the state of the art. We hope that, through the open-source code, we can demonstrate to the community an example of incorporating our framework. For odometry error evaluation, we calculate the Average Translation Error (ATE), which is defined by the KITTI benchmark [11]: \[E_{trans}(\mathcal{F})=\frac{1}{|\mathcal{F}|}\sum_{i,j\in\mathcal{F}}\|\hat{T }_{j}\hat{T}_{i}^{-1}(T_{j}T_{i}^{-1})^{-1}\|_{2} \tag{12}\] where \(\mathcal{F}\) is a set of frames \((i,j)\), and \(T\) and \(\hat{T}\) are the estimated and true LiDAR poses, respectively (a sketch of this computation appears at the end of this section). #### VI-C1 Experiment Setup A simulation study is set up in the robotics simulator Gazebo, where a LiDAR with sensor characteristics similar to a Velodyne VLP-32 is mounted on a simulated drone. Further, the LiDAR can rotate in azimuth and elevation via a universal joint. The simulated drone, Iris, is from the PX4 simulation package. Its onboard IMU has noise added to it according to a noise model outlined in Kalibr [10]. The point-cloud messages from the LiDAR, as well as the IMU messages from the drone, are passed into the robotics middleware ROS, where the proposed LiDAR SLAM package runs. The drone is commanded to fly in a diamond waypoint pattern around an environment with different types of residential buildings. The proposed LiDAR-Inertial SLAM package builds on top of LIO-SAM, which employs the powerful PGO backend GTSAM [8]. We incorporate the compensation described in VI-B into LIO-SAM, referred to from here on as Motion Compensated LIO-SAM. Naturally, we compare the SLAM performance of Motion Compensated LIO-SAM against the stock version of LIO-SAM. See Figure 11. To control the orientation of the universal joint, angular commands in \(\alpha,\beta\), in degrees, are input to the mirror. #### VI-C2 Level of mirror control orientation magnitude tolerable by an unmodified pipeline vs. our system The two angular commands are sampled from 1-D Gaussian distributions with standard deviations of various magnitudes, at 10 Hz. Odometry error vs. the command rotation's Gaussian standard deviation is plotted in Figure 12. A Gaussian distribution with an 8-degree standard deviation generates input angles within \(\pm 8\), \(\pm 16\), and \(\pm 24\) degrees 68, 95, and 99.7 percent of the time, respectively. Therefore, 99.7 percent of the time, the angular input spans a range of 48 degrees. In short, by considering mirror rotation, the system can tolerate angular input spanning 48 degrees. In contrast, without mirror rotation information, the system can only tolerate angular input spanning 12 degrees. Even in the cases where the input spans less than 12 degrees, considering mirror rotation improves SLAM quality in comparison. #### VI-C3 Level of mirror control noise tolerable The two angular commands are sampled from 1-D Gaussian distributions with a standard deviation of 3 degrees. Additionally, noise rotations in both azimuth and elevation are added on top of each channel. Odometry error vs. the command rotation's noise Gaussian standard deviation is plotted in Figure 13. The system can tolerate mirror control input noise up to a 1.6-degree standard deviation, which spans 9.6 degrees.
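For concreteness, the ATE of Eq. (12), used in both comparisons above, can be computed as in the following sketch (ours; as in the KITTI benchmark, the norm is taken over the translation part of the relative-pose error):

```python
import numpy as np

def average_translation_error(T_est, T_true, pairs):
    """Eq. (12): mean translation error over frame pairs.

    T_est, T_true : lists of 4x4 homogeneous LiDAR poses
    pairs         : iterable of frame-index pairs (i, j)
    """
    errors = []
    for i, j in pairs:
        rel_est  = T_est[j]  @ np.linalg.inv(T_est[i])
        rel_true = T_true[j] @ np.linalg.inv(T_true[i])
        err = rel_est @ np.linalg.inv(rel_true)    # \hat{T}_j \hat{T}_i^{-1} (T_j T_i^{-1})^{-1}
        errors.append(np.linalg.norm(err[:3, 3]))  # translation component
    return float(np.mean(errors))
```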
## VII Limitations and Conclusions We have designed an adaptive lightweight LiDAR capable of reorienting itself. We have demonstrated the benefits of such a LiDAR in simulation as well as in experiment. We have demonstrated image stabilization in hardware using an onboard IMU in experiments. We have also demonstrated viewing an object of interest using this LiDAR through external robot-pose feedback. Please see the supplementary material of this paper for some MEMS-related details, including an analysis of robot motion shock on the MEMS as well as preliminary point-cloud stitching. We also explain how such a sensor can reduce sensing uncertainty. Finally, our accompanying video shows our experiments in action. We would also like to acknowledge the limitations of our study. * We have indirectly compared to software methods using compensation _delay_. This is because, compared to hardware compensation, any software compensation will add delay, and therefore delay is a fundamental metric for hardware-software comparison. For future work, we will directly compare against software compensation methods. * Our design requires the robot to be connected to the heavier sensing components using a tether. This limits the flight range and the detection FoV of the system. Although removing the tether restriction is left to future work, we believe that our design is capable of advancing sensing in microrobots significantly and will help our community in designing microrobots in the future. * All our results (using the IMU as well as Vicon motion capture) are indoor results. We hope to perform future experiments with outdoor effects such as wind. * In our current system design, there are implementation bottlenecks that limit the compensated bandwidth. These are caused by the MEMS mirror and by the signal processing. Tightly coupled onboard designs can reduce these. In conclusion, through simulation and a prototype implementation, we realize our design shown in Fig. 1. We have shown, in simulation and in real hardware experiments, that hardware compensation using a MEMS mirror improves both reconstruction and mapping. In particular, microrobots that suffer from heavy vibration and motion jitter (such as flapping-wing MAVs [51]) can benefit greatly from the motion-compensated MEMS mirror scanning LiDAR for stabilized scene capture. Finally, over the long term, we believe that our design methodology can decouple robot and sensor geometry, greatly simplifying robot perception.
2309.16651
Diffusion coefficients preserving long-time correlations: Consequences on the Einstein relation and on entanglement in a bosonic Bogoliubov system
We analytically derive the diffusion coefficients that drive a system of $N$ coupled harmonic oscillators to an equilibrium state exhibiting persistent correlations. It is shown that the main effect of the latter consists in a renormalization of the natural frequencies and the friction coefficients of the oscillators. We find that the Einstein relation may be satisfied at low temperatures with frequency-dependent effective friction coefficients, provided that the physical constraints are fulfilled. We also investigate the entanglement evolution in a bipartite bosonic Bogoliubov system initially prepared in a thermal squeezed state. It is found that, in contrast to what one may expect, strong coupling slows down the entanglement sudden death, and for initially separable states, entanglement generation may occur.
Yamen Hamdouni
2023-09-28T17:54:27Z
http://arxiv.org/abs/2309.16651v2
# Diffusion coefficients preserving long-time correlations: Consequences on the Einstein relation and on entanglement in a bosonic Bogoliubov system ###### Abstract We analytically derive the diffusion coefficients that drive a system of \(N\) coupled harmonic oscillators to an equilibrium state exhibiting persistent correlations. It is shown that the main effect of the latter consists in a renormalization of the natural frequencies and the friction coefficients of the oscillators. We find that the Einstein relation may be satisfied at low temperatures with frequency-dependent effective friction coefficients, provided that the physical constraints are fulfilled. We also investigate the entanglement evolution in a bipartite bosonic Bogoliubov system initially prepared in a thermal squeezed state. It is found that, in contrast to what one may expect, strong coupling slows down the entanglement sudden death, and for initially separable states, entanglement generation may occur. ## I Introduction Quantum systems out of equilibrium exhibit rich features and are at the origin of several intriguing results and concepts in quantum and statistical physics [1]. The significance and complexity of such systems have solicited a great deal of interest from different perspectives and backgrounds. In particular, the relaxation of a system to its steady state due to dissipation is a fundamental problem that has stimulated the advancement of the theoretical investigation of nonequilibrium phenomena. Historically, the classical study of problems related to dissipation gave rise to two different but equivalent approaches, namely the Langevin and the Fokker-Planck equations [2]. In this regard, the process of friction, which is often introduced phenomenologically as responsible for the exchange of energy between the system and its surroundings, plays a central role. Generally speaking, friction is incorporated in the equations of motion by introducing the so-called memory-friction kernel. The latter describes the dependence of the dynamics at a given instant on the properties of the relevant variables at earlier times, and is solely related to the stochastic nature of the heat reservoir with which the system is interacting [3]. The Markov approximation neglects memory effects [4], and as a result the kernel reduces to a constant friction coefficient. This approximation holds well when the coupling of the system to the reservoir is weak. Moreover, dissipation is often associated with transport phenomena, where diffusion plays a central role. The fluctuation-dissipation theorem relates the friction coefficient to another important parameter, namely the so-called diffusion coefficient. For example, the diffusion coefficient, through Fick's law, relates the time change of the density of particles to its spatial variation. The extension of the concepts of friction and dissipation to the quantum domain turns out to be nontrivial. In particular, the loss of energy implies that the dynamics is basically nonunitary [5]. A well-established and widely adopted approach to deal with this kind of problem consists in regarding the system of interest as part of a larger system and applying the usual quantization procedure to the full system [6]. Then, the properties of the target subsystem may be obtained by disregarding the other degrees of freedom, which are referred to as the heat bath. This is translated into mathematical language by taking the partial trace over the reservoir variables. The outcome depends only on the degrees of freedom of the system.
The quantity of interest is the reduced density matrix, the derivation of which does not in general yield analytically solvable master equations. Some approximations and simplifications are generally employed, among which the Markovian approximation, valid for weak system-reservoir coupling, is the most common one. Lindblad [7; 8] came up with the most general form of Markovian master equation fulfilling the requirements of a physically acceptable evolution of the reduced density matrix, notably the requirement of complete positivity. The Lindblad equation provides an axiomatic approach that has been used in many contexts, in particular for the harmonic oscillator [9]-[23]. In [22] the multi-dimensional diffusion coefficients for a system of \(N\) coupled harmonic oscillators were derived assuming that the system completely thermalizes. The aim of the present work is to investigate the effect of persistent correlations in the steady state on the diffusion coefficients. In section II we introduce the model Hamiltonian along with the equilibrium state. We then derive analytically the expressions of the diffusion coefficients leading to the steady state. This is followed by a discussion of the validity of the Einstein relation. Section III deals with the evolution of entanglement in a Bogoliubov bosonic system, where the influence of the coupling constant is investigated. We end the paper with a brief conclusion. ## II Effect of persistent steady state correlations on the diffusion coefficients ### System Hamiltonian and equilibrium state We consider a system of \(N\) coupled harmonic oscillators, and we denote by \(m_{k}\) and \(\omega_{k}\) the mass and the natural frequency of oscillator number \(k\). The position and momentum operators of the oscillators satisfy the canonical commutation relations \[[\hat{q}_{k},\hat{p}_{j}]=i\hbar\delta_{kj},\qquad[\hat{q}_{k},\hat{q}_{j}]=[ \hat{p}_{k},\hat{p}_{j}]=0. \tag{1}\] The Hamiltonian of the system is given by \[\hat{H}=\hat{H}_{0}+\hat{H}_{I} \tag{2}\] where \(H_{0}\) describes a set of \(N\) independent harmonic oscillators exhibiting position-momentum coupling with strengths \(\mu_{kk}\): \[\hat{H}_{0}=\sum_{k=1}^{N}\Bigl{(}\frac{\hat{p}_{k}^{2}}{2m_{k}}+\frac{1}{2}m_{k }\omega_{k}^{2}\hat{q}_{k}^{2}+\frac{\mu_{kk}}{2}(\hat{p}_{k}\hat{q}_{k}+\hat{q }_{k}\hat{p}_{k})\Bigr{)}, \tag{3}\] whereas \(H_{I}\) takes into account the coupling between the different subsystems. The latter is assumed bilinear in the position and momentum operators; explicitly we have \[\hat{H}_{I}=\frac{1}{2}\sum_{k\neq j}^{N}(\nu_{kj}\hat{q}_{k}\hat{q}_{j}+\kappa _{kj}\hat{p}_{k}\hat{p}_{j})+\sum_{k\neq j}^{N}\mu_{kj}\hat{p}_{k}\hat{q}_{j}, \tag{4}\] where \(\mu_{kj}\), \(\nu_{kj}\) and \(\kappa_{kj}\) designate the coupling constants, which satisfy the Onsager relations \(\nu_{kj}=\nu_{jk}\) and \(\kappa_{kj}=\kappa_{jk}\)[24]. The aim is to investigate the effect of dissipation on the system of oscillators. To this end we assume that the latter is coupled to a heat reservoir whose characteristic relaxation time is much smaller than any relevant time constant associated with the evolution of the system of harmonic oscillators. The above assumption is mostly verified in the Markovian approximation, for which memory effects of the heat bath are irrelevant.
Under the above conditions, the evolution of the density matrix of the system obeys a Markovian master equation in the Lindblad form, namely, \[\frac{d\hat{\rho}(t)}{dt}=-\frac{i}{\hbar}[\hat{H},\hat{\rho}(t)]+\frac{1}{2\hbar}\sum_{\ell}([\hat{V}_{\ell}\hat{\rho}(t),\hat{V}_{\ell}^{\dagger}]+[\hat{V}_{\ell},\hat{\rho}(t)\hat{V}_{\ell}^{\dagger}]), \tag{5}\] where the operators \(\hat{V}_{\ell}\) depend solely on the degrees of freedom of the system, but not on those associated with the heat bath. It should be stressed that most of the Markovian master equations found in the literature are special cases of the general form (5). We further assume that the system eventually evolves to an equilibrium Gibbs state that retains the position-momentum correlation of each oscillator. More precisely, we take as equilibrium density matrix the state: \[\hat{\rho}_{\rm eq}=\exp(-\beta\hat{H}_{\rm eq})/Z,\qquad Z={\rm tr}\exp(-\beta\hat{H}_{\rm eq}), \tag{6}\] where \[\hat{H}_{\rm eq}=\sum_{k=1}^{N}\Bigl{(}\frac{\hat{p}_{k}^{2}}{2m_{k}}+\frac{1}{2}m_{k}\omega_{k}^{2}\hat{q}_{k}^{2}+\frac{\tilde{\mu}_{kk}}{2}(\hat{p}_{k}\hat{q}_{k}+\hat{q}_{k}\hat{p}_{k})\Bigr{)}, \tag{7}\] and \(Z\) denotes the partition function. The coupling constant \(\tilde{\mu}_{kk}\) entering the expression of the asymptotic state may differ from \(\mu_{kk}\) of the original Hamiltonian, which will be the case in the subsequent discussion unless stated otherwise. We also assume that \(\omega_{k}>\tilde{\mu}_{kk}\), which ensures the existence of a stable equilibrium state. Indeed, from a classical perspective the function \(H_{k}(q_{k},p_{k})=p_{k}^{2}/(2m_{k})+(1/2)m_{k}\omega_{k}^{2}q_{k}^{2}+\tilde{\mu}_{kk}q_{k}p_{k}\) is represented by a two-dimensional surface over the phase space \((q_{k},p_{k})\). It has a global minimum at the origin when \(\omega_{k}>\tilde{\mu}_{kk}\), and the equilibrium in this case is stable. It possesses a saddle point at the origin when \(\omega_{k}<\tilde{\mu}_{kk}\), in which case the state is inherently unstable; we exclude this case from our discussion. The general expressions of the generators \(\hat{V}_{\ell}\) read \[\hat{V}_{\ell}=\sum_{j}(a_{j}^{\ell}\hat{p}_{j}+b_{j}^{\ell}\hat{q}_{j}),\quad\hat{V}_{\ell}^{\dagger}=\sum_{j}(a_{j}^{\ell*}\hat{p}_{j}+b_{j}^{\ell*}\hat{q}_{j}), \tag{8}\] which are linear in the degrees of freedom of the oscillators. Notice that in the above combinations the coefficients \(a_{j}^{\ell}\) and \(b_{j}^{\ell}\) are complex numbers. Inserting the density matrix \(\hat{\rho}_{\rm eq}\) into the master equation (5) yields \[e^{\beta\hat{H}_{\rm eq}}\hat{H}e^{-\beta\hat{H}_{\rm eq}}-\hat{H}=\frac{1}{2i}\sum_{\ell}\Bigl{(}2e^{\beta\hat{H}_{\rm eq}}\hat{V}_{\ell}e^{-\beta\hat{H}_{\rm eq}}\hat{V}_{\ell}^{\dagger}-e^{\beta\hat{H}_{\rm eq}}\hat{V}_{\ell}^{\dagger}\hat{V}_{\ell}e^{-\beta\hat{H}_{\rm eq}}-\hat{V}_{\ell}^{\dagger}\hat{V}_{\ell}\Bigr{)}.
\tag{9}\] Taking into account the Baker-Campbell-Hausdorff formula \[e^{\beta\hat{H}_{\rm eq}}\hat{A}e^{-\beta\hat{H}_{\rm eq}}=\hat{A}-\frac{\beta}{1!}[\hat{A},\hat{H}_{\rm eq}]+\frac{\beta^{2}}{2!}[[\hat{A},\hat{H}_{\rm eq}],\hat{H}_{\rm eq}]-\frac{\beta^{3}}{3!}[[[\hat{A},\hat{H}_{\rm eq}],\hat{H}_{\rm eq}],\hat{H}_{\rm eq}]+\cdots, \tag{10}\] we obtain for the operator \(\hat{q}_{k}\): \[e^{\beta\hat{H}_{\rm eq}}\hat{q}_{k}e^{-\beta\hat{H}_{\rm eq}}=\hat{q}_{k}-i\hbar\beta\tilde{\mu}_{kk}\hat{q}_{k}-\frac{i\hbar\beta}{m_{k}}\hat{p}_{k}+\frac{\beta^{2}}{2}\Bigl{(}-\hbar^{2}\tilde{\mu}_{kk}^{2}\hat{q}_{k}-\frac{\hbar^{2}\tilde{\mu}_{kk}}{m_{k}}\hat{p}_{k}+\hbar^{2}\omega_{k}^{2}\hat{q}_{k}+\frac{\hbar^{2}\tilde{\mu}_{kk}}{m_{k}}\hat{p}_{k}\Bigr{)}-\frac{\hbar^{2}\beta^{3}}{6}(\omega_{k}^{2}-\tilde{\mu}_{kk}^{2})\Bigl{(}i\hbar\tilde{\mu}_{kk}\hat{q}_{k}+\frac{i\hbar}{m_{k}}\hat{p}_{k}\Bigr{)}+\cdots \tag{11}\] This gives \[e^{\beta\hat{H}_{\rm eq}}\hat{q}_{k}e^{-\beta\hat{H}_{\rm eq}}=\hat{q}_{k}\Bigl{[}\cosh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}-\frac{i\tilde{\mu}_{kk}}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\sinh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}\Bigr{]}-\frac{i}{m_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\sinh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}\hat{p}_{k}. \tag{12}\] Similarly, we obtain for \(\hat{p}_{k}\) \[e^{\beta\hat{H}_{\rm eq}}\hat{p}_{k}e^{-\beta\hat{H}_{\rm eq}}=\hat{p}_{k}\Bigl{[}\cosh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}+\frac{i\tilde{\mu}_{kk}}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\sinh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}\Bigr{]}+\frac{im_{k}\omega_{k}^{2}}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\sinh\bigl{(}\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\bigr{)}\hat{q}_{k}. \tag{13}\] From a quantum mechanical point of view the expectation values of the operators \(\hat{q}_{k}\) and \(\hat{p}_{k}\) in the steady state clearly vanish due to the quadratic nature of \(\hat{H}_{\rm eq}\). On the other hand we can write, for example: \[{\rm tr}\,\hat{q}_{k}^{2}e^{-\beta\hat{H}_{\rm eq}}={\rm tr}\,e^{\beta\hat{H}_{\rm eq}}\hat{q}_{k}e^{-\beta\hat{H}_{\rm eq}}e^{\beta\hat{H}_{\rm eq}}\hat{q}_{k}e^{-\beta\hat{H}_{\rm eq}}e^{-2\beta\hat{H}_{\rm eq}}, \tag{14}\] which must be positive and monotonic with respect to the temperature in the steady state. The hyperbolic functions appearing in equations (12) and (13) must therefore not turn into oscillatory functions, and consequently the quantity under the square-root sign should be positive, that is \(\omega_{k}>\tilde{\mu}_{kk}\), which corresponds to the stable equilibrium state.

### Analytical expressions

Let us now introduce the new parameters \[D_{q_{k}q_{j}}=\frac{\hbar}{2}{\rm Re}\sum_{\ell}a_{k}^{\ell*}a_{j}^{\ell},\quad D_{p_{k}p_{j}}=\frac{\hbar}{2}{\rm Re}\sum_{\ell}b_{k}^{\ell*}b_{j}^{\ell}, \tag{15}\] \[D_{q_{k}p_{j}}=-\frac{\hbar}{2}{\rm Re}\sum_{\ell}a_{k}^{\ell*}b_{j}^{\ell},\quad\lambda_{kj}=-{\rm Im}\sum_{\ell}a_{k}^{\ell*}b_{j}^{\ell}, \tag{16}\] \[\alpha_{kj}=-{\rm Im}\sum_{\ell}a_{k}^{\ell*}a_{j}^{\ell},\quad\eta_{kj}=-{\rm Im}\sum_{\ell}b_{k}^{\ell*}b_{j}^{\ell}. \tag{17}\] Physically speaking, \(D_{q_{k}q_{j}}\), \(D_{p_{k}p_{j}}\) and \(D_{q_{k}p_{j}}\) denote the diffusion coefficients of the oscillators, whereas \(\lambda_{kj}\) designate the corresponding friction coefficients.
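As a sanity check on the operator identities (12) and (13), one can verify them numerically in a truncated Fock basis. A minimal sketch (with \(\hbar=m_{k}=\omega_{k}=1\) and an illustrative \(\tilde{\mu}_{kk}\); the truncation dimension is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

dim, m, w, mu_t, beta = 120, 1.0, 1.0, 0.3, 0.1    # hbar = 1
a = np.diag(np.sqrt(np.arange(1, dim)), 1)          # annihilation operator
q = (a + a.conj().T) / np.sqrt(2 * m * w)
p = 1j * np.sqrt(m * w / 2) * (a.conj().T - a)
H = p @ p / (2 * m) + 0.5 * m * w**2 * q @ q + 0.5 * mu_t * (p @ q + q @ p)

Om = np.sqrt(w**2 - mu_t**2)
lhs = expm(beta * H) @ q @ expm(-beta * H)
rhs = (np.cosh(beta * Om) - 1j * mu_t / Om * np.sinh(beta * Om)) * q \
      - 1j / (m * Om) * np.sinh(beta * Om) * p
# Agreement on low-lying matrix elements, away from the truncation edge:
print(np.max(np.abs((lhs - rhs)[:40, :40])))
```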
Next, inserting equations (12) and (13) into equation (9) and using the coefficients defined above, we obtain: \[m_{k}m_{j}\omega_{k}\omega_{j}D_{q_{k}q_{j}}+D_{p_{k}p_{j}}=\frac{1}{2}\omega_{k}\omega_{j}\Phi_{kj}+2\Psi_{kj}, \tag{18}\] \[m_{k}\omega_{k}D_{q_{k}p_{j}}+m_{j}\omega_{j}D_{q_{j}p_{k}}=\frac{1}{2}\omega_{k}\omega_{j}\Phi_{kj}-2\Psi_{kj}, \tag{19}\] \[-m_{k}m_{j}\omega_{k}\omega_{j}D_{q_{k}q_{j}}+D_{p_{k}p_{j}}=\omega_{k}\Gamma_{kj}+\omega_{j}\Gamma_{jk}, \tag{20}\] \[m_{k}\omega_{k}D_{q_{k}p_{j}}-m_{j}\omega_{j}D_{q_{j}p_{k}}=\omega_{k}\Gamma_{kj}-\omega_{j}\Gamma_{jk}, \tag{21}\] where \[\Phi_{kj}=\frac{\hbar}{4}\Bigg{(}\frac{\omega_{k}-\tilde{\mu}_{kk}}{\omega_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\Bigg{[}\frac{1}{\omega_{j}}(\eta_{kj}+\nu_{kj})-m_{k}m_{j}\omega_{k}(\alpha_{kj}+\kappa_{kj})+\frac{m_{k}\omega_{k}}{\omega_{j}}(\lambda_{kj}+\mu_{kj})+m_{j}(\lambda_{jk}-\mu_{jk})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{4}\Bigg{(}\frac{\omega_{j}-\tilde{\mu}_{jj}}{\omega_{j}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}\Bigg{)}\Bigg{[}\frac{1}{\omega_{k}}(\eta_{jk}+\nu_{kj})-m_{k}m_{j}\omega_{j}(\alpha_{jk}+\kappa_{kj})+\frac{m_{j}\omega_{j}}{\omega_{k}}(\lambda_{jk}+\mu_{jk})+m_{k}(\lambda_{kj}-\mu_{kj})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}, \tag{22}\] and \[\Psi_{kj}=\frac{\hbar}{16}\Bigg{(}\frac{\omega_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}{\omega_{k}-\tilde{\mu}_{kk}}\Bigg{)}\Bigg{[}-\frac{1}{\omega_{k}}(\eta_{kj}+\nu_{kj})+m_{k}m_{j}\omega_{j}(\alpha_{kj}+\kappa_{kj})+\frac{m_{j}\omega_{j}}{\omega_{k}}(\lambda_{jk}-\mu_{jk})+m_{k}(\lambda_{kj}+\mu_{kj})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{16}\Bigg{(}\frac{\omega_{j}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}{\omega_{j}-\tilde{\mu}_{jj}}\Bigg{)}\Bigg{[}-\frac{1}{\omega_{j}}(\eta_{jk}+\nu_{jk})+m_{j}m_{k}\omega_{k}(\alpha_{jk}+\kappa_{jk})+\frac{m_{k}\omega_{k}}{\omega_{j}}(\lambda_{kj}-\mu_{kj})+m_{j}(\lambda_{jk}+\mu_{jk})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}. \tag{23}\] The quantity \(\Gamma_{kj}\) is given explicitly by: \[\Gamma_{kj}=\frac{\hbar}{8}\Bigg{(}\frac{\omega_{k}-\tilde{\mu}_{kk}}{\omega_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\Bigg{[}\eta_{kj}+\nu_{kj}-m_{k}m_{j}\omega_{k}\omega_{j}(\alpha_{kj}+\kappa_{kj})+m_{k}\omega_{k}(\lambda_{kj}+\mu_{kj})-m_{j}\omega_{j}(\lambda_{jk}-\mu_{jk})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{8}\Bigg{(}\frac{\omega_{j}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}{\omega_{j}-\tilde{\mu}_{jj}}\Bigg{)}\Bigg{[}\frac{1}{\omega_{k}\omega_{j}}(\eta_{jk}-\nu_{jk})+m_{k}m_{j}(\alpha_{jk}-\kappa_{jk})-\frac{m_{k}}{\omega_{j}}(\lambda_{kj}-\mu_{kj})+\frac{m_{j}}{\omega_{k}}(\lambda_{jk}+\mu_{jk})\Bigg{]}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}. \tag{24}\] The diagonal elements of \(\Psi\), \(\Phi\) and \(\Gamma\) may be derived from the above expressions by making the substitutions: \[\nu_{kk}=m_{k}\omega_{k}^{2},\qquad\kappa_{kk}=\frac{1}{m_{k}},\qquad\alpha_{kk}=\eta_{kk}=0. \tag{25}\]
Then, by solving the set of equations (18)-(21), we find for the diagonal diffusion coefficients: \[D_{q_{k}q_{k}}=\frac{\hbar}{2}\Bigg{(}\frac{\lambda_{kk}-\mu_{kk}+\tilde{\mu}_{kk}}{m_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}, \tag{26}\] \[D_{p_{k}p_{k}}=\frac{\hbar}{2}\Bigg{(}\frac{m_{k}\omega_{k}^{2}(\lambda_{kk}+\mu_{kk}-\tilde{\mu}_{kk})}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}, \tag{27}\] \[D_{p_{k}q_{k}}=-\Bigg{(}\frac{\hbar\lambda_{kk}\tilde{\mu}_{kk}}{2\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}. \tag{28}\] The off-diagonal elements in coordinates and momenta read \[D_{q_{k}q_{j}}=\frac{\hbar}{4}\Bigg{(}\frac{\lambda_{jk}-\mu_{jk}}{m_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}+\frac{\tilde{\mu}_{kk}(\alpha_{kj}-\kappa_{kj})}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{4}\Bigg{(}\frac{\lambda_{kj}-\mu_{kj}}{m_{j}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}-\frac{\tilde{\mu}_{jj}(\alpha_{kj}+\kappa_{kj})}{\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}, \tag{29}\] \[D_{p_{k}p_{j}}=\frac{\hbar}{4}\Bigg{(}\frac{m_{k}\omega_{k}^{2}(\lambda_{jk}+\mu_{jk})}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}-\frac{\tilde{\mu}_{kk}(\eta_{kj}+\nu_{kj})}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{4}\Bigg{(}\frac{m_{j}\omega_{j}^{2}(\lambda_{kj}+\mu_{kj})}{\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}+\frac{\tilde{\mu}_{jj}(\eta_{kj}-\nu_{kj})}{\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}. \tag{30}\] Finally, the elements in mixed coordinates and momenta take the form: \[D_{q_{k}p_{j}}=\frac{\hbar}{4}\Bigg{(}\frac{\eta_{kj}+\nu_{kj}}{m_{k}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}-\frac{\tilde{\mu}_{kk}(\lambda_{kj}+\mu_{kj})}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}+\frac{\hbar}{4}\Bigg{(}\frac{m_{j}\omega_{j}^{2}(\alpha_{kj}-\kappa_{kj})}{\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}-\frac{\tilde{\mu}_{jj}(\lambda_{kj}-\mu_{kj})}{\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}}\Bigg{)}\coth\Bigg{(}\frac{\hbar\beta}{2}\sqrt{\omega_{j}^{2}-\tilde{\mu}_{jj}^{2}}\Bigg{)}. \tag{32}\] The latter formulas give the explicit analytical expressions of the diffusion coefficients, ensuring that the state of the system of harmonic oscillators evolves to the Gibbs state (6). It can be seen that the effect of the self position-momentum coupling in the steady state consists in a renormalization of both the frequencies and the phenomenological friction coefficients entering the expressions of the diagonal diffusion coefficients in position and momentum. For the other off-diagonal elements, the coupling constants \(\tilde{\mu}_{kk}\) contribute in a multiplicative manner along with the corresponding friction coefficients.
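The diagonal expressions (26)-(28) transcribe directly into code, which also makes it easy to test the Cauchy-Schwarz-type constraints quoted in the next subsection. A minimal sketch (\(\hbar=k_{B}=1\); the function name and parameter values are ours and purely illustrative):

```python
import numpy as np

def diag_diffusion(lam, mu, mu_t, m, w, beta, hbar=1.0):
    """Diagonal diffusion coefficients, Eqs. (26)-(28); requires w > mu_t."""
    Om = np.sqrt(w**2 - mu_t**2)                  # renormalized frequency
    c = 1.0 / np.tanh(hbar * beta * Om / 2.0)
    Dqq = 0.5 * hbar * (lam - mu + mu_t) / (m * Om) * c
    Dpp = 0.5 * hbar * m * w**2 * (lam + mu - mu_t) / Om * c
    Dpq = -0.5 * hbar * lam * mu_t / Om * c
    return Dqq, Dpp, Dpq

Dqq, Dpp, Dpq = diag_diffusion(lam=3.0, mu=0.1, mu_t=0.95, m=1.0, w=1.0, beta=1.0)
print(Dqq, Dpp, Dpq)
# Diagonal (k = j) version of inequality (37): Dqq*Dpp - Dpq^2 >= (hbar*lam)^2/4
print(Dqq * Dpp - Dpq**2 >= 0.25 * 3.0**2)
```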
### Validity of the Einstein relation and quantum mechanical constraints

By inspecting equation (27) we see that when the condition \[\frac{\hbar}{2k_{B}T}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\ll 1 \tag{33}\] is satisfied, the latter equation reduces to: \[D_{p_{k}p_{k}}=\Bigg{(}\frac{\omega_{k}^{2}(\lambda_{kk}+\mu_{kk}-\tilde{\mu}_{kk})}{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Bigg{)}m_{k}k_{B}T, \tag{34}\] which is Einstein's relation with a frequency-dependent effective friction coefficient. The above condition is met when the temperature is high enough, so that the behavior of the heat reservoir can be considered quasi-classical; it may also be satisfied when \[\omega_{k}-\tilde{\mu}_{kk}\ll\frac{k_{B}T}{\hbar}, \tag{35}\] that is, when the coupling constant \(\tilde{\mu}_{kk}\) is of the same order of magnitude as the corresponding frequency \(\omega_{k}\). This means that in such physically realizable instances, even though the temperature is low and the quantum nature of the dynamics is dominant, the Einstein relation, which is usually associated with the classical limit, may still be valid. In that case, the effective friction coefficient becomes very large. Since the latter is essentially positive, we infer that the following constraint holds: \[\lambda_{kk}>\tilde{\mu}_{kk}-\mu_{kk}. \tag{36}\] The transport coefficients displayed in equations (15)-(17) satisfy the quantum-mechanical inequalities: \[D_{q_{k}q_{k}}D_{p_{j}p_{j}}-D_{q_{k}p_{j}}^{2}\geq\frac{\hbar^{2}}{4}\lambda_{kj}^{2}, \tag{37}\] \[D_{q_{k}q_{k}}D_{q_{j}q_{j}}-D_{q_{k}q_{j}}^{2}\geq\frac{\hbar^{2}}{4}\alpha_{kj}^{2}, \tag{38}\] \[D_{p_{k}p_{k}}D_{p_{j}p_{j}}-D_{p_{k}p_{j}}^{2}\geq\frac{\hbar^{2}}{4}\eta_{kj}^{2}, \tag{39}\] which, mathematically speaking, result from the Cauchy-Schwarz inequality. They are a direct manifestation of the fact that the Lindblad master equation preserves the fundamental properties of the density matrix. From condition (37), it follows that \[\lambda_{kk}\geq\frac{|\mu_{kk}-\tilde{\mu}_{kk}|\omega_{k}}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}\cosh\Biggl{(}\frac{\hbar\beta}{2}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}\Biggr{)}. \tag{40}\] The latter condition is always satisfied when \(\mu_{kk}=\tilde{\mu}_{kk}\). Otherwise, in order to ensure the validity of that constraint, the temperature should satisfy \[T\geq\frac{\hbar\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}{2k_{B}\mathrm{acosh}\Biggl{(}\frac{\lambda_{kk}\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}{\omega_{k}|\mu_{kk}-\tilde{\mu}_{kk}|}\Biggr{)}}, \tag{41}\] provided that \[\lambda_{kk}\geq\frac{|\mu_{kk}-\tilde{\mu}_{kk}|\omega_{k}}{\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}}. \tag{42}\]

## III Entanglement evolution in a Bogoliubov bosonic system

In this section we apply the results obtained thus far to the Bogoliubov Hamiltonian. The latter, which describes a system of coupled bosonic modes, has been employed primarily to study Bose-Einstein condensates [25]. Explicitly it reads \[H=\sum_{\ell,m}K_{\ell m}\hat{a}_{\ell}^{\dagger}\hat{a}_{m}+\frac{1}{2}\sum_{\ell,m}(\Delta_{\ell m}\hat{a}_{\ell}^{\dagger}\hat{a}_{m}^{\dagger}+\Delta_{\ell m}^{*}\hat{a}_{\ell}\hat{a}_{m}) \tag{43}\] where the matrix \(K\) is hermitian while the matrix \(\Delta\) is symmetric, i.e., \(K=K^{\dagger}\), \(\Delta^{T}=\Delta\).
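As a quick numerical aside before analyzing the Bogoliubov system: dividing Eq. (27) by the right-hand side of Eq. (34) gives the ratio \(x\coth x\) with \(x=\hbar\beta\sqrt{\omega_{k}^{2}-\tilde{\mu}_{kk}^{2}}/2\), which tends to one whenever condition (33) holds. A short check (\(\hbar=k_{B}=1\); parameter values are ours, chosen to respect Eqs. (36) and (42)):

```python
import numpy as np

lam, mu, mu_t, m, w = 3.0, 0.1, 0.95, 1.0, 1.0   # mu_t close to w, cf. Eq. (35)
Om = np.sqrt(w**2 - mu_t**2)
for T in [0.05, 0.5, 5.0]:
    x = Om / (2.0 * T)                 # hbar*beta*sqrt(w^2 - mu_t^2)/2
    Dpp = 0.5 * m * w**2 * (lam + mu - mu_t) / (Om * np.tanh(x))   # Eq. (27)
    einstein = w**2 * (lam + mu - mu_t) / (w**2 - mu_t**2) * m * T  # Eq. (34)
    print(T, Dpp / einstein)           # equals x*coth(x), -> 1 as x -> 0
```

Because \(\tilde{\mu}_{kk}\) is close to \(\omega_{k}\) here, the ratio is already near one at \(k_{B}T=0.5\,\hbar\omega_{k}\), i.e. well below the naive classical regime.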
The bosonic operators satisfy the canonical commutation relations \([\hat{a}_{\ell},\hat{a}_{m}^{\dagger}]=\delta_{\ell m}\) and \([\hat{a}_{\ell}^{\dagger},\hat{a}_{m}^{\dagger}]=[\hat{a}_{\ell},\hat{a}_{m}]=0\). We map these operators to the canonical operators \(\hat{q}_{\ell}\) and \(\hat{p}_{\ell}\) through the expressions: \[\hat{a}_{\ell}=\sqrt{\frac{K_{\ell\ell}}{2}}\Bigl{(}\hat{q}_{\ell}+\frac{i}{\hbar K_{\ell\ell}}\hat{p}_{\ell}\Bigr{)}, \tag{44}\] \[\hat{a}_{\ell}^{\dagger}=\sqrt{\frac{K_{\ell\ell}}{2}}\Bigl{(}\hat{q}_{\ell}-\frac{i}{\hbar K_{\ell\ell}}\hat{p}_{\ell}\Bigr{)}. \tag{45}\] Inserting the latter expressions into the Hamiltonian (43), we obtain a new Hamiltonian having the same form as (2), with the identifications \[\omega_{\ell}=\sqrt{K_{\ell\ell}^{2}-(\mathrm{Re}\Delta_{\ell\ell})^{2}}/\hbar,\qquad m_{\ell}=\frac{K_{\ell\ell}}{K_{\ell\ell}-\mathrm{Re}\Delta_{\ell\ell}}, \tag{46}\] \[\mu_{\ell m}=\frac{\mathrm{Im}(\Delta_{\ell m}-K_{\ell m})}{\hbar}\sqrt{\frac{K_{\ell\ell}}{K_{mm}}},\qquad\nu_{\ell m}=\mathrm{Re}(K_{\ell m}+\Delta_{\ell m})\sqrt{K_{\ell\ell}K_{mm}}, \tag{47}\] \[\kappa_{\ell m}=\frac{\mathrm{Re}(K_{\ell m}-\Delta_{\ell m})}{\hbar^{2}\sqrt{K_{\ell\ell}K_{mm}}}. \tag{48}\] We shall restrict ourselves to a system of two modes (\(N=2\)), whose initial state is a two-mode squeezed thermal state, the covariance matrix of which is given by \[\sigma(0)=\begin{pmatrix}\mathcal{A}(0)&\mathcal{C}(0)\\ \mathcal{C}(0)^{T}&\mathcal{B}(0)\end{pmatrix} \tag{49}\] where \[\mathcal{A}(0)=\begin{pmatrix}\sigma_{q_{1}q_{1}}(0)&\sigma_{q_{1}p_{1}}(0)\\ \sigma_{p_{1}q_{1}}(0)&\sigma_{p_{1}p_{1}}(0)\end{pmatrix}=\begin{pmatrix}\xi_{1}&0\\ 0&\xi_{1}\end{pmatrix}, \tag{50}\] and \[\mathcal{B}(0)=\begin{pmatrix}\sigma_{q_{2}q_{2}}(0)&\sigma_{q_{2}p_{2}}(0)\\ \sigma_{p_{2}q_{2}}(0)&\sigma_{p_{2}p_{2}}(0)\end{pmatrix}=\begin{pmatrix}\xi_{2}&0\\ 0&\xi_{2}\end{pmatrix}, \tag{51}\] whereas \[\mathcal{C}(0)=\begin{pmatrix}\sigma_{q_{1}q_{2}}(0)&\sigma_{q_{1}p_{2}}(0)\\ \sigma_{p_{1}q_{2}}(0)&\sigma_{p_{1}p_{2}}(0)\end{pmatrix}=\begin{pmatrix}\theta&0\\ 0&-\theta\end{pmatrix}. \tag{52}\] In the above equations the variances are defined by \[\sigma_{FG}(t)=\frac{1}{2}\mathrm{tr}\Big{(}\hat{\rho}\{\hat{F}(t),\hat{G}(t)\}\Big{)}-\mathrm{tr}\Big{(}\hat{\rho}\ \hat{F}(t)\Big{)}\mathrm{tr}\Big{(}\hat{\rho}\ \hat{G}(t)\Big{)}. \tag{53}\] The parameters \(\xi_{k}\) and \(\theta\) are given by \[\xi_{1}=n_{1}\cosh^{2}(r)+n_{2}\sinh^{2}(r)+\frac{1}{2}\cosh(2r), \tag{54}\] \[\xi_{2}=n_{2}\cosh^{2}(r)+n_{1}\sinh^{2}(r)+\frac{1}{2}\cosh(2r), \tag{55}\] \[\theta=\frac{1}{2}(n_{1}+n_{2}+1)\sinh(2r), \tag{56}\] with \(r\) being the squeezing parameter, and \(n_{k}\) denoting the mean thermal number of particles in mode \(k\), namely, \(n_{k}=\langle a_{k}^{\dagger}a_{k}\rangle\).
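These definitions translate directly into code; the following minimal sketch (function name ours) assembles \(\sigma(0)\) of Eqs. (49)-(56) in the \((q_{1},p_{1},q_{2},p_{2})\) ordering:

```python
import numpy as np

def sigma0(n1, n2, r):
    """Covariance matrix of Eqs. (49)-(52) for the two-mode squeezed
    thermal state, with xi_1, xi_2, theta from Eqs. (54)-(56)."""
    xi1 = n1 * np.cosh(r)**2 + n2 * np.sinh(r)**2 + 0.5 * np.cosh(2 * r)
    xi2 = n2 * np.cosh(r)**2 + n1 * np.sinh(r)**2 + 0.5 * np.cosh(2 * r)
    theta = 0.5 * (n1 + n2 + 1) * np.sinh(2 * r)
    C = np.diag([theta, -theta])
    return np.block([[xi1 * np.eye(2), C], [C.T, xi2 * np.eye(2)]])

print(sigma0(0.2, 0.1, 1.0))
```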
Using the dual Lindblad equation for an arbitrary operator \(\hat{A}\), \[\frac{d\hat{A}(t)}{dt}=\frac{i}{\hbar}[\hat{H},\hat{A}(t)]+\frac{1}{2\hbar}\sum_{\ell}\Bigl{(}\hat{V}_{\ell}^{\dagger}[\hat{A}(t),\hat{V}_{\ell}]+[\hat{V}_{\ell}^{\dagger},\hat{A}(t)]\hat{V}_{\ell}\Bigr{)}, \tag{57}\] it can be shown that the covariance matrix evolves in the course of time according to \[\frac{d\sigma(t)}{dt}=M\sigma(t)+\sigma(t)M^{T}+2D, \tag{58}\] where \[M=\left(\begin{array}{cccc}-\lambda_{11}+\frac{\mathrm{Im}\Delta_{11}}{\hbar}&\frac{K_{11}-\mathrm{Re}\Delta_{11}}{K_{11}}&-\lambda_{12}+\frac{\mathrm{Im}(\Delta_{12}-K_{12})}{\hbar}\sqrt{\frac{K_{11}}{K_{22}}}&-\alpha_{12}+\frac{\mathrm{Re}(K_{12}-\Delta_{12})}{\sqrt{K_{11}K_{22}}}\\ -K_{11}(K_{11}+\mathrm{Re}\Delta_{11})/\hbar^{2}&-\lambda_{11}-\frac{\mathrm{Im}\Delta_{11}}{\hbar}&\eta_{12}-\frac{\mathrm{Re}(K_{12}+\Delta_{12})}{\hbar^{2}}\sqrt{K_{11}K_{22}}&-\lambda_{21}-\frac{\mathrm{Im}(\Delta_{21}-K_{21})}{\hbar}\sqrt{\frac{K_{22}}{K_{11}}}\\ -\lambda_{21}+\frac{\mathrm{Im}(\Delta_{21}-K_{21})}{\hbar}\sqrt{\frac{K_{22}}{K_{11}}}&\alpha_{12}+\frac{\mathrm{Re}(K_{12}-\Delta_{12})}{\sqrt{K_{11}K_{22}}}&-\lambda_{22}+\frac{\mathrm{Im}\Delta_{22}}{\hbar}&\frac{K_{22}-\mathrm{Re}\Delta_{22}}{K_{22}}\\ -\eta_{12}-\frac{\mathrm{Re}(K_{12}+\Delta_{12})}{\hbar^{2}}\sqrt{K_{11}K_{22}}&-\lambda_{12}-\frac{\mathrm{Im}(\Delta_{12}-K_{12})}{\hbar}\sqrt{\frac{K_{11}}{K_{22}}}&-K_{22}(K_{22}+\mathrm{Re}\Delta_{22})/\hbar^{2}&-\lambda_{22}-\frac{\mathrm{Im}\Delta_{22}}{\hbar}\end{array}\right), \tag{59}\] and the diffusion matrix is given by: \[D=\begin{pmatrix}D_{q_{1}q_{1}}&D_{q_{1}p_{1}}&D_{q_{1}q_{2}}&D_{q_{1}p_{2}}\\ D_{p_{1}q_{1}}&D_{p_{1}p_{1}}&D_{p_{1}q_{2}}&D_{p_{1}p_{2}}\\ D_{q_{2}q_{1}}&D_{q_{2}p_{1}}&D_{q_{2}q_{2}}&D_{q_{2}p_{2}}\\ D_{p_{2}q_{1}}&D_{p_{2}p_{1}}&D_{p_{2}q_{2}}&D_{p_{2}p_{2}}\end{pmatrix}. \tag{60}\] The solution for the covariance matrix then reads \[\sigma(t)=\exp(Mt)(\sigma(0)-\tilde{\sigma})\exp(Mt)^{T}+\tilde{\sigma}, \tag{61}\] provided that the following condition is satisfied: \[M\tilde{\sigma}+\tilde{\sigma}M^{T}+2D=0. \tag{62}\] Therefore, the state of the system remains Gaussian at later times, and its covariance matrix retains its form, namely \[\sigma(t)=\begin{pmatrix}\mathcal{A}(t)&\mathcal{C}(t)\\ \mathcal{C}(t)^{T}&\mathcal{B}(t)\end{pmatrix}. \tag{63}\] This makes it possible to use the separability criteria for Gaussian states [26; 27] in order to study the entanglement evolution of the system. We use the so-called logarithmic negativity as a measure of separability, which is defined by [28] \[E(\sigma(t))=\max\left\{0,\;-\frac{1}{2}\log_{2}\left[2\left(\det\mathcal{A}(t)+\det\mathcal{B}(t)-2\det\mathcal{C}(t)-\sqrt{\left(\det\mathcal{A}(t)+\det\mathcal{B}(t)-2\det\mathcal{C}(t)\right)^{2}-4\det\sigma(t)}\right)\right]\right\}. \tag{64}\] The initial state (49) is entangled whenever the squeezing parameter \(r\) exceeds the critical value \(r_{c}\) fulfilling \(\cosh^{2}r_{c}=\dfrac{(n_{1}+1)(n_{2}+1)}{n_{1}+n_{2}+1}\) [29]. We also suppose that the steady state is characterized by the new coupling constants \(\tilde{\Delta}_{\ell\ell}\). In figure 1, we display the evolution of the logarithmic negativity in the course of time for an initially entangled state, for some particular values of the model parameters. It can be seen that the logarithmic negativity vanishes fastest when the interaction coupling strengths \(\tilde{\Delta}_{11}\) and \(\tilde{\Delta}_{22}\) vanish in the steady state.
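The machinery behind these curves is compact: propagate Eq. (58) via Eq. (61) and evaluate Eq. (64) at each time. In the sketch below (function names ours), \(M\) and \(D\) must be assembled from Eqs. (59)-(60) for the chosen couplings; the asymptotic \(\tilde{\sigma}\) of Eq. (62) is a continuous Lyapunov equation, for which SciPy provides a solver (`solve_continuous_lyapunov(M, Q)` solves \(MX+XM^{T}=Q\), so \(Q=-2D\)):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def evolve_sigma(M, D, sigma_init, t):
    """Eq. (61): sigma(t) = e^{Mt}(sigma(0) - sig_inf)e^{M^T t} + sig_inf,
    with sig_inf solving M sig + sig M^T + 2D = 0, Eq. (62)."""
    sig_inf = solve_continuous_lyapunov(M, -2.0 * D)
    U = expm(M * t)
    return U @ (sigma_init - sig_inf) @ U.T + sig_inf

def log_negativity(sigma):
    """Logarithmic negativity, Eq. (64), for a two-mode Gaussian state
    ordered as (q1, p1, q2, p2), in the convention where the vacuum
    variances equal 1/2 (consistent with Eqs. (54)-(56))."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    two_nu_sq = delta - np.sqrt(delta**2 - 4.0 * np.linalg.det(sigma))
    return max(0.0, -0.5 * np.log2(2.0 * two_nu_sq))
```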
The stronger these coupling constants are, the slower the sudden death of entanglement. We also notice that for small values the profile is nearly linear, but for larger values of these constants the logarithmic negativity may even exceed its initial value, and the shape of the curve representing it deviates considerably from the linear form. Figure 2 represents the logarithmic negativity for an initially separable state. It turns out that for a squeezing parameter \(r\) smaller than, but sufficiently close to, the threshold value \(r_{c}\), the state becomes entangled, with the amount of entanglement growing steadily with the increase of the coupling strengths in the equilibrium state. However, entanglement sudden death occurs in all cases, though at a slower rate for strong coupling.

## IV Conclusion

We have used the Lindblad master equation, under the assumption of linear dissipation, to derive the explicit analytical expressions of the diffusion coefficients leading a system of coupled harmonic oscillators, weakly coupled to a heat bath, to a steady state that retains the position-momentum correlations of each oscillator. It turns out that the main effect of the latter consists in renormalizing the frequencies and the friction coefficients of the subsystems. When the physical constraints are fulfilled, we find that the validity of the Einstein relation may extend to low temperatures. We investigated the evolution of the entanglement in a bipartite Bogoliubov bosonic system initially prepared in a thermal squeezed state, where we find that the stronger the coupling constants are, the slower the decay and sudden death occur. Entanglement generation is shown to take place for squeezing parameters lower than, but sufficiently close to, the critical value. These results reveal clearly that the intrinsic correlations of each subsystem, which persist in the equilibrium state, considerably affect the evolution of the other subsystems in the course of the evolution of the total system. An interesting extension of the model may consist in investigating the effect of inter-subsystem correlations and whether they have a positive or detrimental effect on, e.g., entanglement evolution.
2309.09771
Detecting High-Energy Neutrinos from Galactic Supernovae with ATLAS
We show that ATLAS, a collider detector, can measure the flux of high-energy supernova neutrinos, which can be produced from days to months after the explosion. Using Monte Carlo simulations for predicted fluxes, we find at most $\mathcal{O}(0.1-1)$ starting events and $\mathcal{O}(10-100)$ throughgoing events from a supernova 10 kpc away. Possible Galactic supernovae from Betelgeuse and Eta Carinae are further analyzed as demonstrative examples. We argue that even with limited statistics, ATLAS has the ability to discriminate among flavors and between neutrinos and antineutrinos, making it a unique neutrino observatory so far unmatched in this capability.
Alex Y. Wen, Carlos A. Argüelles, Ali Kheirandish, Kohta Murase
2023-09-18T13:52:25Z
http://arxiv.org/abs/2309.09771v2
# Detecting High-Energy Neutrinos from Galactic Supernovae with ATLAS

###### Abstract

We show that ATLAS, a collider detector, can measure the flux of high-energy supernova neutrinos, which can be produced from days to months after the explosion. Using Monte Carlo simulations for predicted fluxes, we find at most \(\mathcal{O}(0.1-1)\) starting events and \(\mathcal{O}(10-100)\) throughgoing events from a supernova 10 kpc away. Possible Galactic supernovae from Betelgeuse and Eta Carinae are further analyzed as demonstrative examples. We argue that even with limited statistics, ATLAS has the ability to discriminate among flavors and between neutrinos and antineutrinos, making it a unique neutrino observatory so far unmatched in this capability.

## I Introduction

The discovery of high-energy astrophysical neutrinos from outside the solar system, first reported by IceCube in 2013 [1; 2], opened a new window to the Universe and marked the start of an era of multimessenger astrophysics with high-energy neutrinos. Cosmic neutrinos are valuable probes of astrophysical processes [3] and neutrino physics [4; 5]. However, small neutrino cross sections [6] and the observed falling energy spectra [7] have so far limited their study to very large volume detectors proposed or built in naturally occurring media such as glaciers [8; 9; 10], lakes [11; 12], oceans [13; 14; 15], or mountains [16; 17; 18]. These detectors are sparser, and have relatively poor energy and angular resolution and particle identification capabilities, compared to the more densely instrumented detectors used in collider and neutrino physics. Even with limited statistics, IceCube measurements of the astrophysical neutrino flavor composition have already yielded some of the strongest constraints on long-range forces [19], quantum-gravity operators [20; 21], the neutrino lifetime [22; 23; 24; 25], and ultralight dark matter interactions [26; 27; 28], to name a few of many models [29; 30; 31]. Further information can be obtained if astrophysical neutrinos are detected by collider detectors, and transient neutrino sources may provide unique opportunities [32; 33; 34]. In particular, the next Galactic supernova (SN) has been expected to yield a large detectable neutrino signal not only in the MeV range but also in the GeV-TeV range, making neutrino detection with large statistics at multiple energies possible [35]. In this _Letter_, we show that large collider detectors can serve as a unique astrophysical neutrino telescope, enabling, among other things, precise measurements of the flavor ratio of astrophysical neutrinos. To demonstrate this, we consider the ATLAS detector [36; 37], a barrel-shaped multi-purpose particle detector situated at the Large Hadron Collider (LHC) at CERN, primarily designed to study reactions originating at a central beam collision point. The sensitive, heavy hadronic (or tile) calorimeter [36; 38; 39; 40] is both massive and instrumented, making it a viable fiducial volume for energetic neutrino events to be measured. Moreover, ATLAS also has a sophisticated muon spectrometer [41; 42; 43; 36; 44] surrounding the calorimeters, capable of identifying muon tracks and measuring their momenta. This detector combination makes neutrino detection viable.

## II High-energy neutrino emission from supernovae

Neutrinos play a critical role in the dynamics of a SN explosion.
In addition to the known [45; 46; 47] and detected [48; 49] prompt flux of MeV neutrinos, core-collapse SNe have also been predicted to be promising sources of high-energy neutrinos [35]. Recent SN observations, especially in the optical band, have provided strong evidence that interaction with dense, confined circumstellar material (CSM) transiently occurs as the SN shock wave propagates outwards [50; 51; 52; 53; 54; 55]. SN remnants (with ages of \(10^{2}-10^{3}\) yr), which are much older, have been established as cosmic-ray accelerators [56; 57]. Interacting SNe can also accelerate cosmic rays, in which case they should be efficient sources of high-energy neutrinos and gamma rays [58]. For the next Galactic SN, even ordinary SNe like Type II-P SNe would lead to sufficiently large fluxes of neutrinos that are detectable by many terrestrial neutrino detectors such as IceCube [35], and even mini-bursts from nearby galaxies could be observed [59; 60]. The time window of neutrino signals is predicted to be the 10-100 day range following an explosion [35]. Type II-P and IIn SNe make up approximately 50% and 3-7%, respectively, of all core-collapse SNe [61]. The result for SNe II-P may also hold for other SN types (II-L, IIb) as long as they have a sufficiently dense CSM [62; 63], so the focus on these two SN types should be representative of the majority of SNe with confined CSM. We note that predicted neutrino fluxes from SNe have some uncertainties, which primarily depend on CSM properties. The CSM density is written as \(\rho_{\rm cs}=Dr^{-2}\) for a wind-like density profile, where \(D\equiv 5\times 10^{16}\,{\rm g\,cm^{-1}}D_{*}\) is the CSM parameter and \(r\) is the radius from the center of the SN explosion. To take these uncertainties into account, we consider the range \(0.01<D_{*}<1.0\) for SNe II-P, and \(0.1<D_{*}<1.0\) for SNe IIn. This is sufficient for the purpose of this work, which is to demonstrate the feasibility of ATLAS-like detectors for studies of astrophysical neutrinos; other parameters such as the spectral index only moderately affect the overall detectability or have degeneracies with \(D_{*}\). See Refs. [35; 59] for details.

## III High-energy neutrino events in ATLAS

High-energy neutrinos may create signatures as an interaction within the detector itself (_starting_ events), or as the detection of a muon originating from an interaction within the Earth (_throughgoing_ events). For starting events, a charged-current (CC) or neutral-current (NC) deep inelastic scattering (DIS) interaction would leave an energetic hadronic recoil within the ATLAS hadronic calorimeter. A muon, or a tau decaying muonically, may instead be detected by the ATLAS muon spectrometer. For throughgoing events, signals can only come from \(\nu_{\mu}\) CC interactions in the surrounding bedrock, with a subdominant contribution from \(\nu_{\tau}\) (for simplicity, however, the \(\tau\) component is not taken into account); ATLAS may detect these muons as they traverse the muon spectrometer.
The expected number of starting events induced by neutrinos, \(\mathcal{N}\), in a volume of mass \(m\), is given by \[\mathcal{N}=\int dE_{\nu}\int dt\;\dot{\phi}_{\nu}(E_{\nu},t)\,\sigma_{\nu-{\rm nuc}}(E_{\nu})N_{\rm nuc}(m), \tag{1}\] where \(\dot{\phi}_{\nu}(E_{\nu},t)=(dN_{\nu}/dE_{\nu}dt)/(4\pi d^{2})\) is the all-flavor neutrino flux, \(dN_{\nu}/dE_{\nu}dt\) depends on models (e.g., via \(D_{*}\)), \(d\) is the distance to the SN, \(\sigma_{\nu-{\rm nuc}}(E_{\nu})\) is the neutrino-nucleon cross section, and \(N_{\rm nuc}(m)\) is the number of nucleon targets in the fiducial volume. For high-energy neutrinos, the total cross section \(\sigma_{\nu-{\rm nuc}}\) is dominated by DIS, and we assume that matter is made of isoscalar targets. Thus the cross section is averaged over the neutrino-proton and neutrino-neutron values.

Figure 1: _Event rates and observation significance of high-energy supernova neutrinos in ATLAS._ The throughgoing event rates represent the maximum number for a source that is always below the horizon. Below each panel, a plot of p-values for rejecting the atmospheric-neutrino background-only hypothesis is shown. The estimated rates for the Betelgeuse-like (B) and Eta Carinae-like (EC) SN scenarios are shown as red bars.

The integral in Eq. 1 is taken over the energy range \([10^{2},\,10^{6}]\,\mathrm{GeV}\). We expect the detection of starting events to be analogous to existing ATLAS studies [64; 65] heavily utilizing the missing transverse energy trigger, which is most efficient only above \(200\,\mathrm{GeV}\) [64; 65]. Therefore, our energy range is chosen to reflect that. At the high-energy end, \(E_{\nu}\dot{\phi}_{\nu}\) approximately falls as \(E_{\nu}^{-1}\), which yields a negligibly small rate above \(10^{6}\,\mathrm{GeV}\). Indeed, for all of the fluxes considered in this work, approximately \(90\%\) of all neutrinos are below \(10^{4}\,\mathrm{GeV}\). The integration over time depends on the SN type. We conservatively take 100 days for SNe IIn and 10 days for SNe II-P, based on the signal-to-background calculation in Ref. [35], as indicative of the characteristic time windows in which to search for neutrino signals. Finally, neutrinos and antineutrinos are computed separately owing to their distinct cross sections. For the ATLAS experiment, we assume a hadronic calorimeter mass of \(m=4000\) metric tons, and include both CC and NC contributions in \(\sigma_{\nu-\mathrm{nuc}}(E_{\nu})\) when computing the number of starting events. Throughgoing events are estimated with a Monte Carlo method using techniques described in Ref. [66]. Neutrino interactions are considered in a rock column 10 km long preceding the detector. A large number \(N^{\prime}_{\mathrm{sim}}\) of \(\nu_{\mu}\) CC interactions is simulated using LeptonInjector [66] as the event generator, with an energy distribution of the shape \(\int dt\,\dot{\phi}_{\nu}(E_{\nu},t)\sigma_{\nu-\mathrm{nuc}}(E_{\nu})\) for the appropriate flux, with muon flavor only. Muons with momenta pointing to the detector are propagated through the rock using PROPOSAL [67]. The number, \(n\), of muons that are energetic enough to reach the detector is recorded as part of the throughgoing signal. To get the number of expected events, the total number of CC interactions expected in the entire rock column is calculated with Eq. (1) (using only the CC fraction of the total \(\sigma_{\nu-\mathrm{nuc}}(E_{\nu})\)), and scaled with the factor \(n/N^{\prime}_{\mathrm{sim}}\).
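For orientation, Eq. (1) can be evaluated with a simple quadrature. The sketch below uses an illustrative power-law fluence and a rough neutrino-nucleon DIS cross-section scaling near a TeV; both are placeholders and not the model fluxes of Refs. [35; 59], so only the bookkeeping, not the normalization, should be taken from it:

```python
import numpy as np

m_fid_g = 4000e6            # 4000 metric tons of calorimeter, in grams
N_nuc = m_fid_g * 6.022e23  # nucleon targets (isoscalar approximation)

E = np.logspace(2, 6, 500)                    # neutrino energy in GeV
fluence = 1.0e-2 * (E / 1e3) ** -2            # nu/(cm^2 GeV), placeholder
sigma = 7.0e-36 * (E / 1e3) ** 0.4            # cm^2, rough DIS scaling

print(np.trapz(fluence * sigma * N_nuc, E), "expected starting events")
```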
For more details, see Supplemental Methods and Tables. The dominant background consists of atmospheric neutrinos. We estimate the background separately for starting and throughgoing events with the same method described above. The atmospheric flux, \(\dot{\phi}_{\mathrm{bkg}}\), is obtained using the NuFlux [68] interface, which interpolates the neutrino flux computed by MCEq [69] assuming the Hillas-Gaisser H3a cosmic-ray model [70] and the Sibyll 2.3c hadronic interaction model [71]. The solid angle required to calculate the atmospheric flux is based on the conservative angular resolution estimated in Ref. [72], namely \(17^{\circ}\) for starting events and \(5^{\circ}\) for throughgoing events, integrated over the same time interval as the corresponding SN.

## IV Results

We evaluate the event rates and the significance of observing high-energy neutrinos from two major representative types of core-collapse SNe (IIn and II-P) in ATLAS. The expected numbers of signal events varying with distance are shown for starting and throughgoing events in the top panels of Fig. 1. In the bottom panels of Fig. 1, we also show the p-values for rejecting the background-only hypothesis. Starting events for SNe II-P (IIn) would only constitute a significant signal if the SN were closer than approximately \(0.6\) kpc (3 kpc), which is small compared to the \(\sim 25\) kpc size of our Galaxy. However, throughgoing events are produced by a larger effective volume of the target, provided that the source in the sky is below the horizon for a sufficient period of time for neutrinos to interact in the bedrock around the detector. Optimistically, for a source that is always below the horizon, throughgoing events extend the detection horizon for SNe II-P (IIn) up to around 4 kpc (20 kpc).

Figure 2: _Throughgoing and starting event energy distributions._ Top: Neutrino energy \(E_{\nu}\) distribution of starting events in ATLAS for Type IIn and II-P SNe with \(D_{*}=1\) at distance \(d=10\,\mathrm{kpc}\) for 100 (dark blue) and 10 days (light blue) of data taking. The corresponding backgrounds from atmospheric neutrinos are shown as shaded gray regions. Bottom: Distribution of the muon momentum, \(p_{\mu}^{\mathrm{det}}\), at the detector for throughgoing events. The shape of the spectrum is due to the consideration of \(100\,\mathrm{GeV}\) neutrino events and above, which produces a flatter distribution of muons at lower energies.

Cases of the two massive stars Betelgeuse [73; 74; 75] and Eta Carinae [76; 77] as prospective Type II-P and IIn SNe are especially intriguing and are shown as demonstrative examples, as they are close enough to enable a large event rate. Neutrino energy distributions for starting events at the interaction point are shown in the top panel of Fig. 2. These spectra have the same shape for both starting and throughgoing events, although they would not be measurable for throughgoing events due to muon energy losses. The estimated background caused by atmospheric neutrinos is also shown in the same figure, integrated over both 10 and 100 days to allow a direct comparison between the corresponding SN cases. In addition, we show the muon momentum \(p_{\mu}^{\rm det}\) spectrum of throughgoing muons at the detector location in the bottom panel of Fig. 2.
The relation of this spectrum to the commonly measured transverse momentum \(p_{T}\) will depend on the orientation and position of the detector relative to the direction of the incoming neutrino flux; we have assumed that the flux arrives side-on (perpendicular to the beam axis). The momentum spectrum also gives an idea of the distribution of muon _sagitta_ that should be expected in the magnetized part of the detector, which will be discussed further in the next section. A key characteristic of this signal is the directionality of the muons, from below the horizon; this would not be produced by cosmic muons, and only a small background is produced by atmospheric muon neutrinos. This background is also shown in Fig. 2 (bottom). With an assumed throughgoing angular resolution of \(5^{\circ}\), a signal should be well correlated to a SN point source. For the throughgoing events presented in both Fig. 1(b) and Fig. 2 (bottom), we have assumed a SN source that is always below the horizon over the course of the 10 or 100 day observation period. However, this will not be the case for every SN event, given that the ATLAS detector, at a latitude of \(46.2^{\circ}\), will only see throughgoing events 100% of the time from objects in the celestial sky with a declination of \(\delta<-43.8^{\circ}\). We define the visibility factor, \(v\), of a celestial coordinate to be the fraction of time that it is below the horizon at the ATLAS latitude; hence any object with \(\delta<-43.8^{\circ}\) will have \(v=1\). The value of \(v\) decreases with increasing declination until \(\delta>43.8^{\circ}\), beyond which \(v=0\). Figure 3 shows the value of \(v\) in Galactic coordinates; in order to determine an event estimate for throughgoing events, the event number must be multiplied by \(v\). We also consider a couple of specific cases to illustrate more concrete scenarios of a hypothetical SN explosion. We consider a Betelgeuse-like (B) and an Eta Carinae-like (EC) SN explosion, which occur at distances of 0.22 kpc and 2.3 kpc, respectively. For (B) and (EC), we use \(0.01<D_{*}<1.0\) (assuming a SN II-P) and \(0.1<D_{*}<1.0\) (assuming a SN IIn), respectively. The results for these hypothetical signals are indicated in Fig. 1: for (B) we anticipate 15-150 (300-2,600) starting (throughgoing) events, and for (EC) we anticipate 6-21 (170-800) starting (throughgoing) events. The throughgoing signal for (B) is multiplied by a factor of \(v=0.46\) due to its location in the sky, which reduces its time below the horizon. The celestial positions of (B) and (EC) are shown in Fig. 3, mapped to a corresponding visibility \(v\).

Figure 3: _Throughgoing events visibility for ATLAS._ Skymap, in Galactic coordinates, showing the visibility, \(v\), of throughgoing events depending on the position in the sky. The locations of Betelgeuse (B) and Eta Carinae (EC) are marked by red stars. The dashed line indicates the celestial equator.
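The visibility factor has a closed form: a source at declination \(\delta\) is below the horizon at latitude \(\phi\) for hour angles with \(\cos H<-\tan\phi\tan\delta\), so \(v=1-\arccos[\mathrm{clip}(-\tan\phi\tan\delta)]/\pi\). A short sketch of this geometry (our own implementation, using the known declinations of the two stars):

```python
import numpy as np

def visibility(dec_deg, lat_deg=46.2):
    """Fraction of the sidereal day a source at declination dec_deg
    is below the horizon at latitude lat_deg (the factor v)."""
    lat, dec = np.radians(lat_deg), np.radians(dec_deg)
    x = np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0)
    return 1.0 - np.arccos(x) / np.pi

print(visibility(7.4))    # Betelgeuse (dec ~ +7.4 deg)   -> ~0.46
print(visibility(-59.7))  # Eta Carinae (dec ~ -59.7 deg) -> 1.0
```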
## V Discussion and Conclusion

We demonstrated the feasibility of ATLAS as a unique detector for astrophysical high-energy neutrinos. We also anticipate comparable capabilities for similar detectors like CMS [78]. Any kiloton-scale or larger, densely-instrumented, present or future detector may consider the prospect of detecting high-energy neutrinos from Galactic SNe. As a previous effort to characterize ATLAS as a viable detector of natural neutrinos, Ref. [72] focused on the precision measurement of atmospheric neutrinos at lower energies. Although the expected sample size was too small to yield interesting physics, it highlights the advantages of using a precision collider detector for neutrino physics. Given ATLAS' unique instrumentation, often unseen in dedicated neutrino detectors, it may be possible to discriminate between all three neutrino flavors. Consider a benchmark scenario with 88 (22 NC and 66 CC) starting events, which is roughly the expected signal from (B) with \(D_{*}=0.1\). We can broadly consider three distinguishable signal channels: (1) one hadronic shower, (2) one hadronic shower plus a muon, and (3) two hadronic showers. Each flavor of starting events will contribute to these channels in varying amounts, allowing us to estimate the expected signal in each channel and infer the flavor ratio (\(f_{e},f_{\mu},f_{\tau}\)). In Fig. 4 we show the allowed flavor ratios when assuming a \((1,1,1)\) flavor composition at Earth. In the same figure, we also show a more pessimistic 2-channel case assuming no sensitivity to channel (3) events. Such a flavor ratio measurement is expected to be comparable to or better than current measurements by dedicated experiments [24; 79]. A better understanding of the detector efficiency for throughgoing muons is required to incorporate the throughgoing signal (muon flavor only) into the flavor ratio measurement. Another advantage of ATLAS is a superior energy resolution compared to that of larger, dedicated neutrino detectors. The ATLAS hadronic calorimeter energy resolution for jet events is approximately given by \(\sigma/E=50\%/\sqrt{E/\text{GeV}}\oplus 3\%\) [39], translating to approximately \(1.6\%\oplus 3\%\) at \(1\,\text{TeV}\). This can be contrasted to, for example, the IceCube energy resolution of \(\sim\)15% for shower events [80], around an order of magnitude worse. Finally, ATLAS is expected to have the capability for neutrino-to-antineutrino separation. Assuming a typical path length of around \(5\,\text{m}\) through the ATLAS muon spectrometer barrel region, which is magnetized at approximately \(0.5\,\text{T}\), a \(1\,\text{TeV}\) muon track will have a sagitta of approximately \(500\,\text{\SIUnitSymbolMicro m}\) [81], well above the \(\sim\)30-\(40\,\text{\SIUnitSymbolMicro m}\) muon spectrometer alignment accuracy and the \(\sim\)80-\(90\,\text{\SIUnitSymbolMicro m}\) detector single-hit resolution quoted in Refs. [81; 41]. Only at approximately \(5\,\text{TeV}\) will the sagitta approach \(\sim\)100\(\,\text{\SIUnitSymbolMicro m}\), a length scale limited by the detector and alignment resolutions. Since the bulk of muons from both starting and throughgoing events are expected to be less energetic, it is likely that ATLAS hardware can determine the charge of most muons that traverse it. If successful, ATLAS might be the only neutrino detector that can measure the ratio of neutrinos to antineutrinos, which can be used to discriminate between different production mechanisms at the source.
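For reference, the sagitta scales quoted above follow from the standard relation \(s=0.3\,B L^{2}/(8p)\) (with \(B\) in tesla, \(L\) in meters, and \(p\) in GeV/c); a quick check of the numbers in the text:

```python
def sagitta_um(p_gev, B=0.5, L=5.0):
    """Sagitta s = 0.3*B*L^2/(8*p), converted to micrometers
    (B in T, L in m, p in GeV/c)."""
    return 0.3 * B * L**2 / (8.0 * p_gev) * 1e6

print(sagitta_um(1000.0))  # ~470 um at 1 TeV
print(sagitta_um(5000.0))  # ~ 94 um at 5 TeV, near the resolution floor
```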
In conclusion, the event rates and estimated hardware capabilities of ATLAS make it a promising high-energy neutrino telescope. We hope that our findings spur the development of new triggers and analyses to enable a precise measurement of the next nearby SN event.

## Acknowledgements

We thank Austin Schneider and Nicholas Kamp for their help with LeptonInjector. We thank Masahiro Morii and Melissa Franklin for insightful discussions about the capabilities of ATLAS. We thank the KITP for being an engaging space to work on physics, and this research was supported in part by the National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135. CAA is supported by the Faculty of Arts and Sciences of Harvard University and the Alfred P. Sloan Foundation. AK is supported by the NASA Grant 80NSSC23M0104. AYW is supported by the Harvard Physics Department Purcell Fellowship and the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number PGSD-577971-2023. KM is supported by the NSF Grant No. AST-2108466 and No. AST-2108467, and KAKENHI No. 20H01901 and No. 20H05852.

Figure 4: _Expected flavor triangle allowed region by ATLAS._ The flavor triangle measured at Earth by ATLAS using only starting events from a single close-by SN explosion, similar to Betelgeuse (B). The dark blue lines correspond to performing an analysis using all 3 signal channels described in the text, whereas the light blue lines only use channels (1) and (2). Solid and dashed lines indicate \(1\sigma\) and \(3\sigma\) significance, respectively.
2309.06324
Rotation Sensing using Tractor Atom Interferometry
We investigate a possible realization of an ultracold-atom rotation sensor that is based on recently proposed tractor atom interferometry (TAI). An experimental design that includes generation of a Laguerre-Gaussian-beam-based "pinwheel" optical lattice and multi-loop interferometric cycles is discussed. Numerical simulations of the proposed system demonstrate TAI rotation sensitivity comparable to that of contemporary matter-wave interferometers. We analyze a regime of TAI rotation sensors in which nonadiabatic effects may hinder the system's performance. We apply quantum optimal control to devise a methodology suitable to address this nonadiabaticity. Our studies are of interest for current efforts to realize compact and robust matter-wave rotation sensors, as well as in fundamental-physics applications of TAI.
Bineet Dash, Michael H Goerz, Alisher Duspayev, Sebastian C. Carrasco, Vladimir S. Malinovsky, Georg Raithel
2023-09-12T15:37:33Z
http://arxiv.org/abs/2309.06324v1
# Rotation sensing using tractor atom interferometry

###### Abstract

We investigate a possible realization of an ultracold-atom rotation sensor that is based on recently proposed tractor atom interferometry (TAI). An experimental design that includes generation of a Laguerre-Gaussian-beam-based "pinwheel" optical lattice and multi-loop interferometric cycles is discussed. Numerical simulations of the proposed system demonstrate TAI rotation sensitivity comparable to that of contemporary matter-wave interferometers. We analyze a regime of TAI rotation sensors in which nonadiabatic effects may hinder the system's performance. We apply quantum optimal control to devise a methodology suitable to address this nonadiabaticity. Our studies are of interest for current efforts to realize compact and robust matter-wave rotation sensors, as well as in fundamental-physics applications of TAI.

## I Introduction

Recent progress in atom interferometry (AI) has opened promising prospects in fundamental physics [1; 2; 3; 4; 5], precision measurements [6; 7; 8; 9] and practical applications [10], including geodesy, seismology and inertial sensing with atomic acceleration and rotation sensors. Focusing on rotation, the interferometric measurement relies on the Sagnac phase \(\phi_{s}=2EA\Omega/\hbar c^{2}\) arising between wave-packets of energy \(E\) that are counter-rotating around an area \(A\) in a frame rotating at angular velocity \(\Omega\). Since their first demonstration in 1913 [11], optical Sagnac interferometers have achieved sensitivities beyond \(10^{-10}\,\mathrm{rad/s}\) in fiber-optic gyroscopes (FOGs) and large-area ring-laser setups. The motivation to design Sagnac atom interferometers stems from the potential orders-of-magnitude enhancement in sensitivity that scales inversely with the associated de Broglie wavelength [12]. Previous experiments and proposals for the realization of Sagnac AIs include free-space [13; 14; 15; 16] and point-source interferometers [17; 18; 19], where atomic fountains or dropped atomic clouds propagate freely along interfering paths, as well as guided-wave AIs [20; 21; 22; 23]. Despite their much smaller particle flux and interferometric areas, these designs have recently surpassed the sensitivity of FOGs. However, free-space AIs can be space- and power-intensive, as their sensitivity scales as the interrogation time squared [12], fueling a push toward increasing drop heights and apparatus sizes in earth-based experiments. In order to achieve higher sensitivity combined with compact setups, multi-pass guided-wave designs have been proposed based on trapped ions [24], weak magnetic traps [25; 20; 26], time-averaged adiabatic potentials [27; 28; 29; 30], toroidal optical traps [31] and optical waveguides formed by collimated laser beams [23]. The performance of free-space and atom-guide AIs is often limited by the dispersion of the atomic wave functions along unconfined degrees of freedom, inefficient closure of interferometric paths, and Landau-Zener tunneling in spinor implementations [32; 33; 34]. Tractor atom interferometry (TAI) [35], a recently proposed technique, seeks to address these issues by uninterrupted three-dimensional confinement and transport of atomic wave packets along programmable trajectories using optical or other traps. Robust AI implementations for acceleration sensing using deep, spin-dependent optical potentials and optical tweezers have been explored in recent proposals [36; 37; 38]. In this paper, we present investigations on a possible realization of a rotation sensor using TAI.
Our azimuthal optical lattice and its matter-wave Hamiltonian are outlined in Sec. II. Aspects of the interferometer operation and its matter-wave dynamics are explained in Sec. III. From our numerical quantum-dynamics simulations presented in Sec. IV we infer the sensitivity and confirm agreement with semiclassical predictions that apply in the adiabatic limit. In Sec. V we then quantify and discuss possible nonadiabatic excitations during operation of the TAI interferometer. Finally, results that incorporate the application of optimal control theory to minimize detrimental nonadiabatic effects are presented in Sec. VI. The paper is concluded in Sec. VII.

## II Pinwheel optical lattice design

As depicted in Fig. 1, the principles of spinor-TAI [35] can be leveraged for rotation sensing by designing circular trajectories along which spin-dependent potentials carry trapped atomic wave-function components in opposite directions. The potentials must be designed to strongly confine the trapped wave functions in all spatial dimensions to minimize nonadiabatic effects and dispersion. The trajectory pairs are closed, with each trajectory covering a half- or full-integer number of loops. This can be realized by a pair of deep, spin-dependent, counter-rotating "pinwheel" optical lattices. Such lattices can be created using co-propagating Laguerre-Gaussian (LG) beams [39; 40], interference of Gaussian and hollow beams with a quadrupole magnetic moment [41], or interference of LG beams with plane waves in the presence of a conical magnetic field [42] for twisted boundary conditions. Here we focus on the first approach, which is an all-optical technique suitable for creating both bright (red-detuned) and dark (blue-detuned) lattices with several, widely tunable parameters. The electric field of an LG beam with azimuthal index \(l\), zero radial index, frequency \(f_{l}\) and wave vector \(k_{l}\) propagating along the positive \(z\)-direction is given in phasor notation and cylindrical coordinates \(\mathbf{r}\equiv(r,\theta,z)\) as \[LG_{l}(\mathbf{r},t)=\frac{\mathcal{E}_{l}(r,z)}{\sqrt{c\epsilon_{0}}}\,e^{i\left[2\pi f_{l}t+\Phi_{l}(z)+l\theta-k_{l}\left(z+\frac{r^{2}}{2R(z)}\right)\right]}\,, \tag{1}\] where \(c\) and \(\epsilon_{0}\) are the speed of light and the vacuum permittivity, respectively, and with the amplitude \[\mathcal{E}_{l}(r,z)=\sqrt{\frac{4P}{\pi|l|!w(z)^{2}}}\left(\frac{\sqrt{2}r}{w(z)}\right)^{|l|}e^{-\frac{r^{2}}{w(z)^{2}}}\,. \tag{2}\] \(P\) is the laser beam power and \(w(z)=w_{0}\sqrt{1+(z/z_{R})^{2}}\) is the beam-waist parameter with the Rayleigh range \(z_{R}=\pi w_{0}^{2}/\lambda\). The radius of the phase front's curvature is \(R(z)=z\left(1+(z_{R}/z)^{2}\right)\), and \(\Phi_{l}(z)=(|l|+1)\arctan{(z/z_{R})}\) is the Gouy phase. Along the \(z\)-axis, the LG beam has an optical vortex line featuring a phase singularity and vanishing intensity.
Due to the azimuthal (\(\theta\)) phase dependence, the interference of two co-propagating LG beams with modes \(l_{1}\) and \(l_{2}=l_{1}+m\) and frequencies \(f_{1}\) and \(f_{2}=f_{1}-\Delta f\) results in an intensity distribution \[\begin{split}|\mathcal{E}|^{2}(\mathbf{r},t)&=\mathcal{E}_{l_{1}}^{2}(r,z)+\mathcal{E}_{l_{2}}^{2}(r,z)\\ &\qquad+2\mathcal{E}_{l_{1}}(r,z)\mathcal{E}_{l_{2}}(r,z)\cos\left[2\pi(\Delta f)t-\Delta\Phi(z)-\frac{2\pi(\Delta f)z}{c}-\frac{\pi r^{2}}{c}\left(\frac{f_{1}}{R_{1}(z)}-\frac{f_{2}}{R_{2}(z)}\right)-m\theta\right]\,.\end{split} \tag{3}\] The salient feature of this interference pattern is the sinusoidal modulation of the intensity in the azimuthal (\(\theta\)) direction, captured by the very last term. In experimentally relevant cases, e.g., for a pinwheel optical lattice of radius \(10-100\,\mathrm{\SIUnitSymbolMicro m}\) rotating at \(10-1000\,\mathrm{Hz}\), the terms proportional to \(\Delta f/c\) and \(r^{2}/c\) inside the cosine are negligible. The difference in Gouy phase, \(\Delta\Phi(z)=m\arctan(z/z_{R})\), in principle twists the pinwheel pattern azimuthally as a function of \(z\). However, as described in the following, the atoms are further trapped along the \(z\) direction by a separate, far off-resonant one-dimensional static optical lattice with lattice planes extending transverse to \(z\). The twisting angle due to the variation of \(\Delta\Phi(z)\) within one spatial period of the static \(z\)-lattice is typically less than \(1\,\mathrm{mrad}\) and is therefore negligible. With these approximations, the optical potential near \(z=0\) reads \[V(\mathbf{r},t)\approx V_{1}(r)+V_{2}(r)\cos\left[2\pi(\Delta f)t-m\theta\right]\,, \tag{4}\] with \(V_{1}(r)=-\frac{\alpha}{2c\epsilon_{0}}\left(\mathcal{E}_{l_{1}}^{2}(r,0)+\mathcal{E}_{l_{2}}^{2}(r,0)\right)\) and \(V_{2}(r)=-\frac{\alpha}{c\epsilon_{0}}\mathcal{E}_{l_{1}}(r,0)\mathcal{E}_{l_{2}}(r,0)\), where \(\alpha\) is the polarizability of the selected atomic state. Therefore, a pair of LG beams with a small detuning \(\Delta f\) and with \(l\)-indices differing by \(m\) effectively creates a pinwheel optical lattice with \(m\) azimuthal lattice sites, rotating at a tunable angular velocity \(\omega=2\pi\Delta f/m\). A counter-rotating pinwheel lattice of similar size can be obtained from a second pair of LG beams with opposite detuning that are superimposed over the first pair. The pinwheel lattices can be made spin-selective by tuning the wavelengths of the beam pairs forming the lattice so that they trap different atomic spin states. A particular example of such spin states are the \(\left|5S_{1/2},F=1,m_{F}=0\right\rangle\) and \(\left|5S_{1/2},F=2,m_{F}=0\right\rangle\) states of \({}^{87}\)Rb, with the wavelengths of the respective pinwheel lattices set near the \(D_{1}\) line (\(\approx 795\,\mathrm{nm}\)), as described in Ref. [37]. Trapping in such lattices requires blue-detuned light, where the atoms are trapped near intensity minima. In that case, the photon scattering rate of the atoms in the lattice light is minimal, thereby minimizing both photon scattering-induced decoherence of the interferometer and decoherence caused by trap-laser intensity fluctuations.

Figure 1: Concept of TAI as a Sagnac rotation sensor. Split azimuthal atomic trapping potentials \(V_{+}(\theta)\) and \(V_{-}(\theta)\) containing coherently-split wave-packet components \(\Psi_{\pm}(\theta)\), represented here as well-localized dots, are counter-rotated in the instrument's rest frame at tunable angular velocities \(\pm\omega(t)\) along counter-wound circular trajectories of radius \(R\). This occurs in the presence of a background angular velocity \(\Omega\) of the instrument's rest frame against an inertial frame, in which the rotation speeds are \(\Omega\pm\omega(t)\). The value of \(\Omega\) is to be measured.
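Equation (4) together with Eq. (2) is simple to evaluate numerically; the sketch below follows the conventions above as written (the potential comes out in Hz when the polarizability is given in Hz/(V/m)\({}^{2}\), as in Fig. 2), with function names of our own choosing:

```python
import numpy as np
from math import factorial
from scipy.constants import c, epsilon_0

def lg_amp(r, l, w0, P):
    """Focal-plane amplitude E_l(r, z=0) of Eq. (2)."""
    return np.sqrt(4 * P / (np.pi * factorial(abs(l)) * w0**2)) \
        * (np.sqrt(2) * r / w0) ** abs(l) * np.exp(-(r / w0) ** 2)

def pinwheel_V(r, theta, t, alpha, l1, l2, w1, w2, P1, P2, df):
    """Rotating pinwheel potential of Eq. (4), with m = |l2 - l1| wells."""
    E1, E2 = lg_amp(r, l1, w1, P1), lg_amp(r, l2, w2, P2)
    V1 = -alpha / (2 * c * epsilon_0) * (E1**2 + E2**2)
    V2 = -alpha / (c * epsilon_0) * E1 * E2
    return V1 + V2 * np.cos(2 * np.pi * df * t - abs(l2 - l1) * theta)

# Azimuthal cut at the lattice radius, using the Fig. 2 parameters:
theta = np.linspace(0.0, 2 * np.pi, 512)
V = pinwheel_V(25.46e-6, theta, 0.0, -0.16, 20, 28,
               5.78e-6, 9.66e-6, 5.38e-3, 17.78e-3, 0.0)
print((V.max() - V.min()) / 1e6, "MHz azimuthal modulation depth")
```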
The beam parameters for the \(LG_{l_{1}}\)- and \(LG_{l_{2}}\)-modes in Eq. (4) must be chosen carefully to create a sufficiently deep and tightly confined radial potential in order to suppress wave-function dynamics in the radial direction. At the same time, this potential must go through a zero-intensity minimum to trap atoms with minimal coherence loss due to photon scattering. This can be achieved when both LG beams have similar maximum intensity, with the radial intensity maxima separated by more than one FWHM of the radial intensity distributions. For a Gaussian beam-waist ratio \(\eta=w_{0,2}/w_{0,1}\) and a power ratio \(P_{2}/P_{1}=\eta^{2}\sqrt{l_{2}/l_{1}}\), the radial intensity maxima are similar and separated by \(\Delta r\approx w_{0,1}\left(\eta\sqrt{l_{2}}-\sqrt{l_{1}}\right)/\sqrt{2}\) near the focus. Since the FWHM of \(LG_{l_{i}}\) is on the order of \(w_{0,i}\), for a given \(l_{1}\) and \(l_{2}\) the waist ratio \(\eta\) should be chosen such that \(\Delta r\gtrsim(1+\eta)w_{0,1}\). The number of desired azimuthal wells in the pinwheel lattice, \(m\), has an implicit effect on the best choice for \(\eta\) because \(m=|l_{2}-l_{1}|\). We have found that \(\eta\sim 1.3-1.8\) works for most \(l_{1}\) and \(l_{2}<50\). For the example of \({}^{87}\)Rb, Fig. 2(a) demonstrates the superposition of two \(795\,\mathrm{nm}\) laser beams with modes \(LG_{20}\) and \(LG_{28}\). In this case, \(\eta=1.67\) leads to an ideal pinwheel optical lattice with \(8\) sites, as shown in the optical potential in the transverse plane in Fig. 2(b). The trapping potential is about \(h\times 4.4\,\mathrm{MHz}\) deep and perfectly sinusoidal along the azimuthal direction, acting as a lattice with periodic boundary conditions. Ultracold \({}^{87}\)Rb atoms can be trapped in these potential minima with minimal photon scattering [37]. As described below, this potential is sufficiently deep to prevent wave-function dispersion or tunneling between the lattice sites. Along the radial direction, the potential is about \(h\times 274\,\mathrm{MHz}\) deep, and the radial trap frequency approximately equals \(5\) times the azimuthal trap frequency, \(\omega_{r}\approx 5\omega_{\theta}\). Next, we discuss TAI confinement in the axial (\(z\)) direction. Other experiments [41, 42] on ring-like traps have reported axial confinement using lattices created by counter-propagating laser modes. In the case of a rotating pinwheel lattice, the superposition of detuned counter-propagating LG beams can lead to unwanted axial movement. Therefore, here we suggest using co-propagating LG beam pairs to form the pinwheel lattice, together with a separate, far off-resonant one-dimensional optical lattice along the \(z\)-direction formed by counter-propagating Gaussian beams of a sufficiently large beam waist. This allows robust, all-optical axial confinement of the atoms on the pinwheel. For example, an \(h\times 300\,\mathrm{kHz}\)-deep optical lattice can be created by counter-propagating \(\approx 1\,\mathrm{W}\), \(532\,\mathrm{nm}\)-wavelength Gaussian beams, focused to a waist of \(100\,\mathrm{\mu m}\).
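Returning to the radial beam geometry: under the stated assumption that each LG ring peaks at \(r=w_{0}\sqrt{|l|/2}\), the design rule can be checked numerically. The short script below reproduces the Fig. 2 example; it is our own consistency check, not code from the paper.

```python
import numpy as np

def ring_radius(w0, l):
    # Focal-plane intensity maximum of an LG_l ring: r = w0 * sqrt(|l|/2)
    return w0 * np.sqrt(abs(l) / 2.0)

w01, w02, l1, l2 = 5.78e-6, 9.66e-6, 20, 28   # Fig. 2 parameters
eta = w02 / w01                                # ~1.67
r1, r2 = ring_radius(w01, l1), ring_radius(w02, l2)
print(eta, r1 * 1e6, r2 * 1e6, (r2 - r1) * 1e6)
# -> bright rings near 18.3 um and 36.1 um, separated by ~17.9 um, i.e.
#    Delta r = w01*(eta*sqrt(l2) - sqrt(l1))/sqrt(2); the dark trapping
#    ring at R = 25.46 um lies between them.
```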
Such a counter-propagating beam pair will generate an axial stack of many pinwheel lattices spaced by an axial lattice period of \(266\,\mathrm{nm}\). The structure of the pinwheel lattices, as shown in Fig. 2(b)-(d), remains largely constant over an axial range of about \(5\,\mathrm{\mu m}\) from the focus, suggesting that several tens of near-identical pinwheel lattices with tight 3D confinement can be stacked. With the radial and axial degrees of freedom being essentially frozen, the pinwheel optical lattices can be approximated as 1D lattices with periodic boundary conditions. Assuming that there is no linear background acceleration, the Hamiltonian of the system in a suitable inertial frame can be written as \[H_{\pm}(\theta,t)=-\frac{\hbar^{2}}{2I}\frac{\partial^{2}}{\partial\theta^{2}} +V_{0}\cos\left(m\theta+\phi_{\pm}(t)\right)\,. \tag{5}\] This expression is in the coordinate representation, and the labels "\(+\)" and "\(-\)" refer to the two atomic spin states (which are rotated in opposite directions). The kinetic term contains the effective moment of inertia \(I=m_{\mathrm{Rb}}R^{2}\) of a \({}^{87}\)Rb atom (atomic mass \(m_{\mathrm{Rb}}\)) rotating on a pinwheel of radius \(R\). The radius \(R\) is defined as the center of mass of the tightly-confined radial wave function. The cosine potential with \(m\) sites moves with phases \[\phi_{\pm}(t)=\int_{0}^{t}\omega_{\pm}(t^{\prime})\,dt^{\prime}=\int_{0}^{t} \left(\Omega\pm\omega(t^{\prime})\right)\,dt^{\prime}\,. \tag{6}\] The phases are controlled via the tunable angular velocity \(\omega(t)\). The goal of the present TAI scheme is to measure the constant rotation rate \(\Omega\) of the instrument's rest frame ("lab frame") against the inertial frame. The Hamiltonian in Eq. (5) includes the effect of the Euler force, while other non-inertial forces like the centrifugal and Coriolis forces can be neglected in this scheme. Figure 2: An \(8\)-site pinwheel \({}^{87}\)Rb optical lattice is created from \(\mathrm{LG}_{20}\) and \(\mathrm{LG}_{28}\) laser modes with \(\lambda=795\,\mathrm{nm}\) and respective Gaussian waists of \(w_{0,1}=5.78\,\mathrm{\mu m}\) and \(w_{0,2}=9.66\,\mathrm{\mu m}\), and powers of \(P_{1}=5.38\,\mathrm{mW}\) and \(P_{2}=17.78\,\mathrm{mW}\). The beams are assumed to be blue-detuned for an AC polarizability of \(-0.16\,\mathrm{Hz/(V/m)^{2}}\). (a) Optical potential in the \(r-z\) plane. (b) Optical potential in the transverse plane at the focus \(z=0\). (c) Potential along the radial direction for \(z=0\) and \(\theta=0\) and \(\pi/8\). (d) Potential along the azimuthal direction at \(R=25.46\,\mathrm{\mu m}\) and \(z=0\). During interferometer operation, the lattices are rotated much more slowly (\(\omega_{\pm}\lesssim 10^{2}\,\pi/\)s) than the radial trap frequency (\(\omega_{r}\sim 10^{4}\,\pi/\)s). In this regime, the relative displacement due to the centrifugal force, \(\Delta R/R\simeq(\omega_{\pm}/\omega_{r})^{2}\), is negligible, and the lattice radius \(R\) can be assumed to be constant throughout. As shown in Sec. IV, in the desired adiabatic regime the wave packets are at rest in frames that co-rotate with the pinwheel lattices, and therefore do not experience any Coriolis effect.
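For readers who want to experiment with the azimuthal dynamics, a minimal sketch of Eq. (5) on a periodic grid follows; the grid size and the choice \(V_{0}=h\times 2.2\,\mathrm{MHz}\) (so that the peak-to-peak depth \(2V_{0}\) is \(h\times 4.4\,\mathrm{MHz}\)) are our own illustrative assumptions.

```python
import numpy as np

HBAR, H, M_RB = 1.054571817e-34, 6.62607015e-34, 1.44316060e-25  # SI, 87Rb
R, M_SITES = 25.46e-6, 8
I = M_RB * R**2                      # effective moment of inertia

def hamiltonian(n, V0, phi=0.0):
    """Dense matrix for H = -(hbar^2/2I) d^2/dtheta^2 + V0 cos(m theta + phi)
    with periodic boundary conditions on an n-point theta grid."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    d = theta[1] - theta[0]
    lap = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / d**2
    lap[0, -1] = lap[-1, 0] = 1.0 / d**2        # wrap-around entries
    return -HBAR**2 / (2 * I) * lap + np.diag(V0 * np.cos(M_SITES * theta + phi))

V0 = H * 2.2e6
print(M_SITES * np.sqrt(V0 / I) / (2 * np.pi), "Hz")  # trap frequency, ~5 kHz
```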
## III Operation The interferometer is initialized by co-aligning the axes and azimuthal minima of the spin-dependent pinwheel lattices for the pair of utilized spin states. The internal spin states could correspond, e.g., to the \(\ket{+}=\ket{5S_{1/2},F=1,m_{F}=0}\) and \(\ket{-}=\ket{5S_{1/2},F=2,m_{F}=0}\) states in the ground-state manifold of \({}^{87}\)Rb. At \(t=0\), the wave function is prepared in the local ground state \(\Psi_{0}(\theta)\) of one particular site in the "+"-lattice, that is, \(\ket{\Psi(\theta,t=0)}=\Psi_{0}(\theta)\ket{+}\). The shape and the width of this state depend on the radius implicit in the effective moment of inertia \(I\), the number of lattice sites, \(m\), and the depth \(V_{0}\) of the trapping potential. For a given \(m\) and a sufficiently deep potential (large \(V_{0}\)), the wave packet \(\Psi_{0}(\theta)\) will be close to the ground state of a quantum harmonic oscillator with frequency \[\omega_{\theta}=m\sqrt{V_{0}/I}\,, \tag{7}\] obtained by a Taylor expansion of the potential in Eq. (5) at the first site. In the remainder of the paper, we will consider optical lattices with a radius of \(R=25.46\,\mathrm{\mu m}\) and \(m=8\) lattice sites, which corresponds to a lattice period of \(10\,\mathrm{\mu m}\) in the azimuthal direction. Without loss of generality, we choose lattice phases such that the initial wave packet is centered at \(\theta_{0}=\pi/8\) at the first site. Driving a momentum-transfer-free optical Raman transition at a suitable Rabi frequency \(\Omega_{\pm}\), we implement a \(\pi/2\)-pulse, \[\hat{U}_{\pi/2}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&i\\ i&1\end{pmatrix}\,, \tag{8}\] between the two spin components. This acts as a beamsplitter and creates an equal superposition of the two spin states. Thus, the wave packets in the two spin-dependent potentials immediately after the \(\pi/2\)-pulse are \[\Psi_{+}(\theta,0)=\frac{1}{\sqrt{2}}\Psi_{0}(\theta)\,,\quad\Psi_{-}(\theta, 0)=\frac{i}{\sqrt{2}}\Psi_{0}(\theta)\,. \tag{9}\] The duration of the \(\pi/2\)-pulse typically is negligible compared to the overall duration of the interferometer sequence. An experimentally suitable choice for the duration of the \(\pi/2\)-pulse could be, for instance, \(1.4\,\mathrm{\mu s}\), corresponding to a Rabi frequency of \(\Omega_{\pm}=\pm 2\pi\times 178\,\mathrm{kHz}\). After splitting, the two wave packets \(\Psi_{\pm}(\theta,t)\) evolve independently (i.e., without spin coupling) under the Hamiltonian in Eq. (5) with counter-rotating time-dependent angular velocities \(\pm\omega(t)\). For the time being, we assume that \(\omega(t)\) varies sufficiently slowly for the wave-packet evolution to be adiabatic, i.e., the \(\Psi_{\pm}(\theta,t)\) remain in the ground state of the local lattice-site potential at all times. For the azimuthal ramp of the pinwheel lattices, we first choose the smoothly varying function \[\omega(t)=\begin{cases}\omega_{0}\sin^{2}\left(\frac{\pi t}{2t_{r}}\right)&0 \leq t<t_{r}\quad\text{(10a)}\\ \omega_{0}&t_{r}\leq t<t_{r}+t_{\mathrm{loop}}\quad\text{(10b)}\\ \omega_{0}\cos^{2}\left(\frac{\pi t^{\prime}}{2t_{r}}\right)&T-t_{r}\leq t \leq T\quad\text{(10c)}\end{cases}\] with \(t^{\prime}=t-t_{r}-t_{\mathrm{loop}}\) and the total duration \(T=2t_{r}+t_{\mathrm{loop}}\). During the ramp-up time \(t_{r}\), the angular speeds of the lattice potentials in the instrument frame are accelerated from \(\pm\omega(0)=0\) to \(\pm\omega(t_{r})=\pm\omega_{0}\), and subsequently remain constant for a duration of \(t_{\mathrm{loop}}\). We first assume that \(t_{r}\) is sufficiently large to result in adiabatic dynamics.
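The ramp of Eq. (10) and the closure condition of Eq. (11) below can be verified with a few lines of code. The sketch reproduces the \(n=10\) configuration used later in Fig. 3; since \(\int_{0}^{T}\omega\,dt=\omega_{0}(t_{r}+t_{\mathrm{loop}})\) for this shape, closure amounts to choosing \(t_{\mathrm{loop}}=n\pi/\omega_{0}-t_{r}\).

```python
import numpy as np

def omega(t, omega0, t_r, t_loop):
    """Piecewise ramp of Eq. (10): sin^2 ramp-up, constant loop, cos^2 ramp-down."""
    if t < t_r:
        return omega0 * np.sin(np.pi * t / (2 * t_r))**2
    if t < t_r + t_loop:
        return omega0
    return omega0 * np.cos(np.pi * (t - t_r - t_loop) / (2 * t_r))**2

omega0, t_r, t_loop = 50 * np.pi, 100e-3, 100e-3
T = 2 * t_r + t_loop
ts = np.linspace(0.0, T, 20001)
phase = np.trapz([omega(t, omega0, t_r, t_loop) for t in ts], ts)
print(phase / np.pi)   # ~10.0, i.e. n = 10 cycles
```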
After the loop time \(t_{\mathrm{loop}}\), the lattices are decelerated from \(\pm\omega_{0}\) to \(\omega(T)=0\) by running the ramp-up control backwards. At the final time \(T\), the two lattice potentials and, thus, the final wave packets must coincide (both in the instrument frame (lab frame) and in the lattice rest frames). This is achieved by adjusting \(t_{\mathrm{loop}}\) such that \[\int_{0}^{T}\omega(t)\,dt=n\,\pi \tag{11}\] for an interferometer with \(n\) "cycles". The effective area of the TAI then equals \(n\times\pi R^{2}\). Some exemplary dynamics for adiabatic evolution under Eq. (10) are shown in Fig. 3. The interferometer has \(n=10\) cycles, as can be seen in panel (a). The expectation value of the momentum, seen in panel (b), follows exactly the movement of the potential, controlled by \(\omega(t)\). In the moving frames, defined here as the rest frames of the rotating pinwheel lattices, the wave packets remain perfectly stationary; see panels (c, d). The interferometric scheme is completed at the final time \(T\) by an inverse \(\pi/2\)-pulse, denoted \(\hat{U}_{\pi/2}^{\dagger}\), see Eq. (8), to recombine the two spin-dependent components. For a non-zero constant background rotation \(\Omega\) in Eq. (6), the wave packets \(\Psi_{+}(\theta,T)\) and \(\Psi_{-}(\theta,T)\) accumulate a differential phase \(\Delta\Phi\) that is reflected in the recombined state \[\hat{U}_{\pi/2}^{\dagger}\ket{\Psi(T)}=c_{+}(\theta,T)\ket{+}+c_{-}(\theta,T) \ket{-} \tag{12}\] with the populations \[|c_{\pm}|^{2}=\frac{1}{2}\pm\frac{1}{2}\mathrm{Re}\left[\eta e^{-i\Delta\Phi }\right] \tag{13}\] and the overlap of the final-time wave-packet components \[\eta=\langle\Psi_{-}(\theta,T)|\Psi_{+}(\theta,T)\rangle\,. \tag{14}\] For a closed interferometric path and adiabatic time evolution, \(\Psi_{-}(\theta,T)=\Psi_{+}(\theta,T)=\Psi_{0}(\theta)\), and thus \(\eta=1\). In this case, Eq. (13) simplifies to \[|c_{-}|^{2} =\frac{1}{2}-\frac{\cos\Delta\Phi}{2}=\sin^{2}\left(\frac{\Delta \Phi}{2}\right)\,, \tag{15a}\] \[|c_{+}|^{2} =1-|c_{-}|^{2}=\cos^{2}\left(\frac{\Delta\Phi}{2}\right)\,. \tag{15b}\] Up to an offset of an integer multiple of \(\pi\), the value of \(\Delta\Phi\) can be derived from a measurement of the population in at least one of the two spin states. ## IV Numerical simulation ### Quantum methods The Crank-Nicolson (CN) method [43, 44, 26] has commonly been employed for simulations of wave-packet dynamics in the position representation, including cases with moving potentials [35]. The method requires a computationally expensive (\(\mathcal{O}(N_{x}^{3})\)) matrix inversion. In practice, for systems with 1D scalar potentials and non-periodic boundary conditions, this cost is usually reduced to \(\mathcal{O}(N_{x})\) due to the tridiagonal structure of the Hamiltonian in position space. In the context of the azimuthal optical lattice, periodic boundary conditions introduce additional corner entries in the Hamiltonian matrix, necessitating a generalized Crout reduction, as explained in Appendix A. Here, we use CN simulations to study a TAI in a pinwheel optical lattice as a function of \(\Omega\). The results of CN simulations performed in the inertial frame according to the Hamiltonian in Eq. (5) are shown in Fig. 4(a) as the points labeled "CN", with the parameters listed in the figure caption. A time step \(\Delta t=50\,\mathrm{ns}\) and \(3200\) spatial grid points for the full range of \(\theta\in[0,2\pi]\) have been used.
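The readout relations, Eqs. (13)-(15), reduce to a one-line function. The following sketch maps a complex overlap \(\eta\) and a phase \(\Delta\Phi\) to the two spin populations; it is a direct transcription of the formulas, with no further assumptions.

```python
import numpy as np

def populations(eta, delta_phi):
    """Spin populations of Eq. (13); eta = <Psi_-|Psi_+> at the final time T."""
    p_plus = 0.5 + 0.5 * np.real(eta * np.exp(-1j * delta_phi))
    return p_plus, 1.0 - p_plus

# For a closed, adiabatic interferometer eta = 1 and Eq. (15) is recovered:
for dphi in (0.0, np.pi / 2, np.pi):
    print(populations(1.0, dphi))   # (1, 0), (0.5, 0.5), (0, 1)
```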
From the simulations, we verify that, for parameters as in Fig. 4, a pinwheel lattice of depth \(h\times 4.4\,\mathrm{MHz}\) effectively prevents any tunneling between the lattice sites. Consequently, the wave-packet dynamics in the co-rotating frames of reference is confined within one lattice site, or equivalently within a \(\theta\)-range of only \(2\pi/m\) in width. Exploiting the localization of the wave-packet components in their respective lattices, in the present case the spatial grid can be reduced in width by a factor of \(m=8\) to the region \([0,\pi/4]\) of a single lattice site by applying the transformation \[\begin{split}\hat{U}_{\pm}(t)&=\exp\left(\frac{-i \hat{L}_{z}\phi_{\pm}(t)}{\hbar}\right)\\ &\equiv\exp\left(-\int_{0}^{t}\!\omega_{\pm}(t^{\prime})\,dt^{ \prime}\,\frac{\partial}{\partial\theta}\right)\end{split} \tag{16}\] into the lattices' rest frames, in which \(\theta\) is relative to the moving lattice potentials. Applying the transformation in Eq. (16) to the Hamiltonian in Eq. (5), one finds the Hamiltonian in the lattice rest frames, \[\tilde{H}_{\pm}(t)=-\frac{\hbar^{2}}{2I}\frac{\partial^{2}}{\partial\theta^{ 2}}+V_{0}\cos\left(m\theta\right)-i\hbar\omega_{\pm}(t)\frac{\partial}{ \partial\theta}\,. \tag{17}\] To simulate the dynamics under the Hamiltonian in Eq. (17), we have found the simple split-propagator method [45, 46] to be effective. The results of such simulations, which use \(1024\) spatial grid points to represent the wave packets in the range \([0,\pi/4]\) and a time resolution of \(\Delta t=1\,\mathrm{\mu s}\), are shown in Fig. 4 as the points labeled "SP". Excellent agreement with the inertial-frame results from the CN method is observed. We have also verified the precision of the split-propagator method by comparing it to a Chebyshev propagation [47], which is exact to machine precision but slower by about a factor of four. It is noted in Fig. 4 that the implementation with faster ramps, cf. panel (b), is still adiabatic. The faster ramp allows a longer loop time, accommodating \(n=10\) cycles instead of just \(2\) within the same interferometer time \(T\), and thus results in a higher sensitivity. Figure 3: Adiabatic wave-function evolution in a rotating pinwheel optical lattice using \(\omega(t)\) from Eq. (10) with \(V_{0}=h\times 4.4\,\mathrm{MHz}\), \(t_{r}=t_{\mathrm{loop}}=100\,\mathrm{ms}\), \(\omega_{0}=50\,\pi/\mathrm{s}\), and \(\Omega=0\). (a) Expectation value of the azimuthal displacement of the wave packets relative to the initial position \(\theta_{0}=\pi/8\) of the ground state of the selected pinwheel-lattice well, \(\Delta\theta=\langle\theta\rangle-\theta_{0}\), as measured in the instrument’s rest frame (“lab frame”) for both counter-rotating states \(\Psi_{+}(t)\) and \(\Psi_{-}(t)\). The lattice has \(R=25.46\,\mathrm{\mu m}\) and \(m=8\) lattice sites. (b) Momentum expectation value in the lab frame, in units of the effective moment of inertia \(I=m_{\mathrm{Rb}}R^{2}\) of a \({}^{87}\)Rb atom rotating at \(1\,\pi/\mathrm{s}\). The control function for the angular velocity of the pinwheel lattices, \(\omega(t)\), is shown as the dotted black curve. (c) Azimuthal-angle expectation value of the wave-packet components in the lattices’ moving frames, i.e., the frames in which the lattices are stationary. (d) Angular-momentum expectation value in the moving frame, i.e., average angular momentum relative to \(\pm I\omega(t)\). The shaded regions in panels (b–d) indicate the standard deviations of the respective wave-function densities, that is, the widths of the wave packets.
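Because both derivative terms of Eq. (17) are diagonal in angular-momentum space while the lattice term is diagonal in \(\theta\), a Strang split step is particularly simple. The sketch below shows one such step on the reduced single-site domain; the grid and parameter choices are illustrative, and this is our reading of the split-propagator idea rather than the authors' actual code.

```python
import numpy as np

HBAR, M_RB = 1.054571817e-34, 1.44316060e-25
R, M_SITES = 25.46e-6, 8
I = M_RB * R**2

n = 1024
theta = np.linspace(0.0, 2 * np.pi / M_SITES, n, endpoint=False)  # one site
k = 2 * np.pi * np.fft.fftfreq(n, d=theta[1] - theta[0])  # multiples of m
V = 6.62607015e-34 * 2.2e6 * np.cos(M_SITES * theta)      # V0 cos(m theta)

def split_step(psi, w, dt):
    """One Strang step of Eq. (17) at instantaneous rotation rate w (rad/s):
    exp(-iV dt/2) in theta-space, exp(-i(kinetic + w*hbar*k) dt) in k-space."""
    half_v = np.exp(-0.5j * V * dt / HBAR)
    kin = np.exp(-1j * (HBAR * k**2 / (2 * I) + w * k) * dt)
    return half_v * np.fft.ifft(kin * np.fft.fft(half_v * psi))
```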
### Path integral method The interferometric response closely follows a semiclassical model based on path-integral propagators. The propagator phase of a wave packet equals \(\exp(iS(\mathbf{x}_{0}(t))/\hbar)\), where \(S(\mathbf{x}_{0}(t))\) is the action of the classical trajectory, \(\mathbf{x}_{0}(t)\), followed by the centroid of the wave packet. Consequently, the phase difference \(\Delta\Phi_{S}\) between our relevant pair of wave packets in the spin states \(|-\rangle\) and \(|+\rangle\) arises from the difference of the corresponding actions, \[\Delta\Phi_{S}=\frac{1}{\hbar}\int\left(\mathcal{L}(\mathbf{x}_{-},\dot{ \mathbf{x}}_{-},t)-\mathcal{L}(\mathbf{x}_{+},\dot{\mathbf{x}}_{+},t)\right)dt\,, \tag{18}\] where \(\mathbf{x}_{\pm}\) are the paths followed by the centroids of the split wave-function components, and \(\mathcal{L}(\mathbf{x}_{\pm},\dot{\mathbf{x}}_{\pm},t)\) are the corresponding Lagrangians. In TAI, the predetermined lattice trajectories serve as the classical paths, since the atomic wave functions remain tightly trapped at the minima of the relatively slowly-moving lattice potentials and possess no free degrees of freedom. That is, the \(\mathbf{x}_{\pm}\) are simply given by the locations of the selected sites of the optical lattices for \(|+\rangle\) and \(|-\rangle\), and forces of constraint cause no significant alterations. The Lagrangians for the states \(|+\rangle\) and \(|-\rangle\) differ in the presence of a non-zero background angular velocity \(\Omega\) due to the different lattice angular speeds in the inertial frame. In the semiclassical path-integral picture, Eq. (18) leads to the well-known Sagnac phase, \[\Delta\Phi_{S}=\frac{4m_{\mathrm{Rb}}\Omega A}{\hbar}\,,\quad A=\frac{R^{2}}{ 2}\int_{0}^{T}\omega(t^{\prime})\,dt^{\prime}\,. \tag{19}\] The final recombined population in the state \(|-\rangle\) on its respective potential, Eq. (15a) with \(\Delta\Phi=\Delta\Phi_{S}\), is shown in Fig. 4 as the solid curve. Figure 4(a) shows the result for \(\omega_{0}=10\,\pi/\mathrm{s}\) and two cycles (the minimum number of cycles possible for \(t_{r}=100\,\mathrm{ms}\) and \(t_{\mathrm{loop}}>0\), for a total of \(T=300\,\mathrm{ms}\)). The parameters for Fig. 4(b) match those for Fig. 3. The close agreement of the semiclassical results with the full quantum simulations (CN and SP) in both Figs. 4(a) and (b) validates the principles of TAI in the adiabatic limit, in which unwanted spin couplings, wave-packet excitation and tunneling on the spin-dependent lattice potentials do not affect the interferometric phase of the TAI. ### Rotation sensitivity Assuming that a phase resolution of \(2\pi/100\) can be experimentally achieved [35], the rotation sensitivity in Fig. 4(a) can be inferred to be about \(5\,\mathrm{mrad/s}\). This can be improved by increasing the lattice angular velocity for a given duration of the interferometric scheme. Fig. 4(b) shows the response for \(n=10\) cycles, achieved by increasing \(\omega_{0}\) to \(50\,\pi/\mathrm{s}\). In Fig. 4(b) the sensitivity is improved five-fold to roughly \(1\,\mathrm{mrad/s}\). The interferometer sensitivity can be enhanced further by increasing both the lattice radius \(R\) and the angular velocity \(\omega_{\pm}\) to an extent where the centrifugal force and nonadiabatic excitation still remain negligible.
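For orientation, the Sagnac scaling of Eq. (19) can be evaluated directly. The sketch below reproduces the roughly \(1\,\mathrm{mrad/s}\) figure quoted for the \(n=10\) case, assuming the \(2\pi/100\) phase resolution mentioned above.

```python
import numpy as np

HBAR, M_RB = 1.054571817e-34, 1.44316060e-25
R, n_cycles = 25.46e-6, 10

A = 0.5 * R**2 * n_cycles * np.pi     # enclosed area, since int(omega)dt = n*pi
scale = 4 * M_RB * A / HBAR           # Sagnac phase per (rad/s), Eq. (19)
print((2 * np.pi / 100) / scale, "rad/s")   # ~1.1e-3 rad/s sensitivity
```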
In a lattice of depth \(h\times 4.4\,\mathrm{MHz}\), the orbital radius increases by less than \(0.1\%\) when rotated at \(\omega_{\pm}\sim 1000\,\pi/\mathrm{s}\). Nonadiabatic excitations are better explained in the co-rotating lattice frames given by Eq. (16). The first two terms of the Hamiltonian \(\tilde{H}_{\pm}(t)\) in Eq. (17) describe stationary optical lattices, in which we initialize the wave functions in the respective ground states. As the lattices are accelerated, the last term, which is proportional to \(\omega_{\pm}(t)\hat{L}_{z}\), may cause nonadiabatic transitions into excited vibrational states within the initially populated lattice wells. In shallow lattices, modified tunneling behavior may occur (Bloch oscillations and Wannier-Stark localization). In the following, we develop an estimate as to what rotation sensitivities may be possible under these constraints. The departure from perfect adiabaticity can be quantitatively estimated in the momentum picture by exploiting the spatial periodicity of the Hamiltonian \(\tilde{H}_{\pm}(t)\). Following the well-known Bloch formalism, any eigenstate of \(\tilde{H}_{\pm}(t)\) can be characterized by a quasi-angular momentum \(\ell\) and a band index \(n\) as \(|\psi_{\ell}^{n}\rangle=e^{i\ell\theta}\,|u_{\ell}^{n}\rangle\), where \(u_{\ell}^{n}(\theta+2\pi)=u_{\ell}^{n}(\theta)\). The effective Hamiltonian for \(|u_{\ell}^{n}\rangle\) is then given by \[\mathcal{H}_{\ell,\pm}(t)=\frac{(L_{z}+\hbar\ell)^{2}}{2I}+V_{0}\cos\left(m \theta\right)+\omega_{\pm}(L_{z}+\hbar\ell)\,, \tag{20}\] where \(\mathcal{H}_{\ell,\pm}(t)\,u_{\ell}^{n}(t)=E_{\ell}^{n}(t)\,u_{\ell}^{n}(t)\). Scaling the Hamiltonian by an effective recoil energy \(E_{R}=\hbar^{2}m^{2}/(2I)\) gives a dimensionless eigenvalue equation in terms of \(\Theta=m\theta\), \[\left[-\frac{d^{2}}{d\Theta^{2}}-\frac{2i\ell}{m}\frac{d}{d\Theta}+\left(\frac{ \ell}{m}\right)^{2}+\frac{V_{0}}{E_{R}}\cos(\Theta)-\frac{2I\omega_{\pm}}{m \hbar}\left(-i\frac{d}{d\Theta}+\frac{\ell}{m}\right)\right]u_{\ell}^{n}( \omega_{\pm},\Theta)=\frac{E_{\ell}^{n}}{E_{R}}u_{\ell}^{n}(\omega_{\pm}, \Theta)\,. \tag{21}\] Eq. (21) offers an estimate of the relative magnitudes of the different terms in the Hamiltonian in the co-rotating frames. First, we consider tunneling effects for a ground-state wave function \(u_{0}^{0}(0,\theta)\) trapped in a static lattice (i.e., \(\ell=\omega_{\pm}=0\)). In this simple case, tunneling-induced wave-function delocalization is suppressed when \(V_{0}\gg E_{R}\). When a lattice of such potential depth is rotated at a constant angular velocity \(\omega_{\pm}\), the final term on the left-hand side in Eq. (21) mixes the ground state of the stationary lattice with excited states from higher bands. Figure 4: Interferometric response of TAI in a pinwheel optical lattice to a constant background rotation \(\Omega\) for \(V_{0}=h\times 2.2\,\mathrm{MHz}\), \(t_{r}=100\,\mathrm{ms}\), and \(T=300\,\mathrm{ms}\) (for other parameters see text). (a) Population in \(|-\rangle\) for a recombination after \(n=2\) cycles, with \(\omega_{0}=10\,\pi/\mathrm{s}\). The Sagnac curve is analytically calculated from Eq. (15a) with \(\Delta\Phi=\Delta\Phi_{S}\), Eq. (19). The “CN” and “SP” points are obtained from simulations of the quantum dynamics with the Crank-Nicolson and split-propagator methods, respectively (see text for details). (b) Population in \(|-\rangle\) after \(n=10\) cycles with \(\omega_{0}=50\,\pi/\mathrm{s}\), cf. the dynamics in Fig. 3.
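A compact way to probe Eq. (20) numerically is a plane-wave diagonalization in the basis \(e^{iq\theta}\) with \(q\) a multiple of \(m\), in which \(\cos(m\theta)\) couples neighboring basis states. The truncation below is our own choice; in the deep-lattice limit the lowest gap should approach \(\hbar\omega_{\theta}\) from Eq. (7).

```python
import numpy as np

HBAR, H, M_RB = 1.054571817e-34, 6.62607015e-34, 1.44316060e-25
R, M_SITES = 25.46e-6, 8
I, V0 = M_RB * R**2, 6.62607015e-34 * 2.2e6

def eigenvalues(ell, w, n_max=80):
    """Lowest eigenvalues of Eq. (20) for quasi-angular momentum ell and
    lattice rotation rate w; basis e^{i q theta}, q = m*j, |j| <= n_max."""
    q = M_SITES * np.arange(-n_max, n_max + 1)
    Lz = HBAR * (q + ell)                      # (L_z + hbar*ell) eigenvalues
    Hmat = np.diag(Lz**2 / (2 * I) + w * Lz)   # kinetic + rotation terms
    off = 0.5 * V0 * np.ones(2 * n_max)        # cos(m theta) couples j, j +/- 1
    Hmat += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(Hmat)

E = eigenvalues(0, 0.0)
print((E[1] - E[0]) / H, "Hz")   # ~omega_theta/(2*pi), about 5 kHz here
```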
Such mixing can be minimized if the lattice depth \(2V_{0}\) is much larger than the scale of the lattice-rotation-induced perturbation \(m\hbar\omega_{\pm}\). For the pinwheel lattice under consideration, the effective recoil energy \(E_{R}\) is \(\sim h\times 250\,\mathrm{Hz}\), and the scale of the lattice-rotation-induced perturbation at \(\omega_{\pm}\simeq 500\,\pi/\mathrm{s}\) is \(\sim h\times 80\,\mathrm{kHz}\). Therefore, a lattice depth of \(h\times 4.4\,\mathrm{MHz}\) can adequately suppress delocalization of the ground-state wave function and support a maximum angular velocity up to \(\omega_{\pm}\sim 500\,\pi/\mathrm{s}\) with minimal nonadiabatic excitations. Pinwheel lattices of radius \(2.5\,\mathrm{mm}\) rotated at \(1000\,\pi/\mathrm{s}\) can potentially improve the sensitivity of Fig. 4(b) by six orders of magnitude, to \(1\,\mathrm{nrad/s}\) for an operation time of \(1\,\mathrm{s}\). The signal-to-noise ratio can be enhanced by loading a larger number of atoms into the lattices. This can be achieved, for instance, by creating pinwheel lattices with more sites (i.e., larger \(m\)) and stacking several pinwheel lattices axially on a linear array of \(z\)-lattice sites. The axial stacking is limited to a range \(|z|\ll z_{R}\), where the locations of the radial minima stay similar enough to avoid excessive inhomogeneous broadening of the TAI phase \(\Delta\Phi\). For example, in Fig. 2(a), the position of the radial minima changes by less than \(0.01\%\) over \(z=\pm 1.5\,\mathrm{\mu m}\). Therefore, for a stack of 10 pinwheel lattices separated by the lattice period \(266\,\mathrm{nm}\), one can expect the average TAI fringe contrast to remain large for \(\Delta\Phi\) up to several \(100\times 2\pi\). In experimental realizations, LG beams with larger beam waists and Rayleigh ranges would allow a higher degree of axial stacking to improve the signal-to-noise ratio. ## V Nonadiabatic effects in lattice spin-up and -down In order to further optimize the gyroscope sensitivity and to increase the dynamic range in rotation sensing, the time \(t_{r}\) in Eq. (10) during which the lattices are accelerated should be reduced. Additionally, the ability of the interferometer to operate with shallower lattices, which will accommodate laser-power constraints and minimize signal loss due to photon scattering, has to be explored. When entering the nonadiabatic regime, the split wave function in each spin-dependent potential deviates from the ground state \(\Psi_{0}(\theta)\) in the lattice rest frames. We have studied the nonadiabatic effects numerically by simulating the time evolution under the Hamiltonian in Eq. (17). In Fig. 5, we show the fidelity under the smoothly-varying ramp function \(\omega(t)\) in Eq. (10a), which drives the ground state \(\Psi_{0}(\theta,t=0)\) in the initially selected optical-lattice well into a state \(\Psi(\theta,t_{r})\). The fidelity is given by the magnitude-square of the overlap between \(\Psi(\theta,t_{r})\) and the desired target state, \(\Psi_{\text{tgt}}(\theta,t_{r})\), which is the ground state of the potential rotating at the terminal constant speed \(\pm\omega_{0}=50\,\pi/\mathrm{s}\). The point marked by the red square in the top-right corner corresponds to the fully adiabatic time evolution shown in Fig. 3. We observe a transition from adiabatic to nonadiabatic evolution for a separation time between \(1\,\mathrm{ms}\) and \(100\,\mathrm{\mu s}\), depending on the depth of the potential.
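In simulations, the splitting fidelity plotted in Fig. 5 is just a squared overlap on the numerical grid. A minimal sketch, assuming uniformly sampled and normalized wave functions:

```python
import numpy as np

def fidelity(psi, psi_tgt, dtheta):
    """|<psi_tgt|psi>|^2 for wave functions on a uniform theta grid,
    both normalized such that sum(|psi|^2)*dtheta = 1."""
    return np.abs(np.sum(np.conj(psi_tgt) * psi) * dtheta)**2

# Example: two normalized Gaussians of width s displaced by s overlap with
# fidelity exp(-1/2) ~ 0.61.
theta = np.linspace(0.0, np.pi / 4, 1024)
d = theta[1] - theta[0]
g = lambda c, s=0.01: np.exp(-(theta - c)**2 / (2 * s**2)) / (np.pi**0.25 * np.sqrt(s))
print(fidelity(g(np.pi / 8), g(np.pi / 8 + 0.01), d))
```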
It is thus confirmed that the lattice acceleration conditions in Fig. 3 are deep in the adiabatic regime, allowing several orders of magnitude increase in acceleration before nonadiabatic effects actually become substantial. To gain a better understanding of the separation failure mode for small \(t_{r}\) and \(V_{0}\), and of the effects of nonadiabaticity on the overall interferometric scheme, we show in Fig. 6(a)-(e) the dynamics for \(t_{r}=150\,\mathrm{\mu s}\) and \(V_{0}=h\times 0.2\,\mathrm{MHz}\), marked with the red diamond in Fig. 5. Looking first at the initial separation phase, see left insets in Fig. 6(c)-(e), we can see that the wave packet is not readily accelerated to terminal speed by the accelerating optical lattice. The lab-frame momentum, shown in panel (c), shows very little initial acceleration of the wave packet. Unlike in the adiabatic case in Fig. 3(b), where \(\left\langle p\right\rangle_{+}\) readily reaches \(I\omega_{0}\) (with \(\omega_{0}=50\,\pi/\mathrm{s}\)) at \(t=t_{r}\), in Fig. 6(c) it does not even come close. Figure 5: Fidelity of the initial splitting operation for varying separation time \(t_{r}\) and depth \(V_{0}\) of the trapping potential. The separation fidelity is the squared magnitude of the overlap of the state \(\left|\Psi(t_{r})\right\rangle\) resulting from the evolution under Eq. (10a) with the ground state of the moving potential at \(t_{r}\). In the moving frame (lattice rest frame), shown in Fig. 6(d) and (e), both position and momentum are far from zero, which also contrasts against the adiabatic case in Fig. 3(c) and (d). In fact, initially the signs of momentum and position in the moving frame are opposite to that of the acceleration: as the lattice is accelerated to the left (counter-clockwise), the wave packet in the moving frame is displaced to the right (clockwise). To explain the observations in the previous paragraph, we first note that during the short separation time of \(150\,\mathrm{\mu s}\) the actual displacement is very small: note the scale factor of \(10^{-3}\) on the y-axis in Fig. 6(d). In the subsequent \(199.85\,\mathrm{ms}\), when the optical lattices loop at constant counter-rotating speeds \(\pm\omega_{0}\), the atoms eventually respond to the force that the trapping potential imparts on them. As a consequence, in the lattice rest frames the wave packets oscillate around zero, as seen in panels (d) and (e), and around the momentum \(\pm I\omega_{0}\) in the lab frame, as seen in panel (c). Visual inspection of Fig. 6(c) reveals that the oscillation period is close to the harmonic-oscillator period \(2\pi/\omega_{\theta}\), with \(\omega_{\theta}\) from Eq. (7), which is \(660\,\mathrm{\mu s}\) for \(V_{0}=h\times 0.2\,\mathrm{MHz}\). However, for this value of \(V_{0}\), the atoms oscillate in the anharmonic regions of the cosine potential in Eq. (5). As a result, we also find a breathing of the oscillation, i.e., the oscillation amplitude diminishes while the width of the wave packet increases; see the first and last \(1.5\,\mathrm{ms}\) in panels (c) and (e). The breathing is absent when the atoms remain confined to the near-harmonic sections of the cosine function, e.g., for potentials with larger values of \(V_{0}\). The physical picture that summarizes and underlies these observations is that for \(t_{r}\to 0\) and \(V_{0}\) sufficiently large, the lattices instantaneously speed up to \(\omega_{0}\) underneath the atoms.
If the corresponding kinetic energy in the lattice frame is less than \(2V_{0}\), the atoms subsequently undergo a sloshing oscillation in the lattice frame. At longer times, the oscillation exhibits collapse and quantum-revival phenomena caused by the anharmonicity of the potential. In Figs. 6(a)-(e), the ramp-down of the pinwheel lattice from \(\omega_{0}\) to a position at rest in the lab frame behaves fundamentally the same as the initial ramp-up: the wave packet does not slow down with the rapidly decelerating potential. Instead, the location of the oscillating wave packet at the time instant when the deceleration sets in determines the wave packet's state within the lattice well after the lattice slowdown is complete. The final state may range from less excited to more highly excited than before the deceleration. In Fig. 6(d) and (e), the latter is the case. Overall, the lack of fidelity seen in Fig. 5 and the resulting oscillatory dynamics have a detrimental effect on the contrast of the full TAI interferometric scheme. First, as shown in Fig. 6(b), the interferometer fails to close perfectly. Second, prior to the final recombination \(\pi/2\)-pulse the wave packets no longer match the ground state \(\Psi_{0}(\theta)\), neither in position, nor in momentum, nor in width, as seen in panels (d) and (e). Thus, the magnitude of the overlap \(\eta\) in Eq. (14) typically is much less than 1, and the contrast of the resulting populations \(|c_{\pm}|^{2}\) in Eq. (13) is correspondingly diminished. This result is shown in Fig. 6(a). For the given parameters, the achieved contrast is only \(24\%\). This falls well short of the contrast of the path-integral Sagnac curve, shown as the black dotted line, which exactly matches Fig. 4(b). Because for small \(t_{r}\) the overall process approximates the physics of two impulsive kicks applied to a wave packet in a well, there is also an erratic dependence of the TAI contrast on fine details. The phase of the wave-packet sloshing motion at the time instant of the second kick largely determines the visibility. The simplified two-pulse picture becomes more accurate at shorter \(t_{r}\); the picture essentially applies in the left third of Fig. 5. An additional factor that plays a role is that at small \(V_{0}\) and short \(t_{r}\), the effective two-pulse wave-packet drive may partially excite the wave packet into the continuum, causing further contrast loss. In the next section, we will attempt to correct these unwanted behaviors using methods of optimal control. ## VI Optimal control Having observed the detrimental effect of a separation time \(t_{r}\) that is too short, we consider the use of optimal control to improve the fidelity in Fig. 5 for moderate values of \(t_{r}\). Specifically, we seek to find an \(\omega(t)\) that is an alternative to the analytical shape in Eq. (10a) such that \(\Psi_{\pm}(\theta,t)\) reaches \(\Psi_{\pm,\mathrm{tgt}}(\theta,t=t_{r})\), that is, the ground state in the lattice rest frames. The optimized \(\omega(t)\) must maintain the boundary conditions \(\omega(0)=0\) and \(\omega(t_{r})=\omega_{0}\). To this end, we parametrize \[\omega_{\mathrm{opt}}(t)=\omega(t)+S(t)\delta\omega(t)\,, \tag{22}\] where \(\omega(t)\) is the original shape given by Eq. (10a), \(\delta\omega(t)\) is a correction to be optimized, and \(S(t)\in[0,1]\) is a fixed shape with \(S(0)=S(t_{r})=0\) to enforce the boundary conditions.
Here, we use a shape \(S(t)\) that switches smoothly on and off, following a Blackman window during the first and last \(20\%\) of the time window. The correction is initialized as \(\delta\omega(t)=0\). An optimized correction \(\delta\omega(t)\) can be obtained using any of the standard gradient-based quantum-control methods, including GRAPE [48] or Krotov's method [49]. Here, we have used the Krotov.jl package [50] within the QuantumControl Julia framework [51]. Within \(300\) iterations, using a square-modulus functional [52], we can bring the separation error from \(0.648\), see Fig. 5, down to \(1.26\times 10^{-5}\). The resulting optimized \(\omega_{\mathrm{opt}}(t)\) is shown in the left inset of Fig. 6(h), with the full resulting dynamics for the entire interferometric scheme in panels (g)-(k). The optimized control function for the ramp-down is the time inverse of the ramp-up one; see the right inset of panel (h). We observe a "throw and catch" behavior. The field ramps up rapidly to a relatively high (but still achievable) speed of \(252\,\pi/\mathrm{s}\), but then slows down and temporarily switches direction, before returning to the target speed of \(50\,\pi/\mathrm{s}\). The lab-frame momentum does not follow this rapid motion, but smoothly accelerates from \(0\) to the value corresponding to \(50\,\pi/\mathrm{s}\), as can be seen in the inset of Fig. 6(h), and very much mimics the adiabatic dynamics in Fig. 3(b). Likewise, the lattice-frame position, seen in panel (i), initially lags behind the accelerating lattice potential, but then smoothly catches up to the equilibrium position within the lattice frame. The subsequent dynamics while the optical lattices loop at constant speed \(\omega_{0}\) are near-identical with the adiabatic case in Fig. 3: both position and momentum are zero in the moving frame, see panels (i) and (k), and follow the position and momentum of the trapping potential in the lab frame, see panels (g) and (h). The ramp-down inverts the dynamics during the ramp-up, leaving the wave function in a state that is very close to the ground state of the lattice potential at rest. This results in a near-ideal interferometric response following Eq. (15), as shown in Fig. 6(f). This implies that the interferometric path in panel (g) is now perfectly closed, in contrast to the open path in panel (b). In principle, a "throw and catch" optimal-control solution can be found for even shorter \(t_{r}\). However, the shorter \(t_{r}\), the larger the amplitude that the control function \(\omega_{\mathrm{opt}}(t)\) will need to reach during the ramp-up and ramp-down phases. Hence, the maximum experimentally achievable angular control velocity will determine how far one may push to the left in Fig. 5. A more general approach to accelerating the ramp-up and ramp-down phases is to exploit the dependence of the boundary between adiabatic and nonadiabatic behavior on \(V_{0}\) (see Fig. 5). For instance, starting from the point marked by the red diamond in Fig. 5, one may _temporarily_ increase the depth of the potential, i.e., move up along the dashed line in Fig. 5, in combination with tuning \(\omega(t)\). However, an eigenstate of a shallower lattice well will not be an eigenstate of a deeper one, resulting in a breathing motion of the wave packet after the compression. To counter this, one would have to add another layer of "throw and catch" to suppress the breathing, or introduce additional control over the shape of the potential.
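The parametrization of Eq. (22) is straightforward to set up before handing \(\delta\omega(t)\) to an optimizer. A minimal sketch with the Blackman-shaped switch described above; the \(20\%\) window width follows the text, while everything else is an illustrative choice.

```python
import numpy as np

def blackman_rise(u):
    """Half Blackman window: 0 at u <= 0, 1 at u >= 1, smooth in between."""
    u = np.clip(u, 0.0, 1.0)
    return 0.42 - 0.5 * np.cos(np.pi * u) + 0.08 * np.cos(2 * np.pi * u)

def S(t, t_r, frac=0.2):
    """Fixed shape with S(0) = S(t_r) = 0, flat at 1 in the middle."""
    return blackman_rise(t / (frac * t_r)) * blackman_rise((t_r - t) / (frac * t_r))

def omega_opt(t, omega_ref, d_omega, t_r):
    # Eq. (22): analytic ramp plus shaped correction delta-omega
    return omega_ref(t) + S(t, t_r) * d_omega(t)
```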
The control functions we have obtained here are already quite simple and can be readily implemented. As an alternative or an augmentation to the numerically optimized controls, one may in the future explore _analytic_ control schemes under the umbrella of "shortcuts to adiabaticity" [53]. ## VII Conclusion In summary, we have presented the design of a rotation sensor based on the principles of tractor atom interferometry [35, 37]. An experimental setup can be realized using readily available instrumentation. The parameters for the pinwheel lattice, which is at the heart of the envisioned devices, can be obtained following the considerations discussed in Sec. II. Figure 6: Dynamics of the TAI interferometer for a nonadiabatic separation time \(t_{r}=150\,\mathrm{\mu s}\) and a shallow potential with \(V_{0}=h\times 0.2\,\mathrm{MHz}\), as marked by the red diamond in Fig. 5. All other parameters are as in Fig. 3. Panels (b)-(e) show the results under the analytic drive function \(\omega(t)\) given by Eq. (10), and panels (g)-(k) show the results for an optimized field \(\omega_{\mathrm{opt}}(t)\) (see text for details). Panels (a, f) show the interferometric response of the un-optimized and the optimized drive functions to a constant background rotation \(\Omega\). For comparison, the semiclassical Sagnac curve from Fig. 4(b) is included in panel (a). In the adiabatic limit, quantum-dynamics simulations of the sensor's single- and multi-loop operation agree well with semiclassical path-integral predictions, in which one simply enters the known tractor trajectories into the applicable Lagrangian. Our proof-of-principle simulations allow rotation sensitivities of about 1 mrad/s. We have provided a discussion and concrete examples that illustrate the utility of quantum control to realize fast beam splitters and ramps to prepare coherently-split wave-function components that counter-rotate at high rotation speeds, allowing higher sensitivity. Estimates that extrapolate pinwheel area and rotation speed to reasonable limits predict sensitivities approaching 1 nrad/s at a 1-second measurement time. Future investigations may further explore the benefits of quantum entanglement [54, 55, 56, 57] for increasing the sensitivity-bandwidth product, reducing sensor size, etc. Moreover, optimal-control-theory techniques, as utilized here to reduce the splitting time and to alleviate the influence of nonadiabatic effects and decoherence caused by photon scattering, may present a viable pathway to improve the performance of matter-wave interferometers, including future experiments at the International Space Station [58, 59]. ###### Acknowledgements. We thank Ansh Shah for useful discussions and initial computational work. The work at the University of Michigan was supported by the Army Research Office and DEVCOM Army Research Laboratory under Cooperative Agreement Number W911NF-2220155, and by the NSF Grant No. PHY-2110049. AD acknowledges support from the Rackham Predoctoral Fellowship at the University of Michigan. MHG and SCC acknowledge support by the DEVCOM Army Research Laboratory under Cooperative Agreement Numbers W911NF-16-2-0147 and W911NF-21-2-0037, respectively. VSM is grateful for support by a Laboratory University Collaboration Initiative (LUCI) grant from OUSD.
2309.09017
Triple Regression for Camera Agnostic Sim2Real Robot Grasping and Manipulation Tasks
Sim2Real (Simulation to Reality) techniques have gained prominence in robotic manipulation and motion planning due to their ability to enhance success rates by enabling agents to test and evaluate various policies and trajectories. In this paper, we investigate the advantages of integrating Sim2Real into robotic frameworks. We introduce the Triple Regression Sim2Real framework, which constructs a real-time digital twin. This twin serves as a replica of reality to simulate and evaluate multiple plans before their execution in real-world scenarios. Our triple regression approach addresses the reality gap by: (1) mitigating projection errors between real and simulated camera perspectives through the first two regression models, and (2) detecting discrepancies in robot control using the third regression model. Experiments on 6-DoF grasp and manipulation tasks (where the gripper can approach from any direction) highlight the effectiveness of our framework. Remarkably, with only RGB input images, our method achieves state-of-the-art success rates. This research advances efficient robot training methods and sets the stage for rapid advancements in robotics and automation.
Yuanhong Zeng, Yizhou Zhao, Ying Nian Wu
2023-09-16T15:11:34Z
http://arxiv.org/abs/2309.09017v1
# Triple Regression for Camera Agnostic Sim2Real Robot Grasping and Manipulation Tasks ###### Abstract Sim2Real (Simulation to Reality) techniques have gained prominence in robotic manipulation and motion planning due to their ability to enhance success rates by enabling agents to test and evaluate various policies and trajectories. In this paper, we investigate the advantages of integrating Sim2Real into robotic frameworks. We introduce the Triple Regression Sim2Real framework, which constructs a real-time digital twin. This twin serves as a replica of reality to simulate and evaluate multiple plans before their execution in real-world scenarios. Our triple regression approach addresses the reality gap by: (1) mitigating projection errors between real and simulated camera perspectives through the first two regression models, and (2) detecting discrepancies in robot control using the third regression model. Experiments on 6-DoF grasp and manipulation tasks (where the gripper can approach from any direction) highlight the effectiveness of our framework. Remarkably, with only RGB input images, our method achieves state-of-the-art success rates. This research advances efficient robot training methods and sets the stage for rapid advancements in robotics and automation. ## I Introduction Simulation plays a pivotal role in the development and validation of robotic systems by providing a virtual platform that closely mirrors real-world scenarios. In industrial applications, the reliability of robotic systems is often ascertained through multiple simulation tests. This process encompasses (1) the creation of a _digital twin_ representing the actual manufacturing environment; (2) the planning and execution of varied trajectories within the simulation; (3) a thorough assessment of each trajectory's outcomes; and (4) the deployment of the most effective trajectory in a real-world setting. The focus of this research is to enhance efficiency by automating the aforementioned stages. Navigating simulation-to-reality (Sim2Real) applications presents two pivotal challenges, often referred to as the _reality gap_. First, how to build an accurate enough digital twin that captures the real-world scenes in a simulation using sensor (RGB or RGBD camera) data? Second, how to seamlessly apply the simulation result in real-world situations? The first challenge is a fundamental problem in multi-view geometry. While recent advancements have tackled this using 6D pose estimation [1, 2] or 3D scene reconstruction [3, 4], there are limitations. The former method struggles to generalize to unfamiliar objects. While the latter can intricately map complex scenes, it often obscures specific object details, making it hard to discern between interactive and stationary objects. The second challenge arises from the unreal nature of simulated physics and dynamics. Today, even state-of-the-art robot simulators fall short of accurately modeling physical phenomena, such as friction, impact, and deformation. To tackle the challenges above in Sim2Real and digital twin robotic applications, we introduce the triple regression framework for cameras with undetermined intrinsic properties (e.g., focal length). The initial regression aligns the cameras in the real world with those in the simulated environment. Leveraging image segmentation, we can estimate the target object's location, and subsequently place the approximated object into the simulation space. This transformation is derived from the second regression.
Finally, the third regression is employed to compensate for errors originating from the simulated robotic trajectory and dynamics. Employing the triple regression framework, we propose a conceptual framework for text-to-action tasks, as shown in Fig. 1. The process begins with using semantic grounding to generate a joint task representation, breaking down the task into multiple grab-and-place actions. Using the segmentation modules [5], the dimensions and positions of these 3D objects are determined and integrated into a simulation engine to create a real-time digital twin. Within the simulation, various trajectories are created. A Visual Question Answering (VQA) model ascertains task success. Subsequently, successful simulations are implemented in the real world. This methodology offers an automated Sim2Real workflow, notably requiring only a standard RGB camera and rudimentary site measurements. We conducted an experiment of picking up a jar at a random position and pouring its water into a cup at another random position. This experiment involves the robot's control of rigid bodies and fluids, as well as planning to overcome random factors. Fig. 1: **Conceptual text-to-action framework: Tasks described in language are converted into a joint representation which records the spatial and temporal information of the scene and task. A digital twin is created based on semantic grounding and camera observation. Motions are planned in the simulator. VQA is used to ascertain the success of the performed action.** Experimental results show that our methodology is able to provide an offline and zero-shot solution (no training for the robot in real space) and to reduce the robot's time to complete the task by at least \(50\%\) compared to the state-of-the-art approach. Moreover, our methodology is able to improve the success rate by \(75\%\) under conditions of randomized object placement. Our contribution can be summarized as follows: (1) we present a systematic framework to tackle the reality gap stemming from constructing real-time digital twins and migrating simulated trajectories to reality; (2) we propose a Sim2Real solution for text-guided robot manipulation tasks; (3) we demonstrate its effectiveness through physical experiments on a Kinova Gen 3 robot, comparing our method with other state-of-the-art solutions. ## II Related works **Sim2Real.** Recent breakthroughs in simulation-to-reality (Sim2Real) transitions now allow robots to train on intricate tasks within simulations and then replicate those abilities in the actual world. For example, recent works have employed Sim2Real techniques to enhance a robot's proficiency in grasping objects [6] and maneuvering autonomous vehicles [7]. These strides forward are attributed to the application of machine learning methods that initially train robots within simulations, which are later refined in real-world settings. Consequently, the Sim2Real methodology is rapidly gaining traction as a favored strategy for crafting agile and adaptable robots suited for dynamic, real-world scenarios. **Robot motion planning.** The motion planning problem can involve geometrically generating the path (path planning) or, more broadly, outputting interactive trajectories that take into account kinetics, control features, robot poses, and movements of nearby objects. Classical motion planning algorithms use graph search (e.g., \(A^{*}\)), random sampling (e.g., the rapidly exploring random tree and the probabilistic roadmap method), and curve interpolation methods.
Recent advancements propose planners based on supervised learning [8] and reinforcement learning [9, 10]. These developments emphasize enhancing the reliability of plans amidst uncertainties and safeguarding human interactions within co-working spaces [10]. The overarching aim of these initiatives is to empower robots to operate autonomously and adeptly in fluctuating and unpredictable settings. **Digital-twin.** Most Sim2Real approaches learn a policy by exploring the environment and then transferring the learned outcome to reality. These applications use a simulator as a playground and do not benefit from up-to-date information. In comparison, digital-twin applications attempt to create a true-to-life replica in the simulator based on up-to-date data. Most applications [11] of digital twins involve predicting the problems and errors for the future states of the physical twin, providing an opportunity to plan the systems accordingly. **Grasp and manipulation.** Modern approaches to implementing learning-based robotic agents that are capable of performing grasp and manipulation tasks typically fall into three categories. (1) The classical robotic grasping agent [12, 13] relies on object pose estimation and grasping-dataset matching [14, 15]. While these methodologies demonstrate effectiveness, they cannot generalize to grasping previously unseen objects. (2) Exploration-based learning agents [16] have demonstrated success in robotic grasp and manipulation tasks in simulated environments and generalize well to unseen objects. However, the reality gap can lead to overfitting to the simulated environment [17], inaccurate results, and unstable behaviors. (3) More recent discoveries in end-to-end models [18, 19] that map scene images to action sequences are capable of performing complex tabletop robotic manipulation. These methods use an action-centric approach, meaning that they do not use any explicit representations of object poses, but they are known to require copious amounts of data and are mostly task-specific. ## III Method ### _Joint task planning module_ The spatial-temporal-causal And-Or graph (STC-AOG) is used as the task planning model to represent the goal as well as the current cognitive state of the robot. This structure allows us to associate spatial, temporal, and causal relationships within the task description. The task of _pouring water from a jar at a random position to a cup at another random position_ is showcased in Figure 2. In general, an And-Or graph consists of nodes and edges. The set of nodes includes Or nodes, And nodes, and Terminal nodes. Each **Or node** specifies the Or relation: only one of its children nodes would be performed at a given time. An **And node** represents the And relation and is composed of several children nodes. Each **Terminal node** represents a set of entities that cannot be further decomposed. The edge represents the top-down sampling process from a parent node to its children nodes. The root node of the And-Or tree is always an And node connected to a set of And/Or nodes. Each And node represents a sub-task which can be further decomposed into a series of sub-tasks or atomic actions. **Causal relation.** Causal knowledge represents the pre-conditions and the post-effects of atomic actions. We define this knowledge as fluent changes caused by an action. A fluent can be viewed as an essential property of a state that can change over time, e.g., the temperature in a room or the status of a heater. For each atomic action, there are pre-conditions characterized by certain fluents of the states. For example, the water in the experiment should initially be in the jar and then be transferred to the cup. Fig. 2: **Representation of pouring water from a jar to a cup with STC-AOG. The data structure reduces a sentence into subgoals for task planning.**
For each atomic action, there are pre-conditions characterized by certain fluents of the states. For Fig. 2: **Representation of pouring water from a jar to a cup with STC-AOG. The data structure reduces a sentence into subgoals for task planning.** example, the water in the experiment should initially be in the jar and then transferred to the cup. **Temporal relation.** Temporal knowledge encodes the schedule for an agent to finish each sub-task. It also contains the temporal relations between atomic actions in a low-level sub-task. The sub-task of preparing in this study, for example, consists of _picking up jar_ and _pouring water from the jar to the cup_. **Spatial relation.** Spatial knowledge represents the physical configuration of the environment that is necessary to finish the task. In our case, to pick up the jar, the robot needs to know the location and the rotation jar. This is determined by the following triple regression framework. ### _Triple Regression for Matching Simulation and Reality_ The triple regression framework is designed to align the virtual environment with the real world. The workflow is shown in Fig 3. The framework addresses reality gaps that are caused by (1) intrinsic discrepancies between the real and simulated camera, (2) errors while comprehending scenes and determining object locations, and (3) differences in executing planned trajectories between simulated robot and real robot. Equation 1 gives a formal definition of aligning the parse graph in simulation, \(pg^{s}\), with the parse graph in reality \(pg^{r}\), by aligning the camera, scene, and robot control separately. \[P(pg^{s}|pg^{r})=\prod_{i\in\{\text{camera, scene, robot}\}}P(pg^{s}_{i}|pg^{r}_{i}) \tag{1}\] We define our symbols as follows. for a point \(\textbf{p}=(x,y,z)\) and its homogenous representation \(\mathbf{\tilde{p}}=(x,y,z,1)\), we define \(\textbf{u}=(x_{r},y_{r})\) to be the point in image captured by the real camera and \(\textbf{v}=(x_{s},y_{s})\) to be the point in image captured by simulated camera. Their representations in homogeneous coordinate is \(\mathbf{\tilde{u}}=(x_{r},y_{r},1)\) and \(\mathbf{\tilde{v}}=(x_{s},y_{s},1)\). **Camera alignment.** The goal of the first regression is to align real and simulated cameras. In other words, we aim to find a projection from **v** to **u**. In linear cameras, this projection can be modeled as in equation 2, where \(M\) is defined in equation 3. \[\mathbf{\tilde{u}}=M\mathbf{\tilde{v}} \tag{2}\] \[M=\begin{bmatrix}A_{2\times 2}&b_{2\times 1}\\ c_{1\times 2}&1\end{bmatrix} \tag{3}\] To estimate \(M\), we construct a reference scene (table and camera) to mirror reality. 2D locations of multiple reference points (eg. table corner and table center) captured in both simulated and real cameras are recorded. \(M\) can then be derived with constrained multivariate regression. Besides projective transformation, we also consider the 2D scaling and affine transformation. Table I provides the final comparative analysis of the three regression methods. **Camera calibration and object allocation** The goal of this phase is to estimate the intrinsics of the simulated camera. Using this we can determine the 3D location of a point with 2D pixel locations. Camera in the simulation environment can be modeled as a matrix \(C\) formed by multiplying intrinsic \(I\) and extrinsic \(E\) defined in equation 4 and 5. 
**Camera calibration and object allocation.** The goal of this phase is to estimate the intrinsics of the simulated camera. Using this, we can determine the 3D location of a point from its 2D pixel location. The camera in the simulation environment can be modeled as a matrix \(C\) formed by multiplying the intrinsic matrix \(I\) and the extrinsic matrix \(E\), defined in Equations 4 and 5.

\[I=\begin{bmatrix}f_{x}&0&c_{x}&0\\ 0&f_{y}&c_{y}&0\\ 0&0&1&0\end{bmatrix} \tag{4}\]

\[E=\begin{bmatrix}R_{3\times 3}&t\\ 0_{1\times 3}&1\end{bmatrix} \tag{5}\]

With this model, capturing an image can be modeled as transforming a point's world position \(\mathbf{\tilde{p}}\) to its pixel position \(\mathbf{\tilde{v}}\) as

\[\mathbf{\tilde{v}}=IE\mathbf{\tilde{p}}=C\mathbf{\tilde{p}} \tag{6}\]

The rotation matrix \(R\) and the 3D translation \(t\) in Equation 5 can be obtained from the simulation environment. By sampling multiple points in the simulation and the camera image, \(I\) can be estimated by solving constrained least squares problems. In our setup, all points were placed on planes (e.g., the table) with known normal \(\textbf{n}=(a,b,c)\). Combining this constraint, we can solve for \(\mathbf{\tilde{p}}\) given \(\mathbf{\tilde{v}}\) in Equation 6 to determine the 3D location of a point.

Segmentation modules are used to place the objects. Anchor points and bounding boxes are generated with pattern matching or text-to-image grounding models [20]. Contour lines of the objects are extracted using [5], and key points are subsequently determined for cuboid and cylindrical objects. We use these key points to align objects in simulation and reality.

**Regression to accommodate miscellaneous errors.** Another source of error is the control discrepancy and inaccuracy in constructing reference objects in the simulation. We employ another regression model (Equation 7) to correct the robot movements. We collect points \(\textbf{p}_{s}\) and manually adjust their _should-be_ positions \(\textbf{p}_{r}\) during robot execution. The coefficient matrix \(D\) can be obtained by solving regression problems. Note that \(\mathbf{\tilde{p}}_{s}\) and \(\mathbf{\tilde{p}}_{r}\) are the homogeneous representations of \(\textbf{p}_{s}\) and \(\textbf{p}_{r}\).

\[\mathbf{\tilde{p}}_{s}=D\mathbf{\tilde{p}}_{r} \tag{7}\]

### _Validating Sim2Real matching_

Our approach to aligning simulation with reality during robot execution involves a multi-step strategy. Firstly, we emphasize the dynamic nature of conditions, termed _fluents_, which can evolve over time. To capture these evolving states, we continuously track changes in fluents at multiple time points to ensure synchronization between the simulated and real-world environments. Secondly, we introduce a structured questionnaire that prompts users to provide binary true-false responses pertaining to the observed fluents. This questionnaire serves as a reliable ground truth reference, enabling us to quantitatively assess the correctness of simulation outputs. The questionnaire consists of four questions: (1) Is the robot ready to pick up the jar? (2) Has the robot already picked up the jar? (3) Is the jar above the cup? (4) Is the robot pouring water into the cup? Lastly, we leverage advanced Visual Question Answering (VQA) models to analyze and interpret the questionnaire responses. The model takes in an image of each fluent and answers the aforementioned prompts. An example of fluent matching is shown in Figure 4.

\[P(pg^{r}|pg^{s})=\exp(-\sum_{i}|F_{i}^{r}-F_{i}^{s}|) \tag{8}\]

Given the parse graph in simulation \(pg^{s}\), its discrepancy with the parse graph in reality, \(pg^{r}\), is estimated quantitatively by the Hamming distance between the fluents in reality and the fluents in simulation, as shown in Equation 8. \(F_{i}^{r}\) (fluent in reality) and \(F_{i}^{s}\) (fluent in simulation) are the VQA query results (true/false) for each question \(i\) in the questionnaire above.
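The fluent-matching score of Equation 8 is straightforward to compute once the VQA answers are collected; a minimal sketch follows (the fluent values are illustrative):

```python
import math

def sim2real_score(fluents_real, fluents_sim):
    """P(pg^r | pg^s) from Eq. 8: exp(-Hamming distance) between the
    binary VQA answers observed in reality and in simulation."""
    assert len(fluents_real) == len(fluents_sim)
    hamming = sum(r != s for r, s in zip(fluents_real, fluents_sim))
    return math.exp(-hamming)

# Answers to the four questionnaire items (ready, picked, above, pouring).
real = [True, True, False, False]
sim  = [True, True, True, False]
print(sim2real_score(real, sim))   # exp(-1) ~ 0.368, one mismatched fluent
```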
## IV Experiment

The experiment section is organized into three parts: (1) ablation studies on the triple regression framework, (2) grasp-and-pour experiments on a physical robot, and (3) comparisons of VQA models.

### _Ablation study on the triple regression framework_

In this experiment, we validate the necessity of using the projective transformation to model the perspective discrepancy between the real and simulated cameras. A consumer-grade Microsoft RGB camera is mounted at a known location and orientation above the table. The table is modeled in Nvidia Omniverse to match the real dimensions, and the camera is placed with similar extrinsics that mirror reality. We align the cameras using three types of regression: the full projective transformation, 2D scaling (\(A=0\) and \(c=0\) in Equation 3), and affine (\(c=0\) in Equation 3). A cube with known dimensions is placed at known locations both in reality and in simulation. We then compare the pixel locations of the 6 corners of the cube captured in simulation and in reality. Table I summarizes the correlation between the degrees of freedom and the achieved Mean Square Error (MSE), signifying the necessity of applying the projective transformation. Using this approach, errors from the Sim2Real point position mapping can be kept as low as 2.7 pixels on average.

Fig. 3: **Workflow of triple regression framework.** The framework creates digital twins by 1) camera shots, 2) semantic-based segmentation and contour extraction, 3) key point identification, and 4) object placement in simulation. Plans are generated and simulated. Coordinates of successful plans are corrected and executed in reality.

### _Grasp and pour experiment_

In this section, we first introduce our experiment settings. Subsequently, we explore multiple baseline models for robot planning and control. We then compare our method with the baselines and human performance, and finally, we summarize the results and failure cases.

(1) _Setup_: We conducted pick-and-pour experiments using a Kinova Gen3 7-DoF robot, equipped with a Robotiq 2F-85 gripper (jaw length 8.5 cm), mounted on a table. The setup included a transparent jar (side length 6.5 cm) and a blue cup (radius 8 cm). For simulation purposes, we employed the Isaac Sim platform powered by the PhysX engine, where we replicated the table and robotic arm. To align the cameras, we compared their relative position in reality to that in Isaac Sim. The Lula implementation of RRT (Rapidly-exploring Random Tree) [21] was used to generate multiple collision-free paths. RMPflow [22] was employed to generate smooth policies to control the robot both in the real world and in simulation. Following this, we input the command "Pour water from the transparent jar into the blue cup" into our Isaac Sim extension and monitored the outcome.

(2) _Baseline models_: We introduce two state-of-the-art models that serve as the robot motion planning module. **6D-CLIPort** [19], which deals with multi-view observations and language input and outputs a sequence of 6 degrees of freedom (DoF) actions, infers the position and rotation of the target object given the vision and language inputs. Another reinforcement-learning-based planning model (**RL**) [23] predicts the position and pose of the robot gripper (end-effector) given the vision inputs and the target object name. We also consider the _heuristic_ results, i.e., controlling the robot motion with the aid of human experts.
This human-guided heuristic not only serves as a benchmark of discernment but also offers a reference for comparing human mastery with our automated Sim2Real process.

(3) _Result_: The analysis of our experiment's results reveals a compelling advancement in both task execution speed and task success rate. As shown in Figure 5, our method is remarkably time-saving, reducing the temporal requirements by a significant margin. Specifically, compared to the state-of-the-art RL [23] and 6D-CLIPort [19] models, our method exhibits a time reduction of \(54.5\%\). Even compared with human-guided heuristic robot control, our method exhibits a noteworthy \(43.4\%\) reduction in time requirements. This substantial efficiency gain proves the efficacy of our approach in robotic decision-making and manipulation.

In terms of task success rate (see Table II), the 6D-CLIPort model exhibits a success rate of \(25\%\) for both the _Pick up_ and _Pour water_ tasks. Similarly, the RL model showcases slightly improved results, with success rates of \(35\%\) for picking up and \(40\%\) for pouring water. In contrast, our proposed method showcases a notable leap in performance, achieving a remarkable \(75\%\) success rate for picking up and \(70\%\) for pouring water. This heightened performance substantiates the robustness and efficacy of our approach in executing these intricate maneuvers. Furthermore, focusing on the _Success (soft)_ (allowing a little water leakage) and _Success (hard)_ (allowing no water leakage) metrics, our method again achieves success rates of \(70\%\) and \(60\%\), respectively. While the CLIPort and RL models exhibit success rates ranging from \(20\%\) to \(40\%\) in these categories, our approach consistently demonstrates a superior ability to accomplish the task's objectives, even under more stringent conditions.

However, it is crucial to acknowledge the human-guided heuristic method's exceptional performance, which stands out as a benchmark in this evaluation, with success rates of \(95\%\) for _Pick up_ and \(85\%\) for _Pour water_, as well as \(85\%\) and \(80\%\) for _Success (soft)_ and _Success (hard)_.

Fig. 4: **Query fluent matching in robot execution.** We select four crucial moments during the execution of the robot in both simulation and reality and report GPT-V's VQA query answers for the questionnaire: (1) Is the robot ready to pick up the jar? (2) Has the robot already picked up the jar? (3) Is the jar above the cup? (4) Is the robot pouring water into the cup? Outputs from the VQA models and the ground truths are shown.

Fig. 5: **Task execution speed comparison.** We compare our method with 6D-CLIPort, RL-based robot motion planning, and human-guided heuristic control, and we measure the average execution time for the whole task.

(4) _Failure cases_: The major failure case (5 of 8) is when the cup or jar lies beyond the robot's configuration space. In such cases, no plan generated inside the simulation environment reaches the goal and thus no actions are performed in the real world. Another failure case (3 of 8) is when the segmentation module cannot output smooth contours for the subsequent determination of key points, causing our key point identification algorithm to generate inaccurate results.

In summation, the experimental results underscore our proposed method's substantial advancements in achieving successful task execution, positioning it as a promising contender against established models.
While the human-guided heuristic approach remains the gold standard, our approach showcases remarkable potential.

### _Comparison of VQA models_

We examine three VQA models to obtain query results for the questionnaire. The Vision-and-Language Transformer (**ViLT**) [24] stands at the intersection of visual and linguistic understanding, exhibiting a remarkable capability to seamlessly process and comprehend both visual and textual information. Another model, **MiniGPT-4** [25], emerges as a compact yet potent architecture consisting of a vision transformer and large language models, showcasing impressive text generation prowess within a more resource-efficient framework. Meanwhile, GPT-4 [26], the latest iteration of the renowned GPT series, continues to push the boundaries of large-scale language modeling. Boasting an extensive knowledge base, GPT-4 demonstrates an unparalleled aptitude for natural language understanding, generation, and manipulation, thus serving as a pivotal milestone in the progression of AI-driven language processing and generation. We apply the visual input API (**GPT-V**) of GPT-4 for inference tasks.

Table III presents a comprehensive comparison of precision, recall, and F1 scores for query fluent results across real scenes, simulation, and overall performance. Notably, GPT-V exhibits the highest performance across multiple metrics, boasting impressive precision, recall, and F1 scores in both real-scene and simulation scenarios. ViLT and MiniGPT-4 also showcase varying degrees of performance, with GPT-V demonstrating superior consistency in matching simulation with reality, as evidenced by its remarkable 96.9% consistency score. These results underline the efficacy of GPT-V in achieving a harmonious alignment between reality and simulation during robot execution.

## V Conclusion

To tackle the enduring challenges of connecting simulation to reality in robotic tasks, we introduce the triple regression framework. This framework is designed to rectify the discrepancies arising from differences in camera parameters, projections, and control dynamics between simulated and real-world settings. Utilizing this framework, we propose an innovative Sim2Real technique for managing robotic grasping and manipulation. The robustness of our approach is demonstrated through our grasp and pour experiments. Additionally, our integration of large vision models, And-Or graphs for task planning, and visual question answering opens up new doors for advancing future robotic applications.
2302.14452
An Effective Crop-Paste Pipeline for Few-shot Object Detection
Few-shot object detection (FSOD) aims to expand an object detector for novel categories given only a few instances for training. However, detecting novel categories with only a few samples usually leads to the problem of misclassification. In FSOD, we notice the false positive (FP) of novel categories is prominent, in which the base categories are often recognized as novel ones. To address this issue, a novel data augmentation pipeline that Crops the Novel instances and Pastes them on the selected Base images, called CNPB, is proposed. There are two key questions to be answered: (1) How to select useful base images? and (2) How to combine novel and base data? We design a multi-step selection strategy to find useful base data. Specifically, we first discover the base images which contain the FP of novel categories and select a certain amount of samples from them to balance the base and novel categories. Then the bad cases, such as the base images that have unlabeled ground truth or easily confused base instances, are removed by using CLIP. Finally, the same category strategy is adopted, in which a novel instance with category n is pasted on the base image with the FP of n. During combination, a novel instance is cropped and randomly down-sized, and then pasted at the assigned optimal location chosen from the randomly generated candidates in a selected base image. Our method is simple yet effective and can easily be plugged into existing FSOD methods, demonstrating significant potential for use. Extensive experiments on PASCAL VOC and MS COCO validate the effectiveness of our method.
Shaobo Lin, Kun Wang, Xingyu Zeng, Rui Zhao
2023-02-28T09:56:45Z
http://arxiv.org/abs/2302.14452v2
# An Effective Crop-Paste Pipeline for Few-shot Object Detection

###### Abstract

Few-shot object detection (FSOD) aims to expand an object detector for novel categories given only a few instances for training. However, detecting novel categories with only a few samples usually leads to the problem of misclassification. In FSOD, we notice the false positive (FP) of novel categories is prominent, in which the base categories are often recognized as novel ones. To address this issue, a novel data augmentation pipeline that Crops the Novel instances and Pastes them on the selected Base images, called CNPB, is proposed. There are two key questions to be answered: (1) How to select useful base images? and (2) How to combine novel and base data? We design a multi-step selection strategy to find useful base data. Specifically, we first discover the base images which contain the FP of novel categories and select a certain amount of samples from them to balance the base and novel categories. Then the bad cases, such as the base images that have unlabeled ground truth or easily confused base instances, are removed by using CLIP. Finally, the same category strategy is adopted, in which a novel instance with category \(n\) is pasted on the base image with the FP of \(n\). During combination, a novel instance is cropped and randomly down-sized, and then pasted at the assigned optimal location chosen from the randomly generated candidates in a selected base image. Our method is simple yet effective and can easily be plugged into existing FSOD methods, demonstrating significant potential for use. Extensive experiments on PASCAL VOC and MS COCO validate the effectiveness of our method.

## 1 Introduction

In recent years, we have witnessed the great progress of object detection [3, 4, 20, 27, 28]. However, the impressive performance of these models relies on a large amount of annotated data. The detectors cannot generalize well to novel categories, especially when the annotated data are scarce. In contrast, humans can learn to recognize or detect a novel object with only a few labeled examples. Few-shot object detection (FSOD), which mimics this ability, has attracted increasing attention. In FSOD, an object detector that is trained using base categories with sufficient data (base images) can learn to detect novel categories using only a few annotated samples (novel images).

The accuracy of an FSOD model is determined by the true positives (TP) and false positives (FP) of novel categories, since the novel categories are the main concern. The main problem of FSOD is misclassification [29], in which novel categories are often recognized as base categories (the FP of base categories). However, the FP of novel categories, which is more important in FSOD, has previously been ignored. The FP of novel categories refers to the base categories and background regions that are recognized as novel categories. We analyze the FP ratio of novel categories by using three representative FSOD methods, TFA [31], FSCE [29], and DeFRCN [25], as shown in Figure 1. We test the trained few-shot model on the base dataset and calculate the FP ratio of novel categories.

Figure 1: The FP ratio of novel categories in the base dataset of PASCAL VOC split 1. We test TFA, FSCE and DeFRCN, which are three representative FSOD methods. The reported FP ratio is the average of multiple few-shot settings, including 1-shot, 3-shot, 5-shot and 10-shot. The threshold is used to count FP images whose confidence score on a novel category is higher than the threshold. Using our CNPB, the FP ratio of all methods decreases by up to 20%.
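The FP ratio reported in Figure 1 can be computed directly from the detector outputs on the base dataset; below is a minimal sketch, assuming per-image detections are available as (category, score) pairs (the interface is illustrative).

```python
def fp_ratio(detections_per_image, novel_classes, threshold=0.5):
    """Fraction of base-dataset images with at least one detection of a
    novel category above `threshold`. Base images carry no novel-class
    ground truth, so every such detection is a false positive.
    `detections_per_image`: list of per-image lists of (class_name, score)."""
    fp_images = 0
    for dets in detections_per_image:
        if any(cls in novel_classes and score > threshold
               for cls, score in dets):
            fp_images += 1
    return fp_images / max(len(detections_per_image), 1)

# Toy example: 2 of 3 base images carry a confident "cow" false positive.
dets = [[("cow", 0.9)], [("dog", 0.8)], [("cow", 0.7), ("car", 0.6)]]
print(fp_ratio(dets, {"cow", "sofa"}, threshold=0.6))  # 0.666...
```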
The FP ratio of FSCE is up to 20\(\%\), and the FP ratio of DeFRCN is up to 60\(\%\), which are prominent and cannot be overlooked. We think that current few-shot models have a high FP ratio because of the small size of the training dataset. The learned model forms only weak knowledge and recognizes just the key features of an object. However, the key features of two categories may be the same; for example, a sofa and a chair both have a backrest. This leads to a high FP ratio.

To reduce the FP, we wish to introduce a direct comparison between the TP and FP into the training phase. To this end, one could simply introduce the base images that contain the FP into the novel dataset. However, directly introducing many base images causes a severe data-imbalance issue and makes the base categories dominate the training process. In this work, we adopt cutmix [36], an image-mixing data augmentation approach. The naive use of cutmix ignores the FP of novel categories and causes an over-fitting problem in favor of the base categories, as it generates more base samples than novel ones since there are fewer novel categories [35].

Our key idea is to combine the novel images and the base images that contain the FP of novel categories. Specifically, we can Crop the FP region in the Base image and Paste it on a Novel image (CBPN), or we can Crop the Novel instance and Paste it on a Base image which contains FP (CNPB). CBPN uses a few novel images as the background, which causes an over-fitting problem, since the novel instances are repeated with the same image contexts. CNPB is superior because it can over-sample novel categories and leverage different FP images as the background. As illustrated in Figure 1, CNPB can decrease the FP ratio of all methods by up to 20\(\%\). In CNPB, we investigate two key questions: (1) How to select useful base images? and (2) How to combine novel and base data?

**A multi-step selection strategy for base images** is proposed for our CNPB. First, we select the base images containing the FP of novel categories. Second, a small portion of the FP images is randomly selected to prevent data imbalance between the base and novel categories. Third, the base images containing unlabeled ground truth or FP that is easily confused with the novel instances (hard cases) are removed by CLIP [26]. CLIP is a zero-shot recognition model which can use natural language to retrieve related images based on the CLIP score. We remove the hard cases of novel instances because we find that using these cases reduces the accuracy of novel categories due to the weak knowledge of the few-shot model. For example, some special chairs are very similar to a sofa. If we use these chairs as negative samples, this will destroy the learned weak knowledge of the model, such as the backrest or the feet of a sofa. At last, in order to balance TP and FP in each training iteration, the same category (SC) strategy is proposed. For example, the novel instance "cow" is pasted on the selected base image containing the FP of "cow".

**When combining the novel instances and the selected base images**, the novel instances are over-sampled by cropping and random down-sampling. We wish to prevent the base instances from being occluded. By randomly generating many candidate locations in a selected base image, we can paste a processed novel instance at the assigned optimal location, which is determined by the minimum Intersection Over Union (IOU) between the candidates and the bounding boxes of the original base instances and the FP regions in the selected base image.
Our key contributions can be summarized as: (1) We propose CNPB, a novel data augmentation pipeline that Crops the Novel instances and Pastes them on the selected Base images, for FSOD. (2) Our method is simple yet effective, and can be easily integrated into existing FSOD methods without any architectural changes or complex algorithms. (3) Our CNPB significantly improves multiple baselines and achieves state-of-the-art performance on PASCAL VOC and MS COCO.

## 2 Related Works

### Few-shot Object Detection

FSOD is an important yet unsolved task in computer vision. Some works use meta-learning [8, 15, 32, 35, 18], where a meta-learner is introduced to acquire class-agnostic meta-knowledge that is transferred to novel classes. These methods extract meta-knowledge from a set of auxiliary tasks via the episode-based strategy [30], where each episode contains \(C\) classes and \(K\) samples of each class, i.e., \(C\)-way \(K\)-shot. With the help of a meta-learner that takes the support images as well as the bounding box annotations as inputs, feature re-weighting modules are applied to a single-stage object detector (YOLOv2) [15] and a two-stage object detector (Faster R-CNN) [35]. CME [18] uses a class margin equilibrium (CME) approach, with the aim to optimize both feature space partition and novel class reconstruction in a systematic way. The Transformation Invariant Principle (TIP) [17] is proposed for various meta-learning models by introducing consistency regularization on predictions from the transformed images. TFA [31] is a simple two-stage fine-tuning approach, which significantly outperforms the earlier meta-learning methods. FSCE [29] proposes a simple yet effective approach to learning contrastive-aware object proposal encodings that facilitate the classification of detected objects. FADI [2] uses a two-step fine-tuning framework via association and discrimination, which builds up a discriminative feature space for each novel class with two integral steps. DeFRCN [25] extends Faster R-CNN by using a gradient decoupled layer for multi-stage decoupling and a prototypical calibration block for multi-task decoupling. There are a few FSOD methods that are data-related. MPSR [34] adopts multi-scale positive sample refinement to handle the scale variance problem. Pseudo-Labelling [16] is proposed to obtain high-quality pseudo-annotations for novel categories from the training dataset; their method can find previously unlabelled instances. DetectorGAN [22] uses a GAN [10] to generate new images, and jointly optimizes the GAN model and a detector. Different from them, we find the FP of the novel categories, and propose a new data-augmentation method for combining the novel instances and the base images which contain the FP of novel categories.

### Related Data Augmentation Techniques

In computer vision, data augmentation strategies have been widely adopted, such as mixup [37], cutout [5], cutmix [36] and mosaic [1]. Among them, cutmix and mosaic can be used to combine novel and base categories in FSOD.
In cutmix [36], patches are cut and pasted among training images and the ground truth labels are mixed proportionally; it is used for image classification. Mosaic [1] mixes four training images into one image for small object detection. The most similar method to our CNPB pipeline is cutmix. However, there are several important differences. First, we use the idea of cutmix to solve a new problem in FSOD. Second, the specific objects for cropping and pasting have to be designed: we crop the novel instance and paste it on the selected base image. Furthermore, our selection strategies for base images are critical. Without our designed strategies, existing copy-paste methods do not work in FSOD, such as mosaic in YOLOv4 [1]. There are also some recent efforts that employ cutmix to tackle long-tailed classification [24] and instance segmentation [9]. Unlike them, we consider the problem of FP by pasting a novel instance on the base image containing the FP, and we leverage a multi-step selection strategy for the base images in FSOD.

## 3 Method

### Preliminary

In FSOD, given a labeled base dataset \(D_{B}\), there are \(C_{B}\) base classes with sufficient images in each class. The novel dataset \(D_{N}\) with novel classes \(C_{N}\) consists of a few samples in each class. \(C_{B}\) and \(C_{N}\) do not have overlapping categories. The number of objects for each class in \(C_{N}\) is \(K\) for K-shot detection. There are two stages in FSOD methods [18, 29, 31]. In the pre-training stage, the model is trained on base classes to obtain a robust feature representation. In the fine-tuning stage, the pre-trained model is then fine-tuned on a balanced few-shot set which includes both base and novel classes (\(C_{B}\cup C_{N}\)).

### Our Pipeline

We propose CNPB, a novel data augmentation pipeline that Crops the Novel instances and Pastes them on the selected Base images. There are two main steps in our CNPB.

Figure 2: We use a simple crop and paste method to build our CNPB pipeline. First, the base set is processed by the trained few-shot model. Then, the multi-step selection strategy is applied for selecting the useful base images which contain the FP of the novel category. After that, the novel instance from the novel set is combined with the selected base image. At last, the location for pasting the novel instance into the base image is decided by the minimum IOU between the random candidates and the FP/base instances in the selected base image.

**Step 1: Data Preparation.** For **base data**, first, the fine-tuned few-shot model is used to test the base dataset, and the inference results are obtained. We select the base images to serve as the background by using a multi-step selection strategy. In detail, the base images whose results contain the FP of novel instances with confidence scores higher than the threshold are chosen. For each base category, we randomly select a certain amount (3 in our CNPB) of base samples from the above FP images for all few-shot settings, indicating great transferable ability. After that, the base images that contain unlabeled ground truth or FP that is easily confused with the novel instances (hard cases) are removed by CLIP [26]. CLIP is a zero-shot recognition model which can utilize natural language to retrieve related images based on the CLIP score. CLIP takes as input a textual description of a novel category and an image, and then computes the distance between them. The template of the input text is "a" followed by the name of a novel category. The input image is the cropped instance which is predicted to be of a novel category by the few-shot model on the base dataset. After inputting the text and the current instance into the CLIP model, we obtain the scores of all novel categories for the current image. If the maximum score is more than 0.5, this image is identified as a bad case which should be removed. Some samples removed by CLIP are shown in Figure 3 (a)/(b) in Section 4.4.5.
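A minimal sketch of this CLIP-based filtering step is given below, using the open-source CLIP package; the prompt template follows the paper ("a" plus the category name), while the model variant ("ViT-B/32") and the softmax-based scoring are assumptions.

```python
import clip
import torch
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def is_bad_case(crop: Image.Image, novel_names, thresh=0.5):
    """Flag a cropped FP region for removal if CLIP assigns any novel
    category a score above `thresh`, i.e. the crop likely is (or closely
    resembles) an unlabeled novel object."""
    texts = clip.tokenize([f"a {name}" for name in novel_names]).to(device)
    image = preprocess(crop).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, texts)
        probs = logits_per_image.softmax(dim=-1).squeeze(0)
    return probs.max().item() > thresh
```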
Finally, to balance TP and FP in each training iteration, we select the base image that has the FP of the currently pasted novel instance, which is the same category (SC) strategy. For example, the novel instance "cow" is pasted on the selected base image containing the FP of "cow". For **novel data**, the novel instances come from the few-shot novel set.

**Step 2: Combination.** We take "cow" as an example. Figure 2 shows how to combine the novel instance and the selected base image. Specifically, we crop and randomly downsize a novel instance \(I_{n}\) of category \(n\), and paste it on a selected base image. The optimal location for pasting is determined by the minimum IOU between the randomly generated candidates and the bounding boxes of the original base instances \(B_{o}=[b_{o1},b_{o2},..]\) and the FP regions \(B_{f}=[b_{f1},b_{f2},..]\). The considered boxes are \(B=B_{o}\cup B_{f}\). If there is more than one considered box, the final IOU is calculated via summation. After getting the final location for pasting a novel instance, a new bounding box can be obtained. The formulation of this process is depicted below (a code sketch of this placement step follows at the end of this section):

\[S=argmin(IOU(G(CD(I_{n})),B)) \tag{1}\]

\(CD\) denotes cropping and down-sampling for the novel instance \(I_{n}\). \(G\) generates random candidates for pasting the novel instance. \(IOU\) calculates the IOU between all candidates and the considered bounding boxes in a base image. \(S\) is the final selected location.

**The detail of combination.** In Step 2 of CNPB, we design a majority-based and a minority-based combination method, since the number of novel images and the number of selected base images may differ. Majority-based combination duplicates images, whereas minority-based combination deletes the redundant images. For example, if the novel data is 10-shot and the number of selected base images is 3, majority-based combination duplicates the base images to obtain 10 base images, pasting each novel instance on one base image. Minority-based combination deletes 7 novel images and pastes the remaining 3 novel instances on the 3 base images. Whether using majority-based or minority-based combination, the diversity of novel categories does not change, because the combined images are added to the original few-shot set to build a new dataset for fine-tuning. The experimental results show that minority-based combination is better. The reason is that majority-based combination duplicates the base images which serve as the background of the pasted novel instances; this low-diversity context causes an over-fitting problem.
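The placement step of Equation 1 can be sketched as follows, assuming axis-aligned boxes in (x1, y1, x2, y2) format; the helper names are illustrative.

```python
import random

def iou(a, b):
    """IOU of two boxes in (x1, y1, x2, y2) form."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def best_paste_location(img_w, img_h, inst_w, inst_h, boxes, n_cand=1000):
    """Eq. 1: among n_cand random candidate positions, pick the one whose
    summed IOU with the base-instance and FP boxes B is minimal."""
    best, best_cost = None, float("inf")
    for _ in range(n_cand):
        x = random.randint(0, img_w - inst_w)
        y = random.randint(0, img_h - inst_h)
        cand = (x, y, x + inst_w, y + inst_h)
        cost = sum(iou(cand, b) for b in boxes)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best   # new bounding box for the pasted novel instance

# Example: paste a 60x80 crop into a 640x480 image with two occupied boxes.
print(best_paste_location(640, 480, 60, 80,
                          [(0, 0, 200, 200), (300, 100, 400, 300)]))
```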
## 4 Experiments

### Datasets and Evaluation Protocols

We evaluate our methods on PASCAL VOC [6, 7] and MS COCO [21]. In PASCAL VOC, we adopt the common strategy [27, 28] of using the VOC 2007 test set for evaluation, while the VOC 2007 and 2012 train/val sets are used for training. Following [35], 5 out of its 20 object categories are selected as the novel classes, while the remaining ones serve as the base classes. We evaluate with three different novel/base splits from [35], named split 1, split 2 and split 3. Each split contains 15 base categories with abundant data and 5 novel categories with K annotated instances for K = 1, 3, 5, 10. Following [29, 31, 35], we use the mean average precision (mAP) of novel categories at 0.5 IoU threshold as the evaluation metric and report the results on the official test set of VOC 2007. When using MS COCO, 20 out of 80 categories are reserved as novel classes, and the remaining 60 categories are used as base classes. The detection performance with COCO-style AP, AP\({}_{50}\), and AP\({}_{75}\) for K = 10 and 30 shots of novel categories is reported.

### Implementation Details

Our baselines are TFA [31], FSCE [29] and DeFRCN [25], which are representative methods in FSOD. These methods all use Faster R-CNN [28] with ResNet-101 [14], the same as almost all FSOD methods. The training strategies of our methods follow the selected baselines. The difference is that we pre-process the few-shot set using our CNPB pipeline. There are some hyper-parameters in our pipeline. The threshold for selecting base images is 0.6 for TFA and DeFRCN, and 0.8 for FSCE. The resize ratio of the novel instances is randomly sampled from 1/5 to 1/2. The number of candidates for pasting the novel instances on a base image is 1000.

### Comparison with State-of-the-art Methods

We compare our approach to several competitive FSOD methods. The results are shown in Table 1 and Table 2. We adopt our CNPB for all baselines to prove the effectiveness of our pipeline. Following [2, 12, 16, 18, 34], we use a single run with the same training images to get the results of different shots.

#### 4.3.1 Results on PASCAL VOC and MS COCO.

Following [29, 31, 35], we provide the AP\({}_{50}\) of the novel classes on PASCAL VOC with three splits in Table 1. By using CNPB, our methods outperform the baselines in almost all few-shot settings and achieve state-of-the-art performance. The most obvious improvement is up to 9.6\(\%\). We report the COCO-style AP, AP\({}_{50}\), and AP\({}_{75}\) of the 20 novel classes on MS COCO in Table 2. By using CNPB, our method sets a new state-of-the-art record for the 10-shot setting, exceeding the second-best method (Pseudo-Labelling) by 3\(\%\) on average. The performance of our method on the 30-shot setting is only 1\(\%\) lower than the best performance. Therefore, the overall performance of our method is the best. However, the improvement is not as significant as that on PASCAL VOC, since there are higher shots in the MS COCO setting, which reduces the influence of the combined images.
\begin{table} \begin{tabular}{l l c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{split 1} & \multicolumn{4}{c}{split 2} & \multicolumn{4}{c}{split 3} \\ \hline \multicolumn{2}{c}{Methods / Shots} & 1 & 3 & 5 & 10 & 1 & 3 & 5 & 10 & 1 & 3 & 5 & 10 \\ \hline FRCN+\(h\)[35] & ICCV2019 & 11.9 & 29 & 36.9 & 36.9 & 5.9 & 23.4 & 29.1 & 28.8 & 5.0 & 18.1 & 30.8 & 43.4 \\ FRCN+\(h\)-full[35] & ICCV2019 & 13.8 & 32.8 & 41.5 & 45.6 & 7.9 & 26.2 & 31.6 & 39.1 & 9.8 & 19.1 & 35 & 45.1 \\ FR [15] & ICCV2019 & 14.8 & 26.7 & 33.9 & 47.2 & 15.7 & 22.7 & 30.1 & 40.5 & 21.3 & 28.4 & 42.8 & 45.9 \\ MetaDet [32] & ICCV2019 & 18.9 & 30.2 & 36.8 & 49.6 & 21.8 & 27.8 & 31.7 & 43 & 20.6 & 29.4 & 43.9 & 44.1 \\ Meta R-CNN[35] & ICCV2019 & 19.9 & 35 & 45.7 & 51.5 & 10.4 & 29.6 & 34.8 & 45.4 & 14.3 & 27.5 & 41.2 & 48.1 \\ TFA [31] & ICML2020 & 39.8 & 44.7 & 55.7 & 56.0 & 23.5 & 34.1 & 35.1 & 39.1 & 30.8 & 42.8 & 49.5 & 49.8 \\ MPSR [34] & ECCV2020 & 41.7 & 51.4 & 55.2 & 61.8 & 24.4 & 39.2 & 39.9 & 47.8 & 35.6 & 42.3 & 48 & 49.7 \\ CME [18] & CVPR2021 & 41.5 & 50.4 & 58.2 & 60.9 & 27.2 & 41.4 & 42.5 & 46.8 & 34.3 & 45.1 & 48.3 & 51.5 \\ FSCN [19] & CVPR2021 & 40.7 & 46.5 & 57.4 & 62.4 & 27.3 & 40.8 & 42.7 & 46.3 & 31.2 & 43.7 & 50.1 & 55.6 \\ HallucF-Det [39] & CVPR2021 & 47 & 46.5 & 54.7 & 54.7 & 26.3 & 37.4 & 37.4 & 41.2 & 40.4 & 43.3 & 51.4 & 49.6 \\ FSCE [29] & CVPR2021 & 44.2 & 51.4 & 61.9 & 63.4 & 27.3 & 43.5 & 44.2 & 50.2 & 37.2 & 47.5 & 54.6 & 58.5 \\ UPE [33] & ICCV2021 & 43.8 & 50.3 & 55.4 & 61.7 & 31.2 & 41.2 & 42.2 & 48.3 & 35.5 & 43.9 & 50.6 & 53.5 \\ QA-FewDet [11] & ICCV2021 & 42.4 & 55.7 & 62.6 & 63.4 & 25.9 & 46.6 & 48.9 & 51.1 & 35.2 & 47.8 & 54.8 & 53.5 \\ Meta fast-recran[12] & AAAI2021 & 43 & 60.6 & 66.1 & 65.4 & 27.7 & 46.1 & 47.8 & 51.4 & 40.6 & 53.4 & **59.9** & 58.6 \\ FADI [2] & NIPS2021 & 50.3 & 54.2 & 59.3 & 63.2 & 30.6 & 40.3 & 42.8 & 48 & 45.7 & 49.1 & 55 & 59.6 \\ DeFRCN [25] & ICCV2021 & 53.6 & 61.5 & 64.1 & 60.8 & 30.1 & 47.0 & 53.3 & 47.9 & 48.4 & 52.3 & 54.9 & 57.4 \\ FCT [13] & CVPR2022 & 49.9 & 57.9 & 63.2 & **67.1** & 27.6 & 43.7 & 49.2 & 51.2 & 39.5 & 52.3 & 57. & 58.7 \\ KFSOD [38] & CVPR2022 & 44.6 & 54.4 & 60.9 & 65.8 & 37.8 & 43.1 & 48.1 & 50.4 & 34.8 & 44.1 & 52.7 & 53.9 \\ Pseudo-Labelling [16] & CVPR2022 & 54.5 & 58.8 & 63.2 & 65.7 & 32.8 & 50.7 & 49.8 & 50.6 & 48.4 & 55.0 & 59.6 & 59.6 \\ \hline \multirow{3}{*}{CNPB-TFA} & Ours & 48 & 52.8 & 58.9 & 59.1 & 25.9 & 42.1 & 38.7 & 43 & 35.4 & 44.1 & 50.5 & 52.3 \\ & Improve & +8.2 & +8.1 & +3.2 & +2.9 & +2.4 & +8 & +3.6 & +3.9 & +4.6 & +1.3 & +1 & +2.5 \\ \cline{1-1} & Ours & 50.9 & 54.4 & 62.4 & 63 & 32.4 & 46.4 & 49.6 & **53.5** & 40.1 & +8 & 53.4 & 57.8 \\ \cline{1-1} & Improve & +6.7 & +3 & +0.5 & -0.4 & +5.1 & +2.9 & +5.4 & +3.3 & +2.9 & +0.5 & -1.2 & -0.7 \\ \cline{1-1} & Ours & **57.2** & **63** & **66.2** & 66.6 & **39.7** & **51.8** & **54.7** & 53.1 & **51** & **56.9** & 57 & **60.7** \\ \cline{1-1} & Improve & +3.6 & +1.5 & +2.1 & +5.8 & +9.6 & +4.8 & +1.4 & +5.2 & +2.6 & +4.6 & +2.1 & +3.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with state-of-the-art few-shot object detection methods on VOC2007 test set for novel classes of the three splits. **black** indicates state-of-the-art. Red is the improvement compared to the baseline. CNPB-TFA means applying our CNPB to TFA. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{novel AP} & \multicolumn{2}{c}{novel AP\({}_{50}\)} & \multicolumn{2}{c}{novel AP\({}_{75}\)} \\ \hline \multicolumn{2}{l}{Methods / Shots} & 10 & 30 & 10 & 30 & 10 & 30 \\ \hline FR [15] & 5.6 & 9.1 & 12.3 & 19 & 4.6 & 7.6 \\ Meta R-CNN [35] & 8.7 & 12.4 & 19.1 & 25.3 & 6.6 & 10.8 \\ TFA [31] & 10 & 13.7 & - & - & 9.3 & 13.4 \\ MSPR [34] & 9.8 & 14.1 & 17.9 & 25.4 & 9.7 & 14.2 \\ CME [18] & 15.1 & 16.9 & 24.6 & 28 & 16.4 & 17.8 \\ FSCN [19] & 11.3 & 15.1 & 20.3 & 29.4 & - & - \\ FSCE [29

### Ablation Study

Following [29, 35], we do ablation studies on PASCAL VOC split 1. In most cases, we use FSCE as the baseline.

#### 4.4.1 Trained with CNPB

There are pre-training and fine-tuning steps in FSOD. CNPB is used to train a new model based on the fine-tuned model from the fine-tuning stage. The training strategies when using CNPB are the same as those of the fine-tuning step. In Table 3, using CNPB achieves the best performance. The results of training for three steps without CNPB (+ft) show that fine-tuning for more iterations cannot bring obvious improvement. In Table 3, we also see that using CNPB does not hurt the performance of the base classes, and even achieves some improvement in most of the settings.

#### 4.4.2 The Multi-step Selection Strategy

In Table 4, we show the results without our selection strategies, including pasting the novel instance on a base image without FP or on a selected base image without the same category (SC), varying the number of selected base images, and using the selected base images without removing the bad cases. First, compared to CNPB, CNPB without FP causes severe accuracy degradation; however, it is still superior to the baseline. CNPB without SC is better than CNPB without FP, and CNPB achieves the best performance. The results demonstrate that the selected FP images are necessary, and applying SC can bring further improvement. Second, using a small number of selected base images, such as 3 or 1, can achieve better performance, because more base images with repeated novel samples may cause over-fitting on the novel categories. Since we use the minority-based method, the result of one base image with one novel image (1-shot) is equal to that of more base images with one novel image. Given 3 selected base images, minority-based is better than majority-based in all settings. However, majority-based is still better than the baseline, which means our CNPB is robust to the combination methods. Third, CNPB without the remove operation brings only a slight improvement over the baseline, but using the remove operation obtains much higher performance. CNPB-n uses the minority-based combination with a threshold of 0.7 to 0.8 for higher performance without the remove operation. The threshold of the other experiments is 0.8.

#### 4.4.3 The Methods of Combination

We study different types of combination methods for our few-shot setting in Table 5. Addition performs no combination at the image level and just puts all selected base images into the novel dataset for fine-tuning. Addition _w/o sel_ means not using our selection strategy and directly using all FP images whose confidence scores on a novel category are higher than 0.8. Mosaic [1] uses one novel image and three selected base images to obtain a final image in our case. CP uses the original setting of cutmix for our task, in which the novel and base images are randomly selected. CBPN crops the FP regions in the selected base images and pastes them on the novel images.
CNPB crops the novel instances and pastes them on the selected base images. In Table 5, our CNPB obtains the best performance in almost all settings. Addition without selection shows poor performance, and Addition with selection cannot improve the accuracy. Using mosaic brings a modest improvement. CP is better than CBPN, which uses a few novel images as the context, causing over-fitting on these novel images.

#### 4.4.4 Other Factors

In Table 6, we explore the influence of some important factors in our CNPB pipeline, such as the threshold, data augmentation, the number of novel candidates, and the resize ratio. The threshold is used to select the base images whose predicted score on a novel category is higher than the threshold. Data augmentation means applying common data augmentation methods to the pasted novel instances. The number of novel candidates means the number of randomly generated locations for novel instances. The resize ratio is the downsize ratio of the novel instance. To sum up, setting the threshold between 0.8 and 1 achieves relatively better results. Using data augmentation on novel instances is not useful. The best number of novel candidates is 1000. A random resize ratio is better than a fixed ratio.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Setting / Shot & 1 & 3 & 5 & 10 \\ \hline FSCE (Our impl.) [29] & 44.7 & 50.2 & 59 & 61.7 \\ FSCE+ft & 43.5 & 49.9 & 59.4 & 61.7 \\ \hline CNPB _w/o_ FP & 45.4 & 51.4 & 58.2 & 62.2 \\ CNPB _w/o_ SC & 47.5 & 53.2 & 61.3 & 61.9 \\ CNPB & **50.9** & **54.4** & **62.4** & **63** \\ \hline CNPB-1 & **46.8** & **54.3** & 59.3 & 62.1 \\ CNPB-3 & - & 49.6 & **59.7** & **62.5** \\ CNPB-5 & - & - & 58.4 & 62 \\ CNPB-10 & - & - & - & 60.6 \\ \hline Majority-based & 47.5 & 53.4 & 61 & 61.1 \\ Minority-based & **50.9** & **54.4** & **62.4** & **63** \\ \hline CNPB _w/o_ remove & 43.6 & 50.7 & 61.1 & 61.5 \\ CNPB & **50.9** & **54.4** & **62.4** & **63** \\ \hline \hline \end{tabular} \end{table} Table 4: The necessity of our selection strategies. _w/o_ FP or _w/o_ SC means the novel instance is pasted on a random base image or on a random FP image of the selected base images, respectively. CNPB-n means CNPB with n selected base image(s). - means the results are equal to those of the upper row. _w/o_ remove does not remove the bad cases.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Methods / Shots & 1 & 3 & 5 & 10 \\ \hline TFA (Our impl.) [31] & 40/79.4 & 47.7/79.4 & 56/79.3 & 58.1/79.5 \\ TFA+ft & 41.9/79.4 & 48/79.2 & 55.7/79.1 & 58.2/79.2 \\ TFA+CNPB & **48**/79 & **52.8**/79.4 & **58.9**/79.4 & **59.1**/79.4 \\ \hline FSCE (Our impl.) [29] & 44.7/78.3 & 50.2/75.4 & 59/75.8 & 61.7/75.7 \\ FSCE+ft & 43.5/77.5 & 49.9/74.9 & 59.4/77.3 & 61.7/75.5 \\ FSCE+CNPB & **50.9**/77.9 & **54.4**/76.7 & **62.4**/76 & **63**/76.8 \\ \hline \hline \end{tabular} \end{table} Table 3: The results of using CNPB or not. The AP\({}_{50}\) of the novel and base categories (separated by /) of TFA and FSCE are reported. ft is fine-tuning the model without CNPB.

#### 4.4.5 Visualization

The visualization of some bad cases removed by CLIP is shown in Figure 3. (a) Bad cases from PASCAL VOC include some unlabeled ground truth, such as the sofa in the first two images. The remaining images in (a) are hard cases for the few-shot model. For example, the chair/bed looks like a small/large version of a sofa, and the last boat has the appearance of a bus. These targets can hurt the performance of the few-shot model, since it is just starting to acquire the ability to recognize these obvious features.
(b) shows some bad cases in MS COCO removed by CLIP, such as car, chair, bottle, potted plant, and dining table. These are all unlabeled novel categories of MS COCO.

## 5 Conclusion

In this paper, we propose a novel data augmentation pipeline, called CNPB, that crops the novel instances and pastes them on the selected base images. In detail, we design a multi-step strategy to select the base images. Our method can be easily integrated into existing FSOD methods. Extensive experiments on the few-shot object detection datasets, _i.e._, PASCAL VOC and MS COCO, validate the effectiveness of our method.
2309.04533
Unveiling Dark Matter free-streaming at the smallest scales with high redshift Lyman-alpha forest
This study introduces novel constraints on the free-streaming of thermal relic warm dark matter (WDM) from Lyman-$\alpha$ forest flux power spectra. Our analysis utilises a high-resolution, high-redshift sample of quasar spectra observed using the HIRES and UVES spectrographs ($z=4.2-5.0$). We employ a Bayesian inference framework and a simulation-based likelihood that encompasses various parameters including the free-streaming of dark matter, cosmological parameters, the thermal history of the intergalactic medium, and inhomogeneous reionization, to establish lower limits on the mass of a thermal relic WDM particle of $5.7\;\mathrm{keV}$ (at 95\% C.L.). This result surpasses previous limits from the Lyman-$\alpha$ forest through reduction of the measured uncertainties due to a larger statistical sample and by measuring clustering to smaller scales ($k_{\rm max}=0.2\;\mathrm{km^{-1}\,s}$). The approximately two-fold improvement due to the expanded statistical sample suggests that the effectiveness of Lyman-$\alpha$ forest constraints on WDM models at high redshifts are limited by the availability of high-quality quasar spectra. Restricting the analysis to comparable scales and thermal history priors as in prior studies ($k_{\rm max}<0.1\;\mathrm{km^{-1}\,s}$) lowers the bound on the WDM mass to $4.1\;\mathrm{keV}$. As the precision of the measurements increases, it becomes crucial to examine the instrumental and modelling systematics. On the modelling front, we argue that the impact of the thermal history uncertainty on the WDM particle mass constraint has diminished due to improved independent observations. At the smallest scales, the primary source of modeling systematic arises from the structure in the peculiar velocity of the intergalactic medium and inhomogeneous reionization.
Vid Iršič, Matteo Viel, Martin G. Haehnelt, James S. Bolton, Margherita Molaro, Ewald Puchwein, Elisa Boera, George D. Becker, Prakash Gaikwad, Laura C. Keating, Girish Kulkarni
2023-09-08T18:00:07Z
http://arxiv.org/abs/2309.04533v2
# Unveiling Dark Matter free-streaming at the smallest scales with high redshift Lyman-alpha forest ###### Abstract This study introduces novel constraints on the free-streaming of thermal relic warm dark matter (WDM) from Lyman-\(\alpha\) forest flux power spectra. Our analysis utilises a high-resolution, high-redshift sample of quasar spectra observed using the HIRES and UVES spectrographs (\(z=4.2-5.0\)). We employ a Bayesian inference framework and a simulation-based likelihood that encompasses various parameters including the free-streaming of dark matter, cosmological parameters, the thermal history of the intergalactic medium, and inhomogeneous reionization, to establish lower limits on the mass of a thermal relic WDM particle of 5.7 keV (at 95% C.L.). This result surpasses previous limits from the Lyman-\(\alpha\) forest through reduction of the measured uncertainties due to a larger statistical sample and by measuring clustering to smaller scales (\(k_{\rm max}=0.2\) km\({}^{-1}\) s). The approximately two-fold improvement due to the expanded statistical sample suggests that the effectiveness of Lyman-\(\alpha\) forest constraints on WDM models at high redshifts are limited by the availability of high-quality quasar spectra. Restricting the analysis to comparable scales and thermal history priors as in prior studies (\(k_{\rm max}<0.1\) km\({}^{-1}\) s) lowers the bound on the WDM mass to 4.1 keV. As the precision of the measurements increases, it becomes crucial to examine the instrumental and modelling systematics. On the modelling front, we argue that the impact of the thermal history uncertainty on the WDM particle mass constraint has diminished due to improved independent observations. At the smallest scales, the primary source of modeling systematic arises from the structure in the peculiar velocity of the intergalactic medium and inhomogeneous reionization. ## I Introduction The Lyman-\(\alpha\) forest is the main manifestation of the high-redshift intergalactic cosmic-web. It is visible in the spectra of quasars (QSOs) and produced by the scattering of the background photons with the neutral hydrogen atoms along the line-of-sight [1; 2]. The Lyman-\(\alpha\) forest is a unique probe of geometry and the dynamical state of the Universe, probing diffuse matter around galaxies and in the intergalactic medium (IGM) in regimes which are not covered by other observables, both in terms of redshifts and scales. In the last decade we have witnessed tremendous progress in the cosmological investigation of the Lyman-\(\alpha\) forest, mainly along two different directions which are connected to fundamental physics. For example, the discovery of Baryonic Acoustic Oscillations in the 3D correlation function of the transmitted flux has offered the possibility to constrain new physics beyond the standard cosmological model, in the context of allowing curvature or an evolution of the equation of state for dark energy [3]. Another important research line, following the work of [4; 5], has focussed on the 1D flux power spectrum used to probe the growth of structure down to the smallest scales to see to which extent dark matter free streaming could be constrained. In this work we will investigate this second aspect and present new results based on a new set of simulations which incorporate the most important physical ingredients [6; 7], and a new comprehensive analysis of high-resolution high-redshift data down to the smallest scales. 
A key goal is to disentangle the different roles of the physical processes able to affect the 1D flux power: the thermal broadening, which is a 1D effect acting along the line-of-sight and is sensitive to the instantaneous gas temperature, and two 3D effects, the gas pressure smoothing that depends on the whole thermal history of the IGM and the dark matter (DM) free streaming. The possibility of constraining the nature of DM by using the Lyman-\(\alpha\) forest has motivated a series of works which were able to constrain the models further, explore different particle physics dark matter candidates, and combine likelihoods with other experiments able to constrain the nature of dark matter with strong lensing or flux ratio anomalies [8; 9]. One of the main reasons to explore warm dark matter (WDM) models was to solve or ease putative problems of cold dark matter at small scales [10; 11]. However most of these tensions must be discussed also in the context of baryonic physics [12], with processes like galactic feedback playing a major role. Moreover, it appears that minimal extensions of the standard model of particle physics could also accommodate particles like sterile neutrinos or a scalar field [13; 14; 15; 16], which could suppress or erase power at small scales, effectively acting as WDM. For thermal WDM masses in the keV range, the power suppression happens at the small non-linear scales sampled by the Lyman-\(\alpha\) forest. In particular, QSO data sets with different resolution and signal-to-noise properties have been used in order to tighten the constraints. The low resolution SDSS and BOSS data sets [17; 5; 18], the medium resolution X-Shooter sample [19] and the high resolution and high signal-to-noise Keck/HIRES and UVES/VLT QSO spectra [20; 21; 22; 23; 19] have all played a major role in the advancement of the field. For example, while the low and medium resolution data are not particularly effective in sampling the scales of the cutoff fully, they nevertheless are sensitive to the thermal history and can return very tight constraints especially when combined with data at smaller scales. The goal of this paper is to give a comprehensive state-of-the-art analysis focusing on high resolution data [24]. In Section II we will describe the data set, while in Section III we will present the suite of hydrodynamical simulations used. Section IV will contain our new results which will be extensively discussed in terms of the thermal history of the IGM, the dependence on mass resolution, patchy reionization, instrumental effects (including modelling of the noise) and consistency with results in the literature. We will conclude in section V. ## II Data We apply our analysis to the measurements presented in [24]. Their 1D flux power spectrum is estimated using 15 high signal-to-noise spectra observed by VLT/UVES [25] and Keck/HIRES [26]. The measurements span the high redshift range of \(z=4.2-5.0\) in bins of \(\Delta z=0.4\). In each of the redshift bins the flux power spectrum is measured in 15 \(k\)-bins equidistantly spaced in \(\log_{10}k\) in the range of \(\log_{10}\left(k/\left[\mathrm{km}^{-1}\,\mathrm{s}\right]\right)=-2.2\) to \(\log_{10}\left(k/\left[\mathrm{km}^{-1}\,\mathrm{s}\right]\right)=-0.7\), with logarithmic spacing of \(\Delta\log_{10}\left(k/\left[\mathrm{km}^{-1}\,\mathrm{s}\right]\right)=0.1\). Unless specified otherwise we use the full extent of the data, resulting in 45 data points across three redshift bins. 
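For concreteness, the wavenumber and redshift grid just described can be written down directly; one consistent reading of the binning (an assumption of this sketch) treats the quoted range as the 16 bin edges enclosing the 15 bands.

```python
import numpy as np

# 16 edges from -2.2 to -0.7 in log10(k / [km^-1 s]), spaced by 0.1,
# giving 15 band-power bins per redshift bin (3 x 15 = 45 data points).
edges = np.round(np.arange(-2.2, -0.65, 0.1), 10)   # -2.2, -2.1, ..., -0.7
centers = 0.5 * (edges[:-1] + edges[1:])             # 15 bin centers
k_centers = 10.0 ** centers                          # in km^-1 s
z_bins = [4.2, 4.6, 5.0]                             # three bins, dz = 0.4
print(len(centers) * len(z_bins))                    # 45
```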
The spectrograph resolution in these observations is very high, with \(R\sim 50,000\) (FWHM of \(\sim 6\,\mathrm{km}\,\mathrm{s}^{-1}\) for HIRES and \(\sim 7\,\mathrm{km}\,\mathrm{s}^{-1}\) for UVES). As already pointed out in the study of [24], the effects of resolution uncertainty are very small, even for the highest wavenumber power spectrum bin measured. A conservative estimate of a 10% uncertainty on the resolution leads to only a 1% (5%) uncertainty on the 1D flux power spectrum at scales of \(k=0.1\,(0.2)\,\mathrm{km}^{-1}\,\mathrm{s}\). The power spectrum measurements of [24] were reported both with and without the instrumental resolution correction. In this analysis we use the measurements with the instrumental resolution corrected, and propagate this correction through the covariance matrix. The reported measurements are also corrected for the power spectrum due to metal contaminants.

The typical flux noise estimated in these measurements is white noise, with a power spectrum amplitude of \(0.1-0.2\,\mathrm{km}\,\mathrm{s}^{-1}\). This is comparable to the estimated level of the models at the highest wavenumbers. Characterizing and accounting for the noise levels is of key importance, and has been one of the factors restricting previous analyses to smaller wavenumbers. The \(k\) range of [24] covers the smallest scales measured with the 1D flux power spectrum, extending to \(k\sim 0.2\,\mathrm{km}^{-1}\,\mathrm{s}\), a factor of two higher wavenumber than in previous studies [27; 21; 22]. These studies have shown that the constraining power on WDM models from the Ly\(\alpha\) forest is dominated by high redshifts and the smallest scales, making this an ideal data set to exploit.

## III Simulations

The absorption features of the Ly\(\alpha\) forest contain a wealth of information regarding cosmology and the nature of dark matter, as well as the thermal state of the intergalactic gas. Due to the high sensitivity of the spectrographic instruments it provides a unique window into clustering at the smallest scales. Accessing that information, however, is a non-trivial task. The standard approaches of clustering analysis that invoke biasing schemes typically rely on perturbation theory [29] or build an approximate clustering scheme [30]. While very informative in a qualitative sense, these schemes cannot capture the complexity of the data, which is highly sensitive to non-linear structure evolution and gas physics, such as Doppler broadening and thermal pressure smoothing [31]. Such a task requires simulating the expected Ly\(\alpha\) forest in different thermal and cosmological models, spanning a wide, multi-dimensional parameter space, and comparing it to the data.

In this work we carry out the comparison within the framework of Bayesian inference, which describes, according to Bayes' theorem, the posterior probability \(p(\theta|D)\) of parameters \(\theta\) given observed data \(D\) as:

\[p(\theta|D)\propto\mathcal{L}(D|\theta)\times\pi(\theta)\,, \tag{1}\]

where \(\mathcal{L}(D|\theta)\) is the likelihood and \(\pi(\theta)\) is the prior on each parameter. In this work we expand upon the Bayesian inference set-up adopted in [7; 32] to evaluate the likelihood and prior at each parameter combination in the sampler. This is based on a Markov Chain Monte Carlo (MCMC) sampler, combined with the Metropolis-Hastings algorithm that dynamically learns the proposal matrix from the covariance, as introduced in [27].
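The inference loop just described amounts to a Gaussian likelihood explored by Metropolis-Hastings; a stripped-down sketch is given below, where `model_flux_power` stands in for the simulation-based emulator and the adaptive proposal update of [27] is omitted.

```python
import numpy as np

def log_likelihood(theta, data, inv_cov, model_flux_power):
    """Gaussian log-likelihood between the measured 1D flux power spectrum
    and the simulation-based prediction at parameters theta."""
    resid = data - model_flux_power(theta)
    return -0.5 * resid @ inv_cov @ resid

def metropolis_hastings(log_post, theta0, prop_cov, n_steps, seed=0):
    """Basic Metropolis-Hastings with a fixed Gaussian proposal. The full
    analysis additionally learns prop_cov from the chain covariance."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = rng.multivariate_normal(theta, prop_cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```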
The precision of the thermal parameter recovery with this simulation-based emulator was shown to be in good agreement with more advanced machine-learning augmented emulator models [32]. The priors \(\pi(\theta)\) we adopt in our analysis are described in Section IV. The likelihood is modelled as Gaussian, determined by the data, its covariance, and a theoretical prediction for the flux power spectrum. The latter is estimated using hydrodynamical numerical simulations. We use simulations from the Sherwood-Relics project [28]. These are a series of high-resolution cosmological hydrodynamical simulations that use a customized version of P-Gadget3 (see [33] for the original Gadget-2 reference). We use cosmological boxes of size 20 \(h^{-1}\) Mpc with \(2\times 1024^{3}\) dark matter and gas particles. The box size and resolution have been chosen to adequately resolve the small scale structure that contributes to the flux power spectrum of the Ly\(\alpha\) forest, while still retaining a cosmologically relevant volume [34; 35; 36; 37]. We further correct the numerical convergence with both box size and resolution with a series of additional simulations summarized in Table 1.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline
Name & \(L_{\rm box}\) & \(N_{\rm part}\) & \(z_{\rm rei}^{\rm end}\) & \(T_{0}(z=4.6)\) & \(u_{0}(z=4.6)\) & \(\sigma_{8}\) & \(n_{s}\) & \(m_{\rm WDM}^{-1}\) \\
 & \([h^{-1}\,\rm{cMpc}]\) & & & [K] & [\(\rm{eV}\,m_{\rm p}^{-1}\)] & & & [\(\rm{keV}^{-1}\)] \\
\hline
L20-ref & 20.0 & \(2\times 1024^{3}\) & 6.00 & 10066 & 7.7 & \([0.754-0.904]\) & \([0.921-1.001]\) & \([0,\frac{1}{4},\frac{1}{3},\frac{1}{2}]\) \\
L20-late & ” & ” & 5.37 & 10069 & 6.6 & ” & ” & ” \\
L20-early & ” & ” & 6.70 & 10050 & 9.6 & ” & ” & ” \\
L20-very early & ” & ” & 7.40 & 10003 & 11.4 & ” & ” & ” \\
L20-ref-cold & ” & ” & 5.98 & 6598 & 4.3 & ” & ” & ” \\
L20-late-cold & ” & ” & 5.35 & 6409 & 3.6 & ” & ” & ” \\
L20-early-cold & ” & ” & 6.69 & 6803 & 5.4 & ” & ” & ” \\
L20-very early-cold & ” & ” & 7.39 & 6806 & 6.4 & ” & ” & ” \\
L20-ref-hot & ” & ” & 6.01 & 13957 & 14.4 & ” & ” & ” \\
L20-late-hot & ” & ” & 5.38 & 13451 & 12.5 & ” & ” & ” \\
L20-early-hot & ” & ” & 6.71 & 14369 & 17.8 & ” & ” & ” \\
L20-very early-hot & ” & ” & 7.41 & 14624 & 21.1 & ” & ” & ” \\
\hline
B40-ref & 40.0 & \(2\times 2048^{3}\) & 6.00 & 10063 & 7.7 & 0.829 & 0.961 & 0 \\
\hline
R-set & [5.0,10.0,20.0] & \(2\times\)[\(1024^{3}\),\(768^{3}\),\(512^{3}\)] & 6.00 & 10066 & 7.7 & 0.829 & 0.961 & 0 \\
R10-ref & 10.0 & \(2\times\)[\(1024^{3}\),\(512^{3}\)] & 6.00 & 10066 & 7.7 & 0.829 & 0.961 & \([0,\frac{1}{4},\frac{1}{3},\frac{1}{2}]\) \\
R10-late & ” & ” & 5.37 & 10069 & 6.6 & ” & ” & ” \\
R10-early & ” & ” & 6.70 & 10050 & 9.6 & ” & ” & ” \\
R10-ref-cold & ” & ” & 5.98 & 6598 & 4.3 & ” & ” & ” \\
R10-late-cold & ” & ” & 5.35 & 6409 & 3.6 & ” & ” & ” \\
R10-ref-hot & ” & ” & 6.01 & 13957 & 14.4 & ” & ” & ” \\
\hline
\end{tabular}
\end{table}
Table 1: List of simulations used in this work (see also [28]). From left to right, the columns list the simulation name, the box size in \(h^{-1}\) cMpc, the number of particles, the redshift of the end of reionisation (defined as the redshift when the volume averaged neutral fraction drops to \(x_{\rm HI}\lesssim 10^{-3}\)), the gas temperature at the mean density, \(T_{0}\), the cumulative energy input per proton mass at the mean density, \(u_{0}\), for \(4.6\leq z\leq 13\) [cf. 24], and the cosmological model described by the \(\Lambda\)CDM parameters (\(\sigma_{8}\), \(n_{s}\)) and a WDM parameter given by the inverse of the WDM particle mass of a thermal relic (\(m_{\rm WDM}^{-1}\)). The upper section of the table lists the models in the first set of simulations that we use for our MCMC analysis (see text for details). The lower section of the table lists our second set of simulations, which includes mass resolution (R10) and box size (B40) corrections to the predicted flux power spectrum. The dark matter and gas particle masses are \(5.37\times 10^{5}\)\(h^{-1}\,M_{\odot}\) and \(9.97\times 10^{4}\)\(h^{-1}\,M_{\odot}\), respectively, for L20, B40 and a subset of R10 runs (\(2\times 512^{3}\)). The cosmology parameter ranges for \(\sigma_{8}\) include five runs \([0.754,0.804,0.829,0.854,0.904]\) and similarly five runs for \(n_{s}\), \([0.921,0.941,0.961,0.981,1.001]\). A value of 0 for \(m_{\rm WDM}^{-1}\) indicates a CDM run; the other WDM runs are for \(2\), \(3\) and \(4\) keV WDM particle masses.
In all models we use a simple, computationally efficient star-formation scheme, often called Quick_lya, where gas particles are converted into collisionless star particles if they reach overdensities \(\Delta=1+\delta>10^{3}\) and temperatures \(T<10^{5}\) K [38]. We assume a flat \(\Lambda\)CDM cosmology with \(\Omega_{\Lambda}=0.692\), \(\Omega_{m}=0.308\), \(\Omega_{b}=0.0482\), \(\sigma_{8}=0.829\), \(n_{s}=0.961\), \(h=0.678\), and a primordial helium mass abundance of \(Y_{p}=0.24\) [39]. The initial conditions for the CDM simulations are identical to those used in the earlier Sherwood simulation project [36]. We use the WDM transfer function approximation of [21]. A set of simulations is constructed using modifications to the spatially uniform UV background synthesis model introduced by [40]. These simulations are similar to models used in earlier works [41; 42; 43; 16], with the main improvements being the larger dynamic range of the simulations, the use of a non-equilibrium thermo-chemistry solver [44], and an improved treatment of the IGM opacity that consistently captures the transition between neutral and ionised IGM. In addition to running a model with the fiducial UV background, we also vary the photo-heating rates to achieve models with different gas temperatures and different redshifts for the end of reionization, following the approach described in [45; 28]. This approach results in 12 models with varying thermal histories (see Table 1). For _each_ of the thermal history models with fiducial \(\Lambda\)CDM cosmology, we also run models varying the WDM particle mass (\(m_{\rm WDM}=[2,3,4]\) keV), the amplitude of \(\Lambda\)CDM matter clustering (\(\sigma_{8}=[0.754,0.804,0.854,0.904]\)) and the primordial spectral index (\(n_{s}=[0.921,0.941,0.981,1.001]\)). This results in a total of \(12\times(3+2\times 4+1)=144\) simulations. In order to construct a sufficiently well sampled grid of models spanning the entire multi-dimensional parameter range, we post-process the 144 simulations (12 simulations for each cosmology) to obtain different parameter combinations. We follow the method of [46; 24] in order to interpolate in the temperature-density plane.
Briefly, we rotate and translate the line-of-sight particles in the temperature-density plane to obtain models with different temperatures at mean density, \(T_{0}\), and temperature-density power-law indices, \(\gamma\) (the values of \(T_{0}\) and \(\gamma\) are inferred from the line-of-sight gas properties; a power-law relation is fitted to points in the temperature-density plane in the range of gas overdensity \(0.1<\Delta_{g}<1.0\) and neutral fraction weighted gas temperature \(T<10^{5}\) K). This preserves the temperature-density cross-correlation coefficient, allowing for an inexpensive construction of models with different thermal parameters on a finely spaced grid. In post-processing we also vary the redshift evolution of the mean transmission \(\langle F\rangle\), by rescaling the optical depth of Ly\(\alpha\) absorption (\(\tau_{\rm Ly\alpha}\)) obtained from the simulations to match observed values of the effective optical depth \(\tau_{\rm eff}=-\ln\langle F\rangle\). Uncertainties in the background photo-ionization rate mean that such a rescaling is commonly used to match the simulations to observations [47; 35]. Note that this step is only a good approximation after reionization, as it implicitly assumes that the gas in the low density IGM is in photo-ionization equilibrium, such that \(\tau_{\rm Ly\alpha}\propto x_{\rm HI}\propto\Gamma_{\rm HI}^{-1}\). The redshift evolution that we adopt for \(\tau_{\rm eff}\) is:

\[\tau_{\rm eff}=1.56\times\left(\frac{1+z}{5.75}\right)^{4}, \tag{2}\]

taken from [24], and similar to the evolution reported in [41; 48]. Using the methods described above, we construct a \(15\times 10\times 10\) grid of parameter values on top of _each_ of the 144 simulations (upper section of Table 1). This grid of models consists of 10 values of \(T_{0}\) spanning the range from 5,000 to 15,000 K in steps of 1,000 K; 10 values of \(\gamma\) spanning the range from 0.9 to 1.8 in steps of 0.1; and 15 values of \(\tau_{\rm eff}\) in the range from 0.3 to 1.8 times the value in Eq. 2, in multiplicative steps of 0.1. This gives a total of \(15\times 10\times 10\times 12\times(1+3+2\times 4)=216{,}000\) models. Since we do not extrapolate outside of this grid of models, we have implicit priors on \(T_{0}\) between 5,000 and 15,000 K, on \(\gamma\) between 0.9 and 1.8, and on \(u_{0}\) between (4.03, 21.12), (3.65, 21.08) and (2.46, 18.73) \(\rm eV/m_{p}\) for redshifts 4.2, 4.6 and 5.0, respectively.
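The mean transmission rescaling described above amounts to finding a single multiplicative factor for the simulated optical depths at each redshift. A minimal sketch of this step, assuming the skewer optical depths are stored in a flat array and using Eq. (2) as the target; the root-finder bracket is an assumption:

```python
import numpy as np
from scipy.optimize import brentq

def tau_eff(z):
    """Target effective optical depth, Eq. (2)."""
    return 1.56 * ((1.0 + z) / 5.75) ** 4

def rescale_tau(tau, z):
    """Find A such that <exp(-A tau)> = exp(-tau_eff(z)),
    mimicking a uniform rescaling of the photo-ionization rate."""
    target = np.exp(-tau_eff(z))
    f = lambda A: np.exp(-A * tau).mean() - target
    A = brentq(f, 1e-4, 1e4)   # bracket assumed wide enough for a sign change
    return A * tau

# toy usage with synthetic optical depths
rng = np.random.default_rng(1)
tau = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
tau_scaled = rescale_tau(tau, z=5.0)
print(np.exp(-tau_scaled).mean(), np.exp(-tau_eff(5.0)))  # should agree
```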
### Flux power spectrum models

From the grid of models we extract 5,000 lines of sight in different orientations through the box. The flux field along each skewer is Fourier transformed, and the resulting power spectrum is averaged over all the lines of sight, resulting in the predicted 1D flux power spectrum for a given model. In order to compare the simulated models to the data we construct an emulator that interpolates the 1D flux power spectrum between the models, allowing us to explore the parameter space spanned by the simulated models. The emulator is based on linear interpolation [43]. Since the grid of models fills the parameter space in a uniform fashion, the interpolation error is small, as demonstrated on a subset of the models in [7].

Fig. 1 shows the 1D flux power spectra when varying the parameters that govern the three main scales of suppression of the flux power. In the left panel, increasing the temperature of the gas at mean density increases the suppression on small scales (high-\(k\)), while inducing a small increase in power at large scales (low-\(k\)). The latter is due to keeping \(\tau_{\rm eff}\) fixed, while the former can be understood in the context of thermal broadening of the lines: the transmission profile of the Lyman-\(\alpha\) scattering is determined by the random motion of the gas at a finite temperature. The higher the temperature, the larger the velocity dispersion of the thermal motion, leading to more extended profiles that erase small-scale structure. A related effect, shown in the central panel of Fig. 1, is the effect of pressure smoothing. As the gas is heated during reionization, it hydrodynamically responds to the resulting increase in its temperature and pressure by expanding [49; 28]. The more heat injected, the more the gas expands, erasing more small-scale structure. In our models we parametrise this effect with the cumulative heat injected per proton by a given redshift (\(u_{0}\)) [42]. The exact redshift range of the \(u_{0}\) parameters is the same as in [24]. The small-scale structure in the gas could further be affected by the free-streaming of non-standard dark matter models such as WDM. The lighter the mass of a thermal relic WDM particle, the longer the particles will free-stream, from when they decouple from the thermal bath until they become non-relativistic. The longer this time, the larger the scales affected, and the stronger the suppression in the small-scale power. This is shown in the right-hand side panel of Fig. 1, where the proxy for the free-streaming scale used is the inverse of the particle mass, \(m_{\rm WDM}^{-1}\).

Figure 1: The ratios of the 1D flux power spectra of the simulated models relative to a reference simulation run when varying one parameter at a time: \(T_{0}\) (left), \(u_{0}\) (center) and \(m_{\rm WDM}\) (right). In each panel all the other parameters are kept fixed. The scale-dependence of the flux power spectrum changes in response to changes in the input parameters. The left panel shows the effect of thermal broadening on the absorption features of the forest. The center and right panels show the emergence of a small-scale enhancement of the relative flux power spectrum in simulations with varying reionization history and WDM free-streaming. The cumulative heat injection values (center panel) correspond to reionization ending at \(z=5.25,6.0,6.75\) and \(7.5\) (top to bottom) for the ionizing UV background model of [40].

### Mass resolution and box size

Since our models are built from the results of hydrodynamical simulations it is important to understand whether the results of these simulations are numerically converged. Two main factors limit this convergence [35; 36; 37]: the size of the simulated box limits the number of large-scale modes and affects the convergence on large scales, while the mass or particle resolution of the simulation limits the smallest resolved scale. We have supplemented our simulation suite with additional calibration runs varying the size of the simulated box at fixed mass resolution. Our fiducial grid of simulations uses a box size of 20 \(h^{-1}\,\)Mpc. We have applied the splicing correction [50], using the 40 \(h^{-1}\,\)Mpc box with the same resolution as L20-ref, which results in a correction at the level of \(\leq 3\%\) on the 1D flux power spectrum in the low-\(k\) regime. We have further verified that at the scales of interest for the analysis of the [24] data, further corrections using 80 and 160 \(h^{-1}\,\)Mpc box sizes were negligible. This was not an unexpected result, and has been observed in several previous studies [24; 21; 43]. Of more importance for studies of the small-scale 1D flux power spectrum is the mass resolution of the simulations (\(R_{s}\)). The grid of simulations was run with the fiducial gas mass resolution of \(9.97\times 10^{4}\)\(h^{-1}\,M_{\odot}\), corresponding to \(2\times 1024^{3}\) baryon and dark matter particles. These models are converged at the 5-10% level at the smallest scales used in the analysis. We have complemented these models with additional simulations varying the number of simulated particles at different fixed box sizes.
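As a consistency check, the particle masses quoted in Table 1 follow directly from the box size, the particle number per species, and the baryon fraction of the adopted cosmology:

```python
# Verify the particle masses in Table 1 from the box size and cosmology.
RHO_CRIT = 2.775e11      # critical density in h^2 Msun / Mpc^3
Om, Ob, L, N = 0.308, 0.0482, 20.0, 1024   # L in Mpc/h, N^3 particles per species

m_gas = Ob * RHO_CRIT * L**3 / N**3            # ~9.97e4 Msun/h
m_dm  = (Om - Ob) * RHO_CRIT * L**3 / N**3     # ~5.37e5 Msun/h
print(f"gas: {m_gas:.3e} Msun/h, dm: {m_dm:.3e} Msun/h")
```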
Fig. 2 shows the 1D flux power spectrum decrements between different models. The poorer the mass resolution of the simulation, the larger the suppression of the small-scale flux power spectrum relative to a higher resolution simulation. The mass resolution correction is larger at higher redshifts and at smaller scales, in agreement with previous results in the literature (e.g. [36; 37]). The mass resolution correction (\(R_{s}\)) and the 1D flux power decrements shown in Fig. 2 are connected as \(R_{s}^{-1}=1+\Delta P/P\). The grid of our simulations at the resolution of (20,1024) was corrected for the residual mass resolution with the (10,1024) model (R-set; see Table 1), corresponding to a gas mass resolution of \(1.25\times 10^{4}\)\(h^{-1}\,M_{\odot}\). Additional corrections due to higher resolution simulations (e.g. (5,1024)) add less than a few percent to the total mass resolution correction.
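In practice the correction defined above amounts to multiplying the emulated flux power by the ratio of a higher-resolution to a fiducial-resolution calibration run evaluated on the same \(k\) grid. A schematic helper, with illustrative array names:

```python
import numpy as np

def resolution_correction(P_model, P_hires, P_fid):
    """Apply R_s with R_s^{-1} = 1 + Delta P / P, where the decrement is
    Delta P / P = (P_fid - P_hires) / P_hires, so that
    P_corrected = P_model * R_s = P_model * P_hires / P_fid.
    All arrays are assumed to be sampled on the same k grid."""
    return np.asarray(P_model) * np.asarray(P_hires) / np.asarray(P_fid)
```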
## IV Results

The new results on the free-streaming of warm dark matter are summarized in Fig. 3. The six panels show the 2D posteriors for the three redshift bins of the data [24]. The bottom row shows the constraints in the thermal parameter space of gas temperature and pressure smoothing (through the proxy of cumulative injected heat), whereas the top row shows the constraints spanning the parameter space of pressure smoothing and free-streaming. The fiducial analysis choice assumes priors on the thermal history in the \(u_{0}-T_{0}\) plane as an envelope around our fiducial grid of simulations (see below). We also assume Planck [51] priors on the CDM cosmology parameters \((\sigma_{8},n_{s})\). For the default analysis we use a mass resolution correction based on a fiducial thermal history with CDM cosmology (R-set; see Table 1). We also do not include any correction due to inhomogeneous reionization. These assumptions were chosen as our reference analysis in order to facilitate better comparison with previous analyses. The additional work presented in this paper, which includes the patchy reionization correction, the thermal dependence of the mass resolution correction (\(R_{s}(u_{0})\)) and observationally informed thermal priors (\(T_{0}\) prior), is also shown in Fig. 3 (orange contours) and discussed in more detail in the subsections below. Our measurements of the thermal state of the gas largely agree with independent measurements in the literature [15; 23; 24] within 1-2\(\sigma\). The data prefers a slightly colder temperature at mean density of \(T_{0}=8,000\) (7,500; 7,800) K at redshift \(z=4.2\) (4.6; 5.0) as a best-fit (see Table 2). At the same time the cumulative heat injected is constrained to be \(u_{0}=7.2\) (6.8; 5.2) eV/m\({}_{\rm p}\) between redshifts 4.2 and 12.0 (4.6 and 12.0; 6.0 and 13.0). The result is consistent with the analysis of [24]. However, models with slightly hotter temperatures consistent with [46; 52] and less pressure smoothing [53] are within the 2\(\sigma\) contours.

The measurements of the effective optical depth, \(\tau_{\rm eff}\), from the flux power spectrum are also consistent with direct observations of the transmitted flux [48; 54]. The derived measurement of the mean transmitted flux at \(z=5.0\) is \(\langle F_{\rm Ly\alpha}\rangle=0.1764^{+0.0177}_{-0.0171}\). This is consistent at \(1-2\sigma\) with the measurement of [54] of \(\langle F_{\rm Ly\alpha}\rangle=0.1581^{+0.0082}_{-0.0089}\), which used almost four times the number of sightlines compared to [24]. This analysis also varies the power-law index of the temperature-density relation (\(\gamma\)) as a free parameter in each redshift bin. The data, however, does not constrain this parameter well and its posterior is dominated by the prior. This result was also found in previous studies of high redshift Lyman-\(\alpha\) forest data (e.g. [24; 43]).

The panels at the bottom of Fig. 3 also show the \(u_{0}-T_{0}\) combinations of the hydrodynamical simulations as gray markers (L20; see Table 1). The thermal and reionization histories were chosen to bracket the observed flux distribution of high redshift quasar spectra [55], as well as the electron optical depth inferred from the Cosmic Microwave Background (CMB) as reported by Planck [51; 56]. Through the post-processing technique described in Sec. III the likelihood is able to sample the full span of the \(u_{0}-T_{0}\) parameter space on a (non-uniform) grid; however, in order to avoid unphysical parts of the \(u_{0}-T_{0}\) parameter space we consider a prior defined as an envelope around the simulations' results (indicated in Fig. 3 by the gray band).

Figure 2: The effect of mass resolution in the simulations, shown as a flux power spectrum decrement as a function of wavenumber for simulations of varying particle numbers. The mass resolution decrement of the flux power spectrum is largest for the lowest resolution simulations (blue-solid) and smallest for the highest resolution simulations (green-dashed). The decrement decreases with increasing mass resolution, indicating convergence. The fiducial grid of simulations (20,1024) used in this work is converged at the 5-10% level at \(k=0.2\) km\({}^{-1}\) s. The default mass resolution correction uses models with higher mass resolution (10,1024) that are converged at 2-5% at \(k=0.2\) km\({}^{-1}\) s. The shaded regions show the observational 1\(\sigma\) uncertainty on the flux power spectrum from [21] (pink) and [24] (violet). The vertical dashed lines indicate the \(k_{\rm max}\) of the different data sets.

### Degeneracy axes

The simulated models exhibit a tight correlation between the IGM temperature at a given time and the integrated injected heat up until that time. The positive correlation between the thermal parameters (dot-dashed lines in the bottom panels of Fig. 3) can be well described by \(u_{0}\propto T_{0}^{1.7}\), and the parameter anti-correlation (dashed lines in the bottom panels of Fig. 3) is well described by \(u_{0}\propto-T_{0}\). The anti-correlation also indicates the degeneracy axis we would expect from the measurement of the 1D flux power spectrum: at a given observed redshift the flux power suppression can be explained either by higher injected heat, and therefore a larger pressure smoothing scale, or by a higher temperature and therefore larger thermal broadening. The Lyman-\(\alpha\) forest provides constraints in the direction perpendicular to that degeneracy axis, along the direction of the positive correlation between \(u_{0}\) and \(T_{0}\).
Similarly to the degeneracy between the thermal broadening and pressure scales, we observe a correlation between the pressure smoothing and the free-streaming scales, as shown in the top panels of Fig. 3. The vertical black dashed line and gray shaded region indicate measurements of the cumulative injected heat, \(u_{0}\), in a CDM analysis of [24]. A negative correlation (dot-dashed lines in the top panels of Fig. 3) between the two smoothing scales can be understood as a consequence of both physical mechanisms reducing the small-scale power of the 3D density field. The pressure smoothing scale is typically described as an exponential suppression of the power, \(P_{\rm m}\exp\left(-k^{2}\lambda_{F}^{2}\right)\) [28; 57], at a typical filtering scale \(\lambda_{F}\). The larger the heat injected into the gas, the more the gas expands due to the pressure, resulting in a positive correlation between the filtering scale and the injected heat \(u_{0}\). Such a relation was explored in the simulations of [24], where it was found that \(\lambda_{F}\sim 20\ {\rm ckpc}\times\sqrt{1+2u_{0}/(1\,{\rm eV}/m_{p})}\). Equivalently, the warm dark matter transfer function can be approximated by \(T_{\rm WDM}\sim\left[1+(\alpha\,k)^{2\mu}\right]^{-5/\mu}\), with \(\mu=1.12\) and the typical free-streaming scale \(\alpha=70\ {\rm ckpc}\times(m_{\rm WDM}/(1\,{\rm keV}))^{-1.11}\), given by [21]. The total power suppression in the 3D field on small scales is a product of both the pressure smoothing and free-streaming transfer functions. Expanding the product in powers of \(k\), the lowest scale-dependent coefficient scales as \(\propto k^{2}\), with an amplitude of \(c_{2}^{2}=\lambda_{F}^{2}+10\alpha^{2}\), where we have approximated \(\mu\sim 1\). The anti-correlation between the pressure smoothing and the free-streaming that we observe in the data is driven by the sensitivity to the total shape of the suppression, thus \(c_{2}^{2}=\) constant. This can be interpreted as the smoothing being driven by either a higher pressure smoothing or a larger free-streaming length, and is shown as dot-dashed gray lines in Fig. 3.

Figure 3: The 2D posterior distributions of the best-fit analysis for the 1D flux power spectrum measurements of [24] using UVES/HIRES quasar spectra. The blue contours show the default analysis, and the orange contours show the analysis that captures our best knowledge of the thermal history (Sec. IV.4), inhomogeneous reionization (Sec. IV.7) and mass resolution corrections (Sec. IV.8). The three columns correspond to the three different redshifts of \(z=4.2,4.6\) and \(5.0\) (from left to right). The bottom row shows the contours in the thermal parameter space, with the violet band showing the envelope around the physically motivated simulations, shown as gray points (squares, circles and triangles). This band serves as a prior in the thermal parameter space in the default model. The coloured points correspond to measurements in the literature from the same data set: from [24] (in purple), [15] (in green) and [23] (in red). The top panels show the \(1\) and \(2\)\(\sigma\) contours in the parameter space of free-streaming and pressure smoothing (heat injection). The vertical dotted line and surrounding gray band indicate the best-fit measurements of [24]. The intersecting gray dashed and dot-dashed lines show typical degeneracy axes between the parameters. The cutoff at small \(u_{0}\) and small \(T_{0}\) values comes from the implicit prior imposed by the extent of the grid of models (see text for details).

Whereas the shape of the power spectrum suppression is poorly constrained by the current data, the data is able to constrain the scale where the suppression occurs, shown as dashed lines in Fig. 3. We estimate this positive correlation between the parameters (dashed lines in the top panels of Fig. 3) by matching the scale where the pressure smoothing and free-streaming transfer functions equal one half (e.g. \(T_{\rm WDM}(k_{1/2})=1/2\), or \(P_{\rm WDM}(k_{1/2})=1/4\)). The two scales are given by \(k_{1/2}^{g}=\sqrt{2\log 2}/\lambda_{F}\) and \(k_{1/2}^{\rm WDM}=(-1+2^{\mu/5})^{1/(2\mu)}/\alpha\). Equating the two leads to the relation \(m_{\rm WDM}^{-2.22}\propto 1+2u_{0}/(1\,{\rm eV}/m_{p})\), which defines the directional axis along which the Lyman-\(\alpha\) forest data gives the tightest constraints.
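The relations above can be evaluated numerically. The sketch below collects the fitting formulas quoted in this section (the filtering scale, the free-streaming scale, the leading \(k^{2}\) coefficient and the half-suppression matching); the printed values are only indicative, since the formulas are themselves approximate fits:

```python
import numpy as np

MU = 1.12

def lambda_F(u0):
    """Filtering scale in comoving kpc: lambda_F ~ 20 ckpc sqrt(1 + 2 u0),
    with u0 in eV per proton mass."""
    return 20.0 * np.sqrt(1.0 + 2.0 * u0)

def alpha_fs(m_wdm):
    """Free-streaming scale in comoving kpc for a thermal relic of m_wdm keV."""
    return 70.0 * m_wdm ** (-1.11)

def k_half_gas(u0):
    """Scale (ckpc^-1) where the pressure-smoothing transfer function equals 1/2."""
    return np.sqrt(2.0 * np.log(2.0)) / lambda_F(u0)

def k_half_wdm(m_wdm):
    """Scale (ckpc^-1) where T_WDM = [1 + (alpha k)^(2 mu)]^(-5/mu) equals 1/2."""
    return (2.0 ** (MU / 5.0) - 1.0) ** (1.0 / (2.0 * MU)) / alpha_fs(m_wdm)

def c2_sq(u0, m_wdm):
    """Leading k^2 coefficient of the combined suppression (mu ~ 1)."""
    return lambda_F(u0) ** 2 + 10.0 * alpha_fs(m_wdm) ** 2

# WDM mass whose half-suppression scale matches that of a given thermal history;
# solving k_half_gas = k_half_wdm traces the m_wdm^(-2.22) ~ 1 + 2 u0 axis.
C = (2.0 ** (MU / 5.0) - 1.0) ** (1.0 / (2.0 * MU))
for u0 in (5.0, 10.0, 20.0):
    m = (70.0 * k_half_gas(u0) / C) ** (1.0 / 1.11)
    print(f"u0 = {u0:5.1f} eV/mp -> matching m_wdm ~ {m:.2f} keV, "
          f"c2^2 = {c2_sq(u0, m):.0f} ckpc^2")
```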
### Best-fit model

Fig. 4 shows the 1D flux power spectrum corresponding to the best-fit model over-plotted on the data. The model fits the data reasonably well, with a total \(\chi^{2}\) of 40.7 for 34 degrees of freedom (see Table 2). Furthermore, the model is in excellent agreement with the data up to \(k\sim 0.1\ {\rm km}^{-1}\,{\rm s}\), and describes the position and shape of the flux power spectrum suppression on small scales. To illustrate this, we can take the model that is fit to all the data points and re-evaluate the \(\chi^{2}\) for the points with \(k<0.1\ {\rm km}^{-1}\,{\rm s}\). In this case the fit gives a \(\chi^{2}\) of 20.4 with 20 degrees of freedom. All three redshift bins show an increase in the measured power relative to the model at \(k>0.1\ {\rm km}^{-1}\,{\rm s}\). This indicates a possible shortcoming of the model on the smallest scales, or else a signal in the data that is not part of the model. The best-fit model excludes \(m_{\rm WDM}<5.7\ {\rm keV}\) (95% C.L.) and provides the tightest constraints on the thermal relic WDM particle mass to date (see Table 2). The model constraints exclude masses below 3.73 keV and 3.18 keV at \(3\sigma\) and \(5\sigma\), respectively, effectively ruling out the much discussed 3 keV WDM model (e.g. [58; 59]) at more than a \(5\sigma\) confidence level.

### Improvement on WDM constraints

In Fig. 5 we compare the results of this work to the existing constraints on the WDM mass from the literature. The main result of this work is a WDM mass bound that excludes WDM masses below \(5.7\ {\rm keV}\) at the \(2\sigma\) confidence level. It improves on the WDM mass constraints coming from the matter power spectrum suppression in Lyman-\(\alpha\) forest analyses [43; 18], as well as on non-Lyman-\(\alpha\) constraints such as the flux ratios of strongly lensed systems [60] and stellar streams in the Milky Way [61]. The new constraint is stronger than those from studies using low-\(z\) [62] or a combination of low-\(z\) and high-\(z\) [21; 63] Lyman-\(\alpha\) data, especially when comparing to similar choices of thermal history priors. The new data in fact produces a strong enough constraint that, even when relaxing the priors on the astrophysical parameters, the WDM mass bound remains stronger than or competitive with past studies that used strong priors on, e.g., the temperature evolution with redshift [43; 18]. In the regime of the high redshift Lyman-\(\alpha\) forest analysis, the current analysis tightens the constraint on the WDM particle mass compared to previous analyses. In comparison to older analyses using HIRES/MIKE data [43; 21], we see an improvement in the number of observed quasar spectra by almost a factor of 2 [24]. For a factor of 2 improvement in the number of sightlines, we would expect the uncertainty on the flux power spectrum to improve by \(\sim 1/\sqrt{2}\), at least in the limit that statistical uncertainty dominates the error budget. From Fig. 2 we see that this is indeed the case in the high-\(k\) regime of the data that is most sensitive to the free-streaming effect of WDM. In fact, in linear theory the sensitivity to the WDM mass scales as \(P_{\rm L,wdm}/P_{\rm L,CDM}\sim m_{\rm wdm}^{20}k^{-20}\) in the limit of \(k\gg 14\ (m_{\rm wdm}/1\ {\rm keV})\ {\rm Mpc}^{-1}\). However, the non-linear mapping between the linear density field and the non-linear flux field is complex. For the range of redshifts (\(4.2<z<5.0\)) and scales (\(0.01<k/[{\rm km}^{-1}\,{\rm s}]<0.2\)) considered, the flux power spectrum suppression in our simulations (L20-ref) approximately scales as

\[\frac{P_{\rm F,wdm}}{P_{\rm F,cdm}}\sim\begin{cases}1-0.1\left(\frac{1+z}{5}\right)^{4}\left(\frac{k}{0.1}\right)^{\frac{3}{2}}\left(\frac{m_{\rm wdm}}{4}\right)^{-1},&m_{\rm wdm}>3\ {\rm keV}\\ 1-0.1\left(\frac{1+z}{5}\right)^{3}\left(\frac{k}{0.1}\right)^{\frac{1}{2}}\left(\frac{m_{\rm wdm}}{4}\right)^{-\frac{3}{2}},&m_{\rm wdm}<3\ {\rm keV},\end{cases} \tag{3}\]

with the line-of-sight wavenumber \(k\) in units of \([{\rm km}^{-1}\ {\rm s}]\) and \(m_{\rm wdm}\) in units of \([{\rm keV}]\). For higher WDM masses, the flux power suppression due to WDM increases rapidly with redshift, but scales only as the inverse of the WDM particle mass. The scaling changes at around a WDM mass of 3 keV, where the scaling with mass becomes stronger, and the redshift dependence slightly weaker. The wavenumber dependence is not dominant in this range of scales. The scaling is only approximately valid at a fixed thermal history (L20-ref), and the transition between the two regimes, as well as the power-law dependencies, can vary across thermal histories. However, at the higher WDM mass limit the sensitivity to the particle mass increases with redshift relatively quickly in the redshift range of the data, improving the linear sensitivity to the mass. As a result, the constraining power on the WDM mass improves by more than a factor of \(\sim 1/\sqrt{2}\).
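For reference, Eq. (3) is straightforward to evaluate; a small helper, with \(k\) in \({\rm km}^{-1}\,{\rm s}\) and the mass in keV:

```python
def wdm_flux_suppression(k, z, m_wdm):
    """Approximate P_F,wdm / P_F,cdm from Eq. (3); k in s/km, m_wdm in keV.
    Valid for 4.2 < z < 5.0 and 0.01 < k < 0.2 s/km at the fiducial
    thermal history (L20-ref)."""
    if m_wdm > 3.0:
        return 1.0 - 0.1 * ((1 + z) / 5) ** 4 * (k / 0.1) ** 1.5 * (m_wdm / 4) ** (-1.0)
    return 1.0 - 0.1 * ((1 + z) / 5) ** 3 * (k / 0.1) ** 0.5 * (m_wdm / 4) ** (-1.5)

# e.g. a 4 keV relic at z = 5, k = 0.1 s/km suppresses the flux power by ~21%
print(1.0 - wdm_flux_suppression(0.1, 5.0, 4.0))
```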
### Thermal history

The signal at high-\(k\) in the 1D Lyman-\(\alpha\) forest flux power spectrum depends on the thermal parameters. The fiducial priors on the thermal history limit the possible combinations in the \(u_{0}-T_{0}\) plane to the volume of physically motivated simulation results [28]. A different approach would be to instead apply a prior based on independent measurements of the thermal history, for example using the measurements of \(T_{0}(z)\) from different data sets and different statistical methods. To achieve that we use improved and precise measurements of \(T_{0}(z)\) that span the redshift ranges \(z<3.8\) [46] and \(z>5.2\) [52]. In order to predict viable models in the redshift range covered by our data (\(4.2<z<5.0\)) we rescale and shift the photo-heating and photo-ionization rates of our fiducial thermal history model [40]. A similar methodology was employed in [64] in order to fit the flux power spectra measurements over a range of redshifts. Fig. 6 shows the two models from the literature, as well as our new fit calibrated directly against the \(T_{0}(z)\) measurements. The best-fit prefers slightly higher temperatures in the redshift range \(4.2<z<5.0\) than the measurements of [24] using the flux power spectrum data used in this work.

In order to construct informative priors on the \(T_{0}(z)\) parameters in our model, we use the best-fit values of the new thermal history as the central points of a Gaussian distribution at each redshift, with a fixed standard deviation of 1,000 K. While the standard deviation is somewhat arbitrarily chosen, it roughly matches the typical uncertainty found in more recent works [46; 52]. Even if a realistic uncertainty is slightly lower at lower redshift, and slightly higher at higher redshift, due to the decreasing numbers of quasar spectra available, this should not impact the main conclusion of this exercise, which is to highlight the effect of independent, observationally informed thermal priors.

The results of the analyses using different thermal prior choices are shown in Fig. 7. The fiducial model (blue contours) uses thermal priors in the form of an envelope around the simulations (gray band). Replacing these priors by simpler priors on \(T_{0}\) at each redshift results in slightly more elongated constraints in the \(u_{0}-T_{0}\) plane (orange contours), with the posterior expanding along the degeneracy direction. The mean and best-fit of the posterior, however, change only marginally compared to the standard analysis. Even though the posterior in the \(u_{0}\) direction expands slightly, the posterior on \(m_{\rm WDM}^{-1}\) remains roughly the same at the \(2\sigma\) level, resulting in a very similar constraint on the WDM mass of \(m_{\rm WDM}>5.85\) keV (\(2\sigma\)) compared to the default analysis choice of thermal priors. The main difference is that the thermal priors have now been informed by the measured \(T_{0}\) evolution with redshift from other observational studies, rather than by our suite of hydrodynamical simulations. The model with the \(T_{0}\) prior (see Table 2) excludes WDM masses below 3.75 keV and 3.21 keV at \(3\sigma\) and \(5\sigma\), respectively.

Fig. 7 also illustrates the effect of not imposing any thermal priors on the analysis. The resulting posterior (green contours) is shifted to lower IGM temperatures, and relatively higher values of the cumulative injected heat. This part of the thermal parameter space is unphysical, as we expect \(u_{0}\) and \(T_{0}\) to be correlated for physically reasonable IGM heating scenarios. Rather counter-intuitively, the constraints on the WDM mass become much stronger if we impose no thermal prior. This result can be understood by considering the degeneracy axis along which the posterior distributions move. Colder temperatures and an enhanced amount of pressure smoothing leave much less room for a WDM model to accommodate the amount of flux power spectrum suppression in the data. Therefore, models with strong WDM suppression are excluded more strongly.

Figure 4: The best-fit model compared to the data [24]. The three panels correspond to the three redshift bins, with the bottom panels showing the residuals of the data with respect to the model. The total \(\chi^{2}\) is 40.7 with 34 degrees of freedom. The data were compared to a simulation-based model that varies the thermal parameters and mean transmission independently in each redshift bin (\(\tau_{\rm eff},T_{0},\gamma,u_{0}\)) and the cosmology parameters (\(\sigma_{8}\), \(n_{\rm s}\), \(m_{\rm WDM}\)) (see text for details).

### Effect of small-scale peculiar velocity

The enhancement of the small-scale power in the models is associated with the enhancement of the small-scale structure in the peculiar velocities.
Fig. 8 (left) shows the relative effect of the peculiar velocity field on the flux power spectrum ratios. The ratio of the very early (\(z_{\rm rei}=7.5\)) to reference (\(z_{\rm rei}=6.0\)) reionization models (solid blue line) shows a relative enhancement of power compared to the same ratio computed without the effects of peculiar velocities in the calculation of the optical depth. This effect of setting \(v_{\rm pec}=0\) has also been seen in [23]. Fig. 8 further illustrates that the amplitude of this feature at \(k>0.1\) km\({}^{-1}\) s is sensitive to the amplitude of the peculiar velocity field, which changes with the amount of pressure smoothing. Furthermore, the feature in the flux power can be associated with an emerging feature in the 1D power spectra of the peculiar velocities (Fig. 8; right), with the strength of the feature depending on the cumulative injected heat. That the feature is stronger for later ending reionization, and weaker for earlier reionization, suggests that this behaviour is due to the hydrodynamic response of the gas to the photo-heating. In terms of the constraints on the WDM particle mass, this suggests that caution has to be exercised when pushing the models to \(k>0.1\) km\({}^{-1}\) s. While the peculiar velocity feature might be related to and correlated with the existing thermal parameters, it is not a priori obvious that this new scale in the model is properly covered within the range of existing simulations, and therefore properly marginalized over.
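The diagnostic shown in the right panel of Fig. 8 can be reproduced for any simulated skewer as the 1D power spectrum of the line-of-sight velocity gradient. A sketch with synthetic input is given below; the grid spacing and the power spectrum normalization convention are assumptions, not the exact estimator used for the figure:

```python
import numpy as np

def velocity_gradient_power(v_pec, dv):
    """1D power spectrum of eta = grad(v_pec) along a skewer.
    v_pec: line-of-sight peculiar velocity [km/s] on a uniform grid
    dv: pixel width in velocity units [km/s]."""
    eta = np.gradient(v_pec, dv)               # dimensionless velocity gradient
    n = len(eta)
    eta_k = np.fft.rfft(eta - eta.mean())
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dv)   # wavenumber in s/km
    power = np.abs(eta_k) ** 2 * dv / n        # one common 1D normalization
    return k, power

# toy skewer: smoothed random field, 2048 pixels of 2.5 km/s
rng = np.random.default_rng(2)
v = np.convolve(rng.normal(size=2048), np.ones(20) / 20, mode="same")
k, p = velocity_gradient_power(v, dv=2.5)
```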
### Thermal dependence of the mass resolution

The results of Sec. IV.5 show that small-scale peculiar velocity structure can modify the amount of small-scale flux power. However, this is also the regime where the mass resolution (\(R_{s}\)) of our simulations has the biggest effect. The mass resolution correction of the simulations should therefore depend on the thermal history in this high-\(k\) regime of the model, i.e. \(R_{s}=R_{s}(u_{0})\). This is perhaps not surprising: the mass resolution corrections essentially describe how much small-scale structure is missing in the (power spectrum) statistics as a result of not resolving the structure at very small scales, and its non-linear coupling to larger scales. If the field in configuration space is smoothed out due to physical effects, such as higher pressure smoothing or a larger free-streaming scale, the amount of missing small-scale flux power will also be smaller. To estimate this effect we repeated the resolution correction exercise for different models in our suite of simulations (the R10 runs; see Table 1). The results, shown in Fig. 9, confirm that the mass resolution correction exhibits a strong dependence on the thermal history at \(k>0.1\) km\({}^{-1}\) s. In particular, late reionization models with less pressure smoothing (or lower cumulative injected heat) can show up to 5% larger mass resolution corrections compared to the fiducial correction used in the analysis. This trend is more prominent at higher redshifts, and less important at \(z\leq 4.2\). On the other hand, models with larger pressure smoothing scales require consistently smaller resolution corrections at small scales, by up to 2%. Similarly to the effect of the thermal history, the smoothing of the density field due to free-streaming also decreases the required mass resolution correction. As shown in Fig. 9, a 2 keV WDM model on average requires a 5% lower mass resolution correction at \(k\sim 0.1\) km\({}^{-1}\) s at \(z=5.0\). This effect is reduced at lower redshifts.

These results imply that applying a mass resolution correction that depends on the thermal history widens the range of \(P_{\rm 1D}\) at \(k>0.1\) km\({}^{-1}\) s for models within a given section of the parameter space, ultimately resulting in higher sensitivity to the thermal parameters and lower sensitivity to free-streaming. On the other hand, a mass resolution correction that depends on \(m_{\rm WDM}\) increases the sensitivity of \(P_{\rm 1D}\) at \(k>0.1\) km\({}^{-1}\) s, which leads to stronger constraints on the WDM mass lower bound. Current bounds on the WDM mass lie in the range of \(\sim 4-6\) keV, however, and the effect of the mass resolution dependence on WDM free-streaming is severely reduced. Thus most of the effect of a mass resolution correction that depends on the thermal history and WDM mass comes from the thermal history dependence.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline
Name & \(m_{\rm WDM}\) [keV] \((2\sigma)\) & \(\tau_{\rm eff}(z=4.6)\) & \(T_{0}(z=4.6)\) [\(10^{4}\) K] & \(\gamma(z=4.6)\) & \(u_{0}(z=4.6)\) [eV/m\({}_{\rm p}\)] & \(A_{\rm noise}(z=4.6)\) & \(\chi^{2}/{\rm dof}\) \\
\hline
Default & \(>5.72\) & \(1.502^{+0.061}_{-0.061}\) & \(0.743^{+0.041}_{-0.075}\) & \(1.35^{+0.24}_{-0.19}\) & \(6.19^{+0.68}_{-0.68}\) & - & 40.7/34 \\
\hline
\(k_{\rm max}<0.1\) km\({}^{-1}\) s & \(>4.10\) & \(1.501^{+0.060}_{-0.074}\) & \(0.840^{+0.095}_{-0.340}\) & \(1.28^{+0.09}_{-0.28}\) & \(8.91^{+1.57}_{-5.26}\) & - & 10.2/20 \\
\(A_{\rm noise}\) & \(>3.91\) & \(1.458^{+0.053}_{-0.074}\) & \(0.966^{+0.165}_{-0.466}\) & \(1.23^{+0.06}_{-0.23}\) & \(5.93^{+0.38}_{-2.28}\) & \(1.12^{+0.49}_{-0.29}\) & 18.4/31 \\
\(T_{0}\) prior & \(>5.85\) & \(1.494^{+0.062}_{-0.077}\) & \(0.770^{+0.110}_{-0.120}\) & \(1.31^{+0.10}_{-0.31}\) & \(6.50^{+1.00}_{-1.60}\) & - & 47.6/34 \\
\(R_{s}(u_{0})\) mass resolution & \(>4.44\) & \(1.531^{+0.073}_{-0.064}\) & \(0.617^{+0.007}_{-0.118}\) & \(1.38^{+0.28}_{-0.13}\) & \(7.90^{+1.70}_{-2.30}\) & - & 30.7/34 \\
patchy reion. & \(>5.10\) & \(1.486^{+0.058}_{-0.068}\) & \(0.686^{+0.046}_{-0.046}\) & \(1.33^{+0.17}_{-0.76}\) & \(5.32^{+0.58}_{-0.52}\) & - & 41.0/34 \\
\hline
\(R_{s}(u_{0})+T_{0}\) prior & \(>4.24\) & \(1.473^{+0.056}_{-0.076}\) & \(0.83^{+0.11}_{-0.11}\) & \(1.28^{+0.09}_{-0.28}\) & \(5.53^{+0.73}_{-1.2}\) & - & 39.4/34 \\
patchy + \(R_{s}(u_{0})+T_{0}\) prior & \(>5.90\) & \(1.450^{+0.051}_{-0.070}\) & \(0.828^{+0.098}_{-0.098}\) & \(1.26^{+0.08}_{-0.26}\) & \(4.87^{+0.52}_{-0.71}\) & - & 40.8/34 \\
\hline
\end{tabular}
\end{table}
Table 2: List of the different models used in the analysis with their corresponding best-fit warm dark matter constraints. The table shows the name of the model and the resulting \(2\sigma\) lower bound on the WDM particle mass (\(m_{\rm WDM}\)), along with the best-fit values of the parameters at \(z=4.6\): the effective optical depth (\(\tau_{\rm eff}\)), the gas temperature at mean density (\(T_{0}\)), the slope of the temperature-density relation (\(\gamma\)) and the cumulative injected heat (\(u_{0}\)). For the model where extra instrumental noise in the data was modelled with a free parameter, the best-fit value is shown as well (\(A_{\rm noise}\)). The last column displays the best-fit \(\chi^{2}\) value and the degrees of freedom.

The results of applying the free-streaming and thermal history dependent mass resolution correction are shown in Fig. 10.
Compared to the analysis without any thermal priors shown in Fig. 7, the new mass resolution correction does not shift the posterior in the thermal parameters, suggesting that the effect of peculiar velocities is not completely explained by accounting for the thermal dependence of the mass resolution. On the other hand, the WDM constraints are weakened, roughly to the same level as when a sensible thermal prior is applied to the model, resulting in \(m_{\rm WDM}>4.44\) keV at \(2\sigma\). WDM masses below 3.19 keV and 2.78 keV are excluded at \(3\sigma\) and \(5\sigma\), respectively. Applying both the physical thermal prior (\(T_{0}\) prior) and the new mass resolution correction leads to \(m_{\rm WDM}>4.24\) keV at \(2\sigma\).

### Patchy reionization

The original Sherwood suite of simulations used in this study evolves the reionization homogeneously throughout the simulated volume. In reality the Universe reionizes in a more complex, inhomogeneous manner, where local ionized bubbles first appear around the sources of ionizing photons [28]. Observations of the Lyman-\(\alpha\) forest at higher redshifts can thus still be affected by relic fluctuations of the reionization persisting for a time after most of the Universe has been reionized. This topic has been the focus of several studies over the years [65; 66; 7; 32; 67; 68]. The main effect of the patchy nature of reionization on the 1D flux power spectrum of the Lyman-\(\alpha\) forest has been found to be an enhancement of power on large scales that traces the fluctuations in the temperature and ionized fraction of hydrogen gas. The conclusions of recent works [32; 7] suggest that the enhancement of power appears at larger scales (\(k<5\times 10^{-3}\) km\({}^{-1}\) s) than those observed in [24], i.e., the flux power spectrum measurements used in this study.

While different methods and simulations broadly agree on the large-scale effect of ionization fluctuations on the Lyman-\(\alpha\) forest, the same is not true for the effect of inhomogeneous reionization on small scales. Spatial fluctuations of the photo-ionization rate result in spatial fluctuations of the temperature-density relation. Regions of the IGM that are ionized later heat up later as well, while regions that reionized and heated up earlier had time to cool down, due primarily to the expansion of the Universe and inverse Compton scattering [65; 66; 69]. As a result, regions ionizing later would exhibit a stronger suppression of the flux power spectrum due to thermal Doppler broadening than IGM regions that ionized long ago. As pointed out by [28; 67], a competing effect to the thermal fluctuations is that the IGM regions that ionized earlier had more time to hydrodynamically respond to the injected heat, resulting in a larger pressure smoothing scale. More pressure smoothing also reduces the small-scale power. It has been suggested that these two effects might largely cancel each other out, leaving the small-scale power unchanged compared to homogeneous reionization models.

Figure 5: The \(2\sigma\) constraints on the thermal relic warm dark matter mass. The arrows indicate the exclusion limits on the WDM particle mass in keV. The bottom panel shows a compilation of constraints from the high redshift Lyman-\(\alpha\) forest 1D flux power spectrum. The black arrows at the very bottom indicate the results of this study, for three different analysis choices pertaining to the measured flux power at the highest wavenumbers (the default analysis; \((P,R_{s},T_{0})\), which represents the corrections due to patchy reionization, the thermal dependence of the mass resolution and an independent \(T_{0}\) prior; and the \(k_{\rm max}<0.1\) km\({}^{-1}\) s data scale cut analysis). The resulting lower bounds on the WDM particle mass are stronger than or comparable to those previously published in the literature, including studies that combined low- and high-\(z\) Lyman-\(\alpha\) forest data to increase the redshift lever arm (middle panels). The top panel shows a compilation of results from non-Lyman-\(\alpha\) studies.

Figure 6: The thermal evolution of \(T_{0}(z)\) for various models, compared to the independent measurements of [46] (low-\(z\)), [52] (high-\(z\)) and the temperature measurements of [24]. The models shown are that of [40] (in black; our reference simulation run), [64] (in blue) and a new fit to the [52; 46] data (in red). The new fit was obtained by rescaling the model of [40], and serves as a prior in the analysis shown in Fig. 7, with prior values shown as red circles and error bars at \(z=4.2,4.6,5.0\), with values of 9155.5, 8986.5 and 9286.5 K respectively. The uncertainty propagated in the prior is 1,000 K at each of the redshifts.

In this study we make use of the tabulated correction to the 1D Lyman-\(\alpha\) flux power spectrum from [7], which is based on the Sherwood-Relics simulation suite [28]. The effect of the patchy reionization correction in that study is a \(\sim 10\%\) suppression of the Lyman-\(\alpha\) flux power at \(k>0.1\) km\({}^{-1}\) s. The amplitude and shape of the suppression are largely independent of the thermal history models used in that study. Aside from the temperature fluctuations, [7] found that the dominant effect of the small-scale suppression was due to the effect of the peculiar velocity field.

The results of this analysis are presented in Fig. 11. Since patchy reionization suppresses the small-scale power, one could expect that lower values of WDM masses might be even further excluded by the data. However, the small-scale suppression induced by inhomogeneous reionization affects _all models equally_, including the models with different thermal and pressure broadening. The main effect on the Lyman-\(\alpha\) data analysis is to move the peak of the \(T_{0}\) posterior to lower values. The reason for this is as follows: the higher the \(T_{0}\) value, the stronger the suppression due to thermal broadening. The flux power spectrum models for low \(T_{0}\) values that on their own do not exhibit enough suppression to explain the data now achieve enough suppression through the patchy reionization correction. Therefore the first conclusion is that lower \(T_{0}\) models that were excluded before now fit the data, and the posterior of the \(T_{0}\) parameter expands towards lower \(T_{0}\) values. On the other hand, the models with high \(T_{0}\) values would now show too strong a suppression, and models that fit the data without the correction due to patchy reionization are now in tension with the data. The posterior of \(T_{0}\) therefore shrinks for high \(T_{0}\) values. Due to the prior in the \(u_{0}-T_{0}\) plane, shifting the \(T_{0}\) posterior to lower values also shifts the \(u_{0}\) posterior to lower values at each redshift, resulting in the data preferring less pressure smoothing.
Since both the thermal and pressure smoothing effects are reduced, the posterior of the WDM mass expands to compensate for the fact that somewhat lower WDM mass models are now no longer in tension with the data. While the specific result shown in Fig. 11 depends on the choice of thermal priors, the main conclusion would remain the same even in the light of less stringent priors. As the \(T_{0}\) posterior systematically shifts to lower values, the \(u_{0}-T_{0}\) anti-correlation direction is preserved, as it depends on the fact that both parameters increase the small-scale suppression. The resulting posterior in a scenario with wider thermal priors would therefore only extend further along the \(u_{0}-T_{0}\) anti-correlation direction, still resulting in reduced sensitivity to \(m_{\rm WDM}\). From Fig. 11 we also observe that a \(\sim 10\%\) suppression of power in all the models results in only a \(\sim 0.5\sigma\) shift of the posterior in the \(u_{0}-T_{0}\) plane, along the positive degeneracy axis (the shift between the blue and orange contours). The constraints on the WDM mass are thus slightly weaker, with \(m_{\rm WDM}>5.10\) keV at \(2\sigma\). However, as was highlighted in [7], the 10% small-scale suppression is mainly driven by the peculiar velocity field differences between the inhomogeneous and homogeneous reionization models. As we have shown in the previous sections, the exact nature of the peculiar velocity structure on small scales has implications for the WDM mass inference, and can affect both the mass resolution correction of the simulations as well as the parametrisation of the thermal history on the smallest scales probed by the Lyman-\(\alpha\) forest (\(\sim 50-100\) ckpc/h). While the nature of the peculiar velocity field structure requires further study, it is reassuring that the effect on the WDM constraints is small (\(\sim 10\%\)).

Combining the corrections due to inhomogeneous reionization and the thermal history dependence of the mass resolution (\(R_{s}(u_{0})\)) with the thermal priors coming from independent \(T_{0}(z)\) observations (\(T_{0}\) prior), we obtain a combined constraint on the WDM particle mass of \(>5.9\) keV (95% C.L.). Even though individually both the patchy reionization and the \(R_{s}(u_{0})\) corrections reduce the WDM constraining power, together with the \(T_{0}\) prior the posteriors are pushed towards higher \(T_{0}\) values and a lower pressure smoothing scale (or late reionization), which leaves little room for additional suppression due to WDM free-streaming.

Figure 7: Effect of thermal priors on the posterior. The two panels show 2D posterior distributions for redshift \(z=4.2\) in the plane of temperature and heat injection (left) and warm dark matter mass and heat injection (right). In the thermal parameter space (left), the default analysis (blue contours) uses thermal priors that envelop the simulations (the envelope is shown as a violet shaded area). A similar result can be obtained by instead imposing a \(T_{0}(z)\) prior (orange contours) using independent temperature measurements [46; 52]. The warm dark matter particle mass constraints get slightly stronger if a temperature prior is used instead of the envelope prior in the \(u_{0}-T_{0}\) plane. As a reference we also show an analysis without imposing any thermal priors (green contours). The vertical shaded band in the left panel indicates the measurement of \(u_{0}\) from [24].
While these constraints are the strongest presented in this paper, they rely on our first attempt at both the patchy reionization and \(R_{s}(u_{0})\) corrections. Given their impact on the WDM particle mass, these results provide additional incentive to improve the modelling of the small-scale thermal history in inhomogeneous reionization models.

Figure 8: Effect of peculiar velocities on the small scale (\(k>0.1\)\({\rm km}^{-1}\,{\rm s}\)) suppression of power. _Left:_ The ratio of the 1D flux power spectra of the very early (\(z_{\rm rei,end}=7.5\)) and reference (\(z_{\rm rei,end}=6.0\)) reionization models (solid blue). The dashed lines show the effect of replacing the peculiar velocity fields of both simulations simultaneously. The dashed blue line shows the effect where no peculiar velocities are included. The coloured lines show the effect of using peculiar velocity fields corresponding to different thermal histories, in particular different heat injection values. All models show a relative increase of the small-scale power ratio compared to the model with no peculiar velocities. The strength of this relative increase correlates with the heat injected during reionization, with \(v_{\rm pec}\) coming from a late reionization run (low heat injection; red dashed line) giving the strongest signal. _Right:_ The relative increase in the small-scale structure of the 1D flux power spectrum is related to small-scale structure in the peculiar velocity field. A feature is present in the power spectrum of the peculiar velocity gradient (\(\eta=\nabla v_{\rm pec}\)), where the peak shifts from \(k\sim 0.15\)\({\rm km}^{-1}\,{\rm s}\) for early reionization models (with higher heat injection) to \(k\sim 0.30\)\({\rm km}^{-1}\,{\rm s}\) for late reionization models (with lower heat injection). Models with cumulative injected heat of \(u_{0}=8.14,11.6,7.11,21.1\)\({\rm eV}/{\rm m}_{\rm p}\) correspond to the L20-ref, L20-very early, L20-late, L20-very early-hot models respectively. Similarly, the models from [6] with \(u_{0}=6.04\)\({\rm eV}/{\rm m}_{\rm p}\) correspond to their late reionization model (\(z_{\rm rei,end}=5.3\)).

### Instrumental effects

Several instrumental and observational effects can potentially systematically alter the small-scale flux power spectrum: mis-estimation of the observed flux noise, contamination due to metal lines, or the instrument resolution. Of the three, the instrument resolution has been one of the more studied effects, as it has a large impact on large Lyman-\(\alpha\) surveys that observe spectra at lower spectral resolution [70; 71; 63; 72]. For a typical line-spread function shape, the correction of the instrument resolution on the flux power spectrum is well described by a Gaussian kernel, \(P_{F}\to P_{F}/W_{k}^{2}=P_{F}\exp\left(k^{2}\sigma_{R}^{2}\right)\), with the Gaussian width of the resolution \(\sigma_{R}={\rm FWHM}_{R}/(2\sqrt{2\ln 2})\) given as a function of the FWHM resolution element (\({\rm FWHM}_{R}=c/R\)) or resolving power (\(R\)). For scales of \(k\gtrsim\sigma_{R}^{-1}\) the correction due to resolution becomes of order unity, dominating the measured signal. While of significant concern for lower resolution instruments such as X-Shooter (\(R\sim 8,000\)), the resolving power of Keck/HIRES and VLT/UVES is high enough (\(R\sim 50,000-80,000\)) to not play a major role in flux power spectrum measurements for scale cuts of \(k<0.1\) km\({}^{-1}\) s [21; 24; 73].
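The sensitivity of the corrected flux power to a resolution error follows directly from the Gaussian window above: to first order, \(\delta\ln P_{F}=2k^{2}\sigma_{R}^{2}\,(\delta\sigma_{R}/\sigma_{R})\). A quick check reproduces the 1% and 5% figures quoted in Section II for a 10% resolution uncertainty:

```python
import numpy as np

C_KMS = 299792.458   # speed of light in km/s

def sigma_R(R):
    """Gaussian width of the resolution kernel for resolving power R."""
    fwhm = C_KMS / R                      # km/s
    return fwhm / (2 * np.sqrt(2 * np.log(2)))

def p1d_error_from_resolution(k, R, dR_over_R=0.1):
    """Fractional error on the resolution-corrected P1D from a fractional
    error on the resolution: d ln P = 2 k^2 sigma_R^2 (d sigma / sigma)."""
    return 2 * k**2 * sigma_R(R)**2 * dR_over_R

for k in (0.1, 0.2):  # s/km
    print(k, p1d_error_from_resolution(k, R=50_000))   # ~1% and ~5%
```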
Indeed assuming the fiducial value of the resolution \({\rm FWHM}_{R}=6\) km\(/\)s (\(R\sim 50,000\)) of the data [24], this translates into \(\sigma_{R}=2.55\) km\(/\)s. In order to improve the fit to the data at small scales, the resolution width would have to be overestimated by 30-40%. Typically the resolution is estimated to \(\sim 10\%\), and a factor three to four seems unlikely to be an explanation for excess small scale power. Similarly, the contribution from contaminating metal absorption in the Lyman-\(\alpha\) forest has been studied in both low-resolution [74; 63] and high resolution data [70; 24; 72]. The contamination can be split into two main groups: (a) metals that have a rest-frame wavelength transition close to the Lyman-\(\alpha\) line (e.g. SiIII) [63; 50]; and (b) metals situated at a lower redshift and associated with either IGM or circumgalactic medium (CGM) contributions [50; 75]. The first group (a), imprints an oscillatory feature on the flux power spectrum. The frequency of this feature increases with scale, and is typically averaged over many periods in measurements of the high-\(k\) flux power spectra, leaving distinct features observable only at low k, \(k<0.01\) km\({}^{-1}\) s. The second group (b) is important at all redshifts and scales, and due to large differences in redshift can be subtracted statistically by measuring the flux power spectrum on the red side of the Lyman-\(\alpha\) emission line. The metal flux power spectra are typically dominated by CIV and SiIV doublets [76], and are smaller than the Lyman-\(\alpha\) flux power spectrum by one or two orders of magnitude. The amplitude of the metal power spectrum would need to be larger by a factor of 5-10, in order to have an impact on Lyman-\(\alpha\) flux power spectrum parameter estimation. While some studies suggest that the redside metal power spectrum captures only about half of the contaminated metal content in the Lyman-\(\alpha\) forest [77], it is difficult to argue on observational grounds that the small-scale enhancement of the small-scale power spectrum due to metals could significantly affect our analysis. The flux noise estimation has received somewhat less attention as a source of systematic uncertainty in high resolution and high signal-to-noise quasar spectra. It plays a crucial role in the low signal-to-noise spectra of large surveys (e.g. [63]). The flux power spectrum of the noise is 2-3 orders of magnitude lower than the Lyman-\(\alpha\) forest flux power spectrum signal in medium (\(S/N>20\)) and high (\(S/N>40\)) quality data (e.g. [70]). The signal decays exponentially towards higher Figure 9: The effect of mass resolution in the simulations, shown as a flux power spectrum decrement as a function of wavenumber for different models with varying the thermal history and WDM particle mass. The more heat is injected into the IGM, the more the gas is smoothed due to pressure effects. Larger pressure smoothing results in less structure in the flux power spectrum at small scales, and depends less on the mass resolution. The same happens if the matter density is smoother due to the presence of free-streaming, which results in a smaller required mass resolution correction. Figure 11: Effect of including a correction due to inhomogenous reionization. The two panels show 2D posterior distributions for redshift \(z=4.2\) in the plane of temperature and heat injection (left) and warm dark matter mass and heat injection (right). 
The analysis of [24] estimated the noise power on a per-quasar-sightline basis in 20 \(h^{-1}\,\)Mpc sections. This was achieved by measuring the raw (total) flux power spectrum in each section of the Lyman-\(\alpha\) forest and estimating its asymptote level at high \(k\). The method relies on the assumption that the noise power is white - an assumption that is largely validated in other studies (e.g. [62, 63, 70]) - and that it dominates at high wavenumbers. The method contends with several challenges, from a noisy estimation of the measured noise power spectrum in individual 20 cMpc/h sections, to the fact that the asymptote levels at high \(k\) will also include the metal contamination, as well as the very signal that one wishes to measure. A careful analysis of uncertainty propagation is therefore warranted, especially for a signal dominated by the highest wavenumbers, as is the case in this WDM study; a minimal sketch of the per-section estimate is given below.
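The following Python fragment sketches the white-noise-floor estimate just described. The wavenumber grid and the choice of averaging the top fraction of \(k\)-modes are our illustrative assumptions, not the exact procedure of [24]:

```python
import numpy as np

def estimate_noise_floor(k, p_raw, frac=0.1):
    """Estimate a white-noise floor from the high-k asymptote of the raw power.

    k     : wavenumbers of one 20 Mpc/h section (s/km), in ascending order
    p_raw : raw (signal + noise) flux power spectrum of that section
    frac  : fraction of the highest-k modes used to estimate the asymptote
    """
    n = max(1, int(frac * len(k)))
    return np.mean(p_raw[-n:])   # white noise -> flat asymptote at high k

# Per-section estimates can then be collected into the P_noise distribution
# (cf. Fig. 12) and averaged within each redshift bin, e.g.:
# p_noise_z = np.mean([estimate_noise_floor(k, p) for (k, p) in sections_z])
```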
Fig. 12 shows the probability distribution of the noise power estimates from the individual 20 cMpc/h sightline sections, in each of the three redshift bins. The vertical black lines indicate the effective average \(P_{\rm noise}\) assumed in the analysis of [24]. This is simply the result of averaging the difference of raw and noise power per section over all the sightlines in a given redshift bin. As the average was not weighted by the signal-to-noise, the estimated average \(P_{\rm noise}\) is simply the mean of the distribution. One immediate conclusion of Fig. 12 is that the distribution of the noise power is not Gaussian around the mean, with the bulk of the distribution typically peaking at lower than average \(P_{\rm noise}\) values. The distributions at each redshift are also relatively broad. The means of the distributions, \(\langle P_{\rm noise}\rangle\), are 0.08, 0.1 and 0.12 for \(z=4.2\), 4.6 and 5.0 respectively. This corresponds to roughly 5% of the total power at the highest wavenumber in the data. We approximate the \(P_{\rm noise}\) distributions with a log-normal model over the min/max range of the measured values (solid black lines in each of the redshift bins in Fig. 12). The noise power spectrum distribution in Fig. 12 is dominated primarily by the distribution of signal-to-noise in the data, as well as by the mean transmission variations among the \(20h^{-1}\,\)Mpc segments of the Lyman-\(\alpha\) forest. This has been verified in mock data with the pathlength and redshift ranges of the observed quasars reported in [24]. The methodology of estimating the noise power asymptote within each \(20h^{-1}\,\)Mpc segment is noisy; it is therefore unlikely that the uncertainty on the noise power estimation exceeds the width of the distributions in Fig. 12. As such we use the distribution of the noise power as a conservative prior. In order to assess the potential impact of noise mis-estimation in the data, we add a constant term \(A_{\rm noise}(z)\) to the model of the 1D flux power spectrum. This term is scale independent, but is modelled separately for each redshift bin. Since the means of the noise flux power spectra distributions were already subtracted from the data, this constant term measures the deviation of the noise flux power from this mean value. In the data analysis step, the noise is subtracted from the raw power before the resolution correction of the instrument is deconvolved. The theoretical 1D flux power spectrum model is modified as follows:

\[P_{F,1D}^{\rm tot}(k,z)=P_{F,1D}^{\rm Ly\alpha}(k,z)+A_{\rm noise}(z)\,\frac{\langle P_{\rm noise}\rangle(z)}{W^{2}(k)}, \tag{4}\]

where \(P_{F,1D}^{\rm Ly\alpha}\) is the Lyman-\(\alpha\) flux power spectrum as given by the emulator, \(\langle P_{\rm noise}\rangle(z)\) are the means of the \(P_{\rm noise}\) distribution in each of the redshift bins, and \(W^{2}(k)\) is the instrumental correction due to resolution and pixel size (following [24] we use a pixel size of 2.5 km/s and a FWHM resolution of 6.0 km/s, with top-hat and Gaussian kernels for the two corrections, respectively). Fig. 13 shows the results of the analysis where three \(A_{\rm noise}\) parameters were added to the theoretical model (one for each redshift bin), with the parameters' priors given by the approximate log-normal model of the \(P_{\rm noise}\) distribution. The resulting WDM mass constraint is slightly weakened, and the lower WDM mass bound is \(m_{\rm WDM}>3.91\) keV. The thermal constraints are significantly degraded along the \(u_{0}-T_{0}\) degeneracy axis. This is because at every individual redshift the noise parameter \(A_{\rm noise}\) strongly correlates (anti-correlates) with the IGM temperature (cumulative heat injection), whereas the correlation with the WDM mass parameter is weaker. The marginalized posterior means (and best-fit values) of \(A_{\rm noise}\) are \(0.74^{+0.49}_{-0.49}\) (0.24), \(1.12^{+0.49}_{-0.29}\) (1.73) and \(0.87^{+0.56}_{-0.15}\) (1.38). The data prefer values of \(A_{\rm noise}>0\), implying that the noise was underestimated. The best-fit values also show a slight increase with redshift, suggesting that the effect was larger for higher-redshift quasar spectra.
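For concreteness, Eq. (4) can be sketched as follows. The text specifies top-hat (pixel) and Gaussian (resolution) kernels; the exact functional form of the top-hat window below, and the function names, are our assumptions:

```python
import numpy as np

PIX_KMS = 2.5                                    # pixel size, km/s
SIGMA_R = 6.0 / (2 * np.sqrt(2 * np.log(2)))     # Gaussian width from FWHM = 6 km/s

def window2(k):
    """Squared window W^2(k): Gaussian resolution kernel times top-hat pixel kernel."""
    gauss = np.exp(-k**2 * SIGMA_R**2)
    # np.sinc(x) = sin(pi x)/(pi x), so this equals [sin(k dv/2)/(k dv/2)]^2
    tophat = np.sinc(k * PIX_KMS / (2.0 * np.pi))**2
    return gauss * tophat

def p_total(k, p_lya, a_noise, p_noise_mean):
    """Eq. (4): emulator prediction plus the rescaled residual noise term."""
    return p_lya + a_noise * p_noise_mean / window2(k)
```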
The typical values of \(A_{\rm noise}\) are of the order of unity, suggesting that the noise subtraction performed by [24] may be incomplete. The sensitivity of the thermal parameters to this relatively small noise contribution is quite large, possibly implying that measurements of the IGM temperature and reionization are very sensitive to the noise subtraction in the data. The WDM mass constraints are also sensitive to this effect, although somewhat less so than the thermal parameters. The sampling of the \(P_{\rm noise}\) distribution is relatively sparse, measured in 20 \(h^{-1}\,\)Mpc sections in only 15 quasar sightlines in each redshift bin. This marks a significant improvement on previous measurements, but is nonetheless sensitive to sample variance. To understand the sensitivity of the conclusions of this analysis step, we modify the prior choice to a Gaussian distribution, with the mean and standard deviation estimated from the average and variance of the samples in each redshift bin. Since the posteriors of \(A_{\rm noise}\) for the two highest redshifts are dominated by the upper limit of the prior range, we further allow this Gaussian prior to have no min/max limits other than the physical requirement that the noise is larger than or equal to zero, \(P_{\rm noise}\geq 0\). This allows for a tail of the \(A_{\rm noise}\) distribution to arbitrarily large values. The means of the posterior distributions of the \(A_{\rm noise}\) parameters, however, do not move significantly; the main difference is a tail of the posteriors towards higher \(A_{\rm noise}\) values. Note that the WDM constraint changes by less than 1%. If the signal at the high-\(k\) end of the [24] data is indeed due to under-subtracted noise power, then the situation should improve with better and more data. If the flux uncertainties are dominated by the read-out noise component, then increasing the signal-to-noise (\(S/N\)) ratio of individual quasar sightlines quickly reduces the level of noise power (\(P_{\rm noise}\propto(S/N)^{-2}\)) [78]. Future studies should thus suffer less from the impact of instrumental effects on the flux power spectrum measurements, allowing for exploration of the data to higher \(k_{\rm max}\).

### Small-scale data cuts

In order to facilitate a more direct comparison between the new analysis using the [24] data and previous analyses using high-redshift HIRES/MIKE data [43; 21], a consistency check can be performed by limiting the new analysis to the same scale cuts (\(k<0.1\) km\({}^{-1}\) s). We compare such an analysis to [43; 21], which used \(k_{\rm max}=0.088\) km\({}^{-1}\) s and a similar redshift range.

Figure 12: The probability distribution of the measured noise power spectra from [24] in each of the redshift bins. The measured distribution is reasonably well approximated by a log-normal distribution at each redshift, shown as a solid black line. For comparison we also show a normal distribution with mean and variance computed from the first two moments of the measured distribution. The distributions are fairly broad; however, the inclusion or removal of tails beyond the range of measured \(P_{\rm noise}\) has a negligible effect.

Figure 13: Effect of marginalizing over the noise uncertainty distribution. The two panels show 2D posterior distributions for redshift \(z=4.2\) in the plane of temperature and heat injection (left) and warm dark matter mass and heat injection (right). As the noise affects the amount of small-scale power, it effectively removes the information from those scales, which leads to poorer constraints on the thermal parameters as well as on the warm dark matter mass. The noise distribution is marginalized over the measured distribution from [24]. However, the results remain largely unchanged if the shape of the distribution is changed to a normal distribution with a standard deviation of 10% of the measured power. This can also be included at the level of the covariance matrix.
The previous analysis extended to \(z=5.4\); however, the flux power spectrum uncertainty at this highest redshift was considerably larger, and most of the constraining power came from the \(z=4.2,4.6,5.0\) redshift bins, which are also the ones used in this study. Furthermore, we limit the comparison to the thermal history priors where \(T_{0}\) is varied independently in each redshift bin. In [43] the resulting lower bound on the WDM mass was \(\sim 2.1\) keV (\(2\sigma\)) (MIKE/HIRES Irsic+17 + wide thermal prior (Fig. 5); [43]). A similar test was performed in [23], where the reported WDM mass bound sits at \(3.6\,\,\mathrm{keV}\). The same scale cuts were used, using the quasar spectra data of [24], but somewhat different thermal history priors were applied. Fig. 14 shows the result of the small-scale data cuts in this analysis. Using the same scale cuts and treatment of the thermal history with the new data improves the constraint to \(m_{\mathrm{WDM}}>4.09\,\,\mathrm{keV}\) (\(2\sigma\)) (\(k_{\mathrm{max}}<0.1\,\,\mathrm{km}^{-1}\) s - this work (Fig. 5)). The right-hand panel of Fig. 14 illustrates that imposing conservative scale cuts reduces the sensitivity to \(m_{\mathrm{WDM}}^{-1}\) and to the pressure smoothing scale as probed by the injected heat \(u_{0}\). This reduced sensitivity to the pressure smoothing can be understood in the thermal parameter space (left panel of Fig. 14) as an expansion of the posterior along the \(u_{0}-T_{0}\) degeneracy axis. The posterior in this parameter space also shifts by \(\sim 0.2\sigma\) along the positive \(u_{0}-T_{0}\) relation that exists in hydrodynamical simulations. The shift, however, is small, and can at least in part be attributed to reaching the corner of the priors at low \(u_{0}\) and low \(T_{0}\) values. In the case of conservative scale cuts (\(k_{\mathrm{max}}<0.1\,\,\mathrm{km}^{-1}\,\mathrm{s}\)) the best fit improves over the fit to all the data, with \(\chi^{2}/\mathrm{d.o.f.}=10.2/20\) (see Table 2). The fit prefers slightly warmer temperatures of the IGM (\(T_{0}(z=4.6)\sim 8,400\) K) and a slightly higher value of the cumulative injected heat. The posterior of the \(u_{0}\) parameter is very wide, however, suggesting that with conservative scale cuts the data are no longer sensitive to this parameter. This has been observed in previous analyses using older data sets that did not extend beyond \(k\sim 0.1\,\,\mathrm{km}^{-1}\,\mathrm{s}\).

## V Discussion and Conclusions

This study presents new constraints on the free-streaming of WDM using a simulation-based likelihood and Bayesian analysis of the VLT/UVES and Keck/HIRES Lyman-\(\alpha\) forest flux power spectrum measurements of [24]. The new constraints of our fiducial analysis on the mass of a thermal relic WDM particle, \(m_{\mathrm{WDM}}>5.7\,\,\mathrm{keV}\), are the strongest to date.
For the fixed shape of the WDM transfer function used in this study, the bound on the WDM particle mass translates into a wavenumber scale below which the matter power spectrum cannot drop by more than 5%, \(k_{0.05}=14.35\,h\,\mathrm{Mpc}^{-1}\). Compared to the previous high-redshift Lyman-\(\alpha\) forest data from HIRES/MIKE [21], the new data set comprises a larger number of quasar spectra in the range \(4.2<z<5.0\), and probes small scales up to a wavenumber of \(k_{\mathrm{max}}=0.19\,\,\mathrm{km}^{-1}\,\,\mathrm{s}\) - almost a factor of two improvement. Limiting the analysis to the same \(k_{\mathrm{max}}\) cuts and thermal state priors as in previous HIRES/MIKE analyses, we find the constraint to be \(m_{\mathrm{WDM}}>4.1\,\,\mathrm{keV}\) (this work), compared to \(m_{\mathrm{WDM}}>2.0\,\,\mathrm{keV}\) (e.g. [43]). This factor of two improvement on the bound on the WDM particle mass is consistent with the expected improvement of the statistical power of the high-redshift Lyman-\(\alpha\) forest data, with the WDM mass sensitivity at 0.1 km\({}^{-1}\) s dominated by statistical uncertainty.

Figure 14: Effect of varying \(k_{\mathrm{max}}\) of the analysis. The two panels show 2D posterior distributions for redshift \(z=4.2\) in the plane of temperature and heat injection (left) and warm dark matter mass and heat injection (right). In the thermal parameter space (left), limiting the analysis to \(k_{\mathrm{max}}<0.1\,\,\mathrm{km}^{-1}\,\,\mathrm{s}\) – similar to previous analyses using Ly\(\alpha\) forest data – has a similar effect to marginalizing over the noise or over the thermal dependence of the resolution correction. The posterior stretches in the degeneracy direction of \(u_{0}-T_{0}\). Similarly, the warm dark matter mass constraints relax, as more thermal support can accommodate the data.

This result is qualitatively similar to the recent analysis of [23], which used the same scale cuts and a different thermal state parametrisation. Recent studies [79; 23] have found a preference for non-zero \(m_{\rm WDM}^{-1}\) in their default analysis, indicating a preference for a WDM cosmology. This warrants further study and rigorous tests on both the data and the theory side. Our findings here lead us to suggest that one possibility is that the non-zero preference in the WDM parameter space is a result of the restricted variations in the thermal parameters. For example, the analysis of [79] assumes a thermal history with very low cumulative injected heat, and therefore a small amount of pressure smoothing. Limiting the amount of pressure smoothing can be of interest in specific applications, but in terms of a WDM particle mass constraint a thorough marginalization over the parameter space should be more robust. The analysis of [23] follows a similar simulation setup and parameter space variation as in this work, except for two main differences: (a) the variations in the redshift of hydrogen reionization were much narrower than explored in this work, and (b) the parametrisation in the \(z_{\rm rei}-T_{0}\) plane, as opposed to the \(u_{0}-T_{0}\) plane, allowed only for more restricted thermal histories. The HIRES/UVES data of [24] have also recently been used to provide constraints on a slightly different class of dark matter models - ultra-light axion dark matter [15].
While the transfer functions of the two dark matter models are different enough that a direct comparison is non-trivial, the results of [15] suggest a strong bound on the thermal WDM mass, while at the same time recovering a hotter thermal history compared to both the results in the literature [23; 24] and the results of this study. A more thorough investigation is required, but a major difference in the simulation setup of [15] is the initial condition generation with MP-Gadget [80], which uses glass initial conditions for the gas component; this introduces spurious small-scale power [81]. Additionally, [15] uses the Lyman-\(\alpha\) spectral extraction code fake_spectra [82], whose different optical depth assignment scheme leads to an additional enhancement of small-scale power in the 1D Lyman-\(\alpha\) forest flux power spectrum [83]. These differences in the simulated flux power spectrum suggest that further investigation is required before a comparison with the ultra-light axion constraints of [15]. Further improvement in the WDM constraint comes from the smallest scales, \(k>0.1\) km\({}^{-1}\) s. The sensitivity to the WDM mass is increased at smaller scales, resulting in potentially stronger constraining power. However, the regime of \(k>0.1\) km\({}^{-1}\) s is also more sensitive to observational and modelling systematics. In this study we have reviewed several aspects of the observational and instrumental systematics, of which the observational flux noise subtraction in the Lyman-\(\alpha\) forest flux power spectrum is potentially the most likely to affect the results. An average factor of two increase in the noise power (or a 40% increase in the level of noise) would on its own explain the small-scale signal observed, with no additional cosmological information beyond \(k_{\rm max}>0.1\) km\({}^{-1}\) s. While this appears unlikely, it illustrates the need for improved treatment of the noise power at the smallest scales in future Lyman-\(\alpha\) forest data analyses. The thermal history priors have previously been identified as the dominant source of modelling systematics in the Lyman-\(\alpha\) forest flux power spectrum. In this study we revisited this by exploring thermal priors motivated by different assumptions: a prior in the plane of IGM temperature and cumulative injected heat that envelopes physically motivated simulations consistent with the still rather weak constraints on the evolution of the neutral hydrogen fraction during the epoch of reionization, or a simple prior on the IGM temperature as interpolated from the measurements of the IGM temperature at \(z<4.2\) and \(z>5.0\). This was possible due to the improved range of simulations as well as post-processing techniques to expand the number of models. While the posterior distributions are indeed sensitive to the choice of these priors, the WDM mass bound changes by only 2%. This suggests that reasonable thermal priors lead to a stable and robust constraint on the WDM particle mass. We further point out that not imposing any thermal priors leads to a stronger, not weaker, bound on the WDM particle mass. With the data extending to \(k>0.1\) km\({}^{-1}\) s, we have also identified a new source of modelling systematics - the gas peculiar velocity field as modified by inhomogeneous reionization. The effect of peculiar velocities is present, although different in amplitude, in both homogeneous and inhomogeneous models of reionization.
The peculiar velocity field induces a knee in the flux power spectrum, which appears sensitive to the cumulative injected heat, indicating that the timing and process of reionization are an important factor in the peculiar velocity structure. This high-\(k\) regime of the models is also sensitive to the numerical mass resolution of the simulations (at a level of up to 20%). The peculiar velocity field structure is sensitive to the mass resolution at a level of as much as 50%, resulting in thermal-history-dependent mass resolution corrections. All of these effects act in a similar way on the WDM mass inference, weakening the constraint to \(m_{\rm WDM}>4.44\) (5.10) keV at 95% C.L. when including the mass resolution and inhomogeneous reionization corrections, respectively. Combining the inhomogeneous reionization and mass resolution corrections together with an observationally motivated thermal prior results in a WDM constraint that is not very different from our fiducial analysis, while at the same time preferring a slightly hotter IGM and reionization histories that end later. These statements are, however, somewhat model and simulation dependent, and indicate that further work on the origin and impact of the small-scale peculiar velocity structure is required. An important result for particle astrophysics modelling, however, is that WDM particle masses of 2.5 keV are ruled out at more than 5\(\sigma\), and 3 keV at more than 3\(\sigma\), for any of the analysis choices presented in this paper. In fact, 3 keV is ruled out at 5\(\sigma\) for any of the reasonable choices of thermal priors (e.g. a \(T_{0}\) prior). We summarize the main conclusions as follows:

* Our fiducial analysis leads to improved WDM mass constraints from high-redshift quasar spectra of \(m_{\rm WDM}>5.7\) keV at 95% C.L.
* Using small-scale data cuts, limiting the analysis to \(k_{\rm max}<0.1\) km\({}^{-1}\) s, results in a WDM constraint of \(m_{\rm WDM}>4.1\) keV at 95% C.L., a factor of two stronger than previously published for the same choice of thermal priors and redshift range of the data [21; 43].
* The 50% higher WDM constraint coming from small scales \(k>0.1\) km\({}^{-1}\) s has been explored with a variety of checks for instrumental systematics. We find that the flux noise may be underestimated by 40% in the data, reducing the constraining power at \(k>0.1\) km\({}^{-1}\) s. It should be possible to mitigate this in future surveys through a careful study of the instrumental noise, as well as by obtaining higher signal-to-noise spectra.
* The modelling uncertainties on the small-scale peculiar velocity structure can weaken the constraining power on the WDM mass by as much as 25%. The effect of the thermal history and of the inhomogeneous nature of reionization on the peculiar velocity fields of the baryonic gas is still poorly explored and needs further study.

The Lyman-\(\alpha\) forest data continue to push the frontier of astrophysical constraints on the CDM paradigm. The new constraints on the free-streaming of dark matter in this study both improve on existing constraints and demonstrate that a larger number of quasar sightlines should translate into strong improvements on the WDM particle mass bound.
As has been done in previous studies [27; 18; 21], the high-redshift Lyman-\(\alpha\) forest data can be further combined with the low-redshift (\(z<4.0\)) flux power spectrum measurements in order to increase the redshift leverage arm, and further push the constraints on the free-streaming of dark matter. We leave such a study for future work. ###### Acknowledgements. The authors would like to thank Matthew McQuinn, Simeon Bird, Steven Gratton and Anson D'Aloisio for helpful conversations. VI acknowledges support by the Kavli Foundation. MV is supported by the INFN PD51 INDARK grant. Support by ERC Advanced Grant 320596 'The Emergence of Structure During the Epoch of Reionization' is gratefully acknowledged. MGH has been supported by STFC consolidated grants ST/N000927/1 and ST/S000623/1. JSB is supported by STFC consolidated grants ST/T000171/1 and ST/X000982/1. GB is supported by the National Science Foundation through grant AST-1751404. GK is partly supported by the Department of Atomic Energy (Government of India) research project with Project Identification Number RTI 4002, and by the Max Planck Society through a Max Planck Partner Group. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. The simulations used in this work were performed using the Joliot Curie supercomputer at the Très Grand Centre de Calcul (TGCC) and the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). We acknowledge the Partnership for Advanced Computing in Europe (PRACE) for awarding us time on Joliot Curie in the 16th call. The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. This work also used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
2310.16054
Using Simulations to Highlight the Role of Electromagnetism in Special Relativity
Physics Education Research (PER) studies have demonstrated that undergraduate students struggle with fundamental principles from Electromagnetism (EM) and Special Relativity (SR). However, few studies have approached the intersection of the two subjects: the role which EM played in inspiring the development of SR. To address this issue, this paper presents two simulations which highlight the fundamental role of electromagnetism in special relativity. The first simulation demonstrates the Lorentz Transformations and the origin of the Gamma Factor from Maxwell's Equations. The second simulation offers an experiential introduction to the Biot-Savart Law, from which the Displacement Current in Ampere-Maxwell's Law can be derived. These programs may be useful in an undergraduate electromagnetism course. The simulations discussed in this paper are available at the link given in the footnote.
Refath Bari
2023-09-21T09:44:23Z
http://arxiv.org/abs/2310.16054v1
# Using Simulations to Highlight the Role of Electromagnetism in Special Relativity ###### Abstract Physics Education Research (PER) studies have demonstrated that undergraduate students struggle with fundamental principles from Electromagnetism (EM) and Special Relativity (SR). However, few studies have approached the intersection of the two subjects: the role which EM played in inspiring the development of SR. To address this issue, this paper presents two simulations which highlight the fundamental role of electromagnetism in special relativity. The first simulation demonstrates the Lorentz Transformations and the origin of the Gamma Factor from Maxwell's Equations. The second simulation offers an experiential introduction to the Biot-Savart Law, from which the Displacement Current in Ampere-Maxwell's Law can be derived. These programs may be useful in an undergraduate electromagnetism course. The simulations discussed in this paper are available at the link given in the footnote1. Footnote 1: [https://refath.notion.site/Using-Simulations-to-Highlight-the-Role-of-Electromagnetism-in-Special-Relativity-107a01c7bd394fdab5b6e7bbfe158fe4?pvs=4](https://refath.notion.site/Using-Simulations-to-Highlight-the-Role-of-Electromagnetism-in-Special-Relativity-107a01c7bd394fdab5b6e7bbfe158fe4?pvs=4)

## I Introduction

A significant program in Physics Education Research (PER) has been the study of student difficulties with undergraduate electromagnetism[9; 15; 21; 22]. Furthermore, the counter-intuitive features of Special Relativity have also been well documented[6; 7; 18]. However, the subtle insight which inspired Einstein to make the leap from Classical Electrodynamics to Special Relativity[8], namely the frame dependence of electric and magnetic fields, is not emphasized in most undergraduate electromagnetism textbooks[30]. As a result, many students struggle to understand how relativistic phenomena such as time dilation and the Lorentz gamma factor emerge from classical electromagnetism. Undergraduate textbooks seldom highlight the historical connection between Special Relativity and Electromagnetism[10; 12; 13; 20]. Many studies have demonstrated the effectiveness of simulations in clarifying student confusion about abstract concepts such as space-time intervals and causality[5; 14; 19]. Although a few emerging texts and individual papers are paving the way for experiential electromagnetism via computational visualizations[4; 11; 16; 29], most textbooks have yet to take advantage of simulations to better highlight the intimate relationship between the two subjects. To address this issue, this paper presents two simulations which highlight the role of electromagnetism in special relativity, supplemented with the prerequisite theory for completeness. The two simulations are as follows: (1) a simulation of the Lorentz Transformation in 1+1 dimensions and (2) a simulation of the Biot-Savart Law. These simulations may be useful in an undergraduate course on Electromagnetism. The above two principles were selected due to previously documented student difficulties associated with them. For instance, the Lorentz transformation is a subject of considerable consternation amongst students[1; 3; 27]. Its formulation as a symmetric matrix transformation may seem to be a mere accident[24]. Furthermore, it may seem to be an unmotivated ansatz, spuriously satisfying Lorentz invariance[25].
The application of the Lorentz transformation to transition between reference frames is another well-documented student difficulty[26; 28]. To address this issue, we present a simulation of the Lorentz transformations in 1+1 dimensions. We also present a theoretical motivation for the Lorentz transformation, showing why anything less than a full transformation of both space and time (i.e., the Galilean Transformation and the Fitzgerald Transformation) fails to preserve the invariance of the wave equation.

## II Motivating the Lorentz Transformation

The motivating basis for the Lorentz Transformations is that the Galilean Transformation fails to maintain the invariance of Maxwell's Equations. Many students grapple with this inconsistency of the Galilean Transformations, often to no avail; indeed, many struggle with the basic Galilean Principle of Relativity itself [6]. To address this problem, we demonstrate why leaving time untransformed - whether simply translating space (Galilean Transformation) or also contracting it (Fitzgerald Transformation) - fails to preserve the invariance of the wave equation. The wave equation is straightforward to obtain from Maxwell's Equations (taking the curl of Faraday's law, in vacuum with \(J=0\) and units in which \(c=1\)):

\[\nabla\times B=\tfrac{\partial E}{\partial t}+\mu_{0}J\rightarrow-\nabla(\nabla\cdot E)+\nabla^{2}E=E_{tt} \tag{1}\]

We now demonstrate why the Galilean and Fitzgerald Transforms fail to preserve this wave equation, and thus motivate why a transformation of both space and time is required.

### Application of GT to Wave Equation

To simplify the analysis, we consider only a wave dependent on \((z,t)\):

\[\frac{\partial^{2}E}{\partial t^{2}}=\nabla^{2}E\to E_{tt}-E_{zz}=0 \tag{2}\]

The Galilean Transformation has \(\zeta=z-vt,\ \tau=t\):

\[E(t,z)=\varepsilon(t,z-vt)=\varepsilon(\tau,\zeta) \tag{3}\]

We use the multivariable chain rule to verify whether \(E_{tt}-E_{zz}=0\). We begin with

\[E_{t}=\frac{\partial\varepsilon}{\partial\tau}\cdot\frac{\partial\tau}{\partial t}+\frac{\partial\varepsilon}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial t}=\varepsilon_{\tau}+\varepsilon_{\zeta}\cdot\frac{\partial(z-vt)}{\partial t}=\varepsilon_{\tau}-v\varepsilon_{\zeta}\to E_{tt}=\frac{\partial}{\partial t}(\varepsilon_{\tau}-v\varepsilon_{\zeta}) \tag{4}\]

Finding \(E_{tt}\) requires evaluating \(\frac{\partial}{\partial t}(\varepsilon_{\tau})\) and \(\frac{\partial}{\partial t}(-v\varepsilon_{\zeta})\). We find that \(E_{tt}\) becomes

\[E_{tt}=\frac{\partial}{\partial t}(\varepsilon_{\tau})+\frac{\partial}{\partial t}(-v\varepsilon_{\zeta})=(\varepsilon_{\tau\tau}-v\varepsilon_{\tau\zeta})-v(\varepsilon_{\zeta\tau}-v\varepsilon_{\zeta\zeta})=\varepsilon_{\tau\tau}-2v\varepsilon_{\tau\zeta}+v^{2}\varepsilon_{\zeta\zeta} \tag{5}\]

Likewise, we find \(E_{zz}\). Note that \(\tau=t\) has no dependence on \(z\), so that \(\tau_{z}=0\) and \(\varepsilon_{\tau}\tau_{z}=0\).
By the Galilean Transformation, \(\zeta=z-vt\), and thus we have

\[E_{z}=\frac{\partial\varepsilon}{\partial\tau}\cdot\frac{\partial\tau}{\partial z}+\frac{\partial\varepsilon}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial z}=\varepsilon_{\tau}\cdot 0+\varepsilon_{\zeta}\cdot\frac{\partial(z-vt)}{\partial z}=\varepsilon_{\zeta} \tag{6}\]

\[E_{zz}=\frac{\partial^{2}E}{\partial z^{2}}=\frac{\partial}{\partial z}(\varepsilon_{\zeta})=\frac{\partial(\varepsilon_{\zeta})}{\partial\tau}\cdot\frac{\partial\tau}{\partial z}+\frac{\partial(\varepsilon_{\zeta})}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial z}=\varepsilon_{\zeta\tau}\cdot 0+\varepsilon_{\zeta\zeta}\cdot\frac{\partial(z-vt)}{\partial z}=\varepsilon_{\zeta\zeta} \tag{7}\]

Thus, the Galilean Transformation fails to preserve the Wave Equation:

\[E_{tt}-E_{zz}=\varepsilon_{\tau\tau}-2v\varepsilon_{\tau\zeta}+v^{2}\varepsilon_{\zeta\zeta}-\varepsilon_{\zeta\zeta}=\varepsilon_{\tau\tau}-2v\varepsilon_{\tau\zeta}+(v^{2}-1)\varepsilon_{\zeta\zeta}\neq 0 \tag{8}\]

The next significant transformation came from George Francis FitzGerald, who conjectured that length itself may contract. We will now implement the FitzGerald Transformation and find that it also fails to preserve the wave equation.

### Application of FT to Wave Equation

The FitzGerald Transformation (FT) was introduced as an ad-hoc correction to the Galilean Transformation by George Francis FitzGerald in his 1889 paper on the Ether and the Earth's Atmosphere[7]. FitzGerald stated, in a brief and colloquial paper, that length contracts "by an amount depending on the square of the ratio of their velocities to that of light":

\[E(t,z)=\varepsilon(t,\frac{z}{\sqrt{1-v^{2}}}-\frac{vt}{\sqrt{1-v^{2}}})=\varepsilon(\tau,\zeta) \tag{9}\]

To verify whether the FitzGerald Transformation holds the Wave Equation invariant, we must check whether \(E_{tt}-E_{zz}=0\) by expanding \(E_{t}\) using the multivariable chain rule.

\[E_{t}=\varepsilon_{\tau}+\varepsilon_{\zeta}\cdot\frac{\partial(\frac{z}{\sqrt{1-v^{2}}}-\frac{vt}{\sqrt{1-v^{2}}})}{\partial t}=\varepsilon_{\tau}+\varepsilon_{\zeta}\cdot(-\frac{v}{\sqrt{1-v^{2}}})\to E_{t}=\varepsilon_{\tau}-\frac{v\varepsilon_{\zeta}}{\sqrt{1-v^{2}}} \tag{10}\]

\[E_{tt}=\frac{\partial^{2}E}{\partial t^{2}}=\frac{\partial}{\partial t}(\varepsilon_{\tau}-\frac{v\varepsilon_{\zeta}}{\sqrt{1-v^{2}}})=\varepsilon_{\tau\tau}-\frac{2v\varepsilon_{\tau\zeta}}{\sqrt{1-v^{2}}}+\frac{v^{2}\varepsilon_{\zeta\zeta}}{1-v^{2}} \tag{11}\]

Likewise, we find \(E_{zz}\) using the chain rule to be

\[E_{z}=\varepsilon_{\tau}\cdot 0+\varepsilon_{\zeta}\cdot\frac{\partial(\frac{z}{\sqrt{1-v^{2}}}-\frac{vt}{\sqrt{1-v^{2}}})}{\partial z}=\frac{\varepsilon_{\zeta}}{\sqrt{1-v^{2}}}\to E_{zz}=\frac{\varepsilon_{\zeta\zeta}}{1-v^{2}} \tag{12}\]

We now verify if the FT preserves the invariance of the Wave Equation:

\[E_{tt}-E_{zz}=(\varepsilon_{\tau\tau}-\frac{2v\varepsilon_{\tau\zeta}}{\sqrt{1-v^{2}}}+\frac{v^{2}\varepsilon_{\zeta\zeta}}{1-v^{2}})-(\frac{\varepsilon_{\zeta\zeta}}{1-v^{2}})=\varepsilon_{\tau\tau}-\varepsilon_{\zeta\zeta}-\frac{2v\varepsilon_{\tau\zeta}}{\sqrt{1-v^{2}}}\neq 0 \tag{13}\]

We find that even length contraction fails to uphold the invariance of the wave equation.
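These chain-rule computations can be checked symbolically. The short SymPy sketch below is our own illustration (not code from the paper's simulations); it confirms that the residuals of Eqs. (8) and (13) do not vanish, while the full Lorentz Transformation (derived next) reduces the residual to the wave operator in the new coordinates:

```python
import sympy as sp

t, z, v = sp.symbols('t z v', real=True)
eps = sp.Function('epsilon')          # wave profile in the transformed coordinates
gamma = 1 / sp.sqrt(1 - v**2)

def residual(tau, zeta):
    """Compute E_tt - E_zz for E(t, z) = epsilon(tau(t,z), zeta(t,z))."""
    E = eps(tau, zeta)
    return sp.simplify(sp.diff(E, t, 2) - sp.diff(E, z, 2))

print(residual(t, z - v*t))                        # Galilean: nonzero residual
print(residual(t, gamma*(z - v*t)))                # FitzGerald: nonzero residual
print(residual(gamma*(t - v*z), gamma*(z - v*t)))  # Lorentz: wave operator retained
```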
However, it is instructive to examine the matrix formulation of \(F:R^{2}\to R^{2}\):

\[F:R^{2}\to R^{2}:\begin{pmatrix}1&0\\ \frac{-v}{\sqrt{1-v^{2}}}&\frac{1}{\sqrt{1-v^{2}}}\end{pmatrix}\begin{pmatrix}t\\ z\end{pmatrix}=\begin{pmatrix}t\\ \frac{z-vt}{\sqrt{1-v^{2}}}\end{pmatrix}=\begin{pmatrix}\tau\\ \zeta\end{pmatrix} \tag{14}\]

Hendrik Lorentz independently made the next conjecture: that both space and time must be transformed to preserve \(E_{tt}=E_{zz}\). We observe that the matrix formulations of the FT and LT are quite similar. Students will thus be able to understand the origin of the Lorentz Transformation not as an ansatz, but as an evolution from the GT and FT.

\[L:R^{2}\to R^{2}:\begin{pmatrix}\frac{1}{\sqrt{1-v^{2}}}&\frac{-v}{\sqrt{1-v^{2}}}\\ \frac{-v}{\sqrt{1-v^{2}}}&\frac{1}{\sqrt{1-v^{2}}}\end{pmatrix} \tag{15}\]

Immediately, we find the transformation to be symmetric, \(L=L^{T}\); its inverse is obtained simply by substituting \(v\to-v\). But does it maintain the invariance of the wave equation? To verify, we briefly apply the LT to the Wave Equation.

### Application of LT to Wave Equation

We now consider the Lorentz Transformation (LT) and test whether \(E_{tt}=E_{zz}\) under \(L\).

\[L:R^{2}\to R^{2}:\begin{pmatrix}\frac{1}{\sqrt{1-v^{2}}}&\frac{-v}{\sqrt{1-v^{2}}}\\ \frac{-v}{\sqrt{1-v^{2}}}&\frac{1}{\sqrt{1-v^{2}}}\end{pmatrix}\begin{pmatrix}t\\ z\end{pmatrix}=\begin{pmatrix}\frac{t-vz}{\sqrt{1-v^{2}}}\\ \frac{z-vt}{\sqrt{1-v^{2}}}\end{pmatrix} \tag{16}\]

\[E(t,z)=\varepsilon(\tau,\zeta)=\varepsilon(\frac{t-vz}{\sqrt{1-v^{2}}},\frac{z-vt}{\sqrt{1-v^{2}}})\to E_{t}=\frac{\partial\varepsilon}{\partial\tau}\cdot\frac{\partial\tau}{\partial t}+\frac{\partial\varepsilon}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial t} \tag{17}\]

Under the Lorentz Transformation the two observers' times are no longer equivalent: the time coordinate mixes with space, and time dilates as the velocity \(v\) approaches the speed of light \(c\).

\[E_{t}=\varepsilon_{\tau}\cdot\frac{\partial(\frac{t}{\sqrt{1-v^{2}}}-\frac{vz}{\sqrt{1-v^{2}}})}{\partial t}+\varepsilon_{\zeta}\cdot\frac{\partial(\frac{z}{\sqrt{1-v^{2}}}-\frac{vt}{\sqrt{1-v^{2}}})}{\partial t}=\frac{\varepsilon_{\tau}-\varepsilon_{\zeta}v}{\sqrt{1-v^{2}}} \tag{18}\]

Taking the second partial derivative of the Electric Field with respect to time, we have

\[E_{tt}=\frac{\partial}{\partial t}(E_{t})=\frac{\partial}{\partial t}\left(\frac{\varepsilon_{\tau}-v\varepsilon_{\zeta}}{\sqrt{1-v^{2}}}\right)=\frac{\varepsilon_{\tau\tau}-2v\varepsilon_{\zeta\tau}+v^{2}\varepsilon_{\zeta\zeta}}{1-v^{2}} \tag{19}\]

Likewise, we find \(E_{zz}\) using the chain rule.
\[E_{z}=\frac{\partial\varepsilon}{\partial\tau}\cdot\frac{\partial\tau}{\partial z}+\frac{\partial\varepsilon}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial z}=\varepsilon_{\tau}\cdot\left(-\frac{v}{\sqrt{1-v^{2}}}\right)+\varepsilon_{\zeta}\cdot\frac{1}{\sqrt{1-v^{2}}}=\frac{\varepsilon_{\zeta}-v\varepsilon_{\tau}}{\sqrt{1-v^{2}}} \tag{20}\]

\[E_{zz}=\frac{\partial}{\partial z}(\frac{\varepsilon_{\zeta}-v\varepsilon_{\tau}}{\sqrt{1-v^{2}}})=\frac{\partial(\frac{\varepsilon_{\zeta}-v\varepsilon_{\tau}}{\sqrt{1-v^{2}}})}{\partial\tau}\cdot\frac{\partial\tau}{\partial z}+\frac{\partial(\frac{\varepsilon_{\zeta}-v\varepsilon_{\tau}}{\sqrt{1-v^{2}}})}{\partial\zeta}\cdot\frac{\partial\zeta}{\partial z}=\frac{-2v\varepsilon_{\zeta\tau}+v^{2}\varepsilon_{\tau\tau}+\varepsilon_{\zeta\zeta}}{1-v^{2}} \tag{21}\]

Completing the proof, we find that

\[E_{tt}-E_{zz}=\frac{\varepsilon_{\tau\tau}-2v\varepsilon_{\zeta\tau}+v^{2}\varepsilon_{\zeta\zeta}}{1-v^{2}}-\frac{-2v\varepsilon_{\zeta\tau}+v^{2}\varepsilon_{\tau\tau}+\varepsilon_{\zeta\zeta}}{1-v^{2}}=\varepsilon_{\tau\tau}-\varepsilon_{\zeta\zeta} \tag{22}\]

The wave operator thus retains exactly the same form in the new coordinates: the Wave Equation holds invariant under the Lorentz Transformation \(L\). From this transformation alone arise all the major tenets of Special Relativity, including Time Dilation and Length Contraction.

### Simulation of Lorentz Transformation in 1+1 Dimensions

The simulation below demonstrates the Lorentz transformation in 1+1 dimensions. Students may use the simulation to transform any event \(E:(x,t)\) between reference frames. Three sliders are presented at the bottom of the simulation, which students may use to control the location of an event in space-time and the relative velocity between two observers. Students will subsequently observe an animation of the coordinate axes of the simulation bending to illustrate the new coordinates. The core of the computation is a single matrix-vector product, as sketched after the figure.

Figure 1: Lorentz Transformation Simulation in 1+1 Dimensions
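A minimal NumPy sketch of this transformation step follows; the function and variable names are ours, and the published simulation's source may differ:

```python
import numpy as np

def lorentz_transform(t, x, v):
    """Transform an event (t, x) into a frame moving at velocity v (units c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    L = np.array([[gamma, -gamma * v],
                  [-gamma * v, gamma]])   # the symmetric matrix of Eq. (15)
    return L @ np.array([t, x])

t_prime, x_prime = lorentz_transform(t=1.0, x=0.5, v=0.6)
print(t_prime, x_prime)   # the event's coordinates for the moving observer
```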
## III \(\frac{\partial E}{\partial t}\) from Biot-Savart and SR

The traditional introduction of the Displacement Current term in standard electromagnetism textbooks is via an inconsistency in Ampere's Law when applied to Parallel Plate Capacitors [2] (an Amperian Loop can be chosen such that no current penetrates the boundary of the surface). This presentation may lead one to believe the Displacement Current to be an ad-hoc correction to Ampere's Law, when it is actually a natural consequence of a moving test charge [23]. To motivate the existence of the displacement current, we chose an alternative route - not a parallel plate capacitor, but a simple test charge located at the origin. According to Maxwell himself, the Displacement Current term is 'electrostatically' analogous to a normal current element or a moving test charge. We thus examine a test charge from a stationary and a moving observer's point of view. Whereas a stationary observer only finds an electric field, an observer moving with velocity \(-v\) with respect to the stationary frame will witness both an electric and a magnetic field. Such a test charge would thus exhibit the displacement current, due to the moving electric field. We take the relativistic form of the electromagnetic fields of the test charge and derive the Biot-Savart Law from it. Hidden implicitly within the Biot-Savart Law is the Displacement Current term, which we reformulate using the partial derivative of a cross product to conclude with Ampere's Law, corrected by the Displacement Current term. Before doing so, however, it is crucial to understand the Biot-Savart Law as it is traditionally applied to a current-carrying wire.

### Visualization of Biot-Savart

We seek to draw the magnetic field vector at any distance \(R\) from the current-carrying wire. To do so, we require both magnitude and direction. The magnitude is supplied by the Biot-Savart Law as

\[\|\vec{B}\|=\frac{\mu_{0}I}{2\pi R} \tag{23}\]

The Right Hand Rule gives the direction of the magnetic field to be counterclockwise, since the current has \(\text{dir}(I)=\hat{j}\). The unexpected challenge, however, is to compute a consistently counterclockwise magnetic field vector \(\vec{B}\) around the wire. To do so, we construct three auxiliary vectors: a vector \(\vec{u}\) from the origin to the \(y\) axis, a vector \(\vec{p}\) from the origin to the cursor position, and a vector \(\vec{v}\) as the difference between the two. From that difference, a vector \(\vec{w}\) is computed such that the dot product \(\vec{v}\cdot\vec{w}\) is 0, and thus \(\vec{w}\) is indeed the magnetic field direction at the cursor's position. Prior to constructing this in code, we must declare three functions, each corresponding to one of the user's actions: pressing the track-pad, dragging, and releasing. We thus define a down() function for the press. In the next auxiliary function, move(), the program updates the magnitude and direction of all four vectors based on the position of the cursor; we must also declare a few global variables so they may be used outside the move() function. Finally, we have the release function up(), which generates a graph of the magnitude of the magnetic field by distance and updates it in real time, based on the user's placement of magnetic field vectors. A sketch of these three handlers is given below. Inspecting the resulting graph, we indeed find that the magnetic field falls off as \(1/R\).
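The original listings are not reproduced in this text. The following minimal sketch, using matplotlib's event system, captures the logic of the three handlers; the toolkit actually used by the paper is not specified, and names such as `WIRE_X` and `field_at` are our own:

```python
import numpy as np
import matplotlib.pyplot as plt

MU0, I, WIRE_X = 4e-7 * np.pi, 1.0, 0.0   # current along +y at x = WIRE_X
fig, ax = plt.subplots()
samples = []            # (distance, |B|) pairs collected while dragging
dragging = False

def field_at(x, y):
    """Magnitude and perpendicular (counterclockwise) direction of B at the cursor."""
    v = np.array([x - WIRE_X, 0.0])         # radial vector from the wire
    R = np.hypot(*v) or 1e-9
    w = np.array([-v[1], v[0]]) / R         # rotate v by 90 degrees: v . w = 0
    return MU0 * I / (2 * np.pi * R), w, R

def down(event):                            # press: start a drag
    global dragging; dragging = True

def move(event):                            # drag: draw the field vector
    if dragging and event.xdata is not None:
        mag, w, R = field_at(event.xdata, event.ydata)
        ax.quiver(event.xdata, event.ydata, *w)
        samples.append((R, mag))
        fig.canvas.draw_idle()

def up(event):                              # release: update the |B| vs R graph
    global dragging; dragging = False
    # e.g. re-plot sorted(samples) in a second axes to show the 1/R falloff

for name, cb in [('button_press_event', down),
                 ('motion_notify_event', move),
                 ('button_release_event', up)]:
    fig.canvas.mpl_connect(name, cb)
plt.show()
```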
### Time Dilation from EM

We now pivot from the Biot-Savart Law to a moving test charge. In doing so, we will find that the Law we have just visually demonstrated implicitly contains within it the Displacement Current, without the need for any capacitors. Consider once again our test charge \(Q\), centered at the origin and moving in the \(\hat{i}\) direction. As Maxwell himself states the Displacement Current to be "electrostatically" equivalent to a traditional current formed by a moving Electric Field, it makes sense to find the change of the Electric Field of the charge with respect to time. To do so, we employ the chain rule:

\[\frac{\partial E}{\partial t}=\frac{\partial E}{\partial x}\cdot\frac{\partial x}{\partial t}=(-v)\cdot\frac{\partial}{\partial x}\left(\frac{1}{4\pi\varepsilon_{0}}\frac{Q}{r^{2}}\right) \tag{24}\]

Here is where relativity comes in. Through the Electromagnetic Field Strength Tensor, we find the equations of a relativistic electric field to be

\[E_{\parallel}={E^{\prime}}_{\parallel},\quad B_{\parallel}=0,\quad E_{\perp}=\gamma{E^{\prime}}_{\perp},\quad B=+\gamma\frac{1}{c^{2}}v\times E^{\prime} \tag{25}\]

We thus have the Magnetic Field at any given distance \(R\) to be

\[B(r)=\gamma\frac{1}{c^{2}}v\times E=\gamma\mu_{0}\frac{qv\times\hat{r}}{4\pi r^{2}} \tag{26}\]

We thus find that the Displacement Current is implicit within the Biot-Savart Law:

\[B(r)=\gamma\frac{1}{c^{2}}(\vec{v}\times\vec{E})=\gamma\mu_{0}\,\vec{v}\times(\epsilon_{0}\vec{E})\rightarrow\frac{\partial B}{\partial t}=\gamma\mu_{0}\,\vec{v}\times(\epsilon_{0}\frac{\partial E}{\partial t}) \tag{27}\]

Figure 2: Simulation of Magnitude and Direction of \(\vec{B}\) for a Current-Carrying Wire

In other words, the Biot-Savart Law implicitly contains the Displacement Current [17].

## IV Conclusion

We have demonstrated how computational visualizations can work symbiotically with theoretical derivations to solidify the concepts behind the bridge between EM and SR and make them tangible. Indeed, computational visualizations such as the ones above offer rich pedagogical opportunities for teachers and students alike to intuitively grasp the ideas of Maxwell's Equations and Einstein's Special Relativity.
2303.17925
Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks
In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constructing complex ANNs based on various topologies, including Barab\'asi-Albert, Erd\H{o}s-R\'enyi, Watts-Strogatz, and multilayer perceptrons (MLPs). The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise, and on real-world datasets from the UCI suite. Our findings reveal that complex topologies lead to superior performance in high-difficulty regimes compared to traditional MLPs. This performance advantage is attributed to the ability of complex networks to exploit the compositionality of the underlying target function. However, this benefit comes at the cost of increased forward-pass computation time and reduced robustness to graph damage. Additionally, we investigate the relationship between various topological attributes and model performance. Our analysis shows that no single attribute can account for the observed performance differences, suggesting that the influence of network topology on approximation capabilities may be more intricate than a simple correlation with individual topological attributes. Our study sheds light on the potential of complex topologies for enhancing the performance of ANNs and provides a foundation for future research exploring the interplay between multiple topological attributes and their impact on model performance.
Tommaso Boccato, Matteo Ferrante, Andrea Duggento, Nicola Toschi
2023-03-31T09:48:16Z
http://arxiv.org/abs/2303.17925v2
# Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks ###### Abstract In this study, we explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs), with a particular focus on complex topologies. We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabasi-Albert, Erdos-Renyi, Watts-Strogatz, and multilayer perceptrons (MLPs). The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise. Our findings reveal that complex topologies lead to superior performance in high-difficulty regimes compared to traditional MLPs. This performance advantage is attributed to the ability of complex networks to exploit the compositionality of the underlying target function. However, this benefit comes at the cost of increased forward-pass computation time and reduced robustness to graph damage. Additionally, we investigate the relationship between various topological attributes and model performance. Our analysis shows that no single attribute can account for the observed performance differences, suggesting that the influence of network topology on approximation capabilities may be more intricate than a simple correlation with individual topological attributes. Our study sheds light on the potential of complex topologies for enhancing the performance of ANNs and provides a foundation for future research exploring the interplay between multiple topological attributes and their impact on model performance. ## 1 Introduction Modern neural architectures are widely believed to draw significant design inspiration from biological neuronal networks. The artificial neuron, the fundamental functional unit of neural networks (NNs), is based on the McCulloch-Pitts unit [13], sharing conceptual similarities with its biological counterpart. Additionally, state-of-the-art convolutional NNs incorporate several operations directly inspired by the mammalian primary visual cortex, such as nonlinear transduction, divisive normalization, and maximum-based pooling of inputs. However, these architectures may be among the few examples where the evolutionary structural and functional properties of neuronal systems have been genuinely relevant for NN design. Indeed, the topology of biological connectomes has not yet been translated into deep learning model engineering. Due to the ease of implementation and deployment, widely-used neural architectures predominantly feature a regular structure resembling a sequence of functional blocks (e.g., neuronal layers). The underlying multipartite graph of a multilayer perceptron (MLP) is typically controlled by a few hyperparameters that define its basic topological properties: depth, width, and layer sizes. Only recently have computer vision engineers transitioned from chain-like structures [32] to more elaborate connectivity patterns [16; 17] (e.g., skip connections, complete graphs). Nevertheless, biological neuronal networks display much richer and less templated wirings at both the micro- and macro-scale [14]. Considering synaptic connections between individual neurons, the _C. elegans_ nematode features a hierarchical modular [5] connectome, wherein hubs with high betweenness centrality are efficiently interconnected [4; 33]. Moreover, the strength distribution of the adult Drosophila central brain closely follows a power law with an exponential cutoff [29]. 
As a result, the relationship between the graph structure of a NN and its predictive abilities remains unclear. In the literature, there is evidence that complex networks can be advantageous in terms of predictive accuracy and parameter efficiency [18]. However, past attempts to investigate this connection have yielded conflicting results that are difficult to generalize outside the investigated context. The first experiment on complex NNs was performed in 2005 by Simard et al., who trained a randomly rewired MLP on random binary patterns [31]. Nearly a decade later, Erkaymaz and his collaborators employed the same experimental setup on various real-life problems [12; 11; 9; 10] (e.g., diabetes diagnosis, performance prediction of solar air collectors). The best-performing models featured a number of rewirings consistent with the small-world regime. However, all assessed topologies were constrained by MLP-random interpolation. In [2], an MLP and a NN generated following the Barabasi-Albert (BA) procedure were compared on a chemical process modeling problem. Both models were trained with an evolutionary algorithm, but the MLP achieved a lower RMSE. The _learning matrix_[24], a sequential algorithm for the forward/backward pass of arbitrary directed acyclic graphs (DAGs), enabled the evaluation of several well-known complex networks on classification [24] and regression [26] tasks. The experiments included random and small-world networks, two topologies based on "preferential attachment", a complete graph, and a _C. elegans_ sub-network [7]. Nevertheless, the learning matrix's time complexity limited the network sizes (i.e., 26 nodes), and for each task a different winning topology emerged, including the MLP. Some recent works have instead focused on multipartite sparse graphs [23, 35]. While these architectures outperformed the complete baselines, their topological complexity was entirely encoded within the connections between adjacent layers. We propose the hypothesis that, given the same number of nodes (i.e., neurons) and edges (i.e., parameters), a complex NN might exhibit superior predictive abilities compared to classical, more regularly structured MLPs. Unlike previous studies, we conduct a systematic exploration of random, scale-free, and small-world graphs (Figure 1) on synthetic classification tasks, with particular emphasis on the following:

* **Network size.** The defining properties of a complex topology often emerge in large-scale networks. For example, the second moment of a power-law degree distribution diverges only in the \(N\rightarrow\infty\) limit [3], where \(N\) is the network size1. Footnote 1: The proposition holds when the degree exponent is smaller than 3. The networks in [24, 26] have 15 and 26 nodes, respectively. We trained models with 128 neurons.
* **Dataset size.** The _estimation error_ achieved by a predictor depends on the training set size: the greater the number of samples, the lower the error [30]. Except for studies based on multipartite graphs, all previous research works in a small-data regime. Our datasets are three times larger than those used before.
* **Hyperparameter optimization.** Learning rate and batch size are crucial in minimizing the loss function. Ref. [24] is the only one that considers finding the optimal learning rate. The role of batch size has never been investigated. Each DAG, however, could be characterized by its optimal combination of hyperparameters. Hence, we optimized the learning rate and batch size for each topology.
## 2 Theory

### Complex Graph Generators

**Erdos-Renyi (ER).** An ER graph [8], or _random network_, is uniformly sampled from the set of all graphs with \(N\) nodes and \(L\) edges. For \(N\gg\langle k\rangle\), the degree distribution of a random graph is well approximated by a Poisson distribution: \(p_{k}=e^{-\langle k\rangle}\frac{\langle k\rangle^{k}}{k!}\); \(k\) and \(\langle k\rangle\) represent node degree and average degree, respectively.

**Watts-Strogatz (WS).** The WS generator [34] aims to create graphs that exhibit both high clustering and the _small-world_ property; this is achieved by interpolating _lattices_ with random networks. The generation starts from a ring in which nodes are connected to their immediate neighbors. The links are then randomly rewired with probability \(p\).

**Barabasi-Albert (BA).** The well-known BA model [1] can be used to generate networks characterized by the \(p_{k}\propto k^{-3}\)_scale-free_ degree distribution. Since the model is inspired by the growth of real networks, the generative procedure iteratively attaches nodes with \(m\) stubs to a graph that evolves from an initial star of \(m+1\) nodes. Node additions respond to the preferential attachment mechanism: the probability that a stub reaches a node is proportional to the degree of the latter.

**Multilayer Perceptron (MLP).** The underlying networks of MLPs are called multipartite graphs. In a multipartite graph (i.e., a sequence of bipartite graphs) nodes are partitioned into layers, and each layer can only be connected with the adjacent ones; no intra-layer link is allowed. Additionally, inter-layer connections have to form _bicliques_ (i.e., fully-connected bipartite graphs).

## 3 Methods

### Datasets

The foundation of the datasets developed, as displayed in Figure 2, is established by manifold learning generators2 provided by the scikit-learn machine learning library [25]. To modify the generators for classification purposes, 3D points sampled from one of the available curves (_s curve_ and _swiss roll_) are segmented into n_classes\(\times\)n_reps portions based on their univariate position relative to the primary dimension of the manifold samples. As the term implies, n_classes refers to the number of classes involved in the considered classification. Each segment is then arbitrarily allocated to a class, maintaining task balance (i.e., precisely n_reps segments have the same label). We define n_reps as the task _difficulty_. An additional aspect of our datasets is the standard deviation \(\sigma\) of the Gaussian noise that can be added to the points. The generation procedure is finalized with a min-max normalization; a sketch of this construction is given below.

Figure 1: Example feedforward NNs (128 neurons, 732 synaptic connections) based on complex topologies: scale-free (BA), random (ER), and small-world (WS). All graphs are directed and acyclic. Information flows from top to bottom. Input, hidden, and output units are denoted in blue, orange, and green, respectively. Since the networks are defined at the micro-scale, hidden and output nodes implement weighted sums over the incoming edges. In the hidden units, the computational operation is followed by an activation function. The activations of nodes located on the same horizontal layer can be computed in parallel.
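A minimal sketch of the dataset construction follows; the parameter names mirror the text, but the exact segmentation and segment-to-class assignment used by the authors are not specified, so an equal-count split and a random balanced assignment are shown:

```python
import numpy as np
from sklearn.datasets import make_s_curve

def make_task(n_samples=3000, n_classes=2, n_reps=4, sigma=0.0, seed=0):
    rng = np.random.default_rng(seed)
    X, pos = make_s_curve(n_samples, random_state=seed)  # pos: position along curve
    # Split the curve into n_classes * n_reps segments along its main dimension.
    edges = np.quantile(pos, np.linspace(0, 1, n_classes * n_reps + 1))
    seg = np.clip(np.searchsorted(edges, pos, side='right') - 1,
                  0, n_classes * n_reps - 1)
    # Balanced, arbitrary segment-to-class map: each class gets n_reps segments.
    seg2cls = rng.permutation(np.repeat(np.arange(n_classes), n_reps))
    y = seg2cls[seg]
    X = X + rng.normal(0.0, sigma, X.shape)              # optional Gaussian noise
    X = (X - X.min(0)) / (X.max(0) - X.min(0))           # min-max normalization
    return X, y

X, y = make_task(n_classes=2, n_reps=6, sigma=0.1)
```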
**Undirected Graph Generation.** The initial step in creating a NN involves sampling an undirected graph using the generators detailed in Section 2. Once \(N\) and \(L\) are established, all models exhibit a single parameter configuration compatible with the required density3. The WS generator is the sole exception: the probability \(p\) is allowed to vary between 0 and 1. If the generator is limited to sample networks with a number of links from a finite set (e.g., \(L=m+(N-m-1)m\) according to the BA model), we first generate a graph with slightly higher density than the target before randomly eliminating excess edges. After obtaining the graph, we confirm the existence of a single connected component. Footnote 3: This statement is accurate if the number of MLP layers is predetermined. **Directed Acyclic Graph (DAG) Conversion.** Before performing any calculations, the direction for information propagation through the network links must be determined; this is accomplished by randomly assigning, without replacement, an integer index from \(\{1,\ldots,N\}\) to the network nodes. It can be shown that the directed graph obtained by setting the direction of each edge from the node with a lower index to the node with a higher index is free of cycles. However, this conversion results in an unpredictable number of sources and sinks. Since classification tasks typically involve a pre-defined number of input features and output classes, it is necessary to resolve such network-task discrepancies. To address this issue, we developed a straightforward heuristic capable of adjusting DAGs without altering the underlying undirected graphs. **Mapping of Functional Roles.** The last step of the presented procedure consists in mapping computational operations to the DAG nodes. Working at the micro-scale (i.e., connections between single neurons), only two operations are allowed. Source nodes implement constant functions; their role, indeed, is to feed the network with the initial conditions for computations. Hidden and sink nodes, instead, perform a weighted sum over the incoming edges, followed by an activation function: \[a_{v}=\sigma\Bigg{(}\sum_{u}w_{uv}a_{u}+b\Bigg{)} \tag{1}\] where \(a_{v}\) is the activation of node \(v\), \(\sigma\) denotes the activation function4 (SELU [20] for hidden nodes and the identity function for sinks), \(u\) represents the generic predecessor of \(v\), \(w_{uv}\) is the weight associated with edge \((u,v)\) and \(b\) the bias. In order to implement the map of functional roles, we made use of the 4Ward library5[6], developed for the purpose. Starting from a DAG, the package returns a working NN deployable as a PyTorch Module. Footnote 4: Depending on the context, we use the same \(\sigma\) notation for both the standard deviation of the dataset noise and the activation function. Footnote 5: [https://github.com/BoCtrl-C/forward](https://github.com/BoCtrl-C/forward) ### Experiments **Dataset Partitioning.** Each generated dataset is randomly divided into 3 non-overlapping subsets: the train, validation and test splits. All model trainings are performed over the train split while the validation split is exploited in validation epochs and hyperparameter optimization.
Test samples, instead, are accessed only in the evaluation of the final models. **Model Training.** Models are trained by minimizing cross entropy with the Adam [19] optimizer (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)). A scheduler reduces the learning rate by a factor of 0.5 if no improvement is seen on the validation loss for 10 epochs. The training procedure ends when learning stagnates (w.r.t. the validation loss) for 15 epochs, and the model weights corresponding to the epoch in which the minimum validation loss has been achieved are saved. **Hyperparameter Optimization.** Hyperparameters are optimized through a grid search over a predefined 2D space (i.e., learning rate/batch size). We generate networks of the same topological family starting from 5 different random seeds. In the MLP case, models differ only in the weight initialization. For each parameter pair, the 5 models are trained accordingly, and the resulting best validation losses are collected. Then, the learning rate and batch size that minimize the median validation loss computed across the generation seeds are selected as the optimal hyperparameters of the considered graph family. **Topology Evaluation.** Once the optimal learning rate and batch size are found, we train 15 new models characterized by the considered topology and compute mean classification accuracy and standard deviation on the dataset test split. The procedure is repeated for each investigated graph family and a Kruskal-Wallis (H-test) [21] is performed in order to test the null hypothesis that the medians of all accuracy populations are equal. If the null hypothesis is rejected, a Mann-Whitney (U-test) [22] post hoc analysis follows. **Robustness Analysis.** We use the final trained models in a _graph damage_ study to investigate their _functional_ robustness (accuracy vs. fraction of removed nodes). The _topological_ robustness (giant component vs. fraction of removed nodes) is already well-studied in network science. We randomly remove a fixed fraction of nodes, \(f\), from a neural network and compute the accuracy achieved by the resulting model on the test dataset. Practically, node removal is implemented using PyTorch's Dropout6, which zeroes some network activations by sampling from i.i.d. Bernoulli distributions. As each batch element is associated with specific random variables, activations produced by different dataset samples are processed by differently pruned neural networks. Therefore, the figure of interest is averaged over the dataset and the 15 generation seeds. In a typical topological analysis, when \(f=0\), the giant components of all tested graphs have the same size (i.e., \(N\)). We adopt this convention in our experimental setup by replacing test accuracy with _accuracy gain_: \(\mathcal{A}(f)\). The metric is defined as the ratio between the accuracy obtained by a pruned network and the accuracy obtained by the original one (i.e., \(f=0\)). An accuracy gain \(<1\) indicates a decline in model performance. Consequently, the figure of merit for our analysis is the mean accuracy gain, with the expectation taken over the generation seeds. Footnote 6: [https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html)
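For concreteness, the accuracy-gain measurement could be sketched as follows. This is a hedged illustration with assumed `model` and `loader` objects: we emulate the Dropout-style Bernoulli masking with a forward hook on the hidden activations (assumed here to be `nn.SELU` modules), without the \(1/(1-f)\) rescaling PyTorch applies at training time.

```python
import torch

@torch.no_grad()
def accuracy_gain(model, loader, f):
    """A(f): test accuracy of a randomly node-damaged model divided by the
    accuracy of the intact one. Node removal is emulated by zeroing each
    hidden activation with probability f, independently for every sample,
    so different batch elements see differently pruned networks."""
    def evaluate(prob):
        handles = []
        if prob > 0.0:
            def hook(module, args, output):
                mask = (torch.rand_like(output) > prob).to(output.dtype)
                return output * mask              # i.i.d. Bernoulli node mask
            handles = [m.register_forward_hook(hook)
                       for m in model.modules()
                       if isinstance(m, torch.nn.SELU)]
        correct, total = 0, 0
        for x, y in loader:
            correct += (model(x).argmax(dim=-1) == y).sum().item()
            total += y.numel()
        for h in handles:
            h.remove()
        return correct / total
    return evaluate(f) / evaluate(0.0)
```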
## 4 Results We obtained the presented results by following the experimental protocol outlined in Section 3 using the specified topologies (i.e., BA, ER, MLP, and WS) and datasets. We set n_classes = 3 and n_reps \(\in\{3,6,9,12\}\); for the _swiss roll_ dataset, \(\sigma\in\{0.0,1.0\}\), while for the _s curve_, \(\sigma\in\{0.0,0.3\}\). The train, validation, and test split sizes were 1350, 675, and 675, respectively. Given that in a 1-hidden layer MLP (h1 notation) the number of synaptic connections depends solely on \(N\) (i.e., \(L=3\times H+H\times 3\), with \(H=N-3-3\)), we chose an MLP with 128 neurons as a reference model and calculated the generator parameters for the complex networks to achieve graphs with \(L=732\) edges. The additional degree of freedom in the WS generator enabled us to separate the small-world topology into three distinct graph families: p.5 (\(p=0.5\)), p.7 (\(p=0.7\)), and p.9 (\(p=0.9\)). The hyperparameter optimization searched for learning rates in {0.03, 0.01, 0.003, 0.001} and batch sizes in {32, 64}. Figure 3 displays the mean test accuracy achieved by each group of models as a function of task difficulty. All manifolds, noise levels, and difficulties are represented. Excluding difficulty level 9 in the _swiss roll_ dataset, the accuracy curves exhibit a clear decreasing trend. Specifically, as the difficulty increases, the performance of the MLPs degrades more rapidly than that of complex networks. Confidence intervals, on the other hand, are wider in the high-difficulty plot regions. As expected, noisy tasks were more challenging to learn.

Figure 3: Mean test accuracy as a function of the task difficulty. Confidence intervals (\(\pm\) standard deviation) are reported as well. Different subplots correspond to different datasets. Each curve denotes the trend of a specific network topology.

In Figure 4, the results obtained by the models for the two highest levels of task difficulty are shown in detail. The H-test null hypothesis is rejected for all experiments, and the U-test statistical annotations are displayed. Regardless of the scenario considered, a complex topology consistently holds the top spot in the mean accuracy ranking. MLPs, in contrast, are always the worst-performing models. Moreover, the MLP performance differs significantly from that of the complex networks, in a statistical sense. Conversely, only 3 out of 8 experiments exhibit statistical differences within the group of complex networks.

Figure 4: Mean test accuracy at the highest difficulty levels. **Left**: difficulty \(=9\). **Right**: difficulty \(=12\). The bars display both means and standard deviations. Each bar corresponds to a specific network topology and is represented by a consistent color across all histograms (following the color scheme from Figure 3). Statistical annotations appear above the histograms, with each segment indicating a significant difference between two accuracy distributions.

Figure 5 presents the results of the robustness analysis. We investigated \(f\in\{0.0,0.1,\ldots,0.5\}\) and removed nodes from the models trained on the datasets characterized by the lowest level of difficulty. On these tasks, indeed, all models behave approximately the same (see Figure 3), hinting at a fair comparison. Unsurprisingly, node removal has the same effect on all topologies: the accuracy gain decreases as \(f\) increases. MLPs, however, show enhanced robustness to random deletions. Confidence intervals of the complex graph families overlap. It is worth noting that the chance level (i.e., accuracy of \(1/3\)) could be reached by different accuracy gains depending on the task; the best accuracy under \(f=0\), indeed, varies between the manifold/noise pairs.

Figure 5: Robustness analysis. The horizontal axis reports the fraction of removed nodes (i.e., \(f\)) while the vertical one the accuracy gain (i.e., \(\mathcal{A}(f)\)). Each curve refers to a different network topology. Confidence intervals (\(\pm\) standard deviation) are reported.

## 5 Discussion The most significant finding from the experiments performed is the performance in terms of accuracy attained by the architectures built on complex topologies in the high-difficulty regime. In this context, and in light of the statistical tests carried out, the complex models prove to be a solid alternative to MLPs. Formally justifying the observed phenomenon is challenging. Fortunately, in 2017, Poggio et al. discussed two theorems [28] that guided our explanation. According to the first theorem7, a shallow network (e.g., an MLP h1) equipped with infinitely differentiable activation functions requires \(N=\mathcal{O}(\epsilon^{-n})\) units to approximate a continuous function \(f\) of \(n\) variables8 with an approximation error of at most \(\epsilon>0\). This exponential dependency is technically called the _curse of dimensionality_. On the other hand, the second theorem states that if \(f\) is compositional and the network presents its same architecture, we can escape the "curse". It is important to remember that a compositional function is defined as a composition of "local" constituent functions, \(h\in\mathcal{H}\) (e.g., \(f(x_{1},x_{2},x_{3})=h_{2}(h_{1}(x_{1},x_{2}),x_{3})\), where \(x_{1},\ x_{2},\ x_{3}\) are the input variables and \(h_{1},\ h_{2}\) the constituent functions). In other words, the structure of a compositional function can be represented by a DAG. In this approximation scenario, the required number of units depends on \(N=\mathcal{O}(\sum_{h}\epsilon^{-n_{h}})\), where \(n_{h}\) is the input dimensionality of function \(h\). If \(\max_{h}n_{h}=d\), then \(\sum_{h}\epsilon^{-n_{h}}\leq\sum_{h}\epsilon^{-d}=|\mathcal{H}|\epsilon^{-d}\). Footnote 7: We invite the reader to consult ref. [28] for a complete formulation of the theorems. Footnote 8: Depending on the context, we use the same \(f\) notation for both the fraction of removed nodes and the function to be approximated. The primary advantage of complex networks is their potential to avoid the curse of dimensionality when relevant graphs for the function to be learned are present. Under the assumption that the function linking the _swiss roll_ and _s curve_ points to the ground truth labels is compositional (intuitively, in non-noisy datasets, each class is a union of various segments), we conjecture that our complex NNs can exploit this compositionality. In the high-difficulty regime, the necessary network size for MLP h1 to achieve the same accuracy as complex models likely exceeds the size set for experiments. While one could argue that the datasets employed were compositionally sparse by chance, according to [27], all _efficiently computable functions_ must be _compositionally sparse_ (i.e., their constituent functions have "small" \(d\)). Performance differences on noisy datasets are less noticeable, possibly due to the minimal overlap between the functions to be approximated and the studied topologies. Notably, our setup does not precisely match the theorem formulations in [28] (e.g., SELUs are not infinitely differentiable), but Poggio et al. argue that the hypotheses can likely be relaxed.
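As a rough numerical illustration of the two bounds (our own arithmetic with assumed values of \(n\), \(\epsilon\), and \(d\); these are not figures taken from [28]): for \(n=8\) input variables and target error \(\epsilon=10^{-1}\), the shallow estimate gives \[N_{\rm shallow}=\mathcal{O}(\epsilon^{-n})=\mathcal{O}(10^{8})\,,\] whereas a binary-tree composition, with \(d=2\) and \(|\mathcal{H}|=n-1=7\) constituent functions, gives \[N_{\rm comp}=\mathcal{O}(|\mathcal{H}|\,\epsilon^{-d})=\mathcal{O}(7\times 10^{2})\,,\] a reduction of more than five orders of magnitude at the same nominal accuracy.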
No statistically significant differences emerged between the complex graph families from the results of Section 4. Various explanations exist for this outcome: all tested topologies could be complex enough to include relevant subgraphs of the target \(f\) functions; the random DAG conversion heuristic might have perturbed hidden topological properties of the original undirected networks; or the degree distribution of a network may not be the most relevant topological feature in a model's approximation capabilities. However, the higher accuracy in complex networks comes with trade-offs. Although the methodology in [6] improves the scalability of complex NNs and enables experimentation with arbitrary DAGs, it is important to note that 1-hidden layer MLPs typically have faster forward pass computation. In these models, the forward pass requires only two matrix multiplications, whereas, in NNs built using 4Ward, the number of operations depends on the DAG _height_. Moreover, the analyses in Figure 5 demonstrate MLPs' superiority in a graph damage scenario. We speculate that the hidden units in an MLP h1 contribute equally to the approximation of the target function. In contrast, the ability of complex networks to exploit the compositionality of the function to be learned might lead to high specialization of some hidden units. ## 6 Conclusion Our study provides valuable insights into the influence of network topology on the approximation capabilities of artificial neural networks (ANNs). Our novel methodology for constructing complex ANNs based on various topologies has enabled a systematic exploration of the impact of network structure on model performance. The experiments conducted on synthetic datasets demonstrate the potential advantages of complex topologies in high-difficulty regimes when compared to traditional MLPs. While complex networks exhibit improved performance, this comes at the cost of increased computational requirements and reduced robustness to graph damage. Our investigation of the relationship between topological attributes and model performance (Appendix A) reveals a complex interplay that cannot be explained by any single attribute. This finding highlights the need for further research to better understand the interactions among multiple topological attributes and their impact on ANN performance. As a result of this study, researchers and practitioners can consider the potential benefits and limitations of complex topologies when designing ANNs for various tasks. Moreover, our work provides a foundation for future research focused on identifying optimal topological features, understanding the impact of multiple attributes, and developing new methodologies for constructing more efficient and robust ANN architectures. By further exploring the role of network topology in ANNs, we can unlock new possibilities for improving the performance and adaptability of these models across diverse applications.
2308.16791
Solving the initial conditions problem for modified gravity theories
Modified gravity theories such as Einstein scalar Gauss Bonnet (EsGB) contain higher derivative terms in the spacetime curvature in their action, which results in modifications to the Hamiltonian and momentum constraints of the theory. In principle, such modifications may affect the principal part of the operator in the resulting elliptic equations, and so further complicate the already highly non-linear, coupled constraints that apply to the initial data in numerical relativity simulations of curved spacetimes. However, since these are effective field theories, we expect the additional curvature terms to be small, which motivates treating them simply as an additional source in the constraints, and iterating to find a solution to the full problem. In this work we implement and test a modification to the CTT/CTTK methods of solving the constraints for the case of the most general four derivative, parity invariant scalar-tensor theory, and show that solutions can be found in both asymptotically flat/black hole and periodic/cosmological spacetimes, even up to couplings of order unity in the theory. Such methods will allow for numerical investigations of a much broader class of initial data than has previously been possible in these theories, and should be straightforward to extend to similar models in the Horndeski class.
Sam E. Brady, Llibert Aresté Saló, Katy Clough, Pau Figueras, Annamalai P. S
2023-08-31T15:08:09Z
http://arxiv.org/abs/2308.16791v2
# Solving the initial conditions problem for modified gravity theories ###### Abstract Modified gravity theories such as Einstein scalar Gauss Bonnet (EsGB) contain higher derivative terms in the spacetime curvature in their action, which results in modifications to the Hamiltonian and momentum constraints of the theory. In principle, such modifications may affect the principal part of the operator in the resulting elliptic equations, and so further complicate the already highly non-linear, coupled constraints that apply to the initial data in numerical relativity simulations of curved spacetimes. However, since these are effective field theories, we expect the additional curvature terms to be small, which motivates treating them simply as an additional source in the constraints, and iterating to find a solution to the full problem. In this work we implement and test a modification to the CTT/CTTK methods of solving the constraints for the case of the most general four derivative, parity invariant scalar-tensor theory, and show that solutions can be found in both asymptotically flat/black hole and periodic/cosmological spacetimes, even up to couplings of order unity in the theory. Such methods will allow for numerical investigations of a much broader class of initial data than has previously been possible in these theories, and should be straightforward to extend to similar models in the Horndeski class. ## I Introduction Recent breakthroughs in well-posed formulations [1; 2] have resulted in an expansion of the class of modified theories of gravity to which numerical relativity (NR) - numerical simulations that solve the Einstein Equations as a time evolution problem - can be applied. This has allowed for the simulation of strong gravity spacetimes in these theories, including the fully non-linear backreaction of the additional curvature terms onto the metric [3; 4; 5; 6; 7; 8], building on previous works that neglected such effects [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Such simulations are of particular interest for the construction of binary merger waveforms, but also have relevance to more general questions about the hyperbolicity of such theories in the strongly coupled regime [3; 4; 5; 19; 20; 21], and potentially in early Universe cosmology. However, in works to date in 3+1 spacetime dimensions, the challenge of satisfying the more complicated constraint equations has limited the physical scenarios that can be investigated. Most have begun with a trivial solution that satisfies the regular constraint equations of general relativity (GR) and set the additional scalar degree of freedom to zero (it is then evolved to a non-trivial configuration over time) [3; 5; 6; 7]. Alternatively, studies of spin-induced scalarisation that require a scalar "seed" have tolerated a certain level of constraint violation in the initial data arising from a non-trivial profile of the scalar, and relied on constraint damping to remove it during the initial stages of the evolution [8; 4; 20]. The Arnowitt-Deser-Misner (ADM) [22] decomposition of the Einstein Field Equations provides the basis for NR simulations, posing the system as a Cauchy problem, with a set of evolution and constraint equations for the 3-dimensional spatial metric \(\gamma_{ij}\) and the extrinsic curvature, \(K_{ij}\), of the spatial slices. The ADM approach gives rise to four independent constraint equations, which must be satisfied on each time slice, and in particular in the initial data that starts an NR simulation.
These constraints are given by \[\mathcal{H}\equiv R+K^{2}-K_{ij}K^{ij}-16\pi\rho=0\, \tag{1}\] \[\mathcal{M}_{i}\equiv D_{j}K^{j}_{\ i}-D_{i}K-8\pi S_{i}=0\, \tag{2}\] where \(K=\gamma^{ij}K_{ij}\) is the trace of the extrinsic curvature (also known as the mean curvature), and \(R\) and \(D_{i}\) are the Ricci scalar and covariant derivative associated with the 3-dimensional spatial metric. The energy density \(\rho\) and momentum densities \(S_{i}\) are functions of the matter fields and their derivatives. These can be derived directly from the matter action as projections of the stress-energy tensor \(T_{\mu\nu}\). In the case of modified gravity theories like EsGB, the additional curvature terms can be treated as further effective matter sources that depend on higher order derivatives of the other metric variables. Including the lapse \(\alpha\) and shift \(\beta^{i}\), there are 16 components that must be specified initially, but only four constraint equations. Whilst four of the extra components relate to physical degrees of freedom, the remainder are gauge. Values that represent free data and those that are fully determined by the physical scenario are not easy to separate, and so some must be chosen somewhat arbitrarily. The result is that there will often be a large number of possible ways to set the free data that appear to still meet the physical requirements. However, poor choices can result in uniqueness and existence problems in the initial constraints that prevent unique solutions being found. Many different approaches have been developed for solving the constraint equations (for reviews see the standard NR texts [23; 24; 25]). These approaches vary in the degrees of freedom that have their values imposed, and those that are solved for. The metric and extrinsic curvature can be decomposed in various ways (e.g., through a conformal decomposition), allowing for the freely chosen variables to be more closely identified with a particular physical interpretation. This can also leave the equations in a form more amenable to numerical solutions. One such approach is known as the Conformal-Transverse-Traceless (CTT) method. In this approach the extrinsic curvature is decomposed into its trace \(K\) and a traceless tensor \(A_{ij}\). The 3-metric \(\gamma_{ij}\) is decomposed into a conformal metric with determinant one \(\bar{\gamma}_{ij}\), and a conformal factor \(\psi\), as \(\gamma_{ij}=\psi^{4}\bar{\gamma}_{ij}\). \(A_{ij}\) is similarly decomposed as \(A_{ij}=\psi^{-2}\bar{A}_{ij}\), and the conformal \(\bar{A}_{ij}\) is further decomposed into a transverse-traceless part \(\bar{A}_{ij}^{TT}\) and a vector potential \(W_{i}\) (see equation 3). The conformal metric \(\bar{\gamma}_{ij}\), the mean curvature \(K\), and the transverse-traceless extrinsic curvature \(\bar{A}_{ij}^{TT}\) are then imposed to have some values (usually trivially those of a flat spacetime), and the constraints are used to solve for the conformal factor \(\psi\) and the vector potential \(W_{i}\). \(\bar{A}_{ij}\) can then be reconstructed as, \[\bar{A}_{ij}=\bar{A}_{ij}^{TT}+\bar{D}_{i}W_{j}+\bar{D}_{j}W_{i}-\frac{2}{3} \bar{\gamma}_{ij}\bar{D}_{k}W^{k}. \tag{3}\] In this method, the term in the energy density \(\rho\) is generically problematic, due to the nature of the resulting elliptic equation for \(\psi\).
It is normally only possible to specify a conformally-related density \(\bar{\rho}\), as this allows for the term in \(\rho\) to appear with the "right sign" to guarantee unique solutions of the elliptic equation for \(\psi\)[26]. A modification to the CTT technique, known as CTTK, was recently proposed to address this limitation, which is more acute for problems involving fundamental fields as opposed to fluids [27]. In this approach the variables are decomposed in the same way, but the elliptic equation for \(\psi\) is split into an algebraic equation for \(K\) in terms of \(\rho\), and \(\psi\) is then only required to satisfy Laplace's equation with a source term in \(\bar{A}_{ij}\). The conformal metric \(\bar{\gamma}_{ij}\) and \(\bar{A}_{ij}^{TT}\) still need to be chosen, but the mean curvature \(K\) is now solved for as part of the algorithm and this simplifies the solution for \(\psi\). Overall the method was found to be highly robust, with unique solutions being found by the solver over a wide range of scenarios. In this work, the CTTK method has been modified to solve the full constraint equations of the most general four derivative, parity invariant scalar-tensor theory of gravity (\(4\partial ST\)). It has been shown that the CTT formulation of the elliptic equations for these theories has unique solutions in the weak coupling limit [28], and one can expect this to carry over to the CTTK case. To achieve this, as suggested in [28], the contributions from the higher-derivative curvature terms in the action are treated as additional sources that are split between the available degrees of freedom. The CTTK technique is modified in two different ways, depending on the boundary conditions of the problem at hand, to ensure that solutions exist. We describe the methods in more detail below, and demonstrate their effectiveness in practice by showing that the solutions converge as expected. We follow the conventions in Wald's book [29]. Greek letters \(\mu,\nu\ldots\) denote spacetime indices and they run from 0 to 3; Latin letters \(i,j,\ldots\) denote indices on the spatial hypersurfaces and they run from 1 to 3. We set \(G=c=1\). ## II Methods ### Additional terms In \(4\partial ST\) gravity, the Einstein-Hilbert action is modified to include a scalar field coupled non-trivially to gravity, through the Gauss-Bonnet term \(\mathcal{L}^{\rm GB}\), \[S_{4\partial ST}=\int d^{4}x\sqrt{-g}\big{[}\frac{R}{16\pi}-V(\phi)+X+g_{2}( \phi)X^{2}+\lambda(\phi)\mathcal{L}^{\rm GB}\big{]}\, \tag{4}\] where \(X=-\frac{1}{2}(\nabla_{\mu}\phi)(\nabla^{\mu}\phi)\), \[\mathcal{L}^{\rm GB}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu \nu\rho\sigma}\, \tag{5}\] and \(g_{2}(\phi)\) and \(\lambda(\phi)\) are smooth functions of \(\phi\). This is the most general parity-invariant scalar-tensor theory of gravity that includes up to four derivatives of the metric, and is an example of the wider class of Horndeski theories [30]. The local magnitude of the coupling \(\lambda(\phi)\) controls the deviation of the solutions from the GR case, with the \(g_{2}(\phi)\) term being generally subdominant for similar coupling constants and the same field amplitude. Deviations are also amplified in regions of high curvature (i.e., with a larger Gauss-Bonnet invariant).
The effects of these additional terms can be included in \(\rho\) and \(S_{i}\), so that the constraints remain as they are in Equations 1 and 2 but with an effective \(\rho\) and \(S_{i}\) given by \[\rho = \rho^{\rm SF}+\rho^{X}+\rho^{\rm GB}\, \tag{6}\] \[S_{i} = S_{i}^{\rm SF}+S_{i}^{X}+S_{i}^{\rm GB}. \tag{7}\] Here \(\rho^{\rm GB}\) and \(S_{i}^{\rm GB}\) include higher derivative contributions from the scalar and tensor metric variables, and \(\rho^{X}\) and \(S_{i}^{X}\) contain the contributions of the \(g_{2}(\phi)\) term. Explicitly: \[\rho^{\rm GB} = \Omega M-2M_{kl}\Omega^{kl}\,, \tag{8a}\] \[S_{i}^{\rm GB} = \Omega_{i}M-2M_{ij}\Omega^{j}-4\big{(}\Omega^{j}_{\,\,[i}N_{j]}-\Omega^{jk}D_{[i}K_{j]k}\big{)}\,, \tag{8b}\] \[\rho^{\rm X} = \frac{g_{2}(\phi)}{4}\left(\Pi^{2}-D_{i}\phi D^{i}\phi\right)\left(3\Pi^{2}+D_{i}\phi D^{i}\phi\right), \tag{8c}\] \[S_{i}^{\rm X} = -g_{2}(\phi)\,\Pi\,D_{i}\phi\left(\Pi^{2}-D_{i}\phi D^{i}\phi\right)\,, \tag{8d}\] with \[M_{ij} = R_{ij}+\tfrac{2}{3}\gamma_{ij}K^{2}+\tfrac{1}{3}KA_{ij}-A_{ik}A_{j}^{k}\,, \tag{9a}\] \[N_{i} = -\frac{2}{3}D_{i}K+D_{j}A_{i}^{j}\,, \tag{9b}\] \[\Omega_{i} = -4\lambda^{\prime}\big{(}D_{i}\Pi+A^{j}_{i}D_{j}\phi+\tfrac{K}{3}D_{i}\phi\big{)}-4\lambda^{\prime\prime}\Pi\,D_{i}\phi, \tag{9c}\] \[\Omega_{ij} = 4\lambda^{\prime}\left(D_{i}D_{j}\phi+\Pi\,K_{ij}\right)+4\lambda^{\prime\prime}(D_{i}\phi)D_{j}\phi\,, \tag{9d}\] where \(\Pi\) is the conjugate momentum of \(\phi\), \(N_{i}\) is the GR momentum constraint, \(M=\gamma^{ij}M_{ij}\), \(\Omega=\gamma^{ij}\Omega_{ij}\), and \(\Omega_{i}\) and \(\Omega_{ij}\) come from the \(3+1\) decomposition of the Weyl tensor. The contributions from the kinetic and potential terms are given by the usual minimally coupled, real scalar terms, \[\rho^{\rm SF} = \frac{1}{2}\Pi^{2}+V(\phi)+\frac{1}{2}(D_{i}\phi)(D^{i}\phi)\, \tag{10}\] \[S_{i}^{\rm SF} = -\Pi\,D_{i}\phi. \tag{11}\] As mentioned in the introduction, previous work in such theories has required that \(\rho^{\rm GB}+\rho^{X}\ll\rho^{\rm SF}\) and \(S_{i}^{\rm GB}+S_{i}^{X}\ll S_{i}^{\rm SF}\) everywhere. This reduces the problem to (approximately) satisfying the regular GR constraint equations, and still allows the solution to evolve away from that of GR during the evolution. However, here we are able to treat the deviations from GR fully non-perturbatively. ### Choice of components to solve for As discussed in the introduction (and in more detail in [27]), in contrast to the CTT method that imposes a spatially constant value of the mean curvature \(K\), and solves for the conformal factor \(\psi=(\det\gamma)^{1/12}\), the CTTK method allows both \(K\) and \(\psi\) to have a spatially varying profile.
The conformal metric is also assumed to be \(\delta_{ij}\) for simplicity (i.e., conformal flatness is assumed). It should be possible to apply the same techniques with more general choices of \(\bar{A}_{ij}^{TT}\) and \(\bar{\gamma}_{ij}\), such as those that are required for highly spinning black hole initial data, which would simply result in additional source terms in the equations. For now we maintain these choices for simplicity. ### Black hole spacetimes With the \(4\partial ST\) terms included and an appropriate form chosen for \(U\) (see the appendix of [27] for a discussion), we write the Hamiltonian and momentum constraints in CTTK as follows \[K^{2} =24\pi\rho^{\rm SF}\, \tag{12}\] \[\partial_{j}\partial^{j}\psi =-\frac{1}{8}\psi^{-7}\bar{A}^{ij}\bar{A}_{ij}-2\pi\psi^{5}(\rho^{\rm GB}+\rho^{X})\, \tag{13}\] \[\partial_{j}\partial^{j}V_{i} =\frac{2}{3}\psi^{6}\partial_{i}K+8\pi\psi^{6}(S_{i}^{\rm SF}+S_{i}^{\rm GB}+S_{i}^{X}). \tag{14}\] The GR case with \(\rho=\rho^{\rm SF}\) and \(S_{i}=S_{i}^{\rm SF}\) is derived in [27] - in what follows we explain the motivation for the positioning of the additional terms. In regions of physical relevance, we can demand from an effective field theory (EFT) perspective that \(\rho^{\rm GB}+\rho^{X}<\rho^{\rm SF}\). However, in some regions of high curvature this may not hold. Moreover, the value of \(\rho^{\rm GB}+\rho^{X}\) is not positive definite, and in practice for most cases of interest it varies between positive and negative regions in different parts of the spatial slice. For this reason, \(\rho^{\rm SF}\) and \(\rho^{\rm GB}+\rho^{X}\) have been separated between equations (12) and (13). By including only the positive definite part in Eq. (12), we avoid the possibility of \(K^{2}\) having a negative source, even in regions where the additional curvature contributions dominate. However, since it is also not negative definite, the \(\rho^{\rm GB}+\rho^{X}\) term in equation (13) is likely to appear with the "wrong sign" in a linearised expansion of \(\psi\) in certain regions, which violates the conditions of the maximum principle and removes the guarantee of unique solutions [26]. In all our test cases (see Section III), this has not caused issues, and no further modification has been necessary for convergence to a solution. We speculate that this may be because we also include the contribution from \(\bar{A}^{ij}\bar{A}_{ij}\) in the elliptic equation for \(\psi\), which tends to stabilise the solutions 1. Footnote 1: This potential problem could be avoided by amending the split further such that \(\rho^{\rm SF}\to\rho^{+}=\rho^{\rm SF}+\rho^{P}\) and \(\rho^{\rm GB}+\rho^{X}\to\rho^{-}=\rho^{\rm GB}+\rho^{X}-\rho^{P}\), with any \(\rho^{P}\) that satisfies \(\rho^{P}>0\) and \(\rho^{P}>\rho^{\rm GB}+\rho^{X}\) everywhere, and \(\rho^{P}\to 0\) at the boundaries. If the spacetime contains one or more black holes, divergences will appear in \(\psi\) and \(\bar{A}_{ij}\). This can be dealt with by decomposing \(\bar{A}_{ij}\) (and therefore \(U\) and \(V_{i}\)) into a black hole part, \(\bar{A}_{ij}^{bh}\), which contains the divergences, and a regular part, \(\bar{A}_{ij}^{reg}\), which is solved for. The conformal factor \(\psi\) can also be decomposed into a sum of isolated black-hole solutions, \(\psi_{bh}\), and the regularised remainder, \(\psi_{reg}\). The Poisson equations for \(\psi\) and \(V_{i}\) can then be solved directly, or by linearising around the previous solution and solving for the error.
For example, the results given in Section III.1 are calculated by fully solving for \(V_{i}\) at each step, and iterating for \(\psi\) as \(\psi=\psi_{0}+\delta\psi\). The forms of \(A_{ij}^{bh}\) and \(\psi_{bh}\) are given in [27], and are those proposed by Bowen and York in [31; 32]. ### Cosmological spacetimes For simulations of cosmological spacetimes, periodic boundary conditions are often used (see [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43], although other approaches are possible [44]). In the CTTK method without a \(4\partial ST\) term, the method for solving the constraints with periodic boundaries is very similar to that for black hole spacetimes. Again, the constraints reduce to Equations (12) to (14). However, the elliptic equations now impose a set of integrability conditions, as discussed in [38; 45]. In cosmological spacetimes, the differential operator in equation (14) has a non-trivial kernel. Therefore, by the Fredholm alternative, solutions to this equation (which are necessarily non-unique) will only exist if the source is in the orthogonal complement of the kernel - i.e., in the adjoint. Since the kernel includes constants, it is necessary (but not sufficient) for the right hand side of the Poisson equation to equal zero when integrated over the entire grid. A similar requirement applies to the right hand side of equation (13) when it is treated as a constant source. With no \(4\partial ST\) term, this simply corresponds to requiring a periodic distribution of the scalar field, and ensuring that there is no net momentum flux through the box in any direction. However, with \(\lambda(\phi)\neq 0\), this is no longer sufficient, as there is no guarantee of the source terms including \(\rho^{\rm GB}\), \(\rho^{X}\), \(S_{i}^{\rm GB}\) and \(S_{i}^{X}\) averaging to zero across the grid, and no obvious way of choosing them such that this is always the case. In simple cases where only the elliptic equation for \(\psi\) is problematic (e.g., with a simple sinusoidal profile for \(\phi\) and no conjugate momentum \(\Pi\)), this can be solved by further dividing \(K\) into two parts, one constant and one spatially varying, which we call respectively \(K_{C}\) and \(K_{GR}\). The 'GR' part of \(K\) satisfies equation (12), as in the GR case, sourced by \(\rho^{\rm SF}\), that is \[K_{GR}^{2}=24\pi\rho^{\rm SF}. \tag{15}\] The constant part of \(K\), \(K_{C}\), can then be used to compensate for any violation of the integrability condition for \(\psi\). This is achieved by setting \(K_{C}\) to a value satisfying the condition \[K_{C}^{2}\int\frac{\psi^{5}}{12}d\Omega+K_{C}\int\frac{\psi^{5}K _{GR}}{6}d\Omega\\ -\int\Big{(}\partial_{j}\partial^{j}\psi+\frac{1}{8}\psi^{-7} \bar{A}_{ij}\bar{A}^{ij}+2\pi\psi^{5}(\rho^{\rm GB}+\rho^{X})\Big{)}d\Omega=0. \tag{16}\] As a result of these choices, the equation for \(\psi\) becomes \[\partial_{j}\partial^{j}\psi=-\frac{1}{8}\psi^{-7}\bar{A}_{ij} \bar{A}^{ij}-2\pi\psi^{5}(\rho^{\rm GB}+\rho^{X})+\frac{1}{12}\psi^{5}K_{C}^{2} \\ +\frac{1}{6}\psi^{5}K_{C}K_{GR}\,. \tag{17}\] Equation (16) will always have real solutions, unless \(\rho^{\rm GB}\) dominates over the combined contributions of \(\rho^{GR}\) and \(\bar{A}_{ij}\bar{A}^{ij}\) when averaged across the grid 2. Footnote 2: If this is true the weak-coupling condition will not be satisfied, so the solutions can probably not be stably evolved anyway. 
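Once the grid integrals in equation (16) are assembled, finding \(K_{C}\) reduces to solving a real quadratic. A minimal numpy sketch of this step follows; the helper and argument names are ours (with `rho_extra` standing for \(\rho^{\rm GB}+\rho^{X}\), `Abar2` for \(\bar{A}_{ij}\bar{A}^{ij}\), and all fields assumed pre-computed on a uniform grid of cell volume `dV`), and the sketch is illustrative rather than the released solver code.

```python
import numpy as np

def solve_Kc(psi, K_gr, lap_psi, Abar2, rho_extra, dV):
    """Solve the quadratic integrability condition (16) for the constant
    part of the mean curvature, K_C. Signs and factors follow Eq. (16)."""
    a = np.sum(psi**5 / 12.0) * dV
    b = np.sum(psi**5 * K_gr / 6.0) * dV
    c = -np.sum(lap_psi + psi**-7 * Abar2 / 8.0
                + 2.0 * np.pi * psi**5 * rho_extra) * dV
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        # rho^GB dominates on average: no real K_C (see discussion in text)
        raise ValueError("integrability condition has no real solution")
    # both roots satisfy (16); the smaller-magnitude one is a natural choice
    return min((-b + np.sqrt(disc)) / (2 * a),
               (-b - np.sqrt(disc)) / (2 * a), key=abs)
```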
However, we note that real solutions for the initial data will still be available if the sign of \(\lambda(\phi)\) is chosen to make the discriminant positive - although this adds a physical restriction on the coupling. The integrability condition for the momentum constraint can also be spoiled by the presence of various non-linear terms in \(S_{i}^{\rm GB}\). With a shift-symmetric or quadratic coupling and \(\Pi=0\), many of these terms are simplified. A sinusoidal profile for \(\phi\) then gives a contribution to \(S_{i}^{\rm GB}\) that satisfies this constraint. For a more general \(S_{i}^{\rm GB}\), removing the assumption of conformal flatness would provide another source in the constraints, potentially allowing for this contribution to be cancelled out by a judicious choice. In our tests below we restrict ourselves to showing that the method works in the simpler case, and leave more general conditions to future work. Even once the source in equation (14) is in the adjoint and the right hand side of (17) integrates to zero, the solution to the elliptic equations at each step suffers from non-uniqueness - the equations have multiple solutions, where solutions differ by the addition of a constant or linear term in the equation for the conformal factor 3, and one or more Killing vectors of the conformally flat metric in the case of \(V_{i}\) (see [45]). This is addressed by solving for their values perturbatively, starting with an initial guess and solving for a small correction (\(\delta\psi,\ \delta V^{i}\)) at each step. This perturbative treatment naturally generates a linear term in the equation for \(\delta\psi\) that prevents the constant and linear modes from growing. The freedom in \(\delta V^{i}\) can also be eliminated by adding a small linear coefficient to the Poisson equation for \(\delta V^{i}\). The addition of the conformal Killing vectors in \(V_{i}\) is unimportant as they do not change the resulting value of the extrinsic curvature \(K_{ij}\)[45]. However, any non-uniqueness in \(\psi\) has a physical consequence - its value at the start of the iteration picks out the final uniquely chosen solution, and this in turn determines physical properties of the field - e.g., the density of the gradient terms measured by normal observers. Footnote 3: This lack of uniqueness arises when the right hand side of equation (17) is treated as a constant source, which happens at each non-linear iteration if we do not solve perturbatively for \(\psi\). ## III Tests The methods described above for both black hole and cosmological spacetimes have been tested with a modified version of the CTTK solver used in [27]. This solver is constructed using the open-source Chombo [46] code for finite difference solution of PDEs with Adaptive Mesh Refinement - in particular here we adapt their multigrid solver for elliptic equations. The results are imported into GRFolres [47], the modified version of the NR evolution code GRChombo [48; 49], to check the constraint violation using the methods verified in [7; 8]. Here we show typical results for both overall convergence to a solution and convergence to the true zero-constraints solution with increasing resolution. The Chombo multigrid solver is designed to be second-order in all derivatives, so with the assumption that the errors are dominated by errors in the derivatives, the solver errors should also converge at that rate as the resolution is increased. This means that doubling the resolution should reduce the constraints by a factor of four 4.
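As a hedged illustration of how this check can be scripted (our helper, not part of the released solver): because the exact solution has vanishing constraints, the convergence order follows from the norms at just two resolutions.

```python
import numpy as np

def convergence_order(ham_h, ham_h2):
    """Estimate the order of convergence from the Hamiltonian constraint
    sampled at grid spacings h and h/2. Since the true solution satisfies
    H = 0 exactly, log2 of the ratio of L2 norms gives the order directly;
    a value close to 2 confirms second-order convergence."""
    l2 = lambda a: np.sqrt(np.mean(np.asarray(a) ** 2))
    return np.log2(l2(ham_h) / l2(ham_h2))
```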
These tests have been conducted with a variety of coupling functions and potentials, although (as described above) the possible scalar field configurations are restricted in the case of periodic boundaries. Figure 1 shows the convergence of the constraint violations to zero (with a fixed resolution) as the number of non-linear iterations increases for the black hole spacetime described in Section III.1. This shows good convergence to a solution of the full non-linear problem within \(\sim 10\) iterations. In practice, this global measure is rather crude and ignores the fact that in some regions with small errors (often nearer the boundaries in BH spacetimes) the solver takes longer to show good local convergence. More information can be gained by checking the spatial profiles. The convergence tests with increasing resolution for two particular cases are shown below in Figures 2 and 3. These plots show that the solver is consistently displaying the desired second-order convergence, but in the black hole case the solver was run for approximately 1000 non-linear iterations. Given that the solver is much less costly to run than the evolution code, such a high number of steps is not prohibitive - with three levels of refinement, this takes a few hours with \(\sim 100\) CPU cores on a typical computing cluster. In the cosmological case, where only one level is used, it takes under an hour. Footnote 4: Only two resolutions are required for the convergence tests since the true solution is known - i.e., the constraints should be zero across the grid. ### Tests of black hole method The black hole method described in Section II.3 has been tested with a dumbbell-shaped scalar field and momentum configuration around a central black hole with dimensionless spin coefficient \(a/M=0.5\) and a spin-axis along the \(z\) direction, where \(M\) is the total bare mass. The potential and coupling functions are both chosen to be quadratic, i.e., \(\lambda(\phi)=\frac{1}{2}\lambda_{\rm GB}^{2}\phi^{2}\) and \(V(\phi)=\frac{1}{2}m^{2}\phi^{2}\), with \(m=1\) and \(\lambda_{\rm GB}/M=1\), a scalar field amplitude of 0.1, and a momentum amplitude of 0.01. In this test \(g_{2}(\phi)\) was set to zero - its impact on the solutions was found to be negligible. This configuration was inspired by the scalar field configuration that the field has been found to settle into in spin-induced cases (see for example the plots in [8; 20]), although in such cases the field is massless. We do not attempt to match the stationary configuration exactly since this is simply a proof of principle, and we have tested other spatial configurations that show similar results to the ones here. Figure 2a shows the amplitude of the scalar field across a slice of constant \(y\)-coordinate, passing through the singularity. Figure 2b then shows the convergence as the grid spacing is halved. The results with a finer grid are approximately second-order across the grid, other than at grid boundaries. In the Hamiltonian constraint the errors are not fully dominated by the derivatives, which is likely to cause the small deviation from exact second-order convergence in the central region. The convergence towards a solution for a fixed resolution is also shown in Figure 1.

Figure 1: Plots of the Hamiltonian and momentum constraint violation against non-linear iterations of the elliptic solver for the black hole spacetime with a dumbbell scalar field configuration shown in Figure 2b. We see that the solver converges to a good solution within 10 iterations. The behaviour is similar for the cosmological initial data. The gradients of \(\log_{2}(\mathcal{H})\) and \(\log_{2}(||\mathcal{M}||_{L^{2}})\) over the first 7 steps are both -2.1, so the solver is converging approximately quadratically.

This method has also been tested with initial data for a black hole binary. The same coupling functions and amplitudes as the dumbbell test were used for two black holes with Bowen-York masses \(m_{1,2}=0.5M\), momenta \(|P|=2M\) perpendicular to their separation of \(12M\), and dimensionless spin parameters \(a/M=0.6\). In this case we set \(g_{2}(\phi)/M^{2}=1\), although as above this only has a small impact on the solutions. A Gaussian scalar field with momentum was imposed over each black hole, as shown by the contours in Figure 4, along with the change in the conformal factor over that of the bare punctures.

Figure 2: Scalar field configuration and convergence plots for a growing dumbbell scalar field configuration around a spinning black hole.

Figure 3: Scalar field configuration and convergence plots for a sinusoidal scalar field configuration and periodic boundary conditions.

### Tests of cosmological spacetimes method A similar test was conducted with periodic boundary conditions, and a sinusoidal profile for \(\phi\) in each direction. The same coupling and potential functions were used, with \(m=0.5\) and the other parameters unchanged. As explained above, we needed to set the momentum of the field \(\Pi\) to be zero in this case. Figure 3 shows the scalar field configuration and tests of convergence with increasing resolution in the periodic case, and also displays second-order convergence across the grid. The speed of convergence with non-linear iterations is similar to that in the black hole case. ## IV Discussion and future work In this work the CTTK method was adapted to solve the constraint equations of \(4\partial ST\) gravity, treating the additional curvature terms as another source to the Poisson equations of the GR case, as suggested in [28]. Whilst such a treatment is a sensible first guess given that the new terms should be small in an effective theory (and given that in the weakly coupled regime a unique solution has been shown to exist [28]), it is far from clear that such an approach will work in practice given the high non-linearity of the problem, especially as the coupling is made large. We have demonstrated that this is the case, and that the method is robust, provided certain choices are made about the split of the new terms between the available degrees of freedom. In fact, the method appears to be robust up to the strongly coupled regime of the theory, with coupling parameters of order 1. Beyond this regime the well-posedness of the evolution is no longer guaranteed, and in practice it will break down [19, 20, 21, 3, 4, 5]; therefore the solutions beyond this point are not of particular physical interest. This adapted method has been tested for both black hole and cosmological spacetimes, and shows appropriate convergence in both cases. Techniques have also been described for guaranteeing the existence and uniqueness of solutions with either asymptotically flat or periodic boundary conditions.
This is the first method that has been demonstrated to work for fully satisfying the constraint equations for generic initial data in 3+1 dimensions, and it therefore expands the possibilities for numerical investigation of these theories. Future applications of this technique may include simulations of black holes with non-trivial initial scalar field configurations, and tests of inflation in \(4\partial ST\) gravity with inhomogeneous initial conditions. It would also be useful to extend the method to permit non-conformally-flat spacetimes (which could address the issues with integrability of the momentum constraints that we encountered in the cosmological case) and to check that a similar approach works in the alternative extended conformal thin sandwich (XCTS) method [50, 51], in which one solves also for the lapse and the shift functions to achieve a specific initial time evolution of the variables. XCTS is more widely used in practice than CTT and offers a number of advantages, especially for identifying equilibrium initial data. We anticipate that the approach developed here should work equally well in such cases, but we hope to see this demonstrated in future work by groups working with such solvers.

Figure 4: Correction to the conformal factor of the bare punctures for a black hole binary, with a superimposed contour plot showing the Gaussian scalar field over each black hole.

## V Acknowledgements We would like to thank Josh Aurrekoetxea and Eugene Lim for helpful discussions about the CTTK method, and for the use of the CTTK code they developed with KC, which was modified in this work. We also thank Aron Kovacs and Miguel Bezares for helpful conversations. We thank the entire GRChombo 5 collaboration for their support and code development work. PF would like to thank the Enrico Fermi Institute and the Department of Physics of the University of Chicago for hospitality during the final stages of this work. PF is supported by a Royal Society University Research Fellowship No. URF\(\backslash\)R\(\backslash\)201026, and No. RF\(\backslash\)ERE\(\backslash\)210291. KC is supported by an STFC Ernest Rutherford fellowship, project reference ST/V003240/1. SB is supported by a QMUL Principal studentship. LAS is supported by a QMUL Ph.D. scholarship. APS thanks Queen Mary University of London for hosting him to work on this project, and Erik Schnetter for discussions on initial condition solvers. This work used the ARCHER2 UK National Supercomputing Service ([https://www.archer2.ac.uk](https://www.archer2.ac.uk)). This work also used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. Calculations were also performed using the Sulis Tier 2 HPC platform hosted by the Scientific Computing Research Technology Platform at the University of Warwick. Sulis is funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium. This research also utilised Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT [52]. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2302.14863
Chiral quantum optics in the bulk of photonic quantum Hall systems
We study light-matter interactions in the bulk of a two-dimensional photonic lattice system, where photons are subject to the combined effect of a synthetic magnetic field and an orthogonal synthetic electric field. In this configuration, chiral waveguide modes appear in the bulk region of the lattice, in direct analogy to transverse Hall currents in electronic systems. By evaluating the non-Markovian dynamics of emitters that are coupled to those modes, we identify critical coupling conditions, under which the shape of the spontaneously emitted photons becomes almost fully symmetric. Combined with a directional, dispersionless propagation, this property enables a complete reabsorption of the photon by another distant emitter, without relying on any time-dependent control. We show that this mechanism can be generalized to arbitrary in-plane synthetic potentials, thereby enabling flexible realizations of re-configurable networks of quantum emitters with arbitrary chiral connectivity.
Daniele De Bernardis, Francesco Piccioli, Peter Rabl, Iacopo Carusotto
2023-02-28T18:59:57Z
http://arxiv.org/abs/2302.14863v2
# Chiral quantum optics in the bulk of photonic quantum Hall systems ###### Abstract We study light-matter interactions in the bulk of a two-dimensional photonic lattice system, where photons are subject to the combined effect of a synthetic magnetic field and an orthogonal synthetic electric field. In this configuration, chiral waveguide modes appear in the bulk region of the lattice, in direct analogy to transverse Hall currents in electronic systems. By evaluating the non-Markovian dynamics of emitters that are coupled to those modes, we identify critical coupling conditions, under which the shape of the spontaneously emitted photons becomes almost fully symmetric. Combined with a directional, dispersionless propagation, this property enables a complete reabsorption of the photon by another distant emitter, without relying on any time-dependent control. We show that this mechanism can be generalized to arbitrary in-plane synthetic potentials, thereby enabling flexible realizations of re-configurable networks of quantum emitters with arbitrary chiral connectivity. ## I Introduction The challenge of building fully operative quantum devices such as quantum computers, quantum simulators and quantum cryptography systems has stimulated an unprecedented flow of ideas for the implementation of technologies based on the principles of quantum mechanics [1]. In this effort, many disruptive ideas came by combining concepts from different areas of quantum sciences [2]. Theories and concepts that were originally developed to explain fundamental physical phenomena are now re-elaborated in a new technological perspective. This process not only serves to inspire the realization of new devices, but also provides new insights on existing knowledge and contributes to building a more complete understanding of the microscopic world. This is the case, for instance, for the quantum Hall effect. This effect was first discovered in electronic materials more than 40 years ago [3] and sparked the development of the theory of topological materials [4] and the proposal of novel schemes for topologically protected quantum computing [5]. These concepts are now being re-elaborated in the context of photonic systems, giving rise to the field of _topological photonics_[6]. Here, ideas and phenomena of the integer and fractional quantum Hall effect are implemented and generalised to various synthetic photonic, phononic, atomic or even molecular platforms [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], with unprecedented freedom in tuning the physical parameters and measuring observables, which were previously inaccessible in traditional solid-state systems. These developments in fundamental science have naturally opened the way to technological applications, for instance to exploit the topologically protected chiral edge modes to create new integrated solutions for a unidirectional transport of information, robust against system imperfections [18; 19]. The potential of these new devices became particularly evident in the framework of the field of _chiral quantum optics_[20]: here, the use of topological chiral channels for the propagation of photons combined with non-linear quantum emitters, such as atoms or quantum dots, opens the way toward the creation of a full cascaded quantum network [21; 22], which is a central piece in the development of quantum information technologies [23].
This sparked a new era for topological photonics experiments, with the objective of creating hybrid qubit-photonic lattice platforms in different spectral regions, from the GHz up to the optical range. In these systems, topological features are exploited to realize complete chiral quantum optical setups, coupling their topological chiral edge channels to localized quantum emitters (or qubits) [24; 25; 26]. In parallel to such intense experimental efforts, new innovative theoretical proposals are constantly made to exploit these devices for new technological applications [27; 28; 29; 30; 31; 32; 33; 34; 35]. In this article we study the light-matter interaction dynamics of two-level quantum emitters coupled to a 2D photonic lattice subject to a homogeneous perpendicular synthetic magnetic field and an in-plane homogeneous synthetic electric field. In contrast to the existing chiral quantum optics literature [7; 21; 22; 30; 31; 33; 34], which mostly focuses on light propagating along edge modes, here we investigate new strategies based on light propagation through the _bulk_ of a 2D photonic system via the photonic analog of the Hall current. In the last decade, related anomalous transport and Berry curvature effects in the bulk of photonic systems have been the subject of several theoretical [36; 37; 38] and experimental [39; 40; 41] works, but have never been proposed as the operating principle of photonic devices. Specifically, we show here how the combined effect of crossed synthetic electric and magnetic fields produces effective 1D waveguides based on the Hall effect, which allow light to propagate unidirectionally through the bulk of the lattice, similar to Hall currents. The highly inhomogeneous local density of photonic states of these waveguides makes the emission dynamics of two-level quantum emitters strongly non-Markovian. In this peculiar regime, photons can be naturally emitted along a single direction with a highly symmetric shape of the wavepacket. This enables a very efficient chiral state transfer between distant emitters located in the bulk of the lattice, without the need for fine-tuned time-dependent control pulses [23; 42]. In view of the high tunability of topological photonic structures, this new way of transporting information through the bulk of a photonic lattice opens up intriguing new possibilities for designing cascaded quantum networks with an improved performance compared to conventional chiral quantum optics setups. In particular, instead of using 2D topological systems to engineer 1D chiral channels along their edge, our proposed approach makes optimal use of the full bulk region of the photonic lattice, and thereby drastically increases the number of emitters that can be interfaced in a directional manner. Thanks to the topological origin of the underlying mechanism, this transfer method is highly robust with respect to imperfections and can be readily generalized to non-uniform electric field profiles leading to arbitrary curvilinear chiral 1D waveguides. This paves the way for realizing re-configurable cascaded networks with arbitrary connectivity. The article is structured as follows. In Sec. II we introduce the model for our 2D photonic quantum Hall system with synthetic magnetic and synthetic electric fields. In Sec. III we use the lattice photonic Green's function to provide a basic description of photon propagation across the bulk of this system and for the appearance of effective chiral waveguide modes. In Sec. 
IV we study the dynamics of a single emitter coupled to the photonic lattice and discuss the different regimes of light-matter interactions in this setup. In Sec. V we show how the non-Markovian emission dynamics in the critical-coupling regime enables high-fidelity quantum-state transfer operations between two emitters in the lattice. In Sec. VI we extend our results to arbitrary electric field profiles and establish the concept of photonic percolation quantum networks. Finally, in Sec. VII we summarize our main conclusions. ## II The model ### Light-matter interactions in photonic lattices We consider the system depicted in Fig. 1 (a), where \(N\) (artificial) two-level emitters with frequency \(\omega_{e}\) are locally coupled to a two-dimensional photonic lattice with dimensions \(L_{x}\) and \(L_{y}\). We denote the position of the \(i\)-th lattice site by \(\vec{r}_{i}=(x_{i},y_{i})\) and consider a simple square lattice geometry with lattice spacing \(l_{0}\) and \(M=L_{x}L_{y}/l_{0}^{2}\) lattice sites in total. We also assume that the number of emitters is much smaller than the number of lattice sites, \(N\ll M\). As shown in Fig. 1 (b), every single lattice site represents a localized photonic mode with frequency \(\epsilon_{i}\) and annihilation operator \(\Psi_{i}\equiv\Psi(\vec{r}_{i})\). By considering only nearest-neighbor hopping between the localized modes, the general lattice Hamiltonian is given by \[H_{\rm ph}=\sum_{i=1}^{M}\epsilon_{i}\Psi_{i}^{\dagger}\Psi_{i}-\hbar J\sum_{ \langle ij\rangle}\left(e^{i\phi_{ij}}\Psi_{i}^{\dagger}\Psi_{j}+{\rm H.c.} \right). \tag{1}\] The tunneling amplitude is complex-valued, with a non-trivial phase \(\phi_{ij}\) to break time-reversal symmetry. In our system we use this phase to generate a homogeneous synthetic magnetic field for photons by imposing \(\phi_{ij}=\frac{e}{\hbar}\int_{\vec{r}_{j}}^{\vec{r}_{i}}\vec{A}(\vec{r})\cdot d\vec{r}\)[6], with \(\vec{A}(\vec{r})=B(0,x,0)\) being the synthetic vector potential for a homogeneous synthetic magnetic field taken for simplicity in the Landau gauge. As usual, we express the strength of the magnetic field in terms of the dimensionless parameter \(\alpha=e\Phi/(2\pi\hbar)\), where \(\Phi=Bl_{0}^{2}\) is the flux enclosed in a single plaquette. Going beyond the setup considered in Ref. [43], here we impose an additional linear frequency gradient, \[\epsilon_{i}=-eE\,x_{i}+\hbar\omega_{p}, \tag{2}\] Figure 1: Photonic quantum Hall system. (a) Sketch of a 2D photonic lattice with a perpendicular synthetic magnetic field and an in-plane synthetic electric field. In analogy to Hall currents, this configuration results in the formation of chiral waveguide modes. Coupling of two-level emitters to the chiral waveguide modes leads to the directional emission and reabsorption of photons. (b) Schematic view of the photonic hopping amplitudes for the specific Landau-gauge, Harper-Hofstadter lattice configuration under consideration. (c) Spectrum of the photonic lattice Hamiltonian \(H_{\rm ph}\) in Eq. (1), where the energy of each mode is plotted as a function of the mean displacement of its center of mass along the \(x\)-direction. In this representation, the tilting of the Landau levels by the electric field and the presence of edge modes at the boundaries are clearly visible. The parameters used for this plot are \(\alpha=1/10\), \(N_{x}=N_{y}=40\) and \(U_{0}/J=0.05\), and we have assumed periodic boundary conditions (PBC) along the \(y\)-direction. 
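As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch (not part of the original work; we set \(\hbar=1\), measure energies in units of \(J\), put \(\omega_{p}=0\), and the helper name `idx` is ours) builds the Landau-gauge Harper-Hofstadter hopping matrix with the on-site gradient and diagonalizes it, which is essentially the computation behind Fig. 1 (c):

```python
import numpy as np

# Harper-Hofstadter lattice with a linear on-site gradient, Eqs. (1)-(2).
# Parameters of Fig. 1(c): alpha = 1/10, Nx = Ny = 40, U0/J = 0.05, PBC along y.
Nx, Ny, alpha, J, U0 = 40, 40, 1/10, 1.0, 0.05

def idx(x, y):
    return x * Ny + y

M = Nx * Ny
H = np.zeros((M, M), dtype=complex)
for x in range(Nx):
    for y in range(Ny):
        i = idx(x, y)
        H[i, i] = -U0 * x                     # synthetic electric field, Eq. (2)
        if x + 1 < Nx:                        # hopping along x (no Peierls phase)
            H[idx(x + 1, y), i] = H[i, idx(x + 1, y)] = -J
        j = idx(x, (y + 1) % Ny)              # hopping along y, PBC, phase 2*pi*alpha*x
        H[j, i] = -J * np.exp(1j * 2 * np.pi * alpha * x)
        H[i, j] = np.conj(H[j, i])

w, V = np.linalg.eigh(H)                      # eigenvalue problem of Eq. (4)
X = np.repeat(np.arange(Nx), Ny)              # x-coordinate of every site
x_mean = (np.abs(V) ** 2 * X[:, None]).sum(axis=0)   # <x> per mode, as in Fig. 1(c)
print("lowest eigenfrequencies (units of J):", np.round(w[:5], 3))
```

Plotting `w` against `x_mean` reproduces the tilted Landau levels and the edge branches shown in Fig. 1 (c).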
where \(\omega_{p}\) is the bare frequency of the local modes. This simulates the effect of a homogeneous electric field in the \(x\)-direction. The strength of the field is characterized by a voltage drop \(U_{0}=eEl_{0}\) between two neighboring sites. This situation is then similar to a solid-state system, where the effect of a crossed magnetic field \(\vec{B}\) and an electric field \(\vec{E}\) gives rise to a quantized Hall current \(\vec{\mathcal{J}}_{\mathrm{H}}\sim\vec{E}\times\vec{B}\), which flows, in our convention, along the \(y\)-axis. Including the emitters and their coupling to the local photon modes, the total Hamiltonian for this setup is given by \[H=H_{\mathrm{ph}}+\sum_{n=1}^{N}\frac{\hbar\omega_{e}^{n}}{2}\sigma_{z}^{n}+ \sum_{n=1}^{N}\left(\frac{\hbar g_{n}}{2}\Psi(\vec{r}_{e}^{\,n})\sigma_{+}^{n}+ \mathrm{H.c.}\right). \tag{3}\] Here, the \(\sigma_{z/\pm}^{n}\) are Pauli matrices for an emitter at position \(\vec{r}_{e}^{\,n}\), while \(\omega_{e}^{\,n}\approx\omega_{p}\) and \(g_{n}\) are its transition frequency and the strength of the light-matter coupling, respectively. ### A photonic lattice in the quantum Hall regime Since the photons are noninteracting, the properties of the photonic lattice are fully encoded in the eigenfrequencies \(\omega_{\lambda}\) and eigenmodes \(f_{\lambda}(\vec{r}_{i})\) of the hopping matrix, i.e., in the solutions of the eigenvalue equation \[\sum_{j}\left(\epsilon_{i}\delta_{ij}-\hbar Je^{i\phi_{ij}}\delta_{\langle ij \rangle}\right)f_{\lambda}(\vec{r}_{j})=\hbar\omega_{\lambda}f_{\lambda}(\vec {r}_{i}), \tag{4}\] where \(\delta_{ij}\) is the Kronecker delta, and \(\delta_{\langle ij\rangle}\) is the Kronecker delta for nearest neighbors. For a non-zero magnetic flux, \(\alpha\neq 0\), and a homogeneous on-site frequency \(\epsilon_{i}=\hbar\omega_{p}\) for all lattice sites, the photonic spectrum is given by the famous Hofstadter butterfly [44], where all the eigenvalues are grouped in a finite number of narrow Landau levels, with energies that are symmetrically distributed around \(\omega_{p}\). A first consequence of the presence of a finite electric field is the broadening of these levels into bands with a width \(\sim U_{0}N_{x}\). This is clearly visible in Fig. 1 (c), where we plot the eigenfrequencies \(\omega_{\lambda}\) (black dots) as a function of the mean displacement of the corresponding mode-function, \(\langle x\rangle=\sum_{i}x_{i}\,|f_{\lambda}(\vec{r}_{i})|^{2}\), for finite \(\alpha\) and a finite voltage drop \(U_{0}\). Here, the most visible effect of the synthetic electric field is the tilting of the photonic Landau levels with a slope proportional to \(\sim U_{0}\). In the intermediate magnetic field regime, where \(l_{0}<l_{B}<L_{x,y}\) and \(l_{B}=\sqrt{\hbar/eB}=l_{0}/\sqrt{2\pi\alpha}\) is the magnetic length, Eq. (4) can be approximated by a differential equation in the continuum, which recovers the form of a Schrödinger equation for a particle in an external electric and magnetic field [43]. The photonic eigenmodes of the lattice can then be approximated by Landau levels in the continuum, \(f_{\lambda}(\vec{r}_{i})\equiv\Phi_{\ell k}(\vec{r}_{i})\), where \[\Phi_{\ell k}(\vec{r})=l_{0}\frac{e^{iky}}{\sqrt{L_{y}}}\varphi_{\ell}^{\mathrm{ h.o.}}\left(x+l_{B}^{2}k-l_{B}\frac{U_{B}}{\hbar\omega_{B}}\right), \tag{5}\] and \(\varphi_{\ell}^{\mathrm{h.o.}}(x)\) is the \(\ell\)-th harmonic oscillator eigenfunction with oscillator length given by the magnetic length \(l_{B}\) (see Appendix A for more details). 
In Eq. (5), the index \(\ell=0,1,2,\dots\) labels the discrete Landau levels and \(k\) is the wavevector along the \(y\)-direction. We have also introduced the parameter \[U_{B}=eEl_{B}, \tag{6}\] which characterizes the interplay between the magnetic and the electric field and corresponds to the voltage drop across a cyclotron orbit. In the following we will refer to \(U_{B}\) as the _Landau voltage_ and we will see how it plays a crucial role in determining the light-matter coupling dynamics. In the presence of the electric field, the Landau levels are no longer degenerate and their energy is approximately given by \[\hbar\omega_{\ell k}\approx\hbar\omega_{b}+\hbar\omega_{B}\left(\ell+\frac{1} {2}\right)+\hbar\omega_{\ell}^{(2)}+U_{B}\left(l_{B}k-\frac{U_{B}}{2\hbar \omega_{B}}\right), \tag{7}\] where \(\omega_{b}=\omega_{p}-4J\) is the frequency of the lower band edge and \(\omega_{B}=4\pi\alpha J\) is the cyclotron frequency. Here we have also included the second order correction to the Landau levels due to lattice discretization [43], \[\omega_{\ell}^{(2)}=-\frac{\omega_{B}^{2}}{32J}\left(2\ell^{2}+2\ell+1\right), \tag{8}\] which is necessary to match this analytic result with exact numerics. In Fig. 1 (c) we compare Eq. (7) (red dashed lines) to the full spectrum (black dots) obtained via a numerical diagonalization of Eq. (4). For the analytic results in this plot, we approximate the average displacement of the eigenmodes by \(\langle x\rangle\simeq-l_{B}^{2}k+U_{B}l_{B}/(\hbar\omega_{B})\). We see that Eq. (7) predicts very accurately the lowest energy bands for the bulk modes, while the highest energy bands are given by a symmetric mirroring of the lowest states. Of course, the continuum approximation fails near the center of the band, where the effect of the discretization becomes important. As we are going to see in what follows, the linear shape of the dispersion relation in Eq. (7) plays a crucial role since it guarantees that wavepackets do not suffer broadening or distortion during propagation. Note that because we consider a finite lattice, we have a limited number of bulk modes in each Landau level \(\ell\). The eigenmodes are localized states along the \(x\)-direction, with a spatial extension of \(\Delta x\sim\sqrt{1+\ell}\,l_{B}\) and centered at the \(k\)-dependent position \(\langle x\rangle=-l_{B}^{2}k+U_{B}l_{B}/(\hbar\omega_{B})\). As such, the number of states in each Landau level can be estimated by counting the number of \(k\)-modes with spacing \(\Delta k=2\pi/L_{y}\) that cover the distance \(L_{x}\). For example, based on this estimate, the lowest Landau level (LLL) with \(\ell=0\) contains about \(M_{\ell=0}\approx L_{x}L_{y}/(2\pi l_{B}^{2})=M\alpha\) states for which \(0<k<L_{x}/l_{B}^{2}\). The remaining states outside this range are instead fully localized on the edge and represent the usual topological edge modes. In Fig. 1 (c) we can see that for states at the boundary of the system, where \(\langle x\rangle\sim 0,L_{x}\), the eigenfrequencies lie outside the tilted Landau level, and form a band of edge states. Note, however, that in contrast to the dispersionless bulk states, the large variation of the group velocity along the edges typically leads to a significant broadening and deformation of propagating wavepackets. ## III Quantum Hall transport for photons In this section we provide a theoretical framework to describe the single photon dynamics in this synthetic quantum Hall configuration. 
In particular, this analysis highlights one of the most important consequences that arises from the presence of both magnetic and electric fields: the photonic bulk, often referred to as a Chern insulator, now allows for propagation and transport in a direction perpendicular to the two fields. This is the direct photonic analog of the quantum Hall effect for electrons. ### Photon Green's function In order to describe the quantum Hall dynamics we make use of the photonic Green's function (or propagator). In its general form, this Green's function can be expressed in terms of the photonic eigenmodes and their corresponding eigenfrequencies as \[\begin{split} G(t,\vec{r}_{i},\vec{r}_{j})&=\langle \text{vac}|\Psi(t,\vec{r}_{i})\Psi^{\dagger}(0,\vec{r}_{j})|\text{vac}\rangle \\ &=\sum_{\lambda}f_{\lambda}(\vec{r}_{i})f_{\lambda}^{*}(\vec{r}_ {j})e^{-i\omega_{\lambda}t}\end{split} \tag{9}\] and provides the propagation amplitude of a photon from position \(\vec{r}_{j}\) to position \(\vec{r}_{i}\) in a time \(t\). Using the approximate forms of the photonic eigenmodes and eigenfrequencies given in Eqs. (5)-(7), we can write down an explicit expression for the Green's function restricted to the LLL, \[\begin{split} G_{\ell=0}(t,\vec{r}_{i},\vec{r}_{j})\approx& \frac{l_{0}^{2}}{2\pi l_{B}^{2}}e^{i(\theta_{ij}-\omega_{\ell=0}^{LL}t)}e^{- \frac{|\vec{r}_{i}-\vec{r}_{j}|^{2}}{4l_{B}^{2}}}I(t,\vec{r}_{i},\vec{r}_{j}). \end{split} \tag{10}\] Here \(\theta_{ij}\) is the (gauge-dependent) phase \[\theta_{ij}=-\frac{x_{i}y_{j}-x_{j}y_{i}}{2l_{B}^{2}}+\frac{x_{i}y_{i}-x_{j}y _{j}}{2l_{B}^{2}}, \tag{11}\] and \[\omega_{\ell}^{LL}=\omega_{b}+\omega_{B}\left(\ell+\frac{1}{2}\right)+\omega_ {\ell}^{(2)} \tag{12}\] is the frequency of the \(\ell\)-th Landau level. The electric field enters the dynamics only through the time-dependent part of the propagator, \(I(t,\vec{r}_{i},\vec{r}_{j})\), which is given by \[\begin{split} I(t,\vec{r}_{i},\vec{r}_{j})=&\frac{ 2\sqrt{\pi}l_{B}}{L_{y}}\sum_{k}e^{-\left(l_{B}k-\frac{(x_{i}+x_{j})-i(y_{i}-y_ {j})}{2l_{B}}+\frac{U_{B}}{2\hbar\omega_{B}}\right)^{2}}\\ &\times e^{i\left[c_{H}k+U_{B}^{2}/(2\hbar^{2}\omega_{B}) \right]t}.\end{split} \tag{13}\] Here the Hall speed of propagation is given by the usual formula \[c_{H}=\frac{U_{B}l_{B}}{\hbar}=\frac{E}{B}. \tag{14}\] While there is no simple compact form for Eq. (13) in the general case, it is possible to simplify the problem in the limit \(L_{y}\to\infty\). In this limit, the sum in Eq. (13) can be approximated by an integral by substituting \(2\pi/L_{y}\sum_{k}\mapsto\int dk\), and we obtain \[I(t,\vec{r}_{i},\vec{r}_{j})\simeq e^{-\frac{U_{B}^{2}t^{2}}{4\hbar^{2}}}e^{i \left(\frac{x_{i}+x_{j}+i(y_{i}-y_{j})}{2l_{B}}-\frac{U_{B}}{2\hbar\omega_{B}} \right)U_{B}t/\hbar}\,. \tag{15}\] For a vanishing electric field, \(U_{B}=0\), this expression reduces to a constant, \(I(t,\vec{r}_{i},\vec{r}_{j})|_{U_{B}=0}=1\), meaning that photons do not propagate. In the presence of emitters, this property gives rise to the formation of localized Landau-photon polaritons, as described in Ref. [43]. By combining Eq. (15) and Eq. 
(10) we obtain the total Green's function in the continuum for an infinitely large system, \[\begin{split} G_{0}(t,\Delta x,\Delta y)&=\frac{l_ {0}^{2}}{2\pi l_{B}^{2}}e^{-\frac{\Delta x^{2}}{4l_{B}^{2}}}e^{-\frac{1}{4}(U_{ B}t/\hbar-\Delta y/l_{B})^{2}}\\ &\times e^{i[\theta_{ij}-(\omega_{\text{ch}}(x_{j})+\omega_{\text {ch}}(x_{i}))t/2]}.\end{split} \tag{16}\] Here, \(\Delta x=x_{j}-x_{i}\) and \(\Delta y=y_{j}-y_{i}\), and we have introduced the position-dependent Hall-channel resonance frequency \[\omega_{\text{ch}}(x)=\omega_{\ell=0}^{LL}-\frac{U_{B}}{\hbar}\frac{x}{l_{B}} +\frac{U_{B}^{2}}{2\hbar^{2}\omega_{B}}, \tag{17}\] which includes a second order correction due to mixing with the higher Landau levels by the synthetic electric field. From Eq. (16) we see that for \(\Delta y>0\) the photon emitted in \(y_{j}\) can coherently propagate to \(y_{i}\) at the Hall speed \(c_{H}\), provided that \(|x_{i}-x_{j}|\lesssim l_{B}\). On the other hand, for \(\Delta y<0\) the propagation is exponentially suppressed (for \(t>0\)). This clearly shows that the photon propagation is unidirectional (or chiral) and without any dispersion. In summary, our calculations show that for each point in the bulk, the photonic lattice behaves as a unidirectional waveguide along the \(y\)-direction, perpendicular to both the electric and the magnetic field and with a Gaussian transverse size \(\Delta x\sim l_{B}\) fixed by the magnetic length. This is schematically shown in Fig. 1 (a). Each chiral channel at position \(x\) has its own resonance frequency \(\omega_{\text{ch}}(x)\), and it is detuned from neighboring channels at positions \(x\pm l_{0}\) by \(\hbar|\Delta\omega|=U_{0}\). ### Local density of states The photonic Green's function derived above is useful not only to describe the propagation of a photon through the lattice, but also to extract the local density of states, defined as \[\rho_{\rm ph}(\vec{r},\omega)=\int dt\,G(t,\vec{r},\vec{r})e^{i\omega t}. \tag{18}\] In the following we will show that this quantity is particularly important for describing the dynamics of a single emitter that is coupled to the photonic lattice at a position \(\vec{r}\). From Eq. (16) we can derive an analytic expression for the local density of states for the LLL, \[\rho_{\rm ph}^{\ell=0}(\vec{r},\omega)\approx\frac{2\sqrt{\pi}\alpha\hbar}{U_ {B}}e^{-\frac{\hbar^{2}[\omega-\omega_{\rm ch}(x)]^{2}}{U_{B}^{2}}}\,. \tag{19}\] This expression shows that as long as the relevant system dynamics takes place in a narrow range of frequencies \(\hbar|\omega-\omega_{\rm ch}(x)|\lesssim U_{B}\), the bulk behaves effectively as a continuous 1D waveguide, with an almost constant density of states. However, for a lower value of the electric field or when the dynamics involves a wider range of frequencies, \(\hbar|\omega-\omega_{\rm ch}(x)|>U_{B}\), the density of states decays very rapidly and non-Markovian effects start to play a relevant role. In this regime, the direct analogy with a conventional chiral waveguide breaks down and new phenomena appear, as we are going to see in the next Section. The shape of the density of states in Eq. (19) is quite surprising at first sight. Indeed, by looking at Fig. 1 (c) and at the analytic estimates given above, one would expect an approximately flat density of states over the whole width \(\sim U_{0}M_{x}\) of the tilted Landau level, where \(M_{x}=L_{x}/l_{0}\). 
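The Gaussian profile of Eq. (19) can be checked directly against exact diagonalization. The sketch below (our own check, not from the original work; \(\hbar=1\), units of \(J\), \(\omega_{p}=0\), and the channel frequency \(\omega_{\rm ch}\) is evaluated from Eq. (17) including the second-order correction) broadens the exact eigenmode weights at a bulk site and compares the peak position with the analytic prediction:

```python
import numpy as np

# Numerical local density of states at a bulk site vs. the Gaussian of Eq. (19).
Nx, Ny, alpha, J, U0 = 21, 21, 1/10, 1.0, 0.1

def hofstadter(Nx, Ny, alpha, J, U0):
    idx = lambda x, y: x * Ny + y
    H = np.zeros((Nx * Ny, Nx * Ny), dtype=complex)
    for x in range(Nx):
        for y in range(Ny):
            i = idx(x, y)
            H[i, i] = -U0 * x
            if x + 1 < Nx:
                H[idx(x + 1, y), i] = H[i, idx(x + 1, y)] = -J
            j = idx(x, (y + 1) % Ny)                 # PBC along y
            H[j, i] = -J * np.exp(1j * 2 * np.pi * alpha * x)
            H[i, j] = np.conj(H[j, i])
    return H

w, V = np.linalg.eigh(hofstadter(Nx, Ny, alpha, J, U0))
site = (Nx // 2) * Ny + Ny // 2                      # a site deep in the bulk
weights = np.abs(V[site]) ** 2                       # |f_lambda(r)|^2

eta = 0.05                                           # artificial broadening
omega = np.linspace(w.min(), w.min() + 2.0, 400)
rho_num = (weights * np.exp(-((omega[:, None] - w) / eta) ** 2)).sum(1) / (eta * np.sqrt(np.pi))

lB = 1 / np.sqrt(2 * np.pi * alpha)                  # magnetic length (units of l0)
UB = U0 * lB                                         # Landau voltage
omegaB = 4 * np.pi * alpha * J                       # cyclotron frequency
x_e = Nx // 2
w_ch = -4*J + omegaB/2 - omegaB**2/(32*J) - UB*x_e/lB + UB**2/(2*omegaB)   # Eq. (17)
print("LDOS peak (numerics vs Eq. 19):", omega[rho_num.argmax()].round(3), round(w_ch, 3))
```

Within the broadening, the numerically resolved peak sits at \(\omega_{\rm ch}(x_e)\) and has a width set by \(U_B\) rather than by the full tilted-level width \(U_0 M_x\).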
In contrast to this expectation, our calculations show that the effective photonic bandwidth within each Landau level that is accessible to a localized emitter is determined by the Landau voltage \(U_{B}\), independently of the lattice size. This is due to the peculiar lateral localization of the chiral bulk modes at a \(k\)-dependent \(x\) position, which modulates the effective light-matter coupling. Before we proceed, it is important to emphasize how this behaviour differs from that of standard systems in chiral quantum optics. A non-trivial shape of the photonic density of states is in fact also seen by an emitter coupled to the chiral edge states, with the frequency variation arising from the frequency-dependent penetration length of the edge states into the bulk. In contrast to our case, however, this density of states is relatively smooth and does not display a quick Gaussian decay away from the central frequency. This seemingly minor difference is at the heart of the state-transfer application that we are going to discuss for the bulk modes in the next Sections. ## IV Coupling regimes in photonic quantum Hall systems Let us now go beyond the sole propagation of photons and consider an additional quantum emitter located at a position \(\vec{r}_{e}=(x_{e},y_{e})\) in the bulk of the lattice, as described by the total Hamiltonian in Eq. (3), with \(N=1\). Since there is only a single emitter here, we suppress the index \(n\) on the emitter frequency and light-matter coupling, \(\omega_{e}^{n},g_{n}\mapsto\omega_{e},g\). By assuming that the emitter is initially prepared in its excited state with no photons in the lattice, the resulting dynamics of the system is constrained to the single excitation subspace and can be described by a wavefunction of the form \[|\psi\rangle(t)=e^{-i\omega_{e}t}[c_{e}(t)\sigma_{+}+\sum_{i}\varphi(\vec{r}_ {i},t)\Psi^{\dagger}(\vec{r}_{i})]|g\rangle|{\rm vac}\rangle\,. \tag{20}\] By projecting the Schrödinger equation onto this subspace, we can derive a set of equations for the time evolution of the emitter amplitude \(c_{e}(t)\) and for the wavefunction of the photon, \(\varphi(\vec{r}_{i},t)\) [45; 46; 47]. These equations can be readily integrated numerically, which we use to produce most of the results discussed in the following. Alternatively, we can eliminate the dynamics of the emitted photon to derive a closed equation for the emitter amplitude [43], \[\dot{c}_{e}(t)=-\frac{g^{2}}{4}\int_{0}^{t}ds\,G(t-s,\vec{r}_{e},\vec{r}_{e})c _{e}(s)e^{i\omega_{e}(t-s)}. \tag{21}\] This is an integro-differential equation, with the photonic Green's function evaluated at the emitter position \(\vec{r}_{e}\) as the memory kernel. This memory kernel describes both the photon's emission from the atom and its eventual re-absorption. While in general there is no closed analytic solution of Eq. (21), we can use approximate expressions for the photonic Green's function to obtain additional useful insights into the emitter-photon dynamics. In particular, with the help of the Gaussian approximation for the LLL in Eq. (19), we obtain \[\dot{c}_{e}(t)=-\frac{g^{2}\alpha}{4}\int_{0}^{t}ds\,e^{-U_{B}^{2}(t-s)^{2}/( 4\hbar^{2})}\,e^{i\Delta_{e}(t-s)}\,c_{e}(s)\,, \tag{22}\] where we have introduced the position-dependent detuning \[\Delta_{e}=\omega_{e}-\omega_{\rm ch}(x_{e}). \tag{23}\] Under the \(\Delta_{e}\approx 0\) assumption, this approximate form allows us to identify three qualitatively different coupling regimes: 1. _Weak-coupling_ regime, \(\hbar g\sqrt{\alpha}\ll U_{B}\). 
In this limit the density of states is almost flat and the Green's function decays on a timescale that is fast compared to the evolution of the emitter. This leads to an effectively Markovian dynamics, with an exponential decay of the excited state. 2. _Strong-coupling_ regime, \(\hbar g\sqrt{\alpha}\gg U_{B}\). Under this condition the coupling strength exceeds the relevant bandwidth \(U_{B}\) of the density of states and the emitted photons can be reabsorbed before they propagate away. Such conditions lead to the formation of so-called atom-photon bound states [48; 49], which do not decay. 3. _Critical-coupling_ regime, \(\hbar g\sqrt{\alpha}\simeq U_{B}\). For these parameter values, the absence of a sharp band-edge allows the excited state population to fully decay to zero, but the sizable frequency-dependence of the density of states makes a crucial difference compared to other narrow-band waveguide systems. As we are going to discuss in detail below, this results in a strongly non-Markovian dynamics and, in particular, in an almost symmetric shape of the spontaneously emitted photon wavepacket. Note that the relevant coupling parameter in this discussion is \(g\sqrt{\alpha}\), rather than the bare light-matter coupling \(g\). This is related to the fact that the local density of states scales as \(\sim\alpha\), resulting in an additional factor \(\sqrt{\alpha}\) in the effective coupling strength [43]. Physically, this factor can also be understood from the spatial width \(l_{B}\sim 1/\sqrt{\alpha}\) of the waveguide mode in the transverse direction. In the following we proceed with a brief discussion of the photon-emission dynamics in those three regimes for a situation where the emitter is far away from the boundary. ### Weak coupling regime: Markovian spontaneous emission The linear dispersion of the photonic Landau levels introduced by the electric field \(E\) and captured by Eq. (7) implies that photons can propagate across the lattice. Therefore, a photon locally created by the emitter can leave the interaction region, which leads to spontaneous decay. This is in stark contrast to the case \(E=0\), where spontaneous emission is forbidden and is replaced by the formation of bound polaritonic states between the emitter and the localized Landau photons [43; 50]. In the standard theory of spontaneous decay [51], the exponential decay of the atomic population arises from a Markov approximation. The validity of this approximation requires that the density of states of the photonic bath is approximately constant over a sufficiently large frequency range. In the current setting, this is the case when the relevant rate of the emitter's dynamics, given by \(g\sqrt{\alpha}\), is small compared to the effective bandwidth \(U_{B}/\hbar\) identified above. Under this assumption we can approximate the photonic Green's function appearing in Eq. (21) as \[G(t-t^{\prime},\vec{r},\vec{r})\approx\rho_{\rm ph}(\vec{r},\omega_{e})\, \delta(t-t^{\prime}), \tag{24}\] and obtain a Markovian, i.e., memoryless equation for the emitter amplitude \[\dot{c}_{e}(t)\approx-\frac{g^{2}}{4}\rho_{\rm ph}(\vec{r}_{e},\omega_{e})\,c _{e}(t). \tag{25}\] By using the continuum approximation for the Green's function in Eq. 
(19), we find that the excited state amplitude decays exponentially, \[c_{e}(t)\approx e^{-\Gamma_{e}t}, \tag{26}\] where the decay rate \[\Gamma_{e}=\frac{\sqrt{\pi}}{2}\frac{\hbar g^{2}\alpha}{U_{B}}e^{-\hbar^{2} \Delta_{e}^{2}/U_{B}^{2}} \tag{27}\] is inversely proportional to the applied electric field \(U_{B}\sim E\) and has the expected Gaussian dependence on the position-dependent emitter detuning \(\Delta_{e}\). In Fig. 2 (a) we perform an exact numerical simulation of the decay of an excited emitter in a finite lattice with periodic boundary conditions along the \(y\)-direction. These results are compared to the dynamics predicted by Eq. (22) for the continuum limit. We see that in this weak-coupling regime, the continuum approximation is in excellent agreement with exact results. This simulation also confirms that apart from small deviations at the initial stage, the temporal profile of the decay is very well captured by an exponential decay with a rate \(\Gamma_{e}\) given in Eq. (27). Figure 2: Spontaneous emission dynamics of a single emitter in the weak-coupling regime. (a) Plot of the excited-state population \(P_{e}=|c_{e}(t)|^{2}\) as a function of time. The blue crosses represent the results from an exact numerical simulation, while the solid black line shows the prediction from Eq. (22). The green dashed line indicates an exponential decay with a rate \(\Gamma_{e}\) given in Eq. (27). (b) Snapshot of the photon density \(|\varphi(\vec{r},t)|^{2}\) of the emitted wavepacket at a time \(t>0\) well after the spontaneous emission process. The white arrow marks the direction of propagation. The parameters for both plots are \(\alpha=1/20\), \(N_{x}=31\), \(N_{y}=100\), \(U_{0}/J=0.001\), \(\Delta_{e}/J\approx 0\) and \(\hbar g=0.4U_{B}/\sqrt{\alpha}\approx 0.003\hbar J\). The emitter is located at the position \(\vec{r}_{e}/l_{0}=(15,50)\). In Fig. 2 (b) we also show a snapshot of the emitted photonic wavepacket, \(|\varphi(\vec{r},t)|^{2}\). This plot confirms that the emitted photon is localized along the \(x\)-axis within a magnetic length \(l_{B}\) and propagates along the \(y\)-axis with Hall speed \(c_{H}\). As is typical for spontaneous emission, the wavepacket is asymmetrically stretched along the propagation direction, with a sharp front edge and a long exponential tail of characteristic length \(\sim c_{H}/\Gamma_{e}\). ### Strong coupling regime: Atom-photon bound states in the chiral continuum For very large values of the light-matter coupling, \(\hbar g>U_{B}/\sqrt{\alpha}\), the emitter dynamics is dominated by non-Markovian effects, which arise from the finite width of the density of states given in Eq. (19). In particular, the density of states is strongly suppressed outside a frequency band of width \(\sim U_{B}\), meaning that the Green's function in Eq. (21) can be approximated by a constant, \(G(t-s,\vec{r}_{e},\vec{r}_{e})\approx\alpha\), over the relevant timescale of the emitter dynamics. This allows us to approximate the equation for the emitter amplitude \(c_{e}(t)\) by a second-order differential equation, \[\ddot{c}_{e}(t)\approx-\frac{\Omega^{2}}{4}c_{e}(t), \tag{28}\] where the Rabi frequency is given by \[\Omega=g\sqrt{\alpha}. \tag{29}\] This equation indicates the presence of an atom-photon bound-state [48; 49], as an exact eigenstate of the system, and recovers the Landau-photon polariton (LPP) picture described in [43]. 
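All of these regimes can be explored directly from the memory-kernel equation (22). The sketch below (a simple explicit Euler scheme of our own, not the authors' code; \(\hbar=1\), units of \(J\), \(\Delta_e=0\)) integrates Eq. (22) for weak, critical and strong values of \(g\sqrt{\alpha}/U_B\):

```python
import numpy as np

# Explicit integration of the non-Markovian emitter equation (22).
alpha, U0 = 1/10, 0.1
lB = 1 / np.sqrt(2 * np.pi * alpha)        # magnetic length (units of l0)
UB = U0 * lB                               # Landau voltage
Delta_e = 0.0                              # emitter on resonance with its channel

def P_e(g, T=400.0, dt=0.05):
    n = int(T / dt)
    t = np.arange(n) * dt
    K = np.exp(-UB**2 * t**2 / 4) * np.exp(1j * Delta_e * t)   # Gaussian memory kernel
    c = np.zeros(n, dtype=complex)
    c[0] = 1.0
    for m in range(1, n):
        conv = dt * np.sum(K[m - 1::-1] * c[:m])   # int_0^t K(t - s) c(s) ds
        c[m] = c[m - 1] - dt * (g**2 * alpha / 4) * conv
    return np.abs(c) ** 2

for label, fac in [("weak", 0.3), ("critical", 1.0), ("strong", 8.0)]:
    g = fac * UB / np.sqrt(alpha)
    print(f"{label:>8} coupling: P_e(T) = {P_e(g)[-1]:.3f}")
```

The weak-coupling run decays exponentially at the rate of Eq. (27), the critical run decays completely but non-exponentially, and the strong run retains a finite population oscillating at the Rabi frequency \(\Omega=g\sqrt{\alpha}\).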
Indeed, numerical simulations confirm that the photonic component of this bound state has the shape of a standard Landau orbital with no visible effect from the electric field. However, while the \(E=0\) LPPs of [43] already appear at arbitrarily small values of the coupling strength, in the present configuration bound states require a minimal coupling strength that exceeds \(U_{B}\). In Fig. 3 (a) and Fig. 3 (b) we show the emitter dynamics as we progressively enter the strong coupling regime. Already for \(\hbar g\sqrt{\alpha}=2U_{B}\) we see clear non-Markovian effects and marked oscillations, but at longer times the emitter dynamics remains dominated by a monotonic decay. For higher coupling strengths, \(\hbar g\sqrt{\alpha}=8U_{B}\), we only observe a small initial decay followed by persistent Rabi oscillations between the photonic and the matter components of the bound state. The fact that the Rabi oscillations in Fig. 3 (b) do not reach unity again, \(P_{e}(t=2\pi n/\Omega)<1\), \(n=1,2\ldots\), is due to the smoothly decaying tail of the density of states, which still allows a small fraction of the excitation to propagate away into the lattice. By further increasing the light-matter coupling, \(\hbar g\sqrt{\alpha}\gg U_{B}\), the Rabi oscillations progressively reach their maximum value \(P_{e}(t=2\pi n/\Omega)\approx 1\). These observations are consistent with the formation of atom-photon bound states in other narrow-band waveguide QED systems. However, we re-emphasize that in our case this effect appears under conditions where the effective coupling is still small compared to the total width of the LLL and, in general, the total lattice bandwidth, \(\hbar g\sqrt{\alpha}\ll U_{0}N_{x}\ll 8\hbar J\). ### The critical-coupling regime As we increase the coupling strength from the weak to the strong coupling regime, the system passes through a critically coupled regime where \[\hbar g\approx\frac{U_{B}}{\sqrt{\alpha}}\,. \tag{30}\] As shown in Fig. 4 (a), under this condition the decay dynamics of the excited emitter is no longer Markovian, but Rabi oscillations, as characteristic for the strong-coupling regime, are not yet visible either. This condition is of interest for two reasons. First of all, \(\hbar g\approx U_{B}/\sqrt{\alpha}\) is the largest coupling that still allows for complete spontaneous emission before it is suppressed in the strong-coupling regime, resulting in the fastest complete release of the excitation, on a timescale \(\Gamma_{e}^{-1}\approx\hbar/U_{B}\). Secondly, when looking at the shape of the emitted photon in Fig. 4 (b), we find that the wavepacket is very compact, also along the \(y\)-direction. This property is shown in more detail in Fig. 4 (c), where we plot a cut of the emitted photon wavefunction along the \(y\)-direction and compare it with the one obtained in the weak-coupling regime: we see that in the critical coupling case the wavepacket is not only highly localized, but also almost symmetric around its maximum and travels through the lattice without any significant dispersion. While the formation of effective 1D chiral channels is a property of the photonic lattice itself, Figure 3: Evolution of the excited state population \(P_{e}=|c_{e}(t)|^{2}\) of an initially excited emitter in the strong-coupling regime, where (a) \(\hbar g=2U_{B}/\sqrt{\alpha}\) and (b) \(\hbar g=8U_{B}/\sqrt{\alpha}\). The blue crosses represent the results from an exact numerical simulation, while the solid black line shows the prediction from Eq. (22) in the continuum limit. 
The other parameters assumed for both plots are \(\alpha=1/20\), \(N_{x}=N_{y}=40\), \(U_{0}/J=0.01\) and \(\Delta_{e}/J\approx 0\). the emission of such symmetric wavepackets is connected to a specific light-matter interaction regime and goes beyond the usual quantum Hall physics. While this seems to be a minor detail for the emission process, in what follows we will show how this symmetry becomes an essential property when studying the reabsorption of the photon by other emitters in the system. ### Beyond the single Landau level approximation Our discussion of the different coupling regimes was so far based on the assumption that the emitter is primarily coupled to the states in the LLL. This assumption is justified as long as the light-matter coupling is small compared to the gap \(\Delta_{\rm gap}\sim\omega_{B}\) between the Landau levels, i.e., \(g\ll\omega_{B}\). In order to reach the critical or strong coupling conditions, we require \(\hbar g\gtrsim U_{B}/\sqrt{\alpha}\), while \(U_{B}/(\hbar\omega_{B})=U_{0}/(2(2\pi\alpha)^{3/2}J)\) can be of order \(\sim O(1)\) already for moderately strong electric fields. This means that the restriction to the LLL might not be well justified in this regime. To investigate the influence of higher Landau levels, we analyze the excitation spectrum of the emitter, \[\mathcal{S}_{e}(\omega)=\sum_{\nu}|\langle\nu|\sigma_{+}^{e}|G\rangle|^{2} \delta(\omega-\omega_{\nu}). \tag{31}\] Here \(|\nu\rangle\) and \(\omega_{\nu}\) are the \(\nu\)-th eigenstate and eigenfrequency of the full coupled Hamiltonian in Eq. (3), while \(|G\rangle\) is its ground state. The excitation spectrum \(\mathcal{S}_{e}(\omega)\) is plotted in Fig. 5 as a function of the coupling strength \(g\) and for different strengths of the electric field. In the strong coupling regime, where \(\hbar g\gg U_{B}/\sqrt{\alpha}\), \(\mathcal{S}_{e}(\omega)\) displays two branches, which are split by \(\Omega\) and correspond to the bound states discussed above. For very small electric fields, i.e., \(U_{B}\ll\hbar\omega_{B}\), the two branches are very narrow and almost symmetric, with only a small downward shift of the frequency of the upper bound state. This shift of the upper branch can be understood as a second-order correction given by the presence of the higher Landau level. However, since the states in the higher \(\ell=1\) Landau level are spatially well separated from \(\ell=0\) LLL states at the same energy, as one can see in Fig. 1 (c), their effect is still small, acting only perturbatively on Figure 5: Plot of the emitter excitation spectrum \(\mathcal{S}_{e}\) as a function of the excitation frequency \(\omega\) and the light-matter coupling strength \(g\). The three panels show this spectrum for different values of the electric field, expressed in terms of the Landau voltage \(U_{B}=eEl_{B}\). In each panel, the green dashed line marks the critical coupling value \(\hbar g=U_{B}/\sqrt{\alpha}\). The grey-shaded area in the last panel is the weak-coupling (Markovian) regime, whose boundary is given by \(\hbar g\approx U_{B}/(2\sqrt{2\alpha})\), as it is defined in traditional cavity QED setups with a Gaussian inhomogeneous broadening of the emitters [52; 53; 54]. The other parameters are \(\alpha=1/10\), \(N_{x}=N_{y}=30\) and \(\Delta_{e}/J\approx 0\). In all plots an artificial broadening \(\gamma_{s}/J=0.015\) of all states has been introduced to obtain a smooth spectrum. 
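The spectrum of Eq. (31) is straightforward to compute in the single-excitation sector, where \(\sigma_{+}^{e}|G\rangle\) is simply the state with the emitter excited and the lattice empty. A minimal sketch (\(\hbar=1\), units of \(J\), \(\omega_{p}=0\); the helper `photonic_H` and the chosen coupling value are ours, and the emitter frequency is set via Eq. (17)):

```python
import numpy as np

# Excitation spectrum S_e(omega), Eq. (31), from the single-excitation sector of Eq. (3).
Nx, Ny, alpha, J, U0 = 21, 21, 1/10, 1.0, 0.1

def photonic_H(Nx, Ny, alpha, J, U0):
    idx = lambda x, y: x * Ny + y
    H = np.zeros((Nx * Ny, Nx * Ny), dtype=complex)
    for x in range(Nx):
        for y in range(Ny):
            i = idx(x, y)
            H[i, i] = -U0 * x
            if x + 1 < Nx:
                H[idx(x + 1, y), i] = H[i, idx(x + 1, y)] = -J
            j = idx(x, (y + 1) % Ny)
            H[j, i] = -J * np.exp(1j * 2 * np.pi * alpha * x)
            H[i, j] = np.conj(H[j, i])
    return H

lB = 1 / np.sqrt(2 * np.pi * alpha)
UB, omegaB = U0 * lB, 4 * np.pi * alpha * J
x_e, y_e = Nx // 2, Ny // 2
w_e = -4*J + omegaB/2 - omegaB**2/(32*J) - UB*x_e/lB + UB**2/(2*omegaB)  # Delta_e = 0
g = 4 * UB / np.sqrt(alpha)            # well inside the strong-coupling regime

M = Nx * Ny
H = np.zeros((M + 1, M + 1), dtype=complex)
H[:M, :M] = photonic_H(Nx, Ny, alpha, J, U0)
H[M, M] = w_e                          # emitter energy
H[M, x_e * Ny + y_e] = H[x_e * Ny + y_e, M] = g / 2   # coupling term of Eq. (3)

w, V = np.linalg.eigh(H)
weights = np.abs(V[M]) ** 2            # |<nu|sigma_+^e|G>|^2 for each eigenstate
top = np.sort(w[np.argsort(weights)[-2:]])            # the two polariton branches
print("branch splitting:", round(top[1] - top[0], 3), "~ Omega =", round(g * np.sqrt(alpha), 3))
```

The two dominant spectral weights are split by approximately the Rabi frequency \(\Omega=g\sqrt{\alpha}\) of Eq. (29), as visible in the two branches of Fig. 5.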
Figure 4: (a) Excited state population of a single emitter \(P_{e}=|c_{e}(t)|^{2}\) as a function of time. The blue crosses represent the results from an exact numerical simulation, while the solid black line shows the prediction from Eq. (22) in the continuum limit. The green dashed line indicates an exponential decay with a rate \(\Gamma_{e}\) given in Eq. (27). (b) A snapshot of the photon density \(|\varphi(\vec{r},t)|^{2}\) of the emitted wavepacket at a time \(t\approx 202J^{-1}\approx 23\Gamma_{e}^{-1}\). The white arrow marks the direction of propagation. In both plots we assume \(\alpha=1/10\), \(N_{x}=N_{y}=31\), \(U_{0}/(\hbar J)=0.1\), \(\Delta_{e}/J=0\) and \(\hbar g=U_{B}/\sqrt{\alpha}\approx 0.4\hbar J\). The emitter is located at \(\vec{r}_{e}/l_{0}=(15,15)\), as indicated by the green dot. (c) Integrated photon density profile \(h_{t}(y)=\int dx|\varphi(x,y,t)|^{2}\) for a photon emitted in the Markovian regime with \(\hbar g=0.3U_{B}/\sqrt{\alpha}\) (green line) and from a critically coupled emitter with \(\hbar g=U_{B}/\sqrt{\alpha}\) (red line). The black arrow indicates the propagation direction while the green dashed line marks the emitter position at \(\vec{r}_{e}/l_{0}=(10,70)\). The two snapshots are taken at the same time \(Jt\approx 64\). In this plot we assume \(\alpha=1/10\), \(N_{x}=20\), \(N_{y}=80\), \(U_{0}/(\hbar J)=0.1\) and \(\Delta_{e}/J=0\). Both results in (c) have been obtained from a numerical simulation of the full lattice dynamics. the bound-state dynamics. By increasing the electric field to values of \(U_{B}\approx\hbar\omega_{B}\), the coupling to the higher Landau level becomes more relevant, since resonant states in different \(\ell\)-manifolds have a larger spatial overlap. This is most visible for the upper bound-state. As its energy approaches the next Landau level, it becomes progressively more broadened, due to the increasing possibility to decay into propagating modes in the \(\ell=1\) manifold. The effect on the lower bound-state is much weaker, as this state is further detuned from the \(\ell=1\) levels and thus the effective tunneling barrier to resonant propagating states is wider. More quantitatively, the higher Landau levels result in additional peaks in the density of states, which are separated by multiples of \(\hbar\omega_{B}\) and can be well approximated by \[\begin{split}\rho^{\ell}_{\text{ph}}(\vec{r},\omega)\approx& \frac{2\sqrt{\pi}\alpha\hbar}{2^{\ell}\,\ell!\,U_{B}}H_{\ell}^{2}\left(\frac{ \hbar\left[\omega-\omega_{\text{ch}}(x)-\ell\omega_{B}\right]}{U_{B}}\right)\\ &\times e^{-\frac{\hbar^{2}\left[\omega-\omega_{\text{ch}}(x)- \ell\omega_{B}\right]^{2}}{U_{B}^{2}}},\end{split} \tag{32}\] where \(H_{\ell}(x)\) is the \(\ell\)-th Hermite polynomial. From this approximate expression for the density of states, we can interpret higher Landau levels in the presence of an electric field as regular Landau levels that are shifted in space by \(\Delta x_{B}\approx\ell\,l_{0}\,\hbar\omega_{B}/U_{0}\). In this picture, the negligible coupling to neighboring Landau levels can be explained in terms of the reduced spatial overlap \(\sim\exp(-\ell\,\hbar^{2}\omega_{B}^{2}/U_{B}^{2})\) between the wavefunctions, which is strongly suppressed, unless the electric field is very strong. Finally, for very strong electric fields, \(U_{B}\gtrsim\hbar\omega_{B}\), as shown in the right panel of Fig. 5, the strong-coupling regime cannot be reached without having the light-matter coupling comparable or even larger than the energy gap between the neighboring Landau levels. 
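The Hermite-weighted replicas of Eq. (32), as reconstructed above, can be evaluated directly; a short sketch (our own, with \(\hbar=1\), units of \(J\), \(\omega_{\rm ch}(x)\) set to zero, and using NumPy's physicists' Hermite polynomials):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

# Density-of-states replicas of the higher Landau levels, Eq. (32).
alpha, J, U0 = 1/10, 1.0, 0.1
lB = 1 / np.sqrt(2 * np.pi * alpha)
UB, omegaB = U0 * lB, 4 * np.pi * alpha * J

def rho_l(omega, l, w_ch=0.0):
    u = (omega - w_ch - l * omegaB) / UB     # dimensionless detuning from level l
    c = np.zeros(l + 1); c[l] = 1.0          # coefficient vector selecting H_l
    return (2 * sqrt(pi) * alpha / (2**l * factorial(l) * UB)
            * hermval(u, c) ** 2 * np.exp(-u**2))

omega = np.linspace(-0.5, 2 * omegaB + 0.5, 2001)
for l in range(3):
    r = rho_l(omega, l)
    print(f"l={l}: maximum near omega = {omega[r.argmax()]:+.3f} (l*omega_B = {l*omegaB:.3f})")
```

Each replica is centered near \(\ell\omega_{B}\) with a width \(\sim U_{B}/\hbar\) and, for \(\ell>0\), inherits the node structure of \(H_{\ell}\).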
As a consequence of this level mixing, the upper bound-state is strongly broadened by a significant hybridization with a wide band of propagating states in the \(\ell=1\) Landau level. ## V Quantum Revivals and State Transfer In this section we now explore in more detail one of the most remarkable features of the critical coupling regime, namely the symmetry between emission and absorption processes. Due to the symmetric shape of the emitted wavepacket and its unidirectional propagation, the emission of a photon in this regime is indistinguishable from the time-reverse of the reabsorption process of the same photon. In the quantum communication literature [23], this symmetry argument has been used to derive specific control pulses \(g(t)\), which produce such symmetric wavepackets and thus allow for high-fidelity state transfer operations in unidirectional Markovian channels. Here we find that in our proposed configuration this symmetry emerges naturally and without any time-dependent control from the non-Markovian dynamics of a critically coupled photonic quantum Hall system. ### Quantum revivals To analyze reabsorption processes in our system, let us first consider the case of a single emitter, but now in a lattice with periodic boundary conditions (PBC) along the \(y\)-direction. In this case the emitted photon still propagates unidirectionally with a group velocity set by the Hall speed \(c_{H}\) and without any significant dispersion. However, after a round-trip time \[\tau_{\text{rev}}=L_{y}/c_{H} \tag{33}\] the photon will reach again its initial position, where it can be partially or fully reabsorbed by the emitter. In Fig. 6 (a) and (b) we simulate this emission and reabsorption process under weak-coupling and critical-coupling conditions, respectively. In the first case, we see that after each round-trip only a fraction of about \(60\%\) of the initial excitation is reabsorbed. This is very similar to what is expected for photon reabsorption in a 1D Markovian channel without any time-dependent control. In contrast, for a critically coupled emitter, the photon is reabsorbed with more than \(P=95\%\) probability, and significant revivals can still be observed after multiple roundtrips. This near perfect reabsorption Figure 6: Evolution of the excited state population \(P_{e}=|c_{e}(t)|^{2}\) (left panels) and snapshot of the emitted photon density \(|\varphi(\vec{r},t)|^{2}\) (right panels). In (a) the emitter is weakly coupled with \(g\sqrt{\alpha}=0.3\,U_{B}/\hbar\approx 0.038\,J\), while (b) shows the case of a critically coupled emitter with \(g\sqrt{\alpha}=U_{B}/\hbar\approx 0.126\,J\). In both cases \(J\tau_{\rm rev}\approx 390\). The other parameters for these plots are \(\alpha=1/10\), \(N_{x}=31\), \(N_{y}=61\), \(U_{0}/(\hbar J)=0.1\), \(\Delta_{e}=0\) and the emitter is located at \(\vec{r}_{e}/l_{0}=(15,53)\). All results have been obtained from a numerical simulation of the full lattice dynamics. can also be interpreted as a coherent quantum revival effect, where all the eigenmodes forming the initial state in Eq. (20) periodically rephase, i.e., \(c_{e}(t=n\tau_{\rm rev})\simeq c_{e}(0)\) for \(n=1,2\ldots\). As mentioned above, we can understand this high reabsorption probability from the symmetry and the dispersion-free propagation of the emitted wavepacket, as shown in the right panels in Fig. 6. To see under which conditions this effect occurs, we plot in Fig. 
7 the maximal revival probability \(P_{\rm rev}\) as a function of the coupling strength \(g\) and the (local) detuning \(\Delta_{e}\) of the emitter from the LLL. This plot confirms that high revival probabilities of \(P_{\rm rev}>0.9\) can be observed within an extended parameter regime and without the need for a precise fine-tuning of any of the system parameters. ### Bulk-edge quantum channels Let us now switch to the experimentally more realistic scenario of a lattice with open boundary conditions. In this case the emitted photon cannot simply return to the emitter by propagating along a straight line. Instead, once it reaches the lattice boundary, the propagation via edge-modes, which so far we have omitted from our analysis, becomes important. Surprisingly, we find that the essential features discussed for periodic systems survive for photonic quantum Hall systems with edges. This is illustrated in Fig. 8, where we compare the characteristic shape of an eigenmode of a periodic lattice with that of a lattice with open boundary conditions. While within the bulk region, both mode functions are very similar, in the case of open boundary conditions the mode function continues along the edges. Therefore, also in this case the photons can travel in loops and return to the emitter. In order to verify that this peculiar shape of the photonic eigenmodes creates an equivalence between lattices with periodic and open boundary conditions, we consider again a critically coupled emitter, but now located in Figure 7: Maximum value of the revival probability \(P_{\rm rev}\) of the excited state population after a time \(t\sim\tau_{\rm rev}\). This probability is plotted as a function of the light-matter coupling \(g\) and the local detuning \(\Delta_{e}\). These results were derived from Eq. (21), using the continuum Green’s function with PBC and for \(\alpha=1/10\) and \(L_{y}/l_{0}=200\). Figure 8: (a) Sketch of a lattice with PBC along the \(y\)-axis and OBC on the \(x\)-axis, and with a homogeneous out-of-plane magnetic field and a homogeneous in-plane electric field along the \(x\)-axis (left panel). In this case the photonic eigenmodes \(f_{\lambda}(\vec{r}_{i})\) are approximately described by the Landau wavefunctions in Eq. (5), which are homogeneous stripes along the \(y\)-direction with lateral size \(\sim l_{B}\) (right panel). (b) Sketch of the same lattice, but with OBC along both \(x\) and \(y\) (left panel). In such a lattice, photonic eigenmodes form closed loops along the edge (right panel). The lattice parameters assumed for both cases are \(\alpha=1/20\) and \(N_{x}=N_{y}=41\). Figure 9: (a) Snapshots of the photon density \(|\varphi(\vec{r},t)|^{2}\) at three different times after the excitation is released by an emitter located at the center of the lattice. (b) Excited state population \(P_{e}=|c_{e}(t)|^{2}\) as a function of time in units of \(\tau_{\rm rev}\). The three dashed lines mark the times of the snapshots shown in (a). The other parameters for these plots are \(\alpha=1/10\), \(N_{x}=N_{y}=21\), \(U_{0}/(\hbar J)=0.1\), \(\Delta_{e}/J=0\) and \(\hbar g=U_{B}/\sqrt{\alpha}\approx 0.4\hbar J\). the middle of a finite-size lattice. In Fig. 9 (a) we then show three snapshots of the emitted photonic wavefunction, similar to the situation considered in Fig. 6 (b) for PBC. We see that when the photon reaches the lower boundary of the system, it is transported along the edge to the upper boundary. Here, it makes a turn and propagates again through the bulk towards the emitter. 
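This bulk-edge loop can be reproduced by an exact single-excitation simulation; the sketch below (our own, with \(\hbar=1\), units of \(J\), \(\omega_{p}=0\), and open boundaries along both directions, using the parameters quoted for Fig. 9) tracks the emitter population through emission, edge transport and revival:

```python
import numpy as np

# Exact single-excitation dynamics with open boundary conditions, as in Fig. 9.
Nx, Ny, alpha, J, U0 = 21, 21, 1/10, 1.0, 0.1

def hofstadter_obc(Nx, Ny, alpha, J, U0):
    idx = lambda x, y: x * Ny + y
    H = np.zeros((Nx * Ny, Nx * Ny), dtype=complex)
    for x in range(Nx):
        for y in range(Ny):
            i = idx(x, y)
            H[i, i] = -U0 * x
            if x + 1 < Nx:
                H[idx(x + 1, y), i] = H[i, idx(x + 1, y)] = -J
            if y + 1 < Ny:                   # open boundaries along y as well
                j = idx(x, y + 1)
                H[j, i] = -J * np.exp(1j * 2 * np.pi * alpha * x)
                H[i, j] = np.conj(H[j, i])
    return H

lB = 1 / np.sqrt(2 * np.pi * alpha)
UB, omegaB = U0 * lB, 4 * np.pi * alpha * J
x_e, y_e = Nx // 2, Ny // 2
w_e = -4*J + omegaB/2 - omegaB**2/(32*J) - UB*x_e/lB + UB**2/(2*omegaB)  # Eq. (17)
g = UB / np.sqrt(alpha)                      # critical coupling

M = Nx * Ny
H = np.zeros((M + 1, M + 1), dtype=complex)
H[:M, :M] = hofstadter_obc(Nx, Ny, alpha, J, U0)
H[M, M] = w_e
H[M, x_e * Ny + y_e] = H[x_e * Ny + y_e, M] = g / 2

w, V = np.linalg.eigh(H)
psi0 = np.zeros(M + 1); psi0[M] = 1.0        # emitter excited, lattice empty
a0 = V.conj().T @ psi0
times = np.linspace(0.0, 400.0, 800)
Pe = np.array([np.abs((V @ (np.exp(-1j * w * t) * a0))[M]) ** 2 for t in times])
i0 = 120 + Pe[120:].argmax()                 # revival after the initial decay
print(f"revival: P_e = {Pe[i0]:.3f} at t = {times[i0]:.0f}/J")
```

The revival time found this way is close to \(\tau_{\rm rev}=L_{y}/c_{H}\), despite the longer bulk-edge-bulk path.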
In Fig. 9 (b) we plot the corresponding time evolution of the excited state population \(P_{e}(t)=|c_{e}(t)|^{2}\). In exactly the same way as in the case of PBC, the emitter population undergoes almost complete revivals after a time that, in spite of the much longer path, is still close to \(\tau_{\rm rev}\) given in Eq. (33), as a result of the much higher propagation speed along the edges. ### Guiding centers, equipotential lines and photonic demultiplexing The appearance of closed-loop channels, where photons propagate both in the bulk and along the edges, is puzzling at first, since it seems to break the familiar concept of topologically protected edge states. However, the existence of such loops can be understood from an essential property of Landau wavefunctions in external potentials [55], namely that wavepackets move along _guiding center_ trajectories that follow the equipotential lines of this external potential landscape. In the bulk of the lattice, this principle readily explains the motion of the photons along straight lines, which are the equipotential lines of the linear potential gradient along the \(x\)-direction. The behavior of the photons near the edges can then be understood by considering the potential shown in Fig. 10 (a), where on top of the constant electric field the edges are modelled by a smooth confining potential \(V_{\rm conf}\), such that \[\epsilon_{i}=-eEx_{i}+V_{\rm conf}(\vec{r}_{i}). \tag{34}\] The potential \(V_{\rm conf}(\vec{r}_{i})\) is taken to be small and smooth in the bulk and to grow very rapidly near the boundaries of the lattice. This example illustrates very clearly how the equipotential lines in this system form closed loops, represented by straight lines in the bulk, which then continue around the edges. Therefore, these equipotential lines explain the shape of the mode function shown in Fig. 8 (b), but also the fact that the photons propagating along the edge reenter the bulk region exactly at the position \(x_{e}\) where they have been emitted. In electronic systems, this guiding-center principle is crucial to understand the quantized Hall transport and the existence of extended states in the bulk, even in the presence of strong disorder [56]. This latter aspect is ultimately related to the non-trivial topology of the electron wavefunctions [57] and is linked to the integer quantum Hall effect through percolation theory [58]. Even though extended states are sensitive to the boundaries and are different for every realization of disorder, at least one of them must always exist and connect one boundary to the other. In the current setting, this last feature is exactly what gives rise to the observed periodic photonic orbits, even in a finite-size lattice. As another consequence of this principle, and a key difference from the \(E=0\) case, photons injected at the boundary of the quantum Hall lattice can penetrate into the bulk while following specific equipotential lines. Since every equipotential line identifies a unique resonance frequency, photons of different frequency are transported to different regions in the bulk. More specifically, for the configuration shown in Fig. 10 (b), photons that are injected at the upper-right corner of the lattice with a frequency \(\omega_{\rm in}\) will propagate along the edge before making a turn into the bulk region at a position \[x_{\rm out}=-l_{B}\,\frac{\hbar(\omega_{\rm in}-\omega_{\rm ch})}{U_{B}}\,. \tag{35}\] 
Therefore, this system realizes in a natural way a frequency-demultiplexing element for photons. By requiring that the separation between the output channels exceeds the spatial width of the Landau orbitals, i.e., \(\Delta x_{\rm out}>l_{B}\), we can estimate a frequency resolution of \(\delta\omega\simeq U_{B}/\hbar\) and a total number of \(N_{\omega}\approx L_{x}/l_{B}\) frequency components that can be spatially separated with such a basic device. ### Quantum state transfer: edge-to-edge versus bulk-to-bulk Let us now generalize the previous analysis to a multiple-emitter case and discuss a basic application of photonic quantum Hall systems to the transfer of excitations and/or quantum superposition states between two such Figure 10: (a) Example of a photonic lattice with a smooth edge given by a confining potential \(V_{\rm conf}\) and a linear potential gradient along the \(x\)-axis. The equipotential lines of the total potential (black lines) are straight lines in the bulk, but bend into a closed loop along the edge. The white arrow marks the direction of the electric field in the bulk while the red arrows show an example of a photon trajectory along the red equipotential line. (b) Sketch of the proposed photonic demultiplexing device based on the guiding-center photonic motion along equipotential lines. emitters. As already mentioned in the introduction, such state-transfer schemes have been previously analyzed for situations where the two emitters are coupled through the edge channels of 2D photonic or phononic lattice systems with synthetic magnetic fields. However, our findings in the previous Sections suggest that such transfer tasks could be implemented much more efficiently between emitters located in the bulk, once an additional synthetic electric field is implemented. To support this intuition we consider a small photonic lattice as depicted in Fig. 11, with two emitters located either on the edge of the lattice or in the bulk. By assuming that only one of the emitters is initially excited, i.e., \(c_{1}(0)=1\) and \(c_{2}(0)=0\), we can generalize Eq. (21) to include two (or multiple) emitters located at positions \(\vec{r}_{e}^{n}\) and obtain [43], \[\dot{c}_{n}(t)=\] \[-\sum_{m=1}^{N=2}\frac{g_{n}g_{m}}{4}\int_{0}^{t}ds\,G(t-s,\vec{r }_{e}^{n},\vec{r}_{e}^{m})c_{m}(s)e^{i(\omega_{e}^{n}t-\omega_{e}^{m}s)}. \tag{36}\] Provided the emitters are spatially separated, we can then again replace the full photonic Green's function by its continuum approximation in Eq. (16) and define the local detuning for each emitter as \(\Delta_{n}=\omega_{e}^{n}-\omega_{\text{ch}}(x_{e}^{n})\). This shows that all the considerations made above for a single emitter, in particular the identification of the three different coupling regimes, remain valid for multiple emitters as well. In addition, even for multiple emitters, we can numerically integrate the exact dynamics within the single-excitation sector and use it to evaluate the excitation probabilities \(P_{e}^{(n=1,2)}(t)=|c_{n}(t)|^{2}\) shown in the right panels of Fig. 11. The simulations in Fig. 11 (a) confirm, first of all, that in a conventional lattice with a synthetic magnetic field, but no synthetic electric field, \(U_{B}=0\), excitations can be efficiently transported along the edges and even undergo a full loop without any significant losses into the bulk modes [21; 22; 30; 31; 33; 25]. 
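For two emitters in the bulk, the coupled equations (36) can be integrated directly with the continuum Green's function of Eq. (16). A minimal sketch (our own, not the authors' code; \(\hbar=1\), units of \(J\), both emitters in the same column \(x_{1}=x_{2}\) and on resonance, \(\Delta_{1}=\Delta_{2}=0\), so the constant gauge phase \(\theta_{ij}\) drops out of \(|c_{2}|^{2}\)):

```python
import numpy as np

# Two critically coupled bulk emitters: explicit integration of Eq. (36).
alpha, U0 = 1/10, 0.05
lB = 1 / np.sqrt(2 * np.pi * alpha)
UB = U0 * lB
g = UB / np.sqrt(alpha)                    # critical coupling for both emitters
dy = 31.0                                  # y_1 - y_2 in units of l0 (as in Fig. 11(b))

def kernel(tau, dy_nm):
    # LLL Green's function G(tau, r_n, r_m) of Eq. (16) for x_n = x_m, on resonance;
    # dy_nm = y_m - y_n in units of l0 (chirality: only dy_nm > 0 propagates)
    return alpha * np.exp(-0.25 * (UB * tau - dy_nm / lB) ** 2)

T, dt = 1000.0, 0.2
n = int(T / dt); t = np.arange(n) * dt
K = np.array([[kernel(t, 0.0), kernel(t, -dy)],    # K[n, m](tau): emitter m -> n
              [kernel(t, dy),  kernel(t, 0.0)]])
c = np.zeros((2, n), dtype=complex); c[0, 0] = 1.0
for m in range(1, n):
    conv = dt * np.einsum('nkj,kj->n', K[:, :, m - 1::-1], c[:, :m])
    c[:, m] = c[:, m - 1] - dt * (g**2 / 4) * conv
print("transfer fidelity max_t P2 =", np.max(np.abs(c[1]) ** 2).round(3))
```

The second emitter is reached only through the forward kernel, and under critical coupling the peak population approaches the near-unit transfer fidelity seen in Fig. 11 (b).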
Note, however, that similar to the weak-coupling regime discussed above, the photons emitted into the edge channels have a spatially asymmetric wavefunction and are thus only partially reabsorbed by the second emitter, i.e., \(P_{e}^{(2)}\lesssim 0.6\) (as also observed in [31]). Further, as illustrated in Fig. 1 (c), the edge modes have a non-negligible dispersion [34], which is responsible for a broadening of the wavepacket during propagation and, thus, for a dependence of the transfer process on the distance between the emitters. Therefore, the implementation of high-fidelity state transfer operations in this configuration requires additional control over the individual couplings, \(g_{n}\to g_{n}(t)\), to facilitate the reabsorption process and to compensate for propagation effects [21; 22; 30]. In Fig. 11 (b) and (c) we consider an alternative setup, where a sizable synthetic electric field \(U_{B}\neq 0\) is introduced and the emitters are located in the bulk region of the lattice. In the situation assumed in Fig. 11 (b), where the upper emitter is initially excited, the photon propagates in a straight line through the bulk toward the second emitter. As discussed in Sec. V.1 above, under critical-coupling conditions, the photon wavepacket in this case is highly symmetric and can be reabsorbed by the second emitter with near perfect fidelity. This feature is fully reproduced by the exact numerical simulation. To demonstrate a two-way connectivity, we also consider the opposite case, where the lower emitter is initially excited. In this case, the photon must propagate along the edge, but nevertheless we observe an almost perfect transfer of the excitation. Remarkably, the broadening effect of the edge mode dispersion on the photon wavefunction is in fact of minor importance in this case of a symmetric Figure 11: Chiral excitation transfer in different configurations. The left panels depict the locations of the two emitters in the lattice, while the right panels show the evolution of the excited state populations, \(P_{e}^{(n)}(t)=|c_{n}(t)|^{2}\). (a) Excitation transfer via edge channels, where the two emitters are located on the edge (light green sites) at positions \(\vec{r}_{e}^{1}/l_{0}=(20,40)\) and \(\vec{r}_{e}^{2}/l_{0}=(20,0)\). The photon propagates along the edge from emitter 1 to emitter 2. The parameters in this example are \(U_{0}/(\hbar J)=0\), \(g_{1}=g_{2}=0.1\omega_{B}/\sqrt{\alpha}\approx 0.4J\) and \(\Delta_{1}=\Delta_{2}=0.1\times\omega_{B}\approx 0.7J\). (b) Excitation transfer via the bulk, where the two emitters are located at positions \(\vec{r}_{e}^{1}/l_{0}=(20,36)\) and \(\vec{r}_{e}^{2}/l_{0}=(20,5)\). The photon propagates through the bulk from emitter 1 to emitter 2. The parameters in this example are \(U_{0}/(\hbar J)=0.05\), \(\hbar g_{1}=\hbar g_{2}=U_{B}/\sqrt{\alpha}\approx 0.2\hbar J\) and \(\Delta_{1}=\Delta_{2}=0\). (c) Excitation transfer between two emitters in the bulk, but with the photon propagating along the edge from emitter 2 to emitter 1. The other parameters are the same as in (b). In all three simulations we have assumed \(\alpha=1/10\) and \(N_{x}=N_{y}=41\). wavefunction. Interestingly, in these simulations, the transfer along the edge is faster than a direct transfer through the bulk. As already mentioned above, this can be attributed to the much higher group velocity in the edge channel. 
Indeed, the propagation time along the edges is almost negligible compared to the propagation time in the bulk, which allows us to estimate the total transfer time by \[\tau_{\mathrm{T}}\approx\frac{\Delta y_{\mathrm{PBC}}}{c_{H}}+\frac{2}{\Gamma_{e}}, \tag{37}\] where we have also included the emission/absorption time estimated by Eq. (27). Here, \(\Delta y_{\mathrm{PBC}}\) is the effective distance between the emitter and the receiver under periodic boundary conditions (PBC), i.e., obtained by simply ignoring the path along the edges. For example, for the two configurations considered in Fig. 11 (b) and (c), we would expect \(J\tau_{\mathrm{T}}\approx 430\) and \(J\tau_{\mathrm{T}}\approx 175\). In the first case the agreement with the exact simulation is almost perfect, while in the second case there is an error of around \(20\%\), the transfer time observed in the simulation being \(Jt\sim 220\).

### Effect of photon losses and disorder

In view of experimental demonstrations and practical applications, it is important to assess the robustness of the state transfer with respect to photon losses and static disorder. For this purpose, we extend our study of the transfer process in Sec. V.4 by introducing a spatial inhomogeneity of the frequency of each site of the form \(\epsilon_{i}=-eEx_{i}+\hbar\omega_{p}^{i}\), where the offsets \(\omega_{p}^{i}\) are chosen randomly and independently according to a Gaussian distribution with mean value \(\omega_{p}\) and standard deviation \(\sigma_{p}\). Then, using the same approach as in [43], we model the effect of photon losses with rate \(\gamma_{p}\) by an additional damping term in the dynamics of the photon wavepacket, \[\partial_{t}\varphi(\vec{r}_{i},t)=\ldots-\frac{\gamma_{p}}{2}\varphi(\vec{r}_{i},t)\,. \tag{38}\] This expression already shows that the effect of photon loss only introduces an overall exponential damping of the photon amplitude, but does not affect any of the topological properties of the system [6]. For state-transfer applications, it reduces the final transfer fidelity, \(P_{e}^{(2)}(\tau_{\mathrm{T}})\approx e^{-\gamma_{p}\tau_{\mathrm{T}}}P_{e}^{(1)}(0)\), which sets the condition \(\gamma_{p}\ll\tau_{\mathrm{T}}^{-1}\), with \(\tau_{\mathrm{T}}\lesssim\tau_{\mathrm{rev}}=L_{y}/c_{H}\), on the maximally tolerable loss rate. Regarding disorder, we now numerically investigate its effect on the state-transfer process between two emitters that are coupled through the bulk. For this purpose, we define the fidelity of the state transfer as the maximum over time of the second emitter population, \(\mathcal{F}=\max_{t}P_{e}^{(2)}(t)\), given that the first emitter is initially excited, \(P_{e}^{(1)}(0)=1\). In Fig. 12 we show the disorder-averaged infidelity \(1-\bar{\mathcal{F}}\) as a function of the inter-site voltage drop \(U_{0}\) and the strength of the lattice disorder, \(\sigma_{p}\). From the figure, we see that with increasing electric field, the transfer becomes increasingly robust with respect to local frequency disorder. This can be understood from the fact that for \(\hbar\sigma_{p}\ll U_{0}\) the disorder is not able to efficiently couple two neighbouring photonic eigenstates, which are spaced in energy by \(\Delta E\sim U_{0}\). Therefore, under this condition the linear slope of the Landau levels is preserved along with all the associated transport properties. This interpretation is confirmed by the simulations in Fig.
12, where we see that the condition beyond which the quantum Hall physics is washed out by disorder indeed follows the line \(\hbar\sigma_{p}\simeq U_{0}\). It is interesting to note that the transfer fidelity \(\bar{\mathcal{F}}\) starts to slowly decrease again for larger values of \(U_{0}\). This effect, however, is not related to the disorder, but is rather caused by spurious effects due to the lattice geometry and by mixing between Landau levels whenever the electric field energy starts to be comparable with the gap between Landau levels, \(U_{0}\sim\hbar\omega_{B}\). In the considered example, we empirically found that values of \(U_{0}\lesssim 0.1\hbar\omega_{B}\) are sufficient to suppress such imperfections.

Figure 12: Disorder-averaged value of the transfer infidelity \(1-\bar{\mathcal{F}}\) (color scale) as a function of the electric field, \(U_{0}/(\hbar J)\), and the lattice disorder strength, \(\sigma_{p}/J\). The coupling strengths of the two emitters are fixed to the critical-coupling condition \(\hbar g_{1}=\hbar g_{2}=U_{B}/\sqrt{\alpha}\). Emitter \(1\) is located at \(\vec{r}_{e}^{1}/l_{0}=(10,17)\) while emitter \(2\) is located at \(\vec{r}_{e}^{2}/l_{0}=(10,4)\). The other parameters for this simulation are \(\alpha=1/10\), \(N_{x}=N_{y}=21\), \(\Delta_{e}^{1}=\Delta_{e}^{2}=0\) and \(\gamma_{p}/J=10^{-5}\) (this reduced value of the losses is chosen to highlight the effect of disorder). The dashed white line indicates the condition \(\hbar\sigma_{p}=U_{0}\). For each choice of parameters, the disorder average is performed over \(N_{\mathrm{dis}}=100\) realizations.

### Experimental considerations

As a final point, it is important to assess the actual experimental feasibility of our proposal. Experimental realizations of topological photonic systems are currently pursued both in the optical and in the microwave domain. Focusing for concreteness on the latter case, 2D photonic lattices can be fabricated out of superconducting \(LC\) resonators with frequencies of about \(\omega_{p}/(2\pi)\approx 5-10\) GHz, tunnel couplings \(J/(2\pi)\approx 10-100\) MHz and quality factors of \(Q\approx 10^{4}-10^{5}\), which corresponds to \(\gamma_{p}=\omega_{p}/Q\approx 2\pi\times 50-500\) kHz [25, 59]. By engineering a lattice with \(N_{x}=N_{y}=20\) resonators along each side, tunnel couplings of \(J/(2\pi)\approx 100\) MHz, a voltage drop of \(U_{0}/J\approx 0.1\) and a magnetic flux of \(\alpha=0.1\), we obtain \(U_{B}/(2\pi\hbar)\approx 13\) MHz. Therefore, we require a coupling strength of \(g/(2\pi)\approx 40\) MHz to reach the critical-coupling regime, which can be readily achieved with superconducting qubits [59, 25]. For this setup we obtain a typical transport time of \(\tau_{\rm T}\sim\tau_{\rm rev}\approx 126\,J^{-1}\) such that \(\gamma_{p}\tau_{\rm rev}\sim 10^{-2}-10^{-3}\), meaning that the photon is able to do hundreds of roundtrips of the entire system before leaking out. At the same time, the fabrication of superconducting resonator arrays with a frequency disorder of \(\sigma_{p}/J\sim 10^{-2}-10^{-3}\) has already been demonstrated [59, 60, 61, 62, 25], which means that the condition \(\hbar\sigma_{p}<U_{B}\) can also be met. These simple estimates make us confident that realizations of such photonic quantum Hall systems are within experimental reach.
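These estimates are simple to reproduce; the following Python lines evaluate the relations quoted above (with \(\hbar=1\), rates in units of \(J\), and the continuum-limit magnetic length \(l_{B}/l_{0}=1/\sqrt{2\pi\alpha}\)), and should return approximately the \(13\) MHz, \(40\) MHz and \(126\,J^{-1}\) figures used in the text.

```python
import numpy as np

# Circuit-QED feasibility estimates (a sketch; parameter values as quoted above).
alpha = 0.1            # magnetic flux per plaquette
U0 = 0.1               # voltage drop per site, in units of J
Ny = 20                # resonators along y
J_Hz = 100e6           # tunnel coupling J/(2*pi) in Hz

lB = 1 / np.sqrt(2 * np.pi * alpha)   # magnetic length l_B / l_0
UB = U0 * lB                          # Landau voltage U_B, units of J
g_crit = UB / np.sqrt(alpha)          # critical coupling, units of J
cH = UB * lB                          # Hall drift speed, sites per unit time 1/J
tau_rev = Ny / cH                     # edge round-trip time, units of 1/J

print(f"U_B/(2*pi*hbar) ~ {UB * J_Hz / 1e6:.0f} MHz")      # ~13 MHz
print(f"g_crit/(2*pi)   ~ {g_crit * J_Hz / 1e6:.0f} MHz")  # ~40 MHz
print(f"tau_rev         ~ {tau_rev:.0f} / J")              # ~126 / J
```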
## VI Photonic quantum Hall percolation networks

As we have already discussed above, many aspects of photon propagation in the considered quantum Hall lattice can be understood from the fact that the photonic wavepackets move along equipotential lines. As an immediate consequence, most of the effects that we have discussed so far for the case of a constant electric field can be generalized to generic lattice potentials of the form \[\epsilon_{i}=V(\vec{r}_{i})+\hbar\omega_{p}, \tag{39}\] where \(V(\vec{r}_{i})\) describes a smooth, but otherwise arbitrary profile of the on-site energy offsets. In particular, the equipotential lines of \(V(\vec{r}_{i})\) can be curved to connect different parts of the lattice or even intersect each other. In the following we show how this additional tunability can be used to realize _photonic networks_, in which many emitters can be coupled through fully configurable chiral channels. Interestingly, this idea is closely connected to the concept of the _quantum Hall percolation network_, which was introduced for electronic systems to obtain an intuitive explanation of the integer quantum Hall effect [63, 64]. In these electronic setups the network of equipotential lines is, however, provided by natural disorder, making it impossible to control and exploit for any technological purpose. By contrast, the photonic implementation allows almost complete freedom in designing the equipotential lines, placing the quantum Hall network idea in a completely new perspective.

### Configurable chiral channels

As a first step, let us generalize the concept of chiral transfer channels to arbitrary potentials \(V(\vec{r}_{i})\), assuming, however, that their variations are sufficiently smooth. In this case we can locally expand the potential around the positions \(\vec{r}_{e}^{\,n}\) of the emitters, \[\epsilon_{i}\simeq\hbar\omega_{p}+V(\vec{r}_{e}^{\,n})+\vec{\nabla}V(\vec{r}_{e}^{\,n})\cdot(\vec{r}_{i}-\vec{r}_{e}^{\,n}). \tag{40}\] This simply means that each emitter sees a quantum Hall lattice with a different frequency offset and an effective field \(E\sim\vec{\nabla}V(\vec{r}_{e}^{\,n})\). Under this assumption, the same emission and absorption dynamics discussed above is recovered if we assume a local density of states as given in Eq. (19) and replace the Landau voltage by \[U_{B}\longmapsto\tilde{U}_{B}(\vec{r}_{e}^{\,n})=|\nabla V(\vec{r}_{e}^{\,n})|l_{B}, \tag{41}\] and the local emitter detuning by \[\Delta_{n}\longmapsto\tilde{\Delta}_{n}(\vec{r}_{e}^{\,n})=\omega_{e}^{\,n}-\tilde{\omega}_{\rm ch}(\vec{r}_{e}^{\,n})\,. \tag{42}\] Here, the channel frequency has been generalized, according to Eq. (17), to the space-dependent quantity \[\omega_{\rm ch}(x)\longmapsto\tilde{\omega}_{\rm ch}(\vec{r})=\omega_{\ell=0}^{LL}+\frac{V(\vec{r})}{\hbar}+\frac{\tilde{U}_{B}(\vec{r})^{2}}{2\hbar^{2}\omega_{B}}. \tag{43}\] Based on this local-field approximation, we can identify three criteria for an efficient state transfer between two emitters located at positions \(\vec{r}_{e}^{1}\) and \(\vec{r}_{e}^{2}\): 1. The two emitters are resonant and connected by a generalised equipotential line set by the channel frequency, \(\tilde{\omega}_{\text{ch}}(\vec{r}_{e}^{1})=\tilde{\omega}_{\text{ch}}(\vec{r}_{e}^{2})\). 2. The coupling \(g_{n}\) of each emitter satisfies the local critical coupling condition, \(g_{n}\sqrt{\alpha}\approx\tilde{U}_{B}(\vec{r}_{e}^{n})/\hbar\).
3. The local synthetic electric field is the same for both emitters, \(\tilde{U}_{B}(\vec{r}_{e}^{1})=\tilde{U}_{B}(\vec{r}_{e}^{2})\).

These conditions ensure the resonant emission of a symmetric wavepacket, which can be reabsorbed by the second emitter under the same critical-coupling condition. As long as the potential does not vary too abruptly, the photon moves along the equipotential line with a local Hall speed \(c_{H}\mapsto c(\vec{r})=|\nabla V(\vec{r})|l_{B}^{2}/\hbar\) proportional to the potential gradient. The third criterion specifically ensures that the transverse width of the emitted wavepacket matches the one that a photon spontaneously emitted by the second emitter would have, thus preserving the symmetry between emission and absorption. To illustrate and validate this working principle with a concrete example, we simulate a state transfer between two emitters that are coupled to a photonic lattice with the more complicated potential \(V(\vec{r})\) shown in Fig. 13 (a). For this scenario, we compare the case where the two emitters see the same local potential gradient with the case where the gradient is different. However, in both situations the emitters are located along the same equipotential line and are in local resonance, \(\tilde{\Delta}_{n}=0\). We find that in the first case, where all three conditions from above are satisfied, the state transfer occurs with an almost perfect fidelity, despite a very complicated energy landscape. In the other case both emitters are critically coupled to the same equipotential line, but they are located in regions of different field gradient, and so they are resonant at different channel frequencies, \(\tilde{\omega}_{\text{ch}}(\vec{r}_{e}^{1})\neq\tilde{\omega}_{\text{ch}}(\vec{r}_{e}^{2})\). In this case the absorption cannot be maximized and remains limited to around \(60\%\).

### Beam splitters

In our discussion above we have implicitly assumed that the spatial variations of the applied potential are sufficiently smooth and that equipotential lines never cross. In this case the whole lattice separates into a set of disjoint 1D channels. To achieve higher degrees of connectivity and non-local operations, we can violate those assumptions and realize beam-splitter elements that coherently couple different channels. As illustrated in Fig. 14 (a), a beam splitter can be obtained at a crossing point of different equipotential lines, which exist, for example, for a potential that is locally of the form \[V(\vec{r})=eE(|x|-|y|). \tag{44}\] In the three plots in Fig. 14 (b), we show the propagation of a photonic wavepacket, which is emitted along the diagonal equipotential line in the north-east direction towards the crossing point at \(\vec{r}=0\). This photon is then coherently split into two wavepackets, which propagate into opposite north-west and south-east directions. Thanks to the chirality, this process occurs without any backscattering and, furthermore, preserves the symmetric shape of the outgoing wavepackets. Therefore, the wavepackets' ability to be fully reabsorbed by other emitters is not degraded by this operation. This provides great flexibility for realizing a variety of connectivity patterns in such percolation networks, in particular when beam splitters are implemented with reconfigurable potentials. Note that for electronic systems, a closely related splitting mechanism has been previously analyzed for quadratic saddle-point potentials, \(V(\vec{r})\sim x^{2}-y^{2}\) [65; 66].
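The local-field quantities entering these criteria are straightforward to evaluate for any smooth potential. As a minimal sketch (in units \(\hbar=J=l_{0}=1\), with the continuum-limit values \(l_{B}=1/\sqrt{2\pi\alpha}\) and \(\omega_{B}=4\pi\alpha\); grid size and gradient scale are illustrative), the Python lines below evaluate Eqs. (41) and (43) on the beam-splitter potential of Eq. (44) and verify that two emitters placed on opposite branches of the \(V=0\) line satisfy criteria 1 and 3.

```python
import numpy as np

# Local-field approximation, Eqs. (41)-(43), on an arbitrary potential grid.
alpha = 0.1
lB = 1 / np.sqrt(2 * np.pi * alpha)       # magnetic length (continuum limit)
omega_B = 4 * np.pi * alpha               # cyclotron frequency (continuum limit)

x = np.arange(-15, 16)
y = np.arange(-15, 16)
X, Y = np.meshgrid(x, y, indexing="ij")
E0 = 0.1                                  # potential gradient scale eE, units J/l0
V = E0 * (np.abs(X) - np.abs(Y))          # linear saddle potential, Eq. (44)

dVdx, dVdy = np.gradient(V, x, y)
UB_loc = np.hypot(dVdx, dVdy) * lB        # Eq. (41): local Landau voltage
g_crit = UB_loc / np.sqrt(alpha)          # local critical-coupling condition
omega_ch = V + UB_loc**2 / (2 * omega_B)  # Eq. (43), up to the constant LL offset

i1, i2 = (6 + 15, 6 + 15), (-6 + 15, -6 + 15)  # grid indices of r1=(6,6), r2=(-6,-6)
# Criterion 1 (same channel frequency; the geometric connectivity of the
# equipotential line must of course also hold) and criterion 3 (same field):
print(np.isclose(omega_ch[i1], omega_ch[i2]))
print(np.isclose(UB_loc[i1], UB_loc[i2]))
```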
However, for such quadratic saddle points curvature effects are always relevant [67], making the dynamics more complicated and less controlled than for the linear saddle potential given in Eq. (44). Specifically, we have numerically observed that the quadratic saddle point does not preserve the symmetry of the photon wavepacket, which is instead perfectly preserved by our choice. Moreover, away from the contact point, our linear saddle potential has the same gradient in all four branches marked by the white arrows in Fig. 14 (a), which ensures the conditions for high-fidelity state-transfer operations. This once more highlights the intriguing new possibilities offered by highly tunable photonic platforms, where the potential configurations can be engineered in an optimal way.

Figure 14: (a) Sketch of the lattice potential \(V(x,y)=eE(|x|-|y|)\) for the realization of a chiral beam splitter. The white arrows indicate the propagation direction along the specific equipotential line connected to an emitter located at \(\vec{r}_{e}/l_{0}=(6,6)\). (b) Snapshots of the photon density \(|\varphi(\vec{r},t)|^{2}\) for a photon that is emitted in the critical-coupling regime. The parameters for this simulation are \(\alpha=1/10\), \(N_{x}=N_{y}=31\), \(U_{0}/(\hbar J)=0.1\), \(\Delta_{e}=0\) and \(\hbar g=\tilde{U}_{B}(\vec{r}_{e})/\sqrt{\alpha}\).

## VII Conclusions

In summary, we introduced a new chiral quantum optics platform based on two-level emitters coupled to the bulk of a 2D photonic lattice subject to crossed synthetic magnetic and electric fields. The presence of the combined synthetic fields makes photons propagate unidirectionally along an effective waveguide orthogonal to the electric field. The lateral position of the selected effective waveguide is controlled by the resonance frequency of the emitter. Depending on the strength of the emitter-light coupling, we identified and characterized three different regimes of light-matter interactions: weak coupling (Markovian), strong coupling (non-Markovian) and critical coupling (non-Markovian). The Markovian weak-coupling regime corresponds to the usual light-matter coupling regime considered in the chiral quantum optics literature, and all existing results for generic chiral setups directly extend to our system. By contrast, the strong-coupling and, even more, the critical-coupling regimes display radically new properties that stem from the frequency-dependent density of states: the strong-coupling regime supports atom-photon bound states in a chiral continuum, which do not exist in standard setups. In the critical-coupling regime, the emission process displays strongly non-Markovian features due to the interplay between light-matter interactions and quantum Hall physics. The ensuing temporal symmetry of the emitted photon wavefunction is then exploited to implement state-transfer protocols between two emitters with a fidelity that largely exceeds standard chiral quantum optics configurations, without the need for optimal control schemes. For generic, non-uniform synthetic electric field configurations, we related the photon propagation to the fundamental property of the quantum Hall current of flowing along the equipotential lines of the single-particle potential according to the guiding-center motion. This is the starting point to propose new photonic devices that include frequency-(de)multiplexing elements, chiral waveguides with arbitrary paths within the 2D lattice, and beam splitters.
Based on these analytic and numerical findings, we argue that all these elements can be combined to realize a fully-fledged chiral quantum optical network completely based on the quantum Hall effect for light. Such networks are of great fundamental interest and have promising technological applications. On one hand, the high tunability and general freedom offered by these structures could open new perspectives on fundamental studies of quantum Hall physics for light and, when the lattice is endowed with suitable optical nonlinearities, realise a photonic quantum simulator for fractional quantum Hall physics [6; 16]. On the other hand, a number of different quantum tasks [1] may benefit from the new paradigm of chiral quantum photonic circuits, which can be arbitrarily scaled up in size, implemented on-chip with standard photonics technology, and arbitrarily reconfigured with the simple application of suitably designed spatial patterns of the on-site detuning of the lattice.

###### Acknowledgements.

We are grateful to Alberto Nardin, Giuseppe Calajo, and Alexander Szameit for fruitful discussions. We acknowledge financial support from the Provincia Autonoma di Trento, from the Q@TN initiative, and from PNRR MUR project PE0000023-NQSTI.

## Appendix A Landau photons in electric fields

The mode functions of the lattice in the continuum limit are solutions of the Schrödinger equation [43] \[\left[\frac{p_{x}^{2}}{2m}+\frac{(p_{y}+eBx)^{2}}{2m}-eEx\right]\Phi_{\ell k}(\vec{r})=\hbar(\omega-\omega_{b})\Phi_{\ell k}(\vec{r}), \tag{11}\] where \(p_{x/y}=-i\hbar\partial_{x/y}\), \(\omega_{b}=\omega_{p}-J/2\) and \(m=1/(2Jl_{0}^{2})\). By making the ansatz \(\Phi_{\ell k}(\vec{r})=\phi(x)\exp(iky)/\sqrt{L_{y}}\) and completing the square we arrive at \[\left[\frac{p_{x}^{2}}{2m}+\frac{\hbar\omega_{B}}{2}\left(\frac{x}{l_{B}}+l_{B}k-\frac{U_{B}}{\hbar\omega_{B}}\right)^{2}\right]\phi(x)=\hbar(\omega-\omega_{H})\phi(x), \tag{12}\] where \(\omega_{H}=c_{H}k-U_{B}^{2}/(2\hbar^{2}\omega_{B})\). The eigenfrequencies are then given by \[\hbar\omega_{\ell k}=\hbar\omega_{B}\left(\ell+\frac{1}{2}\right)+U_{B}\left(l_{B}k-\frac{U_{B}}{2\hbar\omega_{B}}\right), \tag{13}\] while the eigenstates are just the displaced harmonic oscillator wavefunctions \[\Phi_{\ell k}(\vec{r})=\frac{\exp(iky)}{\sqrt{L_{y}}}\varphi_{\ell}^{\text{h.o.}}\left(x+l_{B}^{2}k-\frac{U_{B}}{\hbar\omega_{B}}l_{B}\right). \tag{14}\] Here \[\varphi_{\ell}^{\text{h.o.}}(x)=\frac{1}{\sqrt{2^{\ell}\ell!\sqrt{\pi}}}H_{\ell}(x/l_{B})e^{-x^{2}/(4l_{B}^{2})}, \tag{15}\] and \(H_{\ell}(x)\) is the \(\ell\)-th Hermite polynomial.
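As a quick numerical cross-check of the displaced-oscillator structure in Eqs. (12) and (13), one can diagonalize the corresponding 1D Hamiltonian by finite differences in dimensionless units (\(\hbar=m=\omega_{B}=l_{B}=1\)); the low-lying eigenvalues are \(\ell+1/2\), independent of the displacement, as the minimal sketch below illustrates.

```python
import numpy as np

# Finite-difference check of the displaced harmonic oscillator in Eq. (12)
# (dimensionless units hbar = m = omega_B = l_B = 1; a minimal sketch).
x = np.linspace(-20, 20, 2001)
dx = x[1] - x[0]
x0 = 3.0   # displacement, playing the role of l_B*k - U_B/(hbar*omega_B)

main = 1.0 / dx**2 + 0.5 * (x + x0) ** 2          # kinetic + shifted-oscillator terms
off = -0.5 / dx**2 * np.ones(x.size - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.linalg.eigvalsh(H)[:4])   # ~ [0.5, 1.5, 2.5, 3.5], independent of x0
```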
2301.13484
CELEBI: The CRAFT Effortless Localisation and Enhanced Burst Inspection Pipeline
Fast radio bursts (FRBs) are being detected with increasing regularity. However, their spontaneous and often once-off nature makes high-precision burst position and frequency-time structure measurements difficult without specialised real-time detection techniques and instrumentation. The Australian Square Kilometre Array Pathfinder (ASKAP) has been enabled by the Commensal Real-time ASKAP Fast Transients Collaboration (CRAFT) to detect FRBs in real-time and save raw antenna voltages containing FRB detections. We present the CRAFT Effortless Localisation and Enhanced Burst Inspection pipeline (CELEBI), an automated software pipeline that extends CRAFT's existing software to process ASKAP voltages in order to produce sub-arcsecond precision localisations and polarimetric data of FRB events at time resolutions as fine as 3 ns. We use Nextflow to link together Bash and Python code that performs software correlation, interferometric imaging, and beamforming, making use of common astronomical software packages.
D. R. Scott, H. Cho, C. K. Day, A. T. Deller, M. Glowacki, K. Gourdji, K. W. Bannister, A. Bera, S. Bhandari, C. W. James, R. M. Shannon
2023-01-31T09:06:42Z
http://arxiv.org/abs/2301.13484v2
# CELEBI: The CRAFT Effortless Localisation and Enhanced Burst Inspection Pipeline

###### Abstract

Fast radio bursts (FRBs) are being detected with increasing regularity. However, their spontaneous and often once-off nature makes high-precision burst position and frequency-time structure measurements difficult without specialised real-time detection techniques and instrumentation. The Australian Square Kilometre Array Pathfinder (ASKAP) has been enabled by the Commensal Real-time ASKAP Fast Transients Collaboration (CRAFT) to detect FRBs in real-time and save raw antenna voltages containing FRB detections. We present the CRAFT Effortless Localisation and Enhanced Burst Inspection pipeline (CELEBI), an automated software pipeline that extends CRAFT's existing software to process ASKAP voltages in order to produce sub-arcsecond precision localisations and polarimetric data of FRB events at time resolutions as fine as 3 ns. We use Nextflow to link together Bash and Python code that performs software correlation, interferometric imaging, and beamforming, making use of common astronomical software packages.

keywords: Fast Radio Bursts, Radio Interferometry, Astronomy Software

## 1 Introduction

Fast radio bursts (FRBs) are micro- to millisecond duration radio transients (Lorimer et al., 2007; Thornton et al., 2013). Known to be extragalactic, they are extremely energetic (Bhandari et al., 2020). Although a Galactic magnetar is known to have produced FRB-like emission (CHIME/FRB Collaboration et al., 2020; Bochenek et al., 2020), no general emission mechanism nor progenitor has been identified, and it is possible that more than one progenitor type contributes to the observed population. Only a small fraction of FRB sources have been observed to repeat (CHIME/FRB Collaboration et al. (2023) report a repeater fraction tending to \(2.6^{+2.9}_{-2.6}\%\)) and there are indications of intrinsic differences between repeating and non-repeating FRBs (Pleunis et al., 2021). In order to gain greater insight into the nature of FRBs, their emission mechanisms, progenitors, and host environments, and to use them as probes of cosmological parameters (James et al., 2022) and extragalactic matter distributions (Macquart et al., 2020), it is highly desirable and often necessary to identify their host galaxies and measure the polarimetric morphologies of the bursts themselves at high temporal and spectral resolutions. For example, the current sample of host galaxies does not yet point to any preferred progenitor class (Bhandari et al., 2022), but patterns may emerge as the sample grows, and high-time resolution measurements can constrain the size of the emission region and therefore emission mechanisms (Nimmo et al., 2021). However, searches are typically restricted to incoherent methods due to the large data volumes associated with the required temporal and spectral resolutions, and are therefore unable to localise almost all FRB sources with sufficient precision to identify a host galaxy (the exceptions being repeaters with enough bursts observed) or measure bursts with sufficient temporal or spectral resolutions to make detailed inferences on the emission mechanism. Exceptions to this (e.g. the Very Large Array's realfast system, Law et al. 2018) typically must compromise on temporal or spectral resolution to keep the processing load tolerable. The Australian Square Kilometre Array Pathfinder (ASKAP, Hotan et al.
2021) has been enabled by the Commensal Real-time ASKAP Fast Transients Collaboration (CRAFT) to detect FRBs in real-time and save raw antenna voltages of FRB detections. This permits sub-arcsecond-precision localisation of FRBs, notably including non-repeating FRBs, via interferometric imaging, precise enough to identify a host galaxy and often a position within that galaxy (Bannister et al., 2019; Prochaska et al., 2019; Macquart et al., 2020; Heintz et al., 2020; Fong et al., 2021; Bhandari et al., 2022, 2022), and polarimetric measurements at time resolutions as fine as 3 ns (Cho et al., 2020; Day et al., 2020). However, to date, the post-processing of triggered FRB data products has been handled by an ensemble of processing scripts that are manually sequenced and which require significant human quality control. This process is time consuming and potentially error-prone, making it unsuitable for future FRB surveys with ASKAP with higher detection rates (as envisaged for the forthcoming "CRACO" coherent detection system; Bannister et al., in prep.). This paper describes the CRAFT Effortless Localisation and Enhanced Burst Inspection pipeline (CELEBI)1. CELEBI is an automated software pipeline that extends existing CRAFT post-processing code (Bannister et al., 2019; Cho et al., 2020) with new functionality and improved monitoring and control to produce sub-arcsecond-precision localisations and high-time resolution data products of FRBs detected with ASKAP. §2 gives a high-level overview of the pipeline's structure and algorithm. §3 describes in detail the processes performed by CELEBI to produce FRB localisations, each subsection corresponding directly to one of CELEBI's processes. §4 similarly describes the processes that produce high-time resolution polarimetric data for FRB detections. §5 gives a summary and discusses the improvements to CRAFT's voltage processing produced by CELEBI, as well as future improvements to the pipeline.

Footnote 1: github.com/askap-craco/CELEBI/

## 2 Overview

### Input data format

CELEBI's primary inputs are sets of voltages acquired by the simultaneous freezing and downloading ("dumping") of the contents of a 3.1 s-duration ring buffer for each ASKAP antenna. The buffers record complex-valued electric field samples across a 336-MHz bandwidth in both orthogonal linear polarisations of each beam of an antenna's phased-array feed (PAF), although only the data for the beam in which the desired target is detected are saved. Upon the real-time detection of an FRB, the voltages are dumped with sufficiently low latency to capture the FRB (Bannister et al., 2019). Voltages are then obtained for two other sources: a "flux" calibrator (a bright continuum source: typically PKS 0408\(-\)65 or PKS 1934\(-\)63), and a polarisation calibrator (a bright, highly linearly polarised pulsar: typically Vela or PSR J1644\(-\)4559). As described below, these two datasets are employed to derive the necessary calibration terms that enable astrometrically and polarimetrically correct images and time series to be formed for the FRB. ASKAP's polyphase filterbanks (PFBs) produce oversampled "coarse" channels, meaning adjacent channels overlap slightly. Each coarse channel is composed of many "fine" channels, the precise number of which depends on the amount of data read from file, which is dynamically determined during processing. Each PFB produces data across 784 coarse channels, each separated by \(B_{C}=1\,\mathrm{MHz}\).
The oversampled bandwidth of each channel is \(B_{OS}=(32/27)B_{C}\approx 1.19B_{C}\). Each channel has a region of locally rippled but overall constant frequency response with width \(B_{C}\), and an oversampled region of width \((B_{OS}-B_{C})/2\) on either side that tapers off, as seen in Figure 1, which shows the fine spectrum amplitudes in three adjacent coarse channels. Of these, 336 are available for real-time analysis by the incoherent sum (ICS) pipeline (Bannister et al., 2019) and are recorded to voltage buffers for offline analysis. Voltages are stored as 8-bit complex numbers (4 bit real, 4 bit imaginary) in "VCRAFT" files, each of which contains two sets of four 1 MHz oversampled coarse channels. Each set of four channels is internally contiguous in frequency, but the two sets within a VCRAFT file are not necessarily contiguous with each other. Each antenna for which voltages are saved has 42 VCRAFT files (for a total bandwidth of 336 MHz) whose headers contain frequency and timestamp information, for each of the two orthogonal linear polarisations. As well as the raw voltage data, a set of metadata associated with the FRB trigger is also provided as input. This includes the real-time search candidate that triggered the voltage dump, the parameters of the observation the detection was made in, and a preliminary FRB position derived from multibeam analysis with a precision of a few arcminutes. The candidate that triggered the voltage dump is stored as a text file containing the candidate's signal-to-noise ratio (S/N), arrival time, dispersion measure (DM), and boxcar width. This candidate is the first that passes the S/N, DM, and width filters of the real-time search, and as such its parameters are considered preliminary. The observational parameters provided include the name, location, and fixed delay associated with each ASKAP antenna.

Figure 1: Amplitude response as a function of frequency within three adjacent coarse channels pre-PFB inversion, averaged over 4096 fine channels. \(B_{C}=1\,\mathrm{MHz}\) is the coarse channel bandwidth and \(B_{OS}=(32/27)B_{C}\approx 1.19B_{C}\) is the oversampled bandwidth. A predictable ripple in the passband amplitude is present due to the response function of the polyphase filterbank used to form these 1 MHz channels.

### Algorithm overview

Figure 3: fluxcal workflow DFD. Figure 4: polcal workflow DFD. Figure 5: FRB workflow DFD.

#### 3.1.1 Load coarse dynamic spectra

This process loads the voltages from the FRB data set and constructs a "coarse" dynamic spectrum with frequency resolution 1 MHz and time resolution 1 ms for a particular polarisation and antenna. For every coarse channel, we trim the oversampled regions (Figure 1) and inverse Fast Fourier Transform (FFT) to obtain a 1 \(\mu\)s time resolution complex voltage time series. We account for geometric delays in signal arrival time between antennas by applying offsets in time in the data read from the VCRAFT files, based on an interferometer model calculated using the DiFX program difxcalc (Gordon et al., 2016) for the initial rough FRB position. We take the squared magnitude of this time series to obtain a power time series, and reduce the time resolution ("time scrunch") to 1 ms by summing blocks of 1000 time samples. The time-scrunched power time series in each coarse channel is then arranged into a two-dimensional dynamic spectrum, which is passed as output of the process. An instance of this process is run for each unique polarisation-antenna pair.
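A simplified Python sketch of this per-antenna construction is given below; the array shapes and function name are illustrative, and the VCRAFT reading, header parsing, and geometric-delay offsets described above are assumed to have been handled upstream.

```python
import numpy as np

OS = 32 / 27   # ASKAP oversampling factor

def coarse_dynamic_spectrum(voltages, n_scrunch=1000):
    """voltages: complex array (n_chan, n_samp) of oversampled coarse-channel
    time series (geometric delays assumed already applied)."""
    n_chan, n_samp = voltages.shape
    n_keep = int(n_samp / OS)   # samples spanning the critically-sampled 1 MHz band
    rows = []
    for c in range(n_chan):
        spec = np.fft.fftshift(np.fft.fft(voltages[c]))
        lo = (n_samp - n_keep) // 2
        ts = np.fft.ifft(np.fft.ifftshift(spec[lo:lo + n_keep]))  # ~1 us series
        power = np.abs(ts) ** 2
        n_bins = power.size // n_scrunch                 # time scrunch to ~1 ms
        rows.append(power[:n_bins * n_scrunch].reshape(n_bins, n_scrunch).sum(axis=1))
    return np.array(rows)   # (n_chan, n_time): coarse dynamic spectrum
```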
This process also calculates the time axis of the dynamic spectra in units of Modified Julian Day (MJD) based on the start time of the data in the VCRAFT headers and the geometric delays.

#### 3.1.2 Refine candidate

This process sums the power dynamic spectra across both polarisations and all antennas as output by 3.1.1, then searches the resulting incoherent sum (ICS) dynamic spectrum for a single dispersed pulse. We search over a configurable range of DM values, defaulting to \(\pm 10\,\mathrm{pc\,cm^{-3}}\) around the detection candidate's DM with a step size of \(0.01\,\mathrm{pc\,cm^{-3}}\). For each DM, we incoherently dedisperse the ICS dynamic spectrum by shifting each coarse channel by an integer number of time samples. The number of 1 ms samples a coarse channel of central frequency \(f\) is shifted in the direction of increasing time is given by \[t_{\mathrm{shift}}(\mathrm{DM},f)=\left\lfloor k_{\mathrm{DM}}\mathrm{DM}\left(f_{0}^{-2}-f^{-2}\right)\cdot 1000\right\rfloor, \tag{1}\] where \(k_{\mathrm{DM}}=(2.41\times 10^{-4})^{-1}\,\mathrm{MHz^{2}\,cm^{3}\,pc^{-1}\,s}\) and \(f_{0}\) is a reference frequency, which we choose as the central frequency of the lowest-frequency coarse channel. Note that this choice of reference frequency results in the lowest-frequency coarse channel not being shifted at all, and as such we can use the MJD time array produced by process 3.1.1 to measure the arrival time of the burst at the bottom of the observing band. We then sum this dedispersed dynamic spectrum along the frequency axis to get a 1 ms-resolution dedispersed profile. To improve the S/N of the FRB in the event that it is spread out over multiple time samples, either due to the burst's intrinsic width or dispersive smearing within channels, we smooth the profile by convolving with top-hat functions with widths between 1 and 10 samples. We then calculate the S/N of each sample in each smoothed profile by dividing each profile by its standard deviation. We take the DM, time, and width with the maximum S/N value and create a refined candidate file with these values updated.

#### 3.1.3 Generate bin configs

The gate and RFI correlation modes performed in the FRB workflow require the specification of time- and frequency-dependent weights that are used to select only certain windows of data for correlation and imaging. The windows are shaped according to the expected time delay due to dispersion across the band, based on the arrival time and DM of the refined candidate generated by process 3.1.2. These take the form of "bin config" files, which record the windows and their weights, and a "polyco" file that records a reference time & frequency and the DM of the FRB. Figure 6 illustrates how the bins are defined for the gate and RFI modes. Seven gate bins are generated, each with a duration of \(10\,\mathrm{ms}\). The central bin is expected to be the only bin to contain the FRB, but all are imaged in case part or all of the FRB signal falls outside of the central bin due to small unexpected errors in identifying the burst arrival time. The RFI bins are each \(16\,\mathrm{ms}\) wide and follow the same dispersive sweep as the FRB, leaving a 4 ms buffer on either side of the gate bin. This width was chosen as a compromise between the competing goals of minimising the noise contribution (which favours a longer duration) and measuring the RFI environment as close in time as possible to the FRB itself (which favours a shorter duration).
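The candidate-refinement search of process 3.1.2 reduces to repeated application of Eq. (1) followed by boxcar smoothing. A simplified stand-in is sketched below; the function and its interface are illustrative rather than the pipeline's actual code, and a wrap-around roll stands in for the sample shift.

```python
import numpy as np

KDM = 1.0 / 2.41e-4   # dispersion constant, MHz^2 cm^3 pc^-1 s

def refine_candidate(ics, freqs_mhz, dms, max_width=10):
    """ics: (n_chan, n_time) incoherent-sum dynamic spectrum at 1 ms samples;
    freqs_mhz: channel centre frequencies; dms: trial DMs [pc cm^-3]."""
    f0 = freqs_mhz.min()                 # reference: bottom of the band, as in Eq. (1)
    best = (-np.inf, None, None, None)   # (S/N, DM, sample index, boxcar width)
    for dm in dms:
        shifts = np.floor(KDM * dm * (f0**-2 - freqs_mhz**-2) * 1000).astype(int)
        dedisp = np.array([np.roll(row, s) for row, s in zip(ics, shifts)])
        profile = dedisp.sum(axis=0)     # 1 ms dedispersed profile
        for w in range(1, max_width + 1):
            smooth = np.convolve(profile, np.ones(w), mode="same")
            snr = smooth / smooth.std()  # S/N: profile divided by its std. dev.
            i = int(np.argmax(snr))
            if snr[i] > best[0]:
                best = (snr[i], dm, i, w)
    return best
```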
### Get beam centre

We wish to centre the field image on the beam centre, rather than the preliminary FRB position. This process parses the FRB voltage headers in order to obtain the sky coordinates of the beam centre, which are passed to the field mode's correlate instance.

Figure 6: FRB200430 dynamic spectrum (time scrunched to 1 ms) with example gate bin region (green) and RFI bins (orange hatched) overlaid. The green region is split into seven equal-width bins. For illustrative purposes, the duration of each bin shown is ten times larger than is actually used.

### Correlate

The correlate workflow takes as input a VCRAFT voltage dataset (as described in §2.1) and an optional bin configuration (as generated by process 3.1.3), and outputs correlated visibilities in the FITS format. Figure 7 shows the DFD for the workflow.

#### 3.3.1 Get start MJD

Because the voltage dump does not happen perfectly simultaneously across all the antennas, the data will have slightly different start times on a per-antenna basis. This process parses the voltage headers to find the earliest start time so that it can be provided to process 3.3.2 as a reference time. This ensures that all the correlations use the same reference time. This process is executed a single time, and its output passed to each instance of process 3.3.2.

#### 3.3.2 Do correlation

This process uses the "DiFX" software correlator (Deller et al., 2011) to produce visibility datasets from the saved voltage data. One DiFX instance is executed for each unique card-FPGA pair, taking in two voltage files per antenna (one for each polarisation). This produces 8 \(\times\) 1.185 MHz output subbands (which are spaced by 1 MHz, but are wider than 1 MHz due to the ASKAP over-sampling), each with 128 frequency channels. The instance for the lowest-frequency card-FPGA pair is executed first, and its output passed to the other 41 instances (which are executed in parallel) to avoid issues caused by the FRB dispersion, as described below. Because the 1 MHz-wide ASKAP coarse channels are over-sampled by a factor of 32/27, there is redundant data that can be discarded after correlation, and it is convenient to do so prior to the assembly of the DiFX output into FITS files. We achieve this with a Python script _mergeOverSampledDiFX.py_; from each 1 MHz coarse channel, it retains 108 of the 128 frequency channels, corresponding to the non-overlapped portion of the band, and assembles the retained channels into a contiguous block. Since each card-FPGA pair provides two blocks of four adjacent ASKAP coarse channels, the result is two 4 MHz subbands, no longer oversampled, each with 432 frequency channels. If a bin configuration file, as generated by process 3.1.3, is provided for the FRB, multiple de-dispersed visibility datasets are produced corresponding to different time slices around the FRB arrival. Due to the short length of the voltage files, the correction of the dispersive delays can lead to a frequency-dependent population of the output bin datasets: at higher frequencies, the time correction can exceed the difference between the voltage file start and the FRB time at the low end of the band. In this case, some bins may not have any visibility data for some frequencies, which leads to issues when subsequently assembling FITS files. To counter this, dummy data (with zero weight) is generated for any baselines, times, and frequencies for which no visibilities were produced.
The lowest-frequency card-FPGA pair, which is guaranteed to have data due to having the latest FRB arrival time, is used as a template to provide the dummy data.

#### 3.3.3 Convert DiFX to FITS

This process collects the output of all instances of process 3.3.2 to combine visibilities and convert the data into the Flexible Image Transport System (FITS) format. This produces a single FITS visibility file containing the full 336 MHz bandwidth, assembled into a single subband. During this process, frequency averaging is undertaken to reduce the data volume by a factor of 27, leading to a final spectral resolution of 250 kHz.

### Flag RFI-affected data

Parts of the visibility data are often found to be corrupted by RFI or other systematic effects. Identification and flagging of corrupted visibilities are essential for calibration and imaging. This process performs data-flagging in three steps, as described below; a simplified code sketch of the outlier-based flagging is given after the description of the flux calibration solutions.

1. Frequency channels that are known to be always affected by persistent RFI (from satellites) are flagged for all baselines.
2. Data from each baseline are independently inspected for RFI-affected channels. Identification of corrupted channels is performed based on the average (median) power and noise (median absolute deviation) in each channel. A frequency channel is identified as corrupted if its average power (or noise) is an outlier of the distribution for all channels. Since the presence of RFI in a significant number of channels may bias the statistics, flagging is performed with a high outlier threshold in the first round. This step is repeated several times, lowering the threshold in each subsequent round.
3. The statistics (average power and noise) of all baselines are then compared. Baselines whose average power or noise is an outlier of the distribution over all baselines are identified as RFI-affected and flagged. An antenna is completely flagged if all its baselines are identified as RFI-affected.

This process is independently executed on visibilities corresponding to the flux calibrator, the polarisation calibrator and the field correlation. A flag file containing affected antennas, frequency channels, and/or baselines to be excised (on the basis of the above steps) is also generated for diagnostic purposes. Each of the calibration and imaging processes may also be provided an optional user-defined flag file if more flagging than is done automatically is required.

Figure 7: Correlation workflow DFD.

### Find flux calibration solutions

After flagging, we derive frequency-dependent complex gain solutions from the flux calibrator visibilities using three AIPS tasks: FRING (solves for delay, i.e., a phase slope linearly proportional to frequency, with the solution amplitude fixed at unity), CPASS (solves for frequency-dependent complex gain as a polynomial with frequency, normalising the average amplitude solution for each antenna to unity), and CALIB (solves for a single frequency-independent complex gain per antenna, effectively setting the flux density scale and correcting for antenna-to-antenna signal level variations). While it would in principle be possible to combine these three solutions into a single stage, this separation allows for an easier identification of outliers based on delay and/or average amplitude correction. We also apply these solutions to the flux calibrator visibilities themselves and convert to a CASA measurement set for diagnostic purposes. The solutions are finally passed to the polcal and FRB workflows for imaging.
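Returning to the RFI flagging described above, a minimal sketch of the per-baseline outlier rejection of steps 2 and 3 might look as follows; the thresholds, array shapes, and function name are illustrative rather than the pipeline's actual implementation.

```python
import numpy as np

def flag_channels(power, thresholds=(10.0, 7.0, 5.0)):
    """power: (n_chan, n_time) visibility amplitudes for one baseline.
    Returns a boolean mask of channels flagged as RFI-affected."""
    med_power = np.median(power, axis=1)                         # per-channel average power
    mad = np.median(np.abs(power - med_power[:, None]), axis=1)  # per-channel noise
    flagged = np.zeros(power.shape[0], dtype=bool)
    for t in thresholds:                  # start strict, lower the threshold each round
        for stat in (med_power, mad):     # outliers in either power or noise
            good = stat[~flagged]
            centre = np.median(good)
            spread = 1.4826 * np.median(np.abs(good - centre))   # MAD -> sigma estimate
            flagged |= np.abs(stat - centre) > t * spread
    return flagged
```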
### Image polarisation calibrator

After flagging, we apply the delay and bandpass calibration tables as derived by process 3.5 to the polarisation calibrator visibilities using AIPS. We then convert the calibrated visibilities to a CASA measurement set and create an image of a 128" square region centred on the expected position of the polarisation calibrator with a 1" resolution using the CASA routine tclean. We search the image for a single point source and fit its apparent position with JMFIT, which is passed as an output to the beamform workflow.

### FRB localisation

#### 3.7.1 Subtract RFI

Because FRB emission is often restricted to only a portion of the observing bandwidth and a significant amount of the signal is often in channels that are contaminated with RFI, we cannot simply flag channels as for the other visibility sets without losing significant signal-to-noise at best, or removing the FRB signal entirely at worst. Instead, we subtract the visibilities correlated in the RFI mode from those correlated in the gate mode, under the assumption that any RFI present is constant over the \(\sim\)50 ms surrounding the FRB. This process performs this subtraction, weighting the RFI visibilities by the ratio of the gate duration to the total duration of the RFI bins.

#### 3.7.2 Image FRB

Using the RFI-subtracted visibilities from the FRB gate correlation, we calibrate, image, and fit the apparent position of the FRB (right ascension \(\mathrm{RA}_{\mathrm{FRB}}\), declination \(\mathrm{Dec}_{\mathrm{FRB}}\)) in the same way as the polarisation calibrator (process 3.6), but creating a 1024" square image due to the larger initial positional uncertainty. The FRB's apparent position is passed both directly to beamform and to process 3.7.5.

#### 3.7.3 Image field and find sources

The field visibilities are calibrated and imaged as for the polarisation calibrator and FRB, creating a 3000" square image. We identify up to 50 point sources in this image through the CASA task findsources and fit their apparent positions with JMFIT. These positions are then passed to process 3.7.4.

#### 3.7.4 Find offset

For each of the point sources identified in the field image, we search the Rapid ASKAP Continuum Survey (RACS) catalogue (Hale et al., 2021) with Astroquery for point sources within a 5" radius of the apparent position. If multiple sources are found in the catalogue, we take the brightest, and if none are found we discard the source. For each source in the remaining ensemble, we calculate the offset between our measured position and the RACS catalogue position, and then estimate a systematic positional correction (right ascension \(\overline{\text{RA}}_{\text{offset}}\), declination \(\overline{\text{Dec}}_{\text{offset}}\)) for the field (and the FRB itself) using a weighted mean of these offsets multiplied by an empirical scaling factor (which accounts for differences in the angular resolution and frequency of our observations compared to the reference catalogue); the process is described in detail by Day et al. (2021).
#### 3.7.5 Apply offset

Finally, the mean offset and its error are added to the FRB's ASKAP position to obtain the corrected position (right ascension \(\mathrm{RA}_{\mathrm{corrected}}\), declination \(\mathrm{Dec}_{\mathrm{corrected}}\)) and its error: \[\mathrm{RA}_{\mathrm{corrected}}=\mathrm{RA}_{\mathrm{FRB}}+\overline{\mathrm{RA}}_{\mathrm{offset}}\pm\sqrt{\Delta\mathrm{RA}_{\mathrm{FRB}}^{2}+\Delta\overline{\mathrm{RA}}_{\mathrm{offset}}^{2}}, \tag{2}\] \[\mathrm{Dec}_{\mathrm{corrected}}=\mathrm{Dec}_{\mathrm{FRB}}+\overline{\mathrm{Dec}}_{\mathrm{offset}}\pm\sqrt{\Delta\mathrm{Dec}_{\mathrm{FRB}}^{2}+\Delta\overline{\mathrm{Dec}}_{\mathrm{offset}}^{2}}. \tag{3}\]

## 4 Obtaining high-time resolution data via beamforming

VCRAFT voltages can be used to reconstruct complex-valued time series of the electric field in the X and Y polarisations at the bandwidth-limited sample interval of \((336\,\text{MHz})^{-1}\approx 3\,\text{ns}\), coherently summed across antennas and coherently dedispersed to eliminate dispersion and associated smearing. This allows for construction of the Stokes parameters I, Q, U, V and measurements of the polarisation properties at high time resolution and high S/N for FRBs detected and localised by ASKAP. Because we have access to the electric fields in X and Y directly, we can also construct arbitrarily-shaped dynamic spectra in I, Q, U, and V with freely-chosen time and frequency resolutions \(\Delta t\) and \(\Delta f\), constrained only by \(\Delta t\Delta f\geq 1\). These dynamic spectra allow for polarimetric measurements across frequency and time, including the rotation measure (RM) and polarisation fractions. In order to obtain these data products, the following operations must be performed on the voltages:

1. Beamforming: the application of per-antenna time delays to account for the difference in signal arrival times due to the geometry of antennas and hardware signal propagation delays (process 4.1.2)
2. PFB inversion: undoing the coarse channelisation performed by hardware before the voltages are recorded to obtain a single complex fine spectrum per polarisation per antenna (process 4.1.2)
3. Calibration: the application of per-antenna bandpass calibration solutions, obtained during burst localisation, to the fine spectra (process 4.1.2)
4. Summation: coherent summation of fine spectra across antennas to obtain a single fine spectrum per polarisation (process 4.1.3)
5. Derippling: removing systematic rippling in the fine spectra (processes 4.1.4 and 4.1.5)
6. Coherent dedispersion (process 4.1.6)
7. Inverse Fourier transform: obtain complex-valued time series at \((336\,\mathrm{MHz})^{-1}\approx 3\,\mathrm{ns}\) resolution, in the X and Y linear polarisation bases, via inverse Fourier transform of the fine spectra (process 4.1.7)
8. Construct Stokes parameters and dynamic spectra (process 4.1.8)

### Beamform

The beamform workflow (Figure 8) takes in a set of voltages, a localised source position, flux calibration solutions, and optionally polarisation calibration solutions, and performs the operations listed above to produce a HTR data set, which includes: complex \(\sim\)3 ns-resolution time series in X and Y; Stokes I, Q, U, and V time series at the same time resolution; and arbitrarily-shaped I, Q, U, and V dynamic spectra (typically with \(\Delta t=1\,\mu\mathrm{s}\) and \(\Delta f=1\,\mathrm{MHz}\)). The beamform workflow is invoked within the polcal and FRB workflows, in both cases after the respective source has been localised. The position provided to the FRB instance of beamform is the apparent position, i.e.
the position that is fit from the FRB image without astrometric correction. The method for PFB inversion has also been described by Morrison et al. (2020), and the full method for obtaining high-time resolution FRB data by Cho et al. (2020). We describe these methods again here to reflect changes to the methods and describe the specific implementations applied in CELEBI.

#### 4.1.1 Create calcfiles

This process uses difxcalc to calculate the antenna-dependent geometric delays used by process 4.1.2 to align each antenna's datastream in time, given the previously-determined FRB position.

#### 4.1.2 Do beamform

This process prepares a fine spectrum (a spectrum of fine channels across the entire observing bandwidth) from voltages in each antenna for beamforming. This involves the application of antenna-dependent time delays, PFB inversion, and application of flux calibration solutions. An instance of this process is run for each unique polarisation-antenna pair, which we index with \(p\in\{\mathrm{X},\mathrm{Y}\}\) and \(a\in[1\,..\,n_{\mathrm{ant}}]\) respectively, where \(n_{\mathrm{ant}}\) is the number of antennas available. The initial duration of the data across all antennas is equal, but because we are applying offsets to each antenna's data we must slightly reduce the duration of data loaded so that the data for all antennas occupy the same time range after applying the delays. We choose a number of samples \(n_{\mathrm{samp}}\) that maximises the duration loaded while satisfying this condition. In order to coherently sum signals across antennas, we must apply antenna-dependent time delays \(\Delta t_{a}\) to the data. These have two components: a time-dependent geometric delay \(G_{a}(t)\) that accounts for the differing path lengths of a signal between antennas; and a fixed delay \(F_{a}\), which is measured during normal ASKAP operations and accounts for delays in hardware (mostly due to signal propagation delay in cables of different lengths for each antenna). These delays are provided by process 4.1.1. The geometric delay changes with time to account for the Earth's rotation changing the difference in arrival times over the duration of the 3.1-second voltage dump. difxcalc fits a polynomial to model the required geometric delay in each antenna as a function of time. Due to the short (\(\sim\) seconds) duration of data included in processing FRB voltages, the delay applied is well-approximated as being linear in time. \(G_{a}(t)\) is evaluated via the interferometer model's polynomial at \(t_{\mathrm{start}}\) and \(t_{\mathrm{end}}\), corresponding to the start and end times of the data, and is linearly interpolated between these values. The coarse channelised voltages are loaded from all available VCRAFT files to obtain a complex time series for each coarse channel index \(c\in[1\,..\,n_{\mathrm{chan}}]\), where in general the number of channels \(n_{\mathrm{chan}}=336\). When reading the data from disk, we load \(n_{\mathrm{samp}}\) samples, offset from the beginning of the data by a number of samples equivalent to \(F_{a}\). We then Fourier transform the complex time series to obtain a fine spectrum \(s_{p,a,c}(f)\). Each of these spectra is of an oversampled coarse channel with central frequency \(f_{c}\). The geometric delay \(G_{a}(t)\) is applied to \(s_{p,a,c}(f)\) to give an aligned spectrum: \[s_{p,a,c}^{\mathrm{align}}(f)=s_{p,a,c}(f)e^{2i\pi f_{c}G_{a}(t)}.
\tag{4}\] By truncating each channel to remove the tapered regions and concatenating the flat regions of the fine spectra of the channels, we obtain a fine spectrum across the full bandwidth with a constant frequency response. First the truncation: \[s_{p,a,c}^{\mathrm{trunc}}(f)=\begin{cases}0,&f<f_{c}-\frac{B_{C}}{2}\\ s_{p,a,c}^{\mathrm{align}}(f),&f_{c}-\frac{B_{C}}{2}\leq f<f_{c}+\frac{B_{C}}{2}\\ 0,&f_{c}+\frac{B_{C}}{2}\leq f\end{cases}, \tag{5}\] and then the concatenation: \[S_{p,a}(f)=\sum_{c=1}^{n_{\mathrm{chan}}}s_{p,a,c}^{\mathrm{trunc}}(f). \tag{6}\] Then we apply the flux calibration: \[S_{p,a}^{\mathrm{cal}}(f)=P_{\mathrm{flux};a}(f)S_{p,a}(f), \tag{7}\] where \(P_{\mathrm{flux};a}(f)\) is an antenna-dependent phasor applying the flux calibration solutions as derived by process 3.5.

#### 4.1.3 Sum antennas

This process takes in the calibrated, beamformed fine spectra for each polarisation in each antenna output by process 4.1.2 and coherently sums these spectra to produce a single spectrum per polarisation: \[S_{p}(f)=\sum_{a=1}^{n_{\mathrm{ant}}}S_{p,a}^{\mathrm{cal}}(f). \tag{8}\]

#### 4.1.4 Generate deripple coefficients

The design of ASKAP's PFB leads to the recovered fine spectra having a non-uniform, rippled frequency response (see Figure 1). However, the exact shape of this rippling is predictable and it can be mitigated by dividing by a set of deripple coefficients. The deripple coefficients are the inverse of the coarse channel bandpass. This is determined by the FFT of the 24,576 ASKAP PFB coefficients \(C_{\mathrm{PFB}}\), themselves a sinc function which is smoothed at the edges to reduce artefacts from the finite size of the filter. Fluctuations in the response are within 0.2 dB over the nominal 1 MHz coarse channel bandwidth (Tuthill et al., 2015). These coefficients are constant within the ASKAP system, and identical for each coarse channel and antenna. They have been generated once, and are hard-coded within CELEBI. The derippling coefficients for each channel are \[C_{\mathrm{derip}}(\delta f)=\frac{1}{|\mathcal{F}(C_{\mathrm{PFB}})(\delta f)|}, \tag{9}\] where \(\delta f\) is the frequency relative to the centre of the coarse channel. Because the exact number of samples in the fine spectra (\(n_{\mathrm{samp}}\)) differs between input datasets, we linearly interpolate the denominator of this fraction to match the number of samples in each channel's truncated fine spectrum, i.e. \(\lfloor n_{\mathrm{samp}}(B_{C}/B_{OS})\rfloor\).

#### 4.1.5 Apply deripple coefficients

Because the deripple coefficients \(C_{\mathrm{derip}}(\delta f)\) are identical for each coarse channel, we apply them by iterating over the central frequencies \(f_{c}\) of each of the \(n_{\mathrm{chan}}\) coarse channels: \[S_{p}^{\mathrm{derip}}(f_{c}+\delta f)=S_{p}(f_{c}+\delta f)C_{\mathrm{derip}}(\delta f). \tag{10}\] This produces fine spectra \(S_{p}^{\mathrm{derip}}(f)\) with uniform frequency responses.
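The chain of operations in processes 4.1.2 to 4.1.5 can be condensed into a short sketch. The snippet below is illustrative only: the geometric-delay phase rotation of Eq. (4) and the derivation of the calibration phasors are assumed to have been performed upstream, and the derippling uses a simplified interpolation of the inverted PFB response.

```python
import numpy as np

def beamform_sum(aligned, flux_cal, pfb_taps, n_keep):
    """aligned: list over antennas of (n_chan, n_fine) aligned coarse-channel
    spectra s_align; flux_cal: (n_ant, n_chan * n_keep) phasors P_flux;
    pfb_taps: time-domain PFB filter coefficients; n_keep: fine samples
    kept per 1 MHz channel, ~ floor(n_fine * 27 / 32)."""
    summed = 0.0
    for ant, s in enumerate(aligned):
        lo = (s.shape[1] - n_keep) // 2
        trunc = s[:, lo:lo + n_keep]             # Eq. (5): drop the tapered edges
        S_ant = trunc.reshape(-1)                # Eq. (6): concatenate channels
        summed = summed + flux_cal[ant] * S_ant  # Eqs. (7)-(8): calibrate and sum
    # Eqs. (9)-(10): invert the PFB channel response, interpolated to
    # n_keep samples and tiled over every coarse channel
    resp = np.abs(np.fft.fftshift(np.fft.fft(pfb_taps)))
    grid = np.linspace(0, resp.size - 1, n_keep)
    derip = 1.0 / np.interp(grid, np.arange(resp.size), resp)
    n_chan = aligned[0].shape[0]
    return summed * np.tile(derip, n_chan)       # derippled fine spectrum S_derip
```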
#### 4.1.6 Coherently dedisperse

Dispersion is a well-modelled process, and one that is straightforward to account for in FRB data. Having access to the complex spectra of the X and Y polarisations enables coherent dedispersion, instead of imperfect incoherent dedispersion. Coherent dedispersion is able to perfectly compensate for and remove the frequency-dependent time delay introduced by the ionised interstellar medium (assuming cold plasma dispersion) by acting on the voltage data that samples the electromagnetic wave in each of the two linear polarisations. This is because dispersion, as a physical process, effectively acts as a frequency-dependent rotation of phase in the spectral domain that manifests as a frequency-dependent time delay in the temporal domain. Therefore, with access to the spectral domain of the radiation being dispersed (the FRB signal), the phases can be de-rotated to obtain the signal as it would have been without any dispersion. Assuming cold plasma dispersion, the transfer function for coherent dedispersion to a dispersion measure DM is \[H(f;\,\mathrm{DM})=\exp\left(2i\pi k_{\mathrm{DM}}\mathrm{DM}\frac{(f-f_{0})^{2}}{ff_{0}^{2}}\right) \tag{11}\] (Lorimer and Kramer, 2005), where \(f_{0}\) is a reference frequency, which we choose as the minimum frequency of the observing bandwidth. We apply this transfer function to the spectrum of each polarisation to coherently dedisperse them: \[S_{p,\mathrm{DM}}(f)=H(f;\,\mathrm{DM})S_{p}^{\mathrm{derip}}(f). \tag{12}\]

#### 4.1.7 Inverse fast Fourier transform

This process applies the inverse fast Fourier transform to the dedispersed fine spectra to obtain the complex electric field in each polarisation in the time domain: \[E_{p}(t)=\mathcal{F}^{-1}\left(S_{p,\mathrm{DM}}(f)\right). \tag{13}\]

Figure 8: Beamform workflow DFD.

#### 4.1.8 Calculate Stokes parameters

We now calculate time series for the Stokes parameters: \[I(t)=|E_{X}(t)|^{2}+|E_{Y}(t)|^{2}, \tag{14}\] \[Q(t)=|E_{X}(t)|^{2}-|E_{Y}(t)|^{2}, \tag{15}\] \[U(t)=2\,\mathrm{Re}(E_{X}^{*}(t)E_{Y}(t)), \tag{16}\] \[V(t)=2\,\mathrm{Im}(E_{X}^{*}(t)E_{Y}(t)). \tag{17}\] The electric field time series can also be used to generate dynamic spectra with frequency resolution \(\Delta f\) and time resolution \(\Delta t\) such that \(\Delta f\Delta t=1\). Typically, this is done with \(\Delta f=1\,\mathrm{MHz}\implies\Delta t=1\,\mu\mathrm{s}\), but is in general only constrained by \(\Delta t=N_{\mathrm{chan}}\delta t\), where \(N_{\mathrm{chan}}\) is a positive integer representing the number of channels desired in the dynamic spectra and \(\delta t=(336\,\mathrm{MHz})^{-1}\approx 3\,\mathrm{ns}\) is the bandwidth-limited time resolution. Once \(\Delta f\) and \(\Delta t\) are selected, the dynamic spectra in each polarisation are generated by taking the discrete Fourier transform of \(N_{\mathrm{chan}}\) samples at a time. This process is demonstrated visually in Figure 9, and gives the dynamic spectra \(E_{X}(t,f)\) and \(E_{Y}(t,f)\). The Stokes dynamic spectra are then calculated as before in Equations 14-17. If polarisation calibration solutions as determined by process 4.2 have been provided, we also apply these to the Stokes U and V dynamic spectra: \[U^{\prime}(t,f)=U(t,f)\cos(\Phi(f))-V(t,f)\sin(\Phi(f)), \tag{18}\] \[V^{\prime}(t,f)=U(t,f)\sin(\Phi(f))+V(t,f)\cos(\Phi(f)). \tag{19}\]
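Processes 4.1.6 to 4.1.8 amount to a phase rotation, an inverse FFT, and a channelisation of the resulting time series. A condensed, illustrative sketch is given below; the signatures are hypothetical, and `phi` stands for the correction angle \(\Phi(f)\) derived in the following subsection.

```python
import numpy as np

KDM = 1.0 / 2.41e-4   # dispersion constant, MHz^2 cm^3 pc^-1 s

def dedisperse(S, f_mhz, dm, f0_mhz):
    """Eqs. (11)-(12): coherent dedispersion of a fine spectrum S sampled
    at frequencies f_mhz; the 1e6 factor converts s*MHz to cycles."""
    phase = KDM * dm * (f_mhz - f0_mhz) ** 2 / (f_mhz * f0_mhz**2) * 1e6
    return S * np.exp(2j * np.pi * phase)

def stokes_dynspec(Ex, Ey, n_chan, phi=None):
    """Eqs. (13)-(19): Stokes dynamic spectra from the ~3 ns complex X/Y
    time series, with optional U/V leakage correction phi = Phi(f)."""
    n_t = Ex.size // n_chan
    X = np.fft.fft(Ex[:n_t * n_chan].reshape(n_t, n_chan), axis=1)
    Y = np.fft.fft(Ey[:n_t * n_chan].reshape(n_t, n_chan), axis=1)
    I = np.abs(X) ** 2 + np.abs(Y) ** 2   # Eq. (14)
    Q = np.abs(X) ** 2 - np.abs(Y) ** 2   # Eq. (15)
    U = 2 * np.real(np.conj(X) * Y)       # Eq. (16)
    V = 2 * np.imag(np.conj(X) * Y)       # Eq. (17)
    if phi is not None:                   # Eqs. (18)-(19)
        U, V = (U * np.cos(phi) - V * np.sin(phi),
                U * np.sin(phi) + V * np.cos(phi))
    return I, Q, U, V
```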
### Derive polarisation calibration solutions

In order to correct for instrumental frequency-dependent leakage between Stokes \(U\) and \(V\), we take the Stokes dynamic spectra produced by process 4.1.8 for the polarisation calibrator data and derive a correction angle to apply to the FRB data using the method described by Prochaska et al. (2019):

\[\Phi(f)=\Delta\tau f+\Phi_{0}, \tag{20}\]

where \(\Delta\tau\) and \(\Phi_{0}\) are leakage terms representing a time and phase offset respectively. We derive models for the linear and circular polarisation ratios \(L(f)/I(f)\) (where \(L(f)=\sqrt{Q(f)^{2}+U(f)^{2}}\)) and \(V(f)/I(f)\) of each polarisation calibrator as second-order polynomials in \(f\) by fitting spectra obtained with the Murriyang radio telescope.

### Plot FRB high time resolution data

The final process of the high time resolution processing is to plot the data for the FRB. We plot the Stokes I time series averaged to 1 ms time resolution (e.g. Figure 10) and each of the Stokes dynamic spectra with a range of time averaging values (e.g. Figure 11).

## 5 Summary

The bringing together of CRAFT's voltage processing software into CELEBI has led to several significant improvements to the software overall. Most importantly, voltage processing is now almost entirely automated. This has reduced the turnaround between FRB detection and obtaining the final data products (high-precision localisation and high-time resolution data) from a week or more of processing requiring close human oversight and manual execution, to less than a day with very little direct human supervision. The precise time required for processing depends on the resources available on the supercomputing cluster CELEBI is being run on (we have largely been using the OzStar supercomputer). Also, processing can be impeded by unexpected irregularities in the data or observations that CELEBI is not yet robust to. Nextflow's method of process execution, where each instance of a process is executed in its own directory, combined with its detailed logging and reporting, makes diagnosis of problems quite straightforward and has greatly improved reproducibility in processing. Processing CRAFT voltages is also now much more accessible than it was previously, as the user-end interactions are much simpler and require less technical knowledge. A primary motivation for the development of CELEBI was the forthcoming CRACO upgrade for ASKAP's real-time detection system (Bannister et al., in prep). CRACO is expected to increase the rate at which ASKAP detects FRBs from of order \(\sim\)1 per month to of order \(\sim\)1 per day.

Figure 9: Visual representation of the process of converting a complex time series into a dynamic spectrum. Top panel: a simulated complex time series with the real component in black and the imaginary component in blue. Green lines separate sets of samples into bins of width \(\Delta t\). Middle panels: the complex Fourier transforms of each of the bins, again with the real component in black and imaginary component in blue. Bottom panel: The amplitude of the dynamic spectrum created by plotting each bin’s spectrum vertically, with lighter cells representing higher values.

Figure 10: CELEBI output plot of Stokes I time series for FRB210117, averaged to a time resolution of 1 ms.

Figure 11: CELEBI output plot of Stokes dynamic spectra for FRB210117. Each row shows dynamic spectra for a Stokes parameter, labelled on the right, with the time averaging length labelled at the top of each column. The top row is the frequency-integrated pulse profile for each of the Stokes parameters. The rightmost column is the spectrum at the peak time index in the largest time averaging length.

This much higher
detection rate will require automated processing and logging, and standardised data outputs, all of which are now provided via CELEBI. While the primary functionality of CELEBI is now complete, development is ongoing. The processing of each FRB still requires a degree of human oversight, and processing errors are handled on a case-by-case basis. The robustness of CELEBI to issues such as data corruption, unusual antenna behaviour, calibration errors, and unexpected RFI environments is continually improving, but this can only occur as the issues arise. There are also remaining improvements to be made to optimise the quality of the pipeline outputs. The current method of imaging an FRB and subtracting RFI could be improved from the current binning (process 3.1.3) by instead correlating of order 100 bins each 1 ms in duration, and using a matched filter post-correlation to image the burst while removing RFI. This would simplify the pipeline structure by making the RFI correlation branch obsolete, give more flexibility in imaging the FRB, and maximise the S/N of the FRB in its image, therefore minimising its positional uncertainty. The high-time resolution data products are currently output as Numpy arrays. Incorporating conversion to other standard formats, such as PSRCHIVE archives or FITS, would be convenient for using the data products with already-existing analysis software. CELEBI also does not currently include any functionality for measuring FRB RMs, a burst property which is considered a standard measurement when possible, i.e. when polarimetric data is available. ## Acknowledgements ATD and KG acknowledge support through ARC Discovery Project DP200102243. CWJ and MG acknowledge support from the Australian Government through the Australian Research Council's Discovery Projects funding scheme (project DP210102103). SB is supported by a Dutch Research Council (NWO) Veni Fellowship (VLVeni.212.058). RMS acknowledges support through Australian Research Council Future Fellowship FT190100155 and Discovery Project DP220102305. This work was performed on the OzSTAR national facility at Swinburne University of Technology. The OzSTAR programme receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. CSIRO's ASKAP radio telescope is part of the Australia Telescope National Facility ([https://ror.org/05qajvd42](https://ror.org/05qajvd42)). Operation of ASKAP is funded by the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Research Centre. Establishment of ASKAP, Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund.
2309.04902
Transformers in Small Object Detection: A Benchmark and Survey of State-of-the-Art
Transformers have rapidly gained popularity in computer vision, especially in the field of object recognition and detection. Upon examining the outcomes of state-of-the-art object detection methods, we noticed that transformers consistently outperformed well-established CNN-based detectors in almost every video or image dataset. While transformer-based approaches remain at the forefront of small object detection (SOD) techniques, this paper aims to explore the performance benefits offered by such extensive networks and identify potential reasons for their SOD superiority. Small objects have been identified as one of the most challenging object types in detection frameworks due to their low visibility. We aim to investigate potential strategies that could enhance transformers' performance in SOD. This survey presents a taxonomy of over 60 research studies on developed transformers for the task of SOD, spanning the years 2020 to 2023. These studies encompass a variety of detection applications, including small object detection in generic images, aerial images, medical images, active millimeter images, underwater images, and videos. We also compile and present a list of 12 large-scale datasets suitable for SOD that were overlooked in previous studies and compare the performance of the reviewed studies using popular metrics such as mean Average Precision (mAP), Frames Per Second (FPS), number of parameters, and more. Researchers can keep track of newer studies on our web page, which is available at \url{https://github.com/arekavandi/Transformer-SOD}.
Aref Miri Rekavandi, Shima Rashidi, Farid Boussaid, Stephen Hoefs, Emre Akbas, Mohammed bennamoun
2023-09-10T00:08:29Z
http://arxiv.org/abs/2309.04902v1
# Transformers in Small Object Detection: A Benchmark and Survey of State-of-the-Art

###### Abstract

Transformers have rapidly gained popularity in computer vision, especially in the field of object recognition and detection. Upon examining the outcomes of state-of-the-art object detection methods, we noticed that transformers consistently outperformed well-established CNN-based detectors in almost every video or image dataset. While transformer-based approaches remain at the forefront of small object detection (SOD) techniques, this paper aims to explore the performance benefits offered by such extensive networks and identify potential reasons for their SOD superiority. Small objects have been identified as one of the most challenging object types in detection frameworks due to their low visibility. We aim to investigate potential strategies that could enhance transformers' performance in SOD. This survey presents a taxonomy of over 60 research studies on developed transformers for the task of SOD, spanning the years 2020 to 2023. These studies encompass a variety of detection applications, including small object detection in generic images, aerial images, medical images, active millimeter images, underwater images, and videos. We also compile and present a list of 12 large-scale datasets suitable for SOD that were overlooked in previous studies and compare the performance of the reviewed studies using popular metrics such as mean Average Precision (mAP), Frames Per Second (FPS), number of parameters, and more. Researchers can keep track of newer studies on our web page, which is available at: [https://github.com/arekavandi/Transformer-SOD](https://github.com/arekavandi/Transformer-SOD).

Object recognition, small object detection, vision transformers, object localization, deep learning, attention, MS COCO dataset.

## 1 Introduction

Small Object Detection (SOD) has been recognized as a significant challenge for State-Of-The-Art (SOTA) object detection methods [1]. The term "small object" refers to objects that occupy a small fraction of the input image. For example, in the widely used MS COCO dataset [2], it denotes objects whose bounding box is \(32\times 32\) pixels or less, in a typical \(480\times 640\) image (Figure 1). Other datasets have their own definitions, e.g., objects that occupy \(10\%\) of the image. Small objects are often missed or detected with incorrectly localized bounding boxes, and sometimes with incorrect labels. The main reason for the deficient localization in SOD stems from the limited information provided in the input image or video frame, compounded by the subsequent spatial degradation experienced as they pass through multiple layers in deep networks. Since small objects frequently appear in various application domains, such as pedestrian detection [3], medical image analysis [4], face recognition [5], traffic sign detection [6], traffic light detection [7], ship detection [8], and Synthetic Aperture Radar (SAR)-based object detection [9], it is worth examining the performance of modern deep learning SOD techniques. In this paper, we compare transformer-based detectors with Convolutional Neural Network (CNN)-based detectors in terms of their small object detection performance. Where transformers outperform CNNs by a clear margin, we then attempt to uncover the reasons behind their strong performance. One immediate explanation could be that transformers model the interactions between pairwise locations in the input image.
This is effectively a way of encoding the context. And, it is well established that context is a major source of information to detect and recognize small objects both in humans and computational models [10]. However, this might not be the only factor to explain transformers' success. Specifically, we aim to analyze this success along several dimensions including object representation, fast attention for high-resolution or multi-scale feature maps, fully transformer-based detection, architecture and block modification, auxiliary techniques, improved feature representation, and spatio-temporal information. Furthermore, we point out approaches that could potentially enhance the performance of transformers for SOD.

Fig. 1: Examples of small size objects from the MS COCO dataset [2]. The objects are highlighted with color segments.

In our previous work, we surveyed numerous strategies employed in deep learning to enhance the performance of small object detection in optical images and videos up to the year 2022 [11]. We showed that beyond the adaptation of newer deep learning structures such as transformers, prevalent approaches include data augmentation, super-resolution, multi-scale feature learning, context learning, attention-based learning, region proposal, loss function regularization, leveraging auxiliary tasks, and spatiotemporal feature aggregation. Additionally, we observed that transformers are among the leading methods in localizing small objects across most datasets. However, given that [11] predominantly evaluated over 160 papers focusing on CNN-based networks, an in-depth exploration of transformer-centric methods was not undertaken. Recognizing the pace of growth and exploration in the field, there is a timely window now to delve into the current transformer models geared towards small object detection. In this paper, our goal is to comprehensively understand the factors contributing to the impressive performance of transformers when applied to small object detection, and their distinction from strategies used for generic object detection. To lay the groundwork, we first highlight renowned transformer-based object detectors for SOD, juxtaposing their advancements against established CNN-based methodologies. Since 2017, the field has seen the publication of numerous review articles. An extensive discussion and listing of these reviews are presented in our previous survey [11]. Another recent survey article [12] also focuses mostly on CNN-based techniques. The narrative of this current survey stands distinct from its predecessors. Our focus in this paper narrows down specifically to transformers -- an aspect not explored previously -- positioning them as the dominant network architecture for image and video SOD. This entails a unique taxonomy tailored to this innovative architecture, consciously sidelining CNN-based methods. Given the novelty and intricacy of this topic, our review prioritizes works primarily brought forth post-2022. Additionally, we shed light on newer datasets employed for the localization and detection of small objects across a broader spectrum of applications. The studies examined in this survey primarily presented methods tailored for small object localization and classification or indirectly tackled SOD challenges. What drove our analysis were the detection outcomes specified for small objects in these papers.
However, earlier research that noted SOD outcomes but either demonstrated subpar performance or overlooked SOD-specific parameters in its development approach was not considered for inclusion in this review. In this survey, we assume the reader is already familiar with generic object detection techniques, their architectures, and relevant performance measures. If the reader requires foundational insight into these areas, we refer the reader to our previous work [11]. The structure of this paper is as follows: Section 2 offers an overview of CNN-based object detectors, transformers, and their components, including the encoder and decoder. This section also touches upon two initial iterations of transformer-based object detectors: DETR and ViT-FRCNN. In Section 3, we present a classification for transformer-based SOD techniques and delve into each category comprehensively. Section 4 showcases the different datasets used for SOD and evaluates them across a range of applications. In Section 5, we analyze and contrast these outcomes with earlier results derived from CNN networks. The paper wraps up with conclusions in Section 6.

## 2 Background

Object detection, and in particular SOD, has long relied on CNN-based deep learning models. Several single-stage and two-stage detectors have emerged over time, such as You Only Look Once (YOLO) variants [13, 14, 15, 16, 17, 18, 19], Single Shot multi-box Detector (SSD) [20], RetinaNet [21], Spatial Pyramid Pooling Network (SPP-Net) [22], Fast R-CNN [23], Faster R-CNN [24], Region-Based Fully Convolutional Networks (R-FCN) [25], Mask R-CNN [26], Feature Pyramid Networks (FPN) [27], cascade R-CNN [28], and Libra R-CNN [29]. Various strategies have been used in conjunction with these techniques to improve their detection performance for SOD, with multi-scale feature learning being the most commonly used approach. The transformer model was first introduced in [30] as a novel technique for machine translation. This model aimed to advance beyond traditional recurrent networks and CNNs by introducing a new network architecture solely based on attention mechanisms, thereby eliminating the need for recurrence and convolutions. The Transformer model consists of two main modules: the encoder and the decoder. Figure 2 provides a visual representation of the processing blocks within each module.

Fig. 2: Transformer architecture containing encoder (left module) and decoder (right module) used in sequence to sequence translation (figure from [30]).

The description of terminologies commonly used in Transformers for computer vision is provided in Table I for readers who are not familiar with the topic. Within the context of SOD, the encoder module ingests input tokens, which can refer to image patches or video clips, and employs various feature embedding approaches, such as utilizing pre-trained CNNs to extract suitable representations. The positional encoding block embeds positional information into the feature representations of each token. Positional encoding has demonstrated significant performance improvements in various applications. The encoded representations are then passed through a Multi-Head Attention block, which is parameterized with three main matrices, namely \(\textbf{W}_{q}\in\mathbf{R}^{d_{q}\times d}\), \(\textbf{W}_{k}\in\mathbf{R}^{d_{k}\times d}\), and \(\textbf{W}_{v}\in\mathbf{R}^{d_{v}\times d}\) to obtain query, key and value vectors, shown by **q**, **k**, **v**, respectively.
In other words, \[\textbf{q}_{i}=\textbf{W}_{q}\textbf{x}_{i},\quad\textbf{k}_{i}=\textbf{W}_{k} \textbf{x}_{i},\quad\textbf{v}_{i}=\textbf{W}_{v}\textbf{x}_{i},\quad i=1, \cdots,T, \tag{1}\] where \(T\) is the total number of tokens and each token is denoted by **x**. The output of the Multi-Head Attention block is given by \[\text{MH Attention}(\textbf{Q},\textbf{K},\textbf{V})=\text{ Concat}(\text{head}_{1},\cdots,\text{head}_{h})\textbf{W}^{O}. \tag{2}\] where \(\textbf{W}^{O}\in\mathbf{R}^{hd_{v}\times d}\), \(d_{k}=d_{q}\), and \[\text{head}_{h}=\text{Attention}(\textbf{Q}_{h},\textbf{K}_{h},\textbf{V}_{h })=\text{Softmax}(\frac{\textbf{K}_{h}^{\top}\textbf{Q}_{h}}{\sqrt{d_{k}}}) \textbf{V}_{h}^{\top}. \tag{3}\] Finally, the results obtained from the previous steps are combined with a skip connection and a normalization block. These vectors are then individually passed through a fully connected layer, applying an activation function to introduce non-linearity into the network. The parameters of this block are shared across all vectors. This process is repeated for a total of \(N\) times, corresponding to the number of layers in the deep network. In the decoder module, a similar process is applied using the vectors generated in the encoder, while also consuming the previously generated predictions/outputs as additional input. Ultimately, the output probabilities for the possible output classes are computed. \begin{table} \begin{tabular}{p{142.3pt} p{284.5pt}} \hline Full Term & Description \\ \hline Encoder & Encoder in transformers consists of multiple layers of self-attention modules and feed-forward neural networks to extract local and global semantic information from the input data. \\ Decoder & Decoder module is responsible to generate the output (either sequence or independent) based on the concept of self and cross attention applied to the object queries and encoder’s output. \\ Token & Token refers to the most basic unit of data input into the transformers. It can be image pixels, patches, or video clips. \\ Multi-Head Attention & Multi-Head Attention is a mechanism in transformers that enhances the learning capacity and representational power of self-attention. It divides the input into multiple subspaces and performs attention computations independently on each subspace, known as attention heads. \\ Spatial Attention & Spatial attention in transformers refers to a type of attention mechanism that attends to the spatial positions of tokens within a sequence. It allows the model to focus on the relative positions of tokens and capture spatial relationships. \\ Channel Attention & Channel attention in transformers refers to an attention mechanism that operates across different channels or feature dimensions of the input. It allows the model to dynamically adjust the importance of different channels, enhancing the representation and modeling of channel-specific information in tasks \\ Object Query & It refers to a learned vector representation that is used to query and attend to specific objects or entities within a scene. \\ Positional Embedding & It refers to a learned representation that encodes the positional information of tokens in an input sequence, enabling the model to capture sequential dependencies. \\ \hline \end{tabular} \end{table} TABLE I: A list of terminologies used in this paper with their meanings. Fig. 3: Top: DETR (figure from [31]). Bottom: ViT-FRCNN (figure from [32]). 
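To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of multi-head attention, written with tokens as the columns of **x** as in the per-query form of Eq. (4) below. The toy dimensions are our own choices, and real implementations add positional encodings, residual connections, normalisation, and the feed-forward sub-blocks described above.

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, w_q, w_k, w_v, w_o):
    """Eqs. (1)-(3): x holds T tokens as columns (shape d x T).
    w_q, w_k, w_v have shape (heads, d_k, d); w_o maps the
    concatenated heads back to dimension d (the W^O projection)."""
    d_k = w_q.shape[1]
    heads = []
    for wq, wk, wv in zip(w_q, w_k, w_v):
        q, k, v = wq @ x, wk @ x, wv @ x             # per-head projections
        a = softmax(k.T @ q / np.sqrt(d_k), axis=0)  # attention weights A
        heads.append(v @ a)                          # weighted sum of values
    return w_o @ np.vstack(heads)                    # output projection

# Toy usage: 10 tokens of dimension 16, 2 heads with d_k = d_v = 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 10))
w_q, w_k, w_v = (rng.normal(size=(2, 8, 16)) for _ in range(3))
w_o = rng.normal(size=(16, 16))
out = multi_head_attention(x, w_q, w_k, w_v, w_o)    # shape (16, 10)
```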
Attention is achieved through the dot product operation between the key and query matrices, Eq. (3), which computes weights for the linear combination of the matrix \(\mathbf{V}\). An alternative representation for the transformer is also given by

\[\text{MH Attention}^{i}=\sum_{h}\mathbf{W}_{h}^{O}\big{[}\sum_{k=1}^{T}A_{hik}\mathbf{W}_{v}\mathbf{x}_{k}\big{]},\quad i=1,\cdots,T, \tag{4}\]

where \(\mathbf{W}_{h}^{O}\) is a submatrix of \(\mathbf{W}^{O}\) that corresponds to the \(h\)-th head, and \(A_{hik}\) is the attention weight in the \(h\)-th head which is the element in the \(i\)-th row (corresponding to the \(i\)-th query) and \(k\)-th column (corresponding to the \(k\)-th key) of the matrix \(\text{Softmax}(\frac{\mathbf{K}_{h}^{\top}\mathbf{Q}_{h}}{\sqrt{d_{k}}})\). Dosovitskiy _et al._ were the first to utilize the architecture of transformers in computer vision tasks, including image recognition [33]. The remarkable performance exhibited by transformers in various vision tasks has paved the way for their application in the domain of object detection research. Two pioneering works in this area are the DEtection TRansformer (DETR) [31] (Figure 3, Top) and ViT-FRCNN [32] (Figure 3, Bottom). For ViT-FRCNN, the best outcomes were achieved when the token size was set to \(16\times 16\) and all intermediate transformer states were concatenated with the final transformer layer. Additionally, both detectors rely on CNNs at different stages: in DETR as the backbone for feature extraction, and in ViT-FRCNN for the detection head. To improve the results of small object detection, it is crucial to retain the image patches as small as possible to preserve spatial resolution, which consequently increases the computational costs. To address these limitations and challenges, further research has been conducted, which will be discussed in detail in the following sections.

## 3 Transformers For Small Object Detection

In this section, we discuss transformer-based networks for SOD. A taxonomy of small object detectors is shown in Figure 4. We show that existing detectors based on novel transformers can be analyzed through one or a few of the following perspectives: object representation, fast attention for high-resolution or multi-scale feature maps, fully transformer-based detection, architecture and block modification, auxiliary techniques, improved feature representation, and spatio-temporal information. In the following subsections, each of these categories is discussed in detail separately.

### _Object Representation_

Various object representation techniques have been adopted in object detection. The object of interest can be represented by rectangular boxes [23], points such as center points [36] and point sets [37], probabilistic objects [38], and keypoints [39]. Each object representation technique has its own strengths and weaknesses, with respect to the required annotation formats and small object representation. The pursuit of finding the optimal representation technique, while keeping all the strengths of the existing representations, began with RelationNet++ [35]. This approach bridges various heterogeneous visual representations and combines their strengths via a module called Bridging Visual Representations (BVR). BVR operates efficiently without disrupting the overall inference process employed by the main representations, leveraging novel techniques of key sampling and shared location embedding.
More importantly, BVR relies on an attention module that designates one representation form as the "master representation" (or query), while the other representations are designated as "auxiliary" representations (or keys). The BVR block is shown in Figure 5, where it enhances the feature representation of the anchor box by seamlessly integrating center and corner points (keys) into the anchor-based (query) object detection methodology. Different object representations are also shown in Figure 5. CenterNet++ [40] was proposed as a novel bottom-up approach. Instead of estimating all the object's parameters at once, CenterNet++ strategically identifies individual components of the object separately, i.e., top-left, bottom-right, and center keypoints. Then, post-processing methodologies are adopted to cluster points associated with the same objects. This technique has demonstrated a superior recall rate in SOD compared to top-down approaches that estimate entire objects as a whole.

Fig. 5: BVR uses different representations, i.e., corner and center points, to enhance features for anchor-based detection (left figure). Object representations are shown for another image (cat) where red dashes show the ground truth (figure from [35]).

### _Fast Attention for High-Resolution or Multi-Scale Feature Maps_

Previous research has shown that maintaining a high resolution of feature maps is a necessary step for maintaining high performance in SOD. Transformers inherently exhibit a notably higher complexity than CNNs, owing to the quadratic growth of the attention computation with the number of tokens (e.g., the number of pixels). This complexity emerges from the requirement of computing pairwise correlations across all tokens. Consequently, both training and inference times exceed expectations, rendering such detectors impractical for small object detection in high-resolution images and videos. In their work on Deformable DETR, Zhu _et al._ [41] addressed this issue, which had first been observed in DETR. They proposed attending to only a small set of key sampling points around a reference, significantly reducing the complexity. By adopting this strategy, they effectively preserved spatial resolution through the use of multi-scale deformable attention modules. Remarkably, this method eliminated the necessity for feature pyramid networks, thereby greatly enhancing the detection and recognition of small objects. The \(i\)-th output of a multi-head attention module in Deformable attention is given by:

\[\text{MH Attention}^{i}=\sum_{h}\mathbf{W}_{h}^{O}\big{[}\sum_{k=1}^{K}A_{hik}\mathbf{W}_{v}\mathbf{x}_{k}(\mathbf{p}_{i}+\Delta\mathbf{p}_{hik})\big{]}, \tag{6}\]

where \(i=1,\cdots,T\), \(\mathbf{p}_{i}\) is the reference point of the query, and \(\Delta\mathbf{p}_{hik}\) is the sampling offset (in 2D) in the \(h\)-th head with K samplings (K\(<<\)T=HW). Figure 6 illustrates the computation process within its multi-head attention module.

Fig. 6: The block diagram for the Deformable attention module. \(\textbf{z}_{q}\) is the content feature of the query, **x** is the feature map and \(\textbf{p}_{q}\) is the reference point in the 2-D grid. In short, the deformable attention module only attends to a small set of key sampling points around the reference point (different in each head). This significantly reduces the complexity and further improves the convergence (figure from [41]).
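The idea of Eq. (6) can be sketched for a single head and a single query. This is an illustrative simplification: in Deformable DETR the offsets \(\Delta\mathbf{p}\) and attention logits are predicted from the query feature \(\textbf{z}_{q}\) by linear layers, and fractional sampling locations are handled with bilinear interpolation, whereas here the inputs are given and sampling is nearest-neighbour.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def deformable_attention_query(feat, ref_point, offsets, attn_logits, w_v, w_o):
    """Single-head, single-query sketch of Eq. (6): sample only K offset
    locations around the reference point p_i instead of all H*W tokens.

    feat        : feature map, shape (H, W, C)
    ref_point   : (row, col) reference location p_i of the query
    offsets     : K sampling offsets, shape (K, 2)
    attn_logits : K attention logits (softmax -> weights A_ik)
    """
    h, w, _ = feat.shape
    weights = softmax(attn_logits)               # A_ik, sums to 1
    out = 0.0
    for (dr, dc), a in zip(offsets, weights):
        # nearest-neighbour sampling for brevity (the paper uses
        # bilinear interpolation for fractional offsets)
        r = int(np.clip(round(ref_point[0] + dr), 0, h - 1))
        c = int(np.clip(round(ref_point[1] + dc), 0, w - 1))
        out = out + a * (w_v @ feat[r, c])       # A_ik * W_v x_k
    return w_o @ out                             # W^O projection
```

Because each query touches only \(K\) locations instead of all \(HW\), the encoder cost falls from quadratic to linear in the number of tokens, as quantified next.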
Deformable DETR benefits from both its encoder and decoder modules, with the complexity order within the encoder being \(\mathcal{O}(HWC^{2})\), where \(H\) and \(W\) are the height and width of the input feature map and \(C\) is the number of channels. In contrast, for the DETR encoder, the order of complexity is \(\mathcal{O}(H^{2}W^{2}C)\), displaying a quadratic increase as \(H\) and \(W\) increase in size. Deformable attention has played a prominent role in various other detectors, e.g., in T-TRD [43]. Subsequently, Dynamic DETR was proposed in [44], featuring a dynamic encoder and a dynamic decoder that harness feature pyramids from low to high-resolution representations, resulting in efficient coarse-to-fine object detection and faster convergence. The dynamic encoder can be viewed as a sequentially decomposed approximation of full self-attention, dynamically adjusting attention mechanisms based on scale, spatial importance, and representation. Both Deformable DETR and Dynamic DETR make use of deformable convolution for feature extraction. In a distinct approach, O\({}^{2}\)DETR [45] demonstrated that the global reasoning offered by a self-attention module is actually not essential for aerial images, where objects are usually densely packed in the same image area. Hence, replacing attention modules with local convolutions, coupled with the integration of multi-scale feature maps, was proven to improve the detection performance in the context of oriented object detection. The authors in [46] proposed the concept of Row-Column Decoupled Attention (RCDA), decomposing the 2D attention of key features into two simpler forms: 1D row-wise and column-wise attentions. In the case of CF-DETR [47], an alternative approach to FPN was proposed whereby C5 features were replaced with encoder features at level 5 (E5), resulting in improved object representation. This innovation was named the Transformer Enhanced FPN (TEF) module. In another study, Xu _et al._ [48] developed a weighted Bidirectional Feature Pyramid Network (BiFPN) through the integration of skip connection operations with the Swin transformer. This approach effectively preserved information pertinent to small objects.

### _Fully Transformer-Based Detectors_

The advent of transformers and their outstanding performance in many complex tasks in computer vision has gradually motivated researchers to shift from CNN-based or mixed systems to fully transformer-based vision systems. This line of work started with the application of a transformer-only architecture to the image recognition task, known as ViT, proposed in [33]. In [42], ViDT extended the YOLOS model [49] (the first fully transformer-based detector) to develop the first efficient detector suitable for SOD. In ViDT, the ResNet used in DETR for feature extraction is replaced with various ViT variants, such as Swin Transformer [50], ViTDet [51], and DeiT [52], along with the Reconfigured Attention Module (RAM). The RAM is capable of handling \([\text{PATCH}]\times[\text{PATCH}]\), \([\text{DET}]\times[\text{PATCH}]\), and \([\text{DET}]\times[\text{DET}]\) attentions. These cross and self-attention modules are necessary because, similar to YOLOS, ViDT appends [DET] and [PATCH] tokens in the input. ViDT only utilizes a transformer decoder as its neck to exploit multi-scale features generated at each stage of its body. Figure 7 illustrates the general structure of ViDT and highlights its differences from DETR and YOLOS.

Fig. 7: ViDT (c) mixes DETR (with ViT backbone or other fully transformer-based backbones) (a) with the YOLOS architecture (b) in a multi-scale feature learning pipeline to achieve SOTA results (figure from [42]).
Recognizing that the decoder module is the main source of inefficiency in transformer-based object detection, the Decoder-Free Fully Transformer (DFFT) [53] leverages two encoders: a Scale-Aggregated Encoder (SAE) and a Task-Aligned Encoder (TAE), to maintain high accuracy. SAE aggregates the multi-scale features (four scales) into a single feature map, while TAE aligns the single feature map for object type and position classification and regression. Multi-scale feature extraction with strong semantics is performed using a Detection-Oriented Transformer (DOT) backbone. In Sparse RoI-based deformable DETR (SRDD) [54], the authors proposed a lightweight transformer with a scoring system to ultimately remove redundant tokens in the encoder. This is achieved using RoI-based detection in an end-to-end learning scheme.

### _Architecture and Block Modifications_

DETR, the first end-to-end object detection method, struggles with extended convergence times during training and performs poorly on small objects. Several research works have addressed these issues to improve SOD performance. One notable contribution comes from Sun et al. [55], who, drawing inspiration from FCOS [56] (a fully convolutional single-stage detector) and Faster RCNN, proposed two encoder-only DETR variants with feature pyramids called TSP-FCOS and TSP-RCNN. This was accomplished by eliminating cross-attention modules from the decoder. Their findings demonstrated that cross-attention in the decoder and the instability of the Hungarian loss were the main reasons for the late convergence in DETR. This insight led them to discard the decoder and introduce a new bipartite matching technique in these new variants, i.e., TSP-FCOS and TSP-RCNN. In a combined approach using CNNs and transformers, Peng et al. [57, 58] proposed a hybrid network structure called "Conformer". This structure fuses the local feature representation provided by CNNs with the global feature representation provided by transformers at varying resolutions (see Figure 8). This was achieved through Feature Coupling Units (FCUs), with experimental results demonstrating its effectiveness compared to ResNet50, ResNet101, DeiT, and other models. A similar hybrid technique combining CNNs and transformers was proposed in [59].

Fig. 8: Conformer architecture which leverages both local features provided by CNNs and global features provided by transformers in the Feature Coupling Unit (FCU) (figure from [58]).

Recognizing the importance of local perception and long-range correlations, Xu et al. [60] added a Local Perception Block (LPB) to each Swin Transformer block. This new backbone, called the Local Perception Swin Transformer (LPSW), significantly improved the detection of small-size objects in aerial images. DIAG-TR [61] introduced a Global-Local Feature Interweaving (GLFI) module in the encoder to adaptively and hierarchically embed local features into global representations. This technique counterbalances the scale discrepancies of small objects. Furthermore, learnable anchor box coordinates were added to the content queries in the transformer decoder, providing an inductive bias. In a recent study, Chen et al. [62] proposed the Hybrid network Transformer (Hyneter), which extends the range of local information by embedding convolutions into the transformer blocks. This improvement led to enhanced detection results on the MS COCO dataset. Similar hybrid approaches have been adopted in [63].
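The common thread of these hybrid designs, coupling a local convolutional branch with global attention, can be illustrated with a toy residual local branch. The sketch below is our own illustration rather than any one paper's block: the mean filter stands in for a learned depthwise convolution, and tokens are assumed to be floating-point features on a 2-D grid.

```python
import numpy as np

def local_perception_block(tokens, grid_hw, kernel=3):
    """Toy version of injecting local (convolutional) context into
    transformer tokens before global attention: reshape tokens to their
    2-D grid, smooth with a depthwise mean filter, and add the result
    back as a residual local branch."""
    h, w = grid_hw
    t, c = tokens.shape                  # T = h * w tokens, C channels
    assert t == h * w
    grid = tokens.reshape(h, w, c)
    pad = kernel // 2
    padded = np.pad(grid, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    local = np.zeros_like(grid)
    for dr in range(kernel):             # accumulate the kernel window
        for dc in range(kernel):
            local += padded[dr:dr + h, dc:dc + w]
    local /= kernel ** 2                 # mean filter over the window
    return (grid + local).reshape(t, c)  # residual fusion
```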
In another study [64], the authors proposed a new backbone called NeXtFormer, which combines CNN and transformer to boost the local details and features of small objects, while also providing a global receptive field. Among various methods, O\({}^{2}\)DETR [45] substituted the attention mechanism in transformers with depthwise separable convolution. This change not only decreased the memory usage and computational costs associated with multi-scale features but also potentially enhanced the detection accuracy in aerial photographs. Questioning the object queries used in previous works, Wang et al. [46] proposed Anchor DETR, which used anchor points for object queries. These anchor points enhance the interpretability of the target query locations. The use of multiple patterns for each anchor point improves the detection of multiple objects in one region. In contrast, Conditional DETR [65] emphasizes conditional spatial queries derived from the decoder content, leading to spatial attention predictions. A subsequent version, Conditional DETR v2 [66], enhanced the architecture by reformulating the object query into the form of a box query. This modification involves embedding a reference point and transforming boxes with respect to the reference point. In subsequent works, DAB-DETR [67] further improved on the idea of query design by using dynamically adjustable anchor boxes. These anchor boxes serve as both reference query points and anchor dimensions (see Figure 9).

Fig. 9: DAB-DETR improves Conditional DETR and utilizes dynamic anchor boxes to sequentially provide better reference query points and anchor sizes (figure from [67]).

In another work [47], the authors observed that while the mean average precision (mAP) of small objects in DETR is not competitive with state-of-the-art (SOTA) techniques, its performance for small intersection-over-union (IoU) thresholds is surprisingly better than its competitors. This indicates that while DETR provides strong perception abilities, it requires fine-tuning to achieve better localization accuracy. As a solution, the Coarse-to-Fine Detection Transformer (CF-DETR) has been proposed to perform this refinement through Adaptive Scale Fusion (ASF) and Local Cross-Attention (LCA) modules in the decoder layer. In [68], the authors contend that the suboptimal performance of transformer-based detectors can be attributed to factors such as using a single cross-attention module for both categorization and regression, inadequate initialization for content queries, and the absence of leveraging prior knowledge in the self-attention module. To address these concerns, they proposed the Detection Split Transformer (DESTR). This model splits cross-attention into two branches, one for classification and one for regression. Moreover, DESTR uses a mini-detector to ensure proper content query initialization in the decoder and enhances the self-attention module. Another study [48] introduced FEA-Swin, which leverages advanced foreground enhancement attention in the Swin Transformer framework to integrate context information into the original backbone. This was motivated by the fact that the Swin Transformer does not adequately handle dense object detection due to missing connections between adjacent objects. Therefore, foreground enhancement highlights the objects for further correlation analysis. TOLO [69] is one of the recent works aiming to bring inductive bias (using CNN) to the transformer architecture through a simple neck module.
This module combines features from different layers to incorporate high-resolution and high-semantic properties. Multiple light transformer heads were designed to detect objects at different scales. In a different approach, instead of modifying the modules in each architecture, CBNet, proposed by Liang et al. [70], groups multiple identical backbones that are connected through composite connections. In the Multi-Source Aggregation Transformer (MATR) [71], the cross-attention module of the transformer is used to leverage other support images of the same object from different views. A similar approach is adopted in [72], where the Multi-View Vision Transformer (MVViT) framework combines information from multiple views, including the target view, to improve the detection performance when objects are not visible in a single view. Other works prefer to adhere to the YOLO family architecture. For instance, SPH-Yolov5 [73] adds a new branch in the shallower layers of the Yolov5 network to fuse features for improved small object localization. It also incorporates for the first time the Swin Transformer prediction head in the Yolov5 pipeline. In [74], the authors argue that the Hungarian loss's direct one-to-one bounding box matching approach might not always be advantageous. They demonstrate that employing a one-to-many assignment strategy and utilizing the NMS (Non-Maximum Suppression) module leads to better detection results. Echoing this perspective, Group DETR [75] implements K groups of object queries with one-to-one label assignment, leading to K positive object queries for each ground-truth object to enhance performance. A Dual-Key Transformer Network (DKTNet) is proposed in [76], where two keys are used--one key along with the **Q** stream and another key along with the **V** stream. This enhances the coherence between **Q** and **V**, leading to improved learning. Additionally, channel attention is computed instead of spatial attention, and 1D convolution is used to accelerate the process. ### _Auxiliary Techniques_ Experimental results have demonstrated that auxiliary techniques or tasks, when combined with the main task, can enhance performance. In the context of transformers, several techniques have been adopted, including: **(i)** Auxiliary Decoding/Encoding Loss: This refers to the approach where feed-forward networks designed for bounding box regression and object classification are connected to separate decoding layers. Hence individual losses at different scales are combined to train the models leading to better detection results. This technique or its variants have been used in ViDT [42], MDef-DETR [77], CBNet [70], SRDD [54]. **(ii)** Iterative Box Refinement: In this method, the bounding boxes within each decoding layer are refined based on the predictions from the previous layers. This feedback mechanism progressively improves detection accuracy. This technique has been used in ViDT [42]. **(iii)** Top-Down Supervision: This approach leverages human understandable semantics to aid in the intricate task of detecting small or class-agnostic objects, e.g., aligned image-text pairs in MDef-DETR [77], or text-guided object detector in TGOD [78]. **(iv)** Pre-training: This involves training on large-scale datasets followed by specific fine-tuning for the detection task. This technique has been used in CBNet V2-TTA [79], FP-DETR [80], T-TRD [43], SPH-Yolov5 [73], MATR [71], and extensively in Group DETR v2 [81]. 
**(v)** Data Augmentation: This technique enriches the detection dataset by applying various augmentation techniques, such as rotation, flipping, zooming in and out, cropping, translation, adding noise, etc. Data augmentation is a commonly used approach to address various imbalance problems [82], e.g., imbalance in object size, within deep learning datasets. Data augmentation can be seen as an indirect approach to minimize the gap between train and test sets [83]. Several methods used augmentation in their detection task, including T-TRD [43], SPH-Yolov5 [73], MATR [71], NLFFTNet [84], DeoT [85], HTDet [86], and Sw-YoloX [63]. **(vi)** One-to-Many Label Assignment: The one-to-one matching in DETR can result in poor discriminative features within the encoder. Hence, one-to-many assignments as used in other methods, e.g., Faster-RCNN, RetinaNet, and FCOS, have been employed as auxiliary heads in some studies such as CO-DETR [87] (the two assignment schemes are contrasted in the sketch after this list). **(vii)** Denoising Training: This technique aims to boost the convergence speed of the decoder in DETR, which often faces unstable convergence due to bipartite matching. In denoising training, noisy ground-truth labels and boxes are fed into the decoder, and the model is then trained to reconstruct the original ground truth (guided by an auxiliary loss). Implementations like DINO [88] and DN-DETR [89] have demonstrated the effectiveness of this technique in enhancing the decoder's stability.
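The contrast between one-to-one (Hungarian) and one-to-many label assignment referenced in item (vi) can be sketched as follows. The negative-IoU cost is a stand-in for DETR's full matching cost, which also includes classification and box-regression terms, and the helper names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_matrix(boxes_a, boxes_b):
    """Pairwise IoU between two sets of [x1, y1, x2, y2] boxes."""
    x1 = np.maximum(boxes_a[:, None, 0], boxes_b[None, :, 0])
    y1 = np.maximum(boxes_a[:, None, 1], boxes_b[None, :, 1])
    x2 = np.minimum(boxes_a[:, None, 2], boxes_b[None, :, 2])
    y2 = np.minimum(boxes_a[:, None, 3], boxes_b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

def one_to_one_assignment(pred_boxes, gt_boxes):
    """DETR-style Hungarian matching: each ground truth gets exactly one
    prediction (toy cost: negative IoU)."""
    cost = -iou_matrix(pred_boxes, gt_boxes)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx, gt_idx))

def one_to_many_assignment(pred_boxes, gt_boxes, iou_thresh=0.5):
    """One-to-many: every prediction overlapping a ground truth above a
    threshold becomes a positive (hence the need for NMS at inference)."""
    pred_idx, gt_idx = np.nonzero(iou_matrix(pred_boxes, gt_boxes) >= iou_thresh)
    return list(zip(pred_idx, gt_idx))
```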
### _Improved Feature Representation_

Although current object detectors excel in a wide range of applications for regular-size or large objects, certain use-cases necessitate specialized feature representations for improved SOD. For instance, when it comes to detecting oriented objects in aerial imagery, any object rotation can drastically alter the feature representation due to increased background noise or clutter in the scene (region proposal). To address this, Dai _et al._ [90] proposed AO2-DETR, a method designed to be robust to arbitrary object rotations. This is achieved through three key components: **(i)** the generation of oriented proposals, **(ii)** a refinement module for the oriented proposals which extracts rotation-invariant features, and **(iii)** a rotation-aware set matching loss. These modules help to negate the effects of any rotations of the objects. In a related approach, DETR++ [91] uses multiple Bi-Directional Feature Pyramid (BiFPN) layers that are applied in a bottom-up fashion to feature maps from C3, C4, and C5. Then, only one scale which is representative of features at all scales is selected to be fed into the DETR framework for detection. For some specific applications, such as plant safety monitoring, where objects of interest are usually related to human workers, leveraging this contextual information can greatly improve feature representation. PointDet++ [92] capitalizes on this by incorporating human pose estimation techniques, integrating local and global features to enhance SOD performance. Another crucial element that impacts feature quality is the backbone network and its ability to extract both semantic and high-resolution features. GhostNet, introduced in [93], offers a streamlined and more efficient network that delivers high-quality, multi-scale features to the transformer. Their Ghost module in this network partially generates the output feature map, with the remainder being recovered using simple linear operations. This is a key step to alleviate the complexity of the backbone networks. In the context of medical image analysis, MS Transformer [94] used a self-supervised learning approach to perform a random mask on the input image, which aids in reconstructing richer features that are less sensitive to noise. In conjunction with a hierarchical transformer, this approach outperforms DETR frameworks with various backbones. The Small Object Favoring DETR (SOF-DETR) [95] specifically favors the detection of small objects by merging convolutional features from layers 3 and 4 in a normalized inductive bias module prior to input into the DETR transformer. NLFFTNet [84] addresses the limitation of only considering local interactions in current fusion techniques by introducing a nonlocal feature-fused transformer convolutional network, capturing long-distance semantic relationships between different feature layers. DeoT [85] merges an encoder-only transformer with a novel feature pyramid fusion module. This fusion is enhanced by the use of channel and spatial attention in the Channel Refinement Module (CRM) and Spatial Refinement Module (SRM), enabling the extraction of richer features. The authors of HTDet [86] proposed a fine-grained FPN to cumulatively fuse low-level and high-level features for better object detection. Meanwhile, in MDCT [96] the authors proposed a Multi-kernel Dilated Convolution (MDC) module to improve the performance of small object-related feature extraction using both the ontology and adjacent spatial features of small objects. The proposed module leverages depth-wise separable convolution to reduce the computational cost. Lastly, in [97], a feature fusion module paired with a lightweight backbone is engineered to enhance the visual features of small objects by broadening the receptive field. The hybrid attention module in RTD-Net [97] empowers the system to detect objects that are partially occluded, by incorporating contextual information surrounding small objects.

### _Spatio-Temporal Information_

In this section, our focus is exclusively on video-based object detectors that aim to identify small objects. While many of these studies have been tested on the ImageNet VID dataset\({}^{1}\) [98], this dataset was not originally intended for small object detection. Nonetheless, a few of the works also reported their results for the small objects of the ImageNet VID dataset. The topic of tracking and detecting small objects in videos has also been explored using transformer architectures. Although techniques for image-based SOD can be applied to video, they generally do not utilize the valuable temporal information, which can be particularly beneficial for identifying small objects in cluttered or occluded frames. The application of transformers to generic object detection/tracking started with TrackFormer [99] and TransT [100]. These models used frame-to-frame (setting the previous frame as the reference) set prediction and template-to-frame (setting a template frame as the reference) detection. Liu _et al._ [101] were among the first to use transformers specifically for video-based small object detection and tracking. Their core concept is to update template frames to capture any small changes induced by the presence of small objects and to provide a global attention-driven relationship between the template frame and the search frame.
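Stripped of implementation detail, template-to-frame detection is a plain cross-attention in which queries come from the search frame and keys/values from the (periodically updated) template frame. A minimal sketch with illustrative names and single-head projections:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def template_to_frame_attention(search_tokens, template_tokens, w_q, w_k, w_v):
    """Cross-attention: queries from the search frame attend to
    keys/values from the template frame.

    search_tokens   : (T_s, d) tokens of the current (search) frame
    template_tokens : (T_t, d) tokens of the template frame
    w_q, w_k, w_v   : projection matrices of shape (d_k, d) / (d_v, d)
    """
    q = search_tokens @ w_q.T            # (T_s, d_k)
    k = template_tokens @ w_k.T          # (T_t, d_k)
    v = template_tokens @ w_v.T          # (T_t, d_v)
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (T_s, T_t)
    return attn @ v                      # template-conditioned features
```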
Footnote 1: [https://paperswithcode.com/sota/video-object-detection-on-imagenet-vid](https://paperswithcode.com/sota/video-object-detection-on-imagenet-vid)

Transformer-based video object detection gained formal recognition with the introduction of TransVOD, an end-to-end object detector, as presented in [102] and [103]. This model applies both spatial and temporal transformers to a series of video frames, thereby identifying and linking objects across these frames. TransVOD has spawned several variants, each with unique features, including capabilities for real-time detection. PTSEformer [104] adopts a progressive strategy, focusing on both temporal information and the objects' spatial transitions between frames. It employs multi-scale feature extraction to achieve this. Unlike other models, PTSEformer directly regresses object queries from adjacent frames rather than the entire dataset, offering a more localized approach. Sparse VOD [105] proposed an end-to-end trainable video object detector that incorporates temporal information to propose region proposals. In contrast, DAFA [106] highlights the significance of global features within a video as opposed to local temporal features. DAFA showed the inefficiency of the First In First Out (FIFO) memory structure and proposed a diversity-aware memory, which uses object-level memory instead of frame-level memory for the attention module. VSTAM [107] improves feature quality on an element-by-element basis and then performs sparse aggregation before these enhanced features are used for object candidate region detection. The model also incorporates external memory to take advantage of long-term contextual information. In the FAQ work [108], a novel video object detector is proposed that uses query feature aggregation in the decoder module. This differs from methods that focus on feature aggregation in the encoder and from methods that perform post-processing across frames. The research indicates that this technique improves detection performance, outperforming SOTA methods.

## 4 Results and Benchmarks

In this section, we quantitatively and qualitatively evaluate previous works on small object detection, identifying the most effective technique for a specific application. Prior to this comparison, we introduce a range of new datasets dedicated to small object detection, including both videos and images for diverse applications.

### _Datasets_

In this subsection, in addition to the widely used MS COCO dataset, we compile and present 12 new SOD datasets. These new datasets are primarily tailored for specific applications excluding the generic and maritime environments (which have been covered in our previous survey [11]). Figure 10 displays the chronological order of these datasets along with their citation count as of June 15, 2023, according to Google Scholar. **UAV123** [109]: This dataset contains 123 videos acquired with UAVs and is one of the largest object-tracking datasets, with more than 110K frames. **MRS-1800** [60]: This dataset consists of a combination of images from three other remote sensing datasets: DIOR [115], NWPU VHR-10 [116], and HRRSD [117]. MRS-1800 was created for the dual purpose of detection and instance segmentation, with 1800 manually annotated images which include 3 types of objects: airplanes, ships, and storage tanks. **SKU-110K** [110]: This dataset serves as a rigorous testbed for commodity detection, featuring images captured from various supermarkets around the world.
The dataset includes a range of scales, camera angles, lighting conditions, etc. **BigDetection** [79]: This is a large-scale dataset that is crafted by integrating existing datasets and meticulously eliminating duplicate boxes while labeling overlooked objects. It has a balanced number of objects across all sizes, making it a pivotal resource for advancing the field of object detection. Using this dataset for pre-training and subsequently fine-tuning on MS COCO significantly enhances performance outcomes. **Tang et al.** [92]: Originating from video footage of field activities within a chemical plant, this dataset covers various types of work such as hot work, aerial work, confined space operations, etc. It includes category labels like people, helmets, fire extinguishers, gloves, work clothes, and other relevant objects. **Xu et al.** [48]: This publicly available dataset focuses on UAV (Unmanned Aerial Vehicle)-captured images and contains 2K images aimed at detecting both pedestrians and vehicles. The images were collected using a DJI drone and feature diverse conditions such as varying light levels and densely parked vehicles. **DeepLesion** [111]: Comprising CT scans from 4,427 patients, this dataset ranks among the largest of its kind. It includes a variety of lesion types, such as pulmonary nodules, bone abnormalities, kidney lesions, and enlarged lymph nodes. The objects of interest in these images are typically small and accompanied by noise, making their identification challenging. **Udacity Self Driving Car** [112]: Designed solely for educational use, this dataset features driving scenarios in Mountain View and nearby cities captured at a 2Hz image acquisition rate. The category labels within this dataset include cars, trucks, and pedestrians.

\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
**Dataset** & **Application** & **Video/Image** & **Imaging (Type)** & **Resolution (pixels)** & **\#Classes** & **\#Instances** & **\#Images/Videos** & **Public?** \\
\hline \hline
**UAV123** [109] & UAV tracking & Video & Aerial perspective & – & – & – & 123 videos (\(>\)110K frames) & Yes \\
\hline
**MRS-1800** [60] & Remote sensing & Image & Satellite perspective & NF & 3 & 16,318 & 1,800 & – \\
\hline
**SKU-110K** [110] & Commodity detection & Image & Normal & NF & – & \(\sim\)1.7M (147.4 per image) & 11,762 & Yes \\
\hline
**BigDetection** [79] & Generic & Image & Normal & NF & 600 & 36M & 3.4M & Yes \\
\hline
**Tang et al.** [92] & Chemical plant monitoring & Image & Normal & – & 9 & – & 2,520 & – \\
\hline
**Xu et al.** [48] & UAV detection & Image & Aerial perspective & 1920\(\times\)1080 & 2 & – & 2K & Yes \\
\hline
**DeepLesion** [111] & Lesion detection & Image & Normal (CT) & – & 8 & 32.7K & 32.1K & Yes \\
\hline
**Udacity Self Driving Car** [112] & Self-driving & Image & Normal & 1920\(\times\)1200 & 3 & 68K & 9,423 & Yes \\
\hline
**AMMW Dataset** [113] & Security inspection & Image & Normal (AMMW) & – & \(>\)30 & – & 5,988 & – \\
\hline
**URPC 2018 Dataset** [121] & Underwater detection & Image & Normal & – & 4 & – & 2,901 (training) & – \\
\hline
**UAV dataset** [97] & UAV-based detection & Image & Aerial perspective & – & 7 & 530,634 & 9,650 & – \\
\hline
**Drone-vs-bird** [114] & Drone detection & Video & Normal & NF & 2 & – & 77 training videos & Yes \\
\hline
\end{tabular}
\end{table}

TABLE II: Commonly used datasets for SOD. NF: Not fixed.

Fig. 10: Chronology of SOD datasets with number of citations (based on Google Scholar).
The category labels within this dataset include cars, trucks, and pedestrians.

**AMMW Dataset** [113]: Created for security applications, this active millimetre-wave image dataset includes more than 30 different types of objects. These include two kinds of lighters (made of plastic and metal), a simulated firearm, a knife, a blade, a bullet shell, a phone, a soup, a key, a magnet, a liquid bottle, an absorbent material, a match, and so on.

**URPC 2018 Dataset**: This underwater image dataset includes four types of objects: holothurian, echinus, scallop, and starfish [121].

**UAV dataset** [97]: This image dataset includes more than 9K images captured via UAVs in different weather and lighting conditions and over various complex backgrounds. The objects in this dataset are sedans, people, motors, bicycles, trucks, buses, and tricycles.

**Drone-vs-bird** [114]: This video dataset aims to address the security concerns of drones flying over sensitive areas. It offers labeled video sequences to differentiate between birds and drones under various illumination, lighting, weather, and background conditions.

A summary of these datasets, including their applications, type, resolutions, number of classes/instances/images/frames, and a link to their webpage, is provided in Table II.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Model & Backbone & GFLOPS/FPS & \#params\(\downarrow\) & mAP\({}^{@[0.5,0.95]}\uparrow\) & Epochs\(\downarrow\) & URL \\ \hline
Faster RCNN-DC5 (NeurIPS2015)[24] & ResNet50 & 320/16 & 166M & 21.4 & 37 & Link \\
Faster RCNN-FPN (NeurIPS2015)[24] & ResNet50 & 180/26 & 42M & 24.2 & 37 & Link \\
Faster RCNN-FPN (NeurIPS2015)[24] & ResNet101 & 246/20 & 60M & 25.2 & – & Link \\
RepPoints v2-DCN-MS (NeurIPS2020)[119] & ResNeXt101 & –/– & – & 34.5* & 24 & Link \\
FCOS (ICCV2019)[56] & ResNet50 & 177/17 & – & 26.2 & 36 & Link \\
CBNet V2-DCN (ATSS) (TIP2022)[70] & Res2Net101 & –/– & 107M & 35.7* & 20 & Link \\
CBNet V2-DCN (Cascade RCNN) (TIP2022)[70] & Res2Net101 & –/– & 146M & 37.4* & 32 & Link \\ \hline
DETR (ECCV2020)[31] & ResNet50 & 86/**28** & 41M & 20.5 & 500 & Link \\
DETR-DCS (ECCV2020)[31] & ResNet50 & 187/12 & 41M & 22.5 & 500 & Link \\
DETR (ECCV2020)[31] & ResNet101 & 152/20 & 60M & 21.9 & – & Link \\
DETR-DCS (ECCV2020)[31] & ResNet101 & 253/10 & 60M & 23.7 & – & Link \\
ViT-FRCNN (arXiv2020)[32] & – & –/– & – & 17.8 & – & – \\
RelationNet++ (NeurIPS2020)[35] & ResNeXt101 & –/– & – & 32.8* & – & Link \\
RelationNet++-MS (NeurIPS2020)[35] & ResNeXt101 & –/– & – & 35.8* & – & Link \\
Deformable DETR (ICLR2021)[41] & ResNet50 & 173/19 & 40M & 26.4 & 50 & Link \\
Deformable DETR-IBR (ICLR2021)[41] & ResNet50 & 173/19 & 40M & 26.8 & 50 & Link \\
Deformable DETR-TS (ICLR2021)[41] & ResNet50 & 173/19 & 40M & 28.8 & 50 & Link \\
Deformable DETR-TS-IBR-DCN (ICLR2021)[41] & ResNeXt101 & –/– & – & 34.4* & – & Link \\
Dynamic DETR (ICCV2021)[44] & ResNet50 & –/– & – & 28.6* & – & – \\
Dynamic DETR-DCN (ICCV2021)[44] & ResNeXt101 & –/– & – & 30.3* & – & – \\
TSP-FCOS (ICCV2021)[55] & ResNet101 & 255/12 & – & 27.7 & 36 & Link \\
TSP-RCNN (ICCV2021)[55] & ResNet101 & 254/9 & – & 29.9 & 96 & Link \\
Mask R-CNN (ICCV2021)[57] & Conformer-S/16 & 457.7/– & 56.9M & 28.7 & **12** & Link \\
Conditional DETR-DCS (ICCV2021)[65] & ResNet101 & 262/– & 63M & 27.2 & 108 & Link \\
SOF-DETR (2022)[95] & ResNet50 & –/– & – & 21.7 & – & Link \\
DETR++ (arXiv2022)[91] & ResNet50 & –/– & – & 22.1 & – & – \\
TOLO-MS (NC2022)[69] & – & –/57 & – & 24.1 & – & – \\
Anchor DETR-DCS (AAAI2022)[46] & ResNet101 & –/– & – & 25.8 & 50 & Link \\
DESTR-DCS (CVPR2022)[68] & ResNet101 & 299/– & 88M & 28.2 & 50 & – \\
Conditional DETR v2-DCS (arXiv2022)[66] & ResNet101 & 228/– & 65M & 26.3 & 50 & – \\
Conditional DETR v2 (arXiv2022)[66] & Hourglass48 & 521/– & 90M & 32.1 & 50 & – \\
FP-DETR-IN (ICLR2022)[80] & – & –/– & **36M** & 26.5 & 50 & Link \\
DAB-DETR-DCS (arXiv2022)[67] & ResNet101 & 296/– & 63M & 28.1 & 50 & Link \\
Ghostformer-MS (Sensors2022)[93] & GhostNet & –/– & – & 29.2 & 100 & – \\
CF-DETR-DCN-TTA (AAAI2022)[47] & ResNeXt101 & –/– & – & 35.1* & – & – \\
CBNet V2-TTA (CVPR2022)[79] & Swin Transformer-base & –/– & – & 41.7 & – & Link \\
CBNet V2-TTA-BD (CVPR2022)[79] & Swin Transformer-base & –/– & – & 42.2 & – & Link \\
DETA (arXiv2022)[74] & ResNet50 & –/13 & 48M & 34.3 & 24 & Link \\
DINO (arXiv2022)[88] & ResNet50 & 860/10 & 47M & 32.3 & **12** & Link \\
Co-DINO Deformable DETR-MS-IN (arXiv2022)[87] & Swin Transformer-large & –/– & – & 43.7 & 36 & Link \\
HYNETER (ICASSP2023)[62] & Hyneter-Max & –/– & 247M & 29.8* & – & – \\
DeoT (JRTIP2023)[38] & ResNet101 & 217/14 & 58M & 31.4 & 34 & – \\
ConformerDet-MS (TPAMI2023)[58] & Conformer-B & –/– & 147M & 35.3 & 36 & Link \\ \hline
YOLOS (NeurIPS2021)[49] & DeiT-base & –/3.9 & 100M & 19.5 & 150 & Link \\
DETR (ViT) (arXiv2022)[42] & Swin Transformer-base & –/– & 100M & 18.3 & 50 & Link \\
Deformable DETR (ViT) (arXiv2021)[42] & Swin Transformer-base & –/– & 100M & 34.5 & 50 & Link \\
ViDT (arXiv2022)[42] & Swin Transformer-base & –/– & 100M & 30.6 & 50 & Link \\
DFFT (ECCV2022)[53] & DOT-medium & 67/– & – & 25.5 & 36 & Link \\
CenterNet++-MS (arXiv2022)[40] & Swin Transformer-large & –/– & – & 38.7* & – & Link \\
DETA-OB (arXiv2022)[74] & Swin Transformer-large & –/– & – & 46.1* & 24 & Link \\
Group DETR v2-MS-IN-OB (arXiv2022)[81] & ViT-Huge & –/– & 629M & **48.4*** & – & – \\ \hline
Best Results & NA & DETR & FP-DETR & Group DETR v2 & DINO & NA \\ \hline \hline
\end{tabular}
\end{table} TABLE III: Detection performance (%) for small-scale objects on the MS COCO image dataset [2]. The top section shows results for CNN-based techniques, the middle section shows results for mixed architectures, and the bottom section presents results from transformer-only networks. DCS: Dilated C5 stage, MS: Multi-scale network, IBR: Iterative bounding box refinement, TS: Two-stage detection, DCN: Deformable convnets, TTA: Test time augmentation, BD: Pre-trained on BigDetection dataset, IN: Pre-trained on ImageNet, OB: Pre-trained on Objects-365 [118]. While * shows the results for COCO test-dev, the other values are reported for the COCO val set.

### _Benchmarks in Vision Applications_

In this subsection, we introduce various vision-based applications where the detection performance of small objects is vital. For each application, we select one of the most popular datasets and report its performance metrics, along with details of the experimental setup.

#### 4.2.1 Generic Applications

For generic applications, we evaluate the performance of all small object detectors on the challenging MS COCO benchmark [2]. The choice of this dataset is based on its wide acceptance in the object detection field and the accessibility of performance results. The MS COCO dataset consists of approximately 160K images across 80 categories. While authors are advised to train their algorithms using the COCO 2017 training and validation sets, they are not restricted to these subsets.
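Small-object numbers of this kind can be reproduced with the standard COCO evaluation toolkit: pycocotools reports, among its summary statistics, the AP\({}^{@[0.5,0.95]}\) restricted to objects with area \(<32^{2}\), which is the quantity tabulated in Table III. A minimal sketch is given below; the two file paths are placeholders, not files shipped with any of the reviewed methods.

```python
# Minimal sketch: computing the small-object AP reported in Table III with
# the official pycocotools API. File paths are illustrative placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth (COCO 2017 val) and detections in standard COCO JSON format.
coco_gt = COCO("annotations/instances_val2017.json")   # placeholder path
coco_dt = coco_gt.loadRes("detections_val2017.json")   # placeholder path

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

# stats[0] is AP@[.5:.95] over all areas; stats[3] is AP@[.5:.95] for small
# objects only (area < 32*32 pixels), i.e. the mAP column used in Table III.
print("AP (all):   %.3f" % evaluator.stats[0])
print("AP (small): %.3f" % evaluator.stats[3])
```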
In Table III, we examine and evaluate the performance of all the techniques under review that have reported their results on MS COCO (compiled from their papers). The table provides information on the backbone architecture, GFLOPS/FPS (indicating the computational overhead and execution speed), the number of parameters (indicating the scale of the model), mAP (mean average precision, a measure of object detection performance), and epochs (indicating the training time and convergence properties). Additionally, a link to each method's webpage is provided for further information. The methods are categorized into three groups: CNN-based, mixed, and transformer-only methods. The top-performing methods for each metric are shown in the table's last row. It should be noted that this comparison was only feasible for methods that have reported values for each specific metric. In instances where there is a tie, the method with the highest mean average precision was deemed the best. The default mAP values are for the "COCO 2017 val" set, while those for the "COCO test-dev" set are marked with an asterisk. Please be aware that the reported mAP is only for objects with area \(<32^{2}\).

Upon examining Table III, it is obvious that most techniques benefit from using a mix of CNN and transformer architectures, essentially adopting hybrid strategies. Notably, Group DETR v2, which relies solely on a transformer-based architecture, attains a mAP of 48.4\(\%\). However, achieving such a performance requires the adoption of additional techniques such as pre-training on two large-scale datasets and multi-scale learning. In terms of convergence, DINO outperforms by reaching stable results after just 12 epochs, while also securing a commendable mAP of 32.3\(\%\). Conversely, the original DETR model has the fastest inference time and the lowest GFLOPS. FP-DETR stands out for having the lightest network, with only 36M parameters. Drawing from these findings, we conclude that pre-training and multi-scale learning emerge as the most effective strategies for excelling in small object detection. This may be attributed to the imbalance in downstream tasks and the lack of informative features in small objects.

Figure 11, which spans two pages, along with its more detailed counterpart in Figure 12, illustrates the detection results of various transformer- and CNN-based methods. These are compared to each other using selected images from the COCO dataset and were implemented by us using their public models available on their GitHub pages. The analysis reveals that Faster RCNN and SSD fall short in accurately detecting small objects. Specifically, SSD either misses most objects or generates numerous bounding boxes with false labels and poorly located bounding boxes. While Faster RCNN performs better, it still produces low-confidence bounding boxes and occasionally assigns incorrect labels. In contrast, DETR has the tendency to over-estimate the number of objects, leading to multiple bounding boxes for individual objects. It is commonly noted that DETR is prone to generating false positives. Finally, among the methods evaluated, CBNet V2 stands out for its superior performance. As observed, it produces high confidence scores for the objects it detects, even though it may occasionally misidentify some objects.

Fig. 11: Examples of detection results on COCO dataset [2] for transformer-based SOTA small object detectors compared with Convolutional networks.
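The duplicate boxes noted above for DETR are classically suppressed by post-processing, a remedy the Discussion in Section 5 also mentions. The following is a minimal NumPy sketch of greedy IoU-based non-maximum suppression; the boxes, scores, and 0.5 threshold are purely illustrative and are not taken from any reviewed method.

```python
# Minimal greedy non-maximum suppression (NMS) in NumPy. Boxes are given as
# (x1, y1, x2, y2); the IoU threshold of 0.5 is an illustrative choice.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5):
    order = scores.argsort()[::-1]          # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the current top box with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thr]   # drop near-duplicate boxes
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [80, 80, 120, 120]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))                   # -> [0, 2]; box 1 is suppressed
```

For transformer detectors the preferred alternative, as discussed later, is to reduce object-query similarity in the decoder rather than to post-process.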
#### 4.2.2 Small Object Detection in Aerial Images

Another interesting use of detecting small objects is in the area of remote sensing. This field is particularly appealing because many organizations and research bodies aim to routinely monitor the Earth's surface through aerial images to collect both national and international data for statistics. While these images can be acquired using various modalities, this survey focuses only on non-SAR images. This is because SAR images have been extensively researched and deserve their own separate study. Nonetheless, the learning techniques discussed in this survey could also be applicable to SAR images. In aerial images, objects often appear small due to their significant distance from the camera. The bird's-eye view also adds complexity to the task of object detection, as objects can be situated anywhere within the image. To assess the performance of transformer-based detectors designed for such applications, we selected the DOTA image dataset [122], which has become a widely used benchmark in the field of object detection. Figure 13 displays some sample images from the DOTA dataset featuring small objects. The dataset includes a predefined training set, validation set, and testing set.

In comparison to generic applications, this particular application has received relatively less attention from transformer experts. However, as indicated in Table IV (results are compiled from papers), ReDet distinguishes itself through its multi-scale learning strategy and pre-training on the ImageNet dataset, achieving the highest precision value (80.89\(\%\)) while requiring only 12 training epochs. This mirrors the insights gained from the COCO dataset analysis, suggesting that optimal performance can be attained by addressing imbalances in downstream tasks and including informative features from small objects.

#### 4.2.3 Small Object Detection in Medical Images

In the field of medical imaging, specialists are often tasked with the early detection and identification of anomalies. Missing even barely visible or small abnormal cells can lead to serious repercussions for patients, including cancer and life-threatening conditions.
These small-sized objects can be found as abnormalities in the retina of diabetic patients, early tumors, vascular plaques, etc. Despite the critical nature and potentially life-threatening impact of this research area, only a handful of studies have tackled the challenges associated with detecting small objects in this crucial application. For those interested in this topic, the DeepLesion CT image dataset [111] has been selected as the benchmark due to the availability of results for this particular dataset [126]. Sample images from this dataset are shown in Figure 14. This dataset is divided into three sets: training (70\(\%\)), validation (15\(\%\)), and test (15\(\%\)) sets [94]. Table V compares the accuracy and mAP of three transformer-based studies against both two-stage and one-stage detectors (results are compiled from their papers). The MS Transformer emerges as the best technique with this dataset, albeit with limited competition. Its primary innovation lies in self-supervised learning and the incorporation of a masking mechanism within a hierarchical transformer model. Overall, with an accuracy of 90.3\(\%\) and an mAP of 89.6\(\%\), this dataset appears to be less challenging compared to other medical imaging tasks, especially considering that some tumors are virtually invisible to the human eye.

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
Model & Backbone & FPS\(\uparrow\) & \#params\(\downarrow\) & mAP\(\uparrow\) & Epochs\(\downarrow\) & URL \\ \hline
Rotated Faster RCNN-MS (NeurIPS2015)[24] & ResNet101 & – & 64M & 67.71 & 50 & Link \\
SSD (ECCV2016)[20] & – & – & – & 56.1 & – & Link \\
RetinaNet-MS (ICCV2017)[21] & ResNet101 & – & **59M** & 66.53 & 50 & Link \\
ROI-Transformer-MS-IN (CVPR2019)[123, 124] & ResNet50 & – & – & 80.06 & **12** & Link \\
Yolov5 (2020)[17] & – & **95** & – & 64.5 & – & Link \\
ReDet-MS-FPN (CVPR2021)[125] & ResNet50 & – & – & 80.1 & – & Link \\ \hline
O2DETR-MS (arXiv2021)[45] & ResNet101 & – & 63M & 70.02 & 50 & – \\
O2DETR-MS-FT (arXiv2021)[45] & ResNet101 & – & – & 76.23 & 62 & – \\
O2DETR-MS-FPN-FT (arXiv2021)[45] & ResNet50 & – & – & 79.66 & – & – \\
SPH-Yolov5 (RS2022)[73] & Swin Transformer-base & 51 & – & 71.6 & 150 & – \\
AO2-DETR-MS (TCSVT2022)[90] & ResNet50 & – & – & 79.22 & – & Link \\
MDCT (RS2023)[96] & – & – & – & 75.7 & – & – \\
ReDet-MS-IN (arXiv2023)[124] & ViTDet, ViT-B & – & – & **80.89** & **12** & Link \\ \hline
Best Results & NA & Yolov5 & RetinaNet & ReDet-MS-IN & ReDet-MS-IN & NA \\ \hline \hline
\end{tabular}
\end{table} TABLE IV: Detection performance (%) for small-scale objects on the DOTA image dataset [122]. The top section shows results for CNN-based techniques, the middle section shows results for mixed architectures. MS: Multi-scale network, FT: Fine-tuned, FPN: Feature pyramid network, IN: Pre-trained on ImageNet.

Fig. 12: Detection results on a sample image when zoomed in. First row from the left: Input image, SSD, Faster RCNN, DETR. Second row from the left: ViDT, DETA-OB, DINO, CBNet v2.

Fig. 13: Example of small objects in DOTA image dataset.

#### 4.2.4 Small Object Detection in Underwater Images

With the growth of underwater activities, the demand to monitor hazy and low-light environments has increased for purposes like ecological surveillance, equipment maintenance, and the monitoring of wreck fishing. Factors like scattering and light absorption in the water make the SOD task even more challenging. Example images of such challenging environments are displayed in Figure 15.
Transformer-based detection methods should not only be adept at identifying small objects but also need to be robust against the poor image quality found in deep waters, as well as against variations across color channels due to the differing rates of light attenuation in each channel. Table VI shows the performance metrics reported in existing studies for this dataset (results are compiled from their papers). HTDet is the sole transformer-based technique identified for this specific application. It significantly outperforms the SOTA CNN-based method by a huge margin (3.4\(\%\) in mAP). However, the relatively low mAP scores confirm that object detection in underwater images remains a difficult task. It is worth noting that the training set of URPC 2018 contains 2,901 labeled images, and the testing set contains 800 unlabeled images [86].

#### 4.2.5 Small Object Detection in Active Milli-Meter Wave Images

Small objects can easily be concealed or hidden from normal RGB cameras, for example, within a person's clothing at an airport. Therefore, active imaging techniques are essential for security purposes. In these scenarios, multiple images are often captured from different angles to enhance the likelihood of detecting even minuscule objects. Interestingly, much like in the field of medical imaging, transformers are rarely used for this particular application. In our study, we focused on the detection performance of existing techniques using the AMMW Dataset [113], as shown in Table VII (results are compiled from their papers). We have identified MATR as the sole technique that combines transformers and CNNs for this dataset. Despite being the only transformer-based technique, it significantly improves SOD performance (5.49\(\%\uparrow\) in mAP\({}^{0.5}\) with respect to Yolov5 and 4.22\(\%\uparrow\) in mAP\({}^{@[0.5,0.95]}\) with respect to TridentNet) with the same backbone (ResNet50). Figure 16 visually compares MATR with other SOTA CNN-based techniques.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & Accuracy\(\uparrow\) & mAP\({}^{0.5}\uparrow\) \\ \hline
Faster RCNN (NeurIPS2015)[24] & 83.3 & 83.3 \\
Yolov5 [17] & 85.2 & 88.2 \\ \hline
DETR (ECCV2020)[31] & 86.7 & 87.8 \\
Swin Transformer & 82.9 & 81.2 \\
MS Transformer (CIN2022)[94] & **90.3** & **89.6** \\ \hline
Best Results & MS Transformer & MS Transformer \\ \hline \hline
\end{tabular}
\end{table} TABLE V: Detection performance (%) for the DeepLesion CT image dataset [111]. The top section shows results for CNN-based techniques, the middle section shows results for mixed architectures.

Fig. 14: Example of small abnormalities in DeepLesion image dataset [111].

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Model & \#params\(\downarrow\) & mAP\({}^{@[0.5,0.95]}\uparrow\) & mAP\({}^{0.5}\uparrow\) \\ \hline
Faster RCNN (NeurIPS2015)[24] & 33.6M & 16.4 & – \\
Cascade RCNN (CVPR2018)[28] & 68.9M & 16 & – \\
Dynamic RCNN (ECCV2020)[127] & 41.5M & 13.3 & – \\
Yolov5 [17] & 61.5M & 19.4 & – \\
RoIMix (ICASSP2020)[121] & – & – & **74.92** \\ \hline
HTDet (RS2023)[86] & **7.7M** & **22.8** & – \\ \hline
Best Results & HTDet & HTDet & RoIMix \\ \hline \hline
\end{tabular}
\end{table} TABLE VI: Detection performance (%) for the URPC 2018 dataset [121]. The top section shows results for CNN-based techniques, the middle section shows results for mixed architectures.

Fig. 15: Examples of low quality images in URPC 2018 image dataset.
Combining images from different angles largely helps to identify even small objects within this imaging approach. For training and testing, 35,426 and 4,019 images were used, respectively [71].

Fig. 16: Examples of detection results on AMMW image dataset [113] for SOTA small object detectors (figure from [71]).

#### 4.2.6 Small Object Detection in Videos

The field of object detection in videos has gained considerable attention recently, as the temporal information in videos can improve detection performance. To benchmark the SOTA techniques, the ImageNet VID dataset has been used, with results specifically focused on the dataset's small objects. This dataset includes 3,862 training videos and 555 validation videos with 30 classes of objects. Table VIII reports the mAP of several recently developed transformer-based techniques (results are compiled from their papers).
It is now more important than ever to recognize the need for lightweight networks with efficient learning paradigms and architectures. Despite the number of parameters is now on par with the human brain, the performance in small object detection still lags considerably behind human capabilities, underscoring a significant gap in current research. Furthermore, based on the findings presented in Figures 11 and 12, we have identified two primary challenges in small object detection: missing objects or false negatives, and redundant detected boxes. The issue of missing objects is likely attributable to the limited information embedded in the tokens. This can be resolved by using high-resolution images or by enhancing feature pyramids although this comes with the drawback of increased latency--which could potentially be offset by using more efficient, lightweight networks. The problem of repeated detections has traditionally been managed through post-processing techniques such as Non-Maximum Suppression (NMS). However, in the context of transformers, this issue should be approached by minimizing object query similarity in the decoder, possibly through the use of auxiliary loss functions. We also examined studies that employ transformer-based methods specifically dedicated to Small Object Detection (SOD) across a range of vision-based tasks. These include generic detection, detection in aerial images, abnormality detection in medical images, small hidden object detection in active millimeter-wave images for security purposes, underwater object detection, and small object detection in videos. Apart from generic and aerial image applications, transformers are underdeveloped in other applications, echoing observations made in Rekavandi _et al._[11] regarding maritime detection. This is particularly surprising given the potentially significant impact transformers could have in life-critical fields like medical imaging. ## 6 Conclusion This survey paper reviewed over 60 research papers that focus on the development of transformers for the task of small object detection, including both purely transformer-based and hybrid techniques that integrate CNNs. These techniques have been examined from seven different perspectives: object representation, fast attention mechanisms for high-resolution or multi-scale feature maps, architecture and block modifications, spatio-temporal information, improved feature representation, auxiliary techniques, and fully transformer-based detection. Each of these categories includes several state-of-the-art (SOTA) techniques, each with its own set of advantages. We also compared these transformer-based approaches to CNN-based frameworks, discussing the similarities and differences between the two. Furthermore, for a range of vision applications, we introduced well-established datasets that serve as benchmarks for future research. Additionally, 12 datasets that have been used in SOD applications are discussed in detail, providing convenience for future research efforts. In future research, the unique challenges associated with the detection of small objects in each application could be explored and addressed. 
Fields like medical imaging and underwater image analysis stand to gain significantly from the use of transformer models.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
Model & Backbone & mAP\({}^{0.5}\uparrow\) & mAP\({}^{@[0.5,0.95]}\uparrow\) \\ \hline
Faster RCNN (NeurIPS2015)[24] & ResNet50 & 70.7 & 26.83 \\
Cascade RCNN (CVPR2018)[28] & ResNet50 & 74.7 & 27.8 \\
TridentNet (ICCV2019)[128] & ResNet50 & 77.3 & 29.2 \\
Dynamic RCNN (ECCV2020)[127] & ResNet50 & 76.3 & 27.6 \\
Yolov5 [17] & ResNet50 & 76.6 & 28.48 \\ \hline
MATR (TCSVT2022)[71] & ResNet50 & **82.16** & **33.42** \\ \hline
Best Results & NA & MATR & MATR \\ \hline \hline
\end{tabular}
\end{table} TABLE VII: Detection performance (%) for the AMMW image dataset [113]. The top section shows results for CNN-based techniques, the middle section shows results for mixed architectures.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & Backbone & mAP\({}^{@[0.5,0.95]}\uparrow\) \\ \hline
Deformable-DETR-PT [41] & ResNet50 & 10.5 \\
Deformable-DETR+TransVOD-PT [103] & ResNet50 & 11 \\
Deformable-DETR+FAQ-PT [108] & ResNet50 & **13.2** \\ \hline
Best Results & NA & Deformable-DETR+FAQ \\ \hline \hline
\end{tabular}
\end{table} TABLE VIII: Detection performance (%) for small-scale objects on the ImageNet VID video dataset. PT: Pre-trained.
Additionally, rather than increasing the complexity of transformers through ever larger models, alternative strategies could be explored to boost performance.

## 7 Acknowledgment

We thank Likun Cai for providing the detection results for CBNet V2 in the test images shown in Figures 11 and 12. This research was partially supported by the Australian Research Council (ARC DP210101682, DP210102674) and the Defence Science and Technology Group (DSTG) for the project "Low Observable Detection of Small Objects in Maritime Scenes".
2309.10963
Black-bounce solution in $k$-essence theories
In the present work, we construct black-bounce configurations in the context of $k$-essence theory. The solutions have a regular metric function at the origin. The area metric function is linked to the black-bounce area initially considered by Simpson-Visser, $\Sigma^2=x^2+a^2$. Subsequently, the expressions for the scalar field and scalar potential corresponding to the found solutions are determined, exhibiting phantom behavior everywhere due to violation of Null Energy Condition $(NEC^\phi)$. The Kretschmann scalar is regular throughout spacetime, and the geodesics are complete. The energy conditions are analyzed, verifying that the null $(NEC^\phi_1)$ and dominant energy conditions $(DEC^\phi_1)$ are violated inside and outside the event horizon. Finally, the extrinsic curvature method was applied to determine the sign of the mass on the junction surface.
Carlos F. S. Pereira, Denis C. Rodrigues, Júlio C. Fabris, Manuel E. Rodrigues
2023-09-19T23:22:28Z
http://arxiv.org/abs/2309.10963v1
# Black-bounce solution in \(k\)-essence theories

###### Abstract

In the present work, we construct black-bounce configurations in the context of \(k\)-essence theory. The solutions have a regular metric function at the origin. The area metric function is linked to the black-bounce area initially considered by Simpson-Visser, \(\Sigma^{2}=x^{2}+a^{2}\). Subsequently, the expressions for the scalar field and scalar potential corresponding to the found solutions are determined, exhibiting phantom behavior everywhere due to the violation of the Null Energy Condition (\(NEC^{\phi}\)). The Kretschmann scalar is regular throughout spacetime, and the geodesics are complete. The energy conditions are analyzed, verifying that the null (\(NEC_{1}^{\phi}\)) and dominant (\(DEC_{1}^{\phi}\)) energy conditions are violated inside and outside the event horizon. Finally, the extrinsic curvature method was applied to determine the sign of the mass on the junction surface.

Black-bounce, \(k\)-essence theory, energy conditions

## I Introduction

Recently, Simpson and Visser [1] introduced a new class of solutions called "black-bounce", describing regular black holes and traversable wormholes. These solutions have a non-zero throat radius \(a^{2}\neq 0\) and reduce to the Schwarzschild metric when \(a\to 0\). Subsequent works have explored generalizations and applications of the black-bounce solutions. Lobo et al. [2] constructed new black-bounce solutions by modifying the mass function, recovering the original Simpson-Visser solution [1] for particular parameter values. Rodrigues and Silva [3] investigated the Simpson-Visser black-bounce metric with modifications to the metric function related to the black-bounce area. Junior and Rodrigues [4] obtained novel black-bounce solutions in the context of \(f(T)\) modified gravity theory.

The search for exotic solutions like regular black holes and traversable wormholes requires violating the standard energy conditions: a minimally coupled canonical scalar field cannot describe such geometries. However, Bronnikov and Fabris showed that a canonical scalar field with phantom behavior can allow regular black holes [5]. In this context, \(k\)-essence theory has emerged as an alternative, with its non-canonical kinetic term displaying phantom behavior without exotic matter. \(k\)-Essence theories generalize the scalar field kinetic term and were originally proposed for modeling primordial inflation with just a kinetic term [6; 7; 8]. Generalized kinetic terms are also motivated by string theory [9]. This work examines black-bounce solutions in \(k\)-essence theory with a power-law kinetic term and a potential, focusing on energy condition violations.

In studies of static, spherically symmetric configurations, exotic matter is frequently introduced in order to find regular black hole and wormhole solutions in nonlinear electrodynamics. These new regular metrics constitute exact solutions in general relativity, derived through a combined stress-energy tensor of a scalar field with non-zero self-interaction potential and a magnetic field [10; 11; 12; 13; 14]. However, rotating metrics have also been found to accommodate such regular objects [15; 16; 17]. This analysis investigates black-bounce solutions in \(k\)-essence theory to gain insights into \(k\)-essence and exotic solutions in general relativity. Furthermore, Bronnikov et al. [18] explored Ellis-Bronnikov wormhole solutions in extended gravity theories.
Their analysis shows that the same wormhole metric emerges in Rastall gravity and in \(k\)-essence theories, but with different stability properties. Perturbation analysis reveals inconsistencies in Rastall gravity, while the \(k\)-essence solution is unstable for certain model parameters. The results highlight the challenges in finding simple, traversable, and perturbatively stable wormhole solutions without exotic matter.

The Simpson-Visser metric has been studied in other contexts, such as light deflection and lensing effects [19]. Gravitational lensing was analyzed using black-bounce solutions in a spherically symmetric and stationary spacetime [20; 21]. In the zero mass limit, this reduces to the Ellis-Bronnikov charged wormhole. Quantum dynamics have been studied using the Simpson-Visser metric [22; 23; 24; 25]. Phantom scalar fields are often studied as a source of the exotic matter required to obtain wormhole solutions minimally coupled to general relativity [14; 26]. Their phantom properties are typically associated with violating energy conditions and sometimes with instabilities [27; 28]. Additionally, ghost fields are commonly associated with dark energy candidates, further emphasizing the importance of investigations in this direction [29; 30]. From this perspective, phantom fields have been explored as a matter source for singular [31; 32] and regular black holes [5; 33; 34].

The paper first establishes in Section II the theoretical background of the \(k\)-essence model, including the key relationships and equations. Section III then derives the specific metric function corresponding to a defined black-bounce throat geometry and determines the associated scalar field and potential solutions that satisfy the equations of motion. Next, Section IV examines the geometric properties by defining the regular Kretschmann scalar and the stress-energy tensor components inside and outside the horizon, as well as analyzing the energy conditions required for the black-bounce solutions. Finally, Section V summarizes the main conclusions from this analysis regarding the viability of constructing regular black-bounce geometries within \(k\)-essence theories.

## II General relations

\(k\)-Essence theories are characterized by a non-canonical kinetic term for the scalar field, represented by the Lagrangian

\[\mathcal{L}=\sqrt{-g}[R-F(X,\phi)]\,, \tag{1}\]

where \(R\) is the Ricci scalar and \(X=\eta\phi_{;\rho}\phi^{;\rho}\) denotes the kinetic term. While \(k\)-essence models can include a potential term and non-trivial couplings, the scalar sector is generally minimally coupled to gravity. The parameter \(\eta=\pm 1\) is chosen so as to avoid imaginary terms in the kinetic expression \(X\). By choosing different forms of the function \(F(X,\phi)\), \(k\)-essence theories can describe both phantom and standard scalar fields. The variation of the Lagrangian (1) with respect to the metric tensor and the scalar field yields the field equations

\[G^{\nu}_{\mu}=-T^{\nu}_{\mu}\left(\phi\right)=-\eta F_{X}\phi_{\mu}\phi^{\nu}+\frac{1}{2}\delta^{\nu}_{\mu}F, \tag{2}\]
\[\eta\nabla_{\alpha}\left(F_{X}\phi^{\alpha}\right)-\frac{1}{2}F_{\phi}=0, \tag{3}\]

where \(G^{\nu}_{\mu}\) is the Einstein tensor, \(T^{\nu}_{\mu}\) the stress-energy tensor, \(F_{X}=\frac{\partial F}{\partial X}\), \(F_{\phi}=\frac{\partial F}{\partial\phi}\) and \(\phi_{\mu}=\partial_{\mu}\phi\).
The line element representing the most general spherically symmetric and static spacetime takes the form

\[ds^{2}=e^{2\gamma\left(u\right)}dt^{2}-e^{2\alpha\left(u\right)}du^{2}-e^{2\beta\left(u\right)}d\Omega^{2}, \tag{4}\]

where \(u\) is an arbitrary radial coordinate, \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\) is the line element on the unit sphere, and \(\phi=\phi\left(u\right)\). The non-zero components of the stress-energy tensor are

\[T^{0}_{0}=T^{2}_{2}=T^{3}_{3}=-\frac{F}{2}, \tag{5}\]
\[T^{1}_{1}=-\frac{F}{2}-\eta F_{X}e^{-2\alpha}\phi^{\prime 2}, \tag{6}\]

with \(\phi^{\prime}=\frac{d\phi}{du}\). It is assumed that the function \(X=-\eta e^{-2\alpha}\phi^{\prime 2}\) is positive, which implies \(\eta=-1\). As a result, the equations of motion take the form

\[2\left(F_{X}e^{-\alpha+2\beta+\gamma}\phi^{\prime}\right)^{\prime}-e^{\alpha+2\beta+\gamma}F_{\phi} =0, \tag{7}\]
\[\gamma^{\prime\prime}+\gamma^{\prime}\left(2\beta^{\prime}+\gamma^{\prime}-\alpha^{\prime}\right)-\frac{e^{2\alpha}}{2}\left(F-XF_{X}\right) =0, \tag{8}\]
\[-e^{2\alpha-2\beta}+\beta^{\prime\prime}+\beta^{\prime}\left(2\beta^{\prime}+\gamma^{\prime}-\alpha^{\prime}\right)-\frac{e^{2\alpha}}{2}\left(F-XF_{X}\right) =0, \tag{9}\]
\[-e^{-2\beta}+e^{-2\alpha}\beta^{\prime}\left(\beta^{\prime}+2\gamma^{\prime}\right)-\frac{F}{2}+XF_{X} =0. \tag{10}\]

The notation used here follows that of reference [35]. The following coordinate transformation is defined, \(u=:x\), and the _quasiglobal_ gauge \(\alpha\left(u\right)+\gamma\left(u\right)=0\) is employed. As a result, the line element in Eq. (4) can be expressed in the form

\[ds^{2}=A\left(x\right)dt^{2}-\frac{dx^{2}}{A\left(x\right)}-\Sigma^{2}\left(x\right)d\Omega^{2}, \tag{11}\]

where the metric functions are defined as \(A(x)=e^{2\gamma}=e^{-2\alpha}\) and \(e^{\beta}=\Sigma(x)\). The equations of motion defined in Eqs. (7-10) can then be rewritten in the new coordinates. Combining Eqs. (8-10) yields the expressions

\[2A\frac{\Sigma^{\prime\prime}}{\Sigma}-XF_{X} =0, \tag{12}\]
\[A^{\prime\prime}\Sigma^{2}-A\left(\Sigma^{2}\right)^{\prime\prime}+2 =0, \tag{13}\]

where the primes now represent derivatives with respect to \(x\). The two remaining equations, Eq. (7) and Eq. (10), are rewritten in the new coordinates as

\[2\left(F_{X}A\Sigma^{2}\phi^{\prime}\right)^{\prime}-\Sigma^{2}F_{\phi} =0, \tag{14}\]
\[\frac{1}{\Sigma^{2}}\left(-1+A^{\prime}\Sigma^{\prime}\Sigma+A{\Sigma^{\prime}}^{2}\right)-\frac{F}{2}+XF_{X} =0. \tag{15}\]

## III General solution

The analysis aims to find black-bounce solutions to the \(k\)-essence equations of motion [1; 36]. The metric function \(\Sigma^{2}(x)=x^{2}+a^{2}\) from the original work [1] is used, where the nonzero throat radius \(a\) gives regular black holes or wormholes. With this area function \(\Sigma^{2}(x)\) and the \(k\)-essence equation of motion Eq. (13), the corresponding metric function \(A(x)\) is derived. The general solution of the differential equation Eq. (13) is given by

\[A\left(x\right)=1+C_{1}\left[\left(x^{2}+a^{2}\right)\arctan\left(\frac{x}{a}\right)+xa\right]+C_{2}\left(x^{2}+a^{2}\right), \tag{16}\]

where \(C_{1}\) and \(C_{2}\) are constants. Certain requirements are imposed on the solution Eq. (16), such as being asymptotically flat, leading to a constraint between the constants, \(C_{2}=-\frac{\pi}{2}C_{1}\). Furthermore, the solution should approach the Simpson-Visser solution as \(x\to 0\), namely, \(A(x\to 0)=1-\frac{2m}{a}\).
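These statements can be verified directly. The short sympy sketch below (purely illustrative, with all symbols taken positive) checks that Eq. (16) solves Eq. (13) identically and that the flatness and Simpson-Visser requirements fix the constants as stated:

```python
# Sketch (sympy): check that Eq. (16) solves A''*Sigma^2 - A*(Sigma^2)'' + 2 = 0
# (Eq. (13)) and that the boundary conditions fix C1 and C2.
import sympy as sp

x, a, m, C1, C2 = sp.symbols("x a m C1 C2", positive=True)
Sigma2 = x**2 + a**2
A = 1 + C1*(Sigma2*sp.atan(x/a) + x*a) + C2*Sigma2            # Eq. (16)

eq13 = sp.diff(A, x, 2)*Sigma2 - A*sp.diff(Sigma2, x, 2) + 2
print(sp.simplify(eq13))                                      # -> 0

# Asymptotic flatness as x -> +oo enforces C2 = -(pi/2)*C1 ...
A_flat = A.subs(C2, -sp.pi/2*C1)
print(sp.limit(A_flat, x, sp.oo))                             # -> 1
# ... and A(0) = 1 - 2m/a then fixes C1, reproducing Eq. (17):
print(sp.solve(sp.Eq(A_flat.subs(x, 0), 1 - 2*m/a), C1))      # -> [4*m/(pi*a**3)]
```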
Hence, the constant is set as \(C_{1}=\frac{4m}{\pi a^{3}}\). The resulting solution is

\[A\left(x\right)=1+\left(\frac{4m}{\pi a^{3}}\right)\left[xa+\left(x^{2}+a^{2}\right)\left(\arctan\left(\frac{x}{a}\right)-\frac{\pi}{2}\right)\right]. \tag{17}\]

Figure 1(a) shows curves of the metric function from Eq. (17) for various throat radii \(a\), inside and outside the event horizon. For all \(a\), \(A(x)\) diverges as \(x\rightarrow-\infty\) and is asymptotically flat as \(x\rightarrow\infty\). This general solution, Eq. (17), is regular at the origin and, for \(x\rightarrow-\infty\), asymptotically approaches a de Sitter-Schwarzschild form. Seeing this requires the series expansion of \(\arctan\left(\frac{x}{a}\right)\) for \(x\rightarrow-\infty\), discarding higher-order terms \(\mathcal{O}\left(\frac{1}{x}\right)\). Expanding the general metric function in Eq. (17) then gives

\[A\left(x\right)=1-\frac{8m}{3\pi}\left(\frac{1}{x}\right)-\frac{4m}{a^{3}}\left(x^{2}+a^{2}\right). \tag{18}\]

The general metric function in Eq. (17) is equivalent to the solution in Eq. (10) of [5], with the redefinitions \(\rho_{0}=\frac{4m}{\pi}\) and \(c=-\frac{2m}{a}\). This corresponds to the canonical \(n=1\) phantom scalar field case in \(k\)-essence theory. The regularity of the general solution in Eq. (17) can be seen in the Kretschmann scalar (Fig. 6(a)), which tends to zero as \(x\rightarrow\infty\) (Minkowski limit) and is constant and positive as \(x\rightarrow-\infty\).

The behavior of the scalar field for the obtained \(k\)-essence solution, with \(n=\frac{1}{3}\), can be examined using the general metric solution in Eq. (17). The scalar field \(\phi(x)\) for this metric is given by

\[\phi\left(x\right)=\frac{D_{1}}{4a^{5}}\left[\frac{xa^{3}}{\Sigma^{4}}+\frac{3xa}{2\Sigma^{2}}+\frac{3}{2}\arctan\left(\frac{x}{a}\right)\right]-\frac{D_{1}m}{\pi a^{2}\Sigma^{4}}-\frac{D_{1}m}{a^{6}}\left[\frac{ax}{\Sigma^{2}}+\arctan\left(\frac{x}{a}\right)\right] \tag{19}\]
\[+\left(\frac{2D_{1}m}{\pi a^{6}}\right)\left[\frac{a^{2}}{2\Sigma^{2}}+\arctan\left(\frac{x}{a}\right)\left(\frac{xa}{\Sigma^{2}}+\frac{1}{2}\arctan\left(\frac{x}{a}\right)\right)\right],\]

where \(D_{1}=\left(\frac{6a^{2}}{F_{0}}\right)^{\frac{3}{2}}\) is a constant. As shown in Figure 2, \(\phi(x)\) approaches constant values depending on the throat radius \(a\) as \(x\rightarrow\pm\infty\), specifically

\[\phi(x\rightarrow-\infty)=-\frac{9\pi\sqrt{\frac{3}{2}}}{4a^{3}}(a-4m)\quad\text{and}\quad\phi(x\rightarrow\infty)=\frac{3\pi\sqrt{\frac{3}{2}}}{4a^{3}}(3a-4m), \tag{20}\]

where we set \(F_{0}=1\). Similarly, the potential \(V(x)\) can be analyzed. The potential for the metric in Eq. (17) is given by

\[V\left(x\right)=\frac{2a^{2}}{\Sigma^{4}}-\frac{cax}{\Sigma^{4}}\left(\Sigma^{2}+2x^{2}\right)-\frac{c}{\Sigma^{2}}\left(3x^{2}-a^{2}\right)\left[\arctan\left(\frac{x}{a}\right)-\frac{\pi}{2}\right], \tag{21}\]

where \(c=\frac{4m}{\pi a^{3}}\) is a combination of constants. As Figure 2 exhibits, \(V(x)\) tends to the constant \(3\pi c\) as \(x\rightarrow-\infty\) and to zero as \(x\rightarrow\infty\).
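A quick numeric sanity check of these limits (illustrative values \(m=1\), \(a=3\), not part of the original derivation) confirms \(A(0)=1-2m/a\), asymptotic flatness for \(x\to+\infty\), and the de Sitter-Schwarzschild behavior of Eq. (18) for \(x\to-\infty\):

```python
# Numeric check of the limits quoted for Eq. (17). Units with m = 1 and an
# illustrative throat radius a = 3 (outside the horizon).
import numpy as np

m, a = 1.0, 3.0
A = lambda x: 1 + (4*m/(np.pi*a**3))*(x*a + (x**2 + a**2)
                                      *(np.arctan(x/a) - np.pi/2))

print(A(0.0), 1 - 2*m/a)          # both ~ 0.3333, the Simpson-Visser value
print(A(1e6))                     # ~ 1.0: asymptotically flat for x -> +oo
x = -1e3                          # deep in the x -> -oo region
approx = 1 - 8*m/(3*np.pi*x) - (4*m/a**3)*(x**2 + a**2)   # Eq. (18)
print(A(x)/approx)                # ~ 1.0: Eq. (18) matches Eq. (17)
```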
### Black-Bounce solution

In order to construct black-bounce solutions, the general solution in Eq. (17) is matched so as to produce the appropriate geometry. First, the requirement is imposed that the metric function be asymptotically flat in both limits, so as to recover the Schwarzschild metric. To achieve this, the metric function Eq. (17) is bisected at \(x=0\) and mirrored, defining two regions (see Figure 1). The metric function is thus expressed as

\[A_{+}\left(x\right)=1+\left(\frac{4m}{\pi a^{3}}\right)\left[xa+\left(x^{2}+a^{2}\right)\left(\arctan\left(\frac{x}{a}\right)-\frac{\pi}{2}\right)\right]\qquad x\geq 0,\]
\[A_{-}\left(x\right)=1-\left(\frac{4m}{\pi a^{3}}\right)\left[xa+\left(x^{2}+a^{2}\right)\left(\arctan\left(\frac{x}{a}\right)+\frac{\pi}{2}\right)\right]\qquad x\leq 0. \tag{22}\]

Figure 3 shows curves of the derivatives, up to fourth order, of the metric function Eq. (22). Figure 3(a) shows the derivatives for a throat radius \(a=m\), inside the event horizon, while Figure 3(b) displays the derivatives for a radius \(a=4m\), outside the event horizon. The odd derivatives of the metric function Eq. (22) exhibit a discontinuity at the origin, as shown in Figure 3, while the even derivatives are continuous. This arises from the construction method in Eq. (22) and implies that a spherically symmetric thin shell exists at the junction point \(x=0\). Consequently, only traversable wormhole black-bounce solutions are possible, eliminating black hole solutions. This restriction is further examined in Appendix A and is similar to previous studies [4].

At this stage, the metric functions have been constructed to meet all necessary conditions. The set of equations of motion, Eqs. (12-15), can be rewritten in terms of the metric function \(A_{\pm}(x)\) for each region. Equation (13) was used with the area metric function from the original work [1] to derive the corresponding function \(A_{\pm}(x)\). To obtain the associated scalar field, Eq. (12) is solved for the \(k\)-essence field, with \(X=-\eta A_{\pm}\phi^{\prime 2}\) and \(F(X)=F_{0}X^{n}-2V(\phi)\), where \(F_{0}\) is a constant, \(n\) is real, and \(V(\phi)\) is the potential. With \(n=\frac{1}{3}\) and \(\eta=-1\) fixed, Eq. (12) becomes

\[\phi^{\prime}_{\pm}=\left(\frac{6}{F_{0}}\frac{\Sigma^{\prime\prime}}{\Sigma}\right)^{\frac{3}{2}}A_{\pm}. \tag{23}\]

Figure 1: (a) shows curves for various throat radius values \(a\); the function is not asymptotically flat in both \(x\rightarrow\pm\infty\) limits. (b) shows radii inside and outside the horizon, with the metric function defined by matching asymptotically flat solutions at \(x=0\) for \(x\rightarrow\pm\infty\).

Figure 2: Plots of the scalar field and the potential for the general metric function Eq. (17), for throat radii inside and outside the event horizon. We fixed the constant \(F_{0}=1\).

Figure 3: (a) shows the odd and even derivatives of the asymptotically flat metric function for a radius \(a=m\), inside the horizon. (b) shows the derivatives for a radius \(a=4m\), outside the horizon.

The above relation is a first-order differential equation containing only the metric functions \(\Sigma\) and \(A_{\pm}\). Direct integration produces the scalar field \(\phi_{\pm}(x)\), already found in Eq. (19), now for each region:

\[\phi_{\pm}\left(x\right)=\frac{D_{1}}{4a^{5}}\left[\frac{xa^{3}}{\Sigma^{4}}+\frac{3xa}{2\Sigma^{2}}+\frac{3}{2}\arctan\left(\frac{x}{a}\right)\right]\mp\frac{D_{1}m}{\pi a^{2}\Sigma^{4}}-\frac{D_{1}m}{a^{6}}\left[\frac{ax}{\Sigma^{2}}+\arctan\left(\frac{x}{a}\right)\right] \tag{24}\]
\[\pm\left(\frac{2D_{1}m}{\pi a^{6}}\right)\left[\frac{a^{2}}{2\Sigma^{2}}+\arctan\left(\frac{x}{a}\right)\left(\frac{xa}{\Sigma^{2}}+\frac{1}{2}\arctan\left(\frac{x}{a}\right)\right)\right],\]

where \(D_{1}=\left(\frac{6a^{2}}{F_{0}}\right)^{\frac{3}{2}}\) is a constant.
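The integration can be cross-checked numerically. The sketch below (illustrative values \(F_{0}=1\), \(m=1\), \(a=3\), upper-sign branch) compares a central finite difference of Eq. (24) against the right-hand side of Eq. (23), written as \(D_{1}A_{+}/\Sigma^{6}\) using \(\Sigma^{\prime\prime}/\Sigma=a^{2}/\Sigma^{4}\):

```python
# Finite-difference check that phi_+ of Eq. (24) satisfies Eq. (23),
# phi' = (6 Sigma''/(F0 Sigma))^(3/2) A_+ = D1*A_+/Sigma^6.
# Illustrative values: F0 = 1, m = 1, a = 3.
import numpy as np

m, a, F0 = 1.0, 3.0, 1.0
D1 = (6*a**2/F0)**1.5

def A(x):                                     # Eq. (22), x >= 0 branch
    return 1 + (4*m/(np.pi*a**3))*(x*a + (x**2+a**2)*(np.arctan(x/a)-np.pi/2))

def phi(x):                                   # Eq. (24), x >= 0 branch
    S2, at = x**2 + a**2, np.arctan(x/a)
    return (D1/(4*a**5)*(x*a**3/S2**2 + 1.5*x*a/S2 + 1.5*at)
            - D1*m/(np.pi*a**2*S2**2)
            - D1*m/a**6*(a*x/S2 + at)
            + 2*D1*m/(np.pi*a**6)*(a**2/(2*S2) + at*(x*a/S2 + at/2)))

x, h = 2.0, 1e-6
lhs = (phi(x+h) - phi(x-h))/(2*h)             # numerical phi'
rhs = D1*A(x)/(x**2 + a**2)**3                # right-hand side of Eq. (23)
print(lhs, rhs)                               # the two values agree
```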
To satisfy the system Eqs. (12-15), a scalar potential is required. Eq. (15) is thus used, together with the metric functions \(\Sigma\) and \(A_{\pm}(x)\) and the scalar field Eq. (24), to define the associated potential \(V_{\pm}(x)\):

\[V_{\pm}\left(x\right)=A_{\pm}\frac{\Sigma^{\prime\prime}}{\Sigma}+\frac{1}{\Sigma^{2}}-\frac{A_{\pm}^{\prime}\Sigma^{\prime}}{\Sigma}-\frac{A_{\pm}{\Sigma^{\prime}}^{2}}{\Sigma^{2}}. \tag{25}\]

The potential in Eq. (25) is obtained through a procedure analogous to the scalar field definition in Eq. (24). With some algebraic simplification, it can be expressed explicitly as

\[V_{\pm}\left(x\right)=\frac{2a^{2}}{\Sigma^{4}}\mp\frac{cax}{\Sigma^{4}}\left(\Sigma^{2}+2x^{2}\right)\mp\frac{c}{\Sigma^{2}}\left(3x^{2}-a^{2}\right)\left[\arctan\left(\frac{x}{a}\right)\mp\frac{\pi}{2}\right], \tag{26}\]

where \(c=\frac{4m}{\pi a^{3}}\) is a combination of constants. With the scalar potential defined, verification shows that all the equations of motion are satisfied. In particular, Eq. (14), which was not used in the derivation, is also satisfied in both regions:

\[\frac{dV_{\pm}}{dx}+\frac{F_{0}}{3}\left(\frac{\phi_{\pm}^{\prime}}{\Sigma^{2}}\right)\left(\sqrt{\frac{F_{0}\Sigma^{5}}{6\Sigma^{\prime\prime}}}\right)^{\prime}=0. \tag{27}\]

Figures 4 and 5 show the scalar field Eq. (24) and the potential Eq. (26) for various throat radii. The discontinuity and symmetry in the curves reflect the matching procedure for the metric function \(A_{\pm}(x)\). Discontinuities in the odd derivatives are also visible. The scalar field exhibits oscillations resulting from the interaction with the thin shell at \(x=0\) for radii inside the horizon, as shown in Figure 4(a). In contrast, the potential acts as a barrier, growing near the horizon and decaying at larger radii outside the horizon, as shown in Figure 5(b). For radii inside the horizon, Figure 5(a), the potential takes a form similar to the Pöschl-Teller potential [37; 38; 39; 40].

Figure 4: Curves of the scalar field Eq. (24) for throat radii inside the horizon are displayed in (a), with the constant fixed as \(F_{0}=1\). In (b), curves are exhibited for radii outside the horizon, also fixing \(F_{0}=1\).
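Both statements, that the closed form Eq. (26) reproduces Eq. (25) and that Eq. (27) holds identically, can be confirmed symbolically. A sketch for the upper-sign branch (all symbols assumed positive):

```python
# Sketch (sympy): Eq. (26) (upper sign) reproduces Eq. (25), and the remaining
# equation of motion, Eq. (27), is satisfied identically.
import sympy as sp

x, a, m, F0 = sp.symbols("x a m F0", positive=True)
c = 4*m/(sp.pi*a**3)
Sig = sp.sqrt(x**2 + a**2)
A = 1 + c*(x*a + Sig**2*(sp.atan(x/a) - sp.pi/2))             # Eq. (22), x >= 0

V25 = (A*sp.diff(Sig, x, 2)/Sig + 1/Sig**2
       - sp.diff(A, x)*sp.diff(Sig, x)/Sig
       - A*sp.diff(Sig, x)**2/Sig**2)                         # Eq. (25)
V26 = (2*a**2/Sig**4 - c*a*x*(Sig**2 + 2*x**2)/Sig**4
       - c/Sig**2*(3*x**2 - a**2)*(sp.atan(x/a) - sp.pi/2))   # Eq. (26)
print(sp.simplify(V25 - V26))                                 # -> 0

phi_prime = (6*sp.diff(Sig, x, 2)/(F0*Sig))**sp.Rational(3, 2)*A    # Eq. (23)
eq27 = (sp.diff(V26, x)
        + F0/3*(phi_prime/Sig**2)
          *sp.diff(sp.sqrt(F0*Sig**5/(6*sp.diff(Sig, x, 2))), x))   # Eq. (27)
print(sp.simplify(eq27))                                      # -> 0
```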
## IV Geometric quantities

With the solutions constructed, the focus now turns to investigating the geometric properties before analyzing the energy conditions. The spherically symmetric line element is defined as

\[ds^{2}=A_{\pm}\left(x\right)dt^{2}-\frac{dx^{2}}{A_{\pm}\left(x\right)}-\Sigma^{2}\left(x\right)d\Omega^{2}. \tag{28}\]

Constructing the Kretschmann scalar requires the non-zero Riemann tensor components. With the area metric function defined as \(\Sigma^{2}(x)=x^{2}+a^{2}\), the non-vanishing elements are:

\[R^{tr}_{\phantom{tr}tr}=\frac{A^{\prime\prime}_{\pm}}{2},\quad R^{\theta\phi}_{\phantom{\theta\phi}\theta\phi}=\frac{A_{\pm}\Sigma^{\prime 2}-1}{\Sigma^{2}},\quad R^{t\theta}_{\phantom{t\theta}t\theta}=R^{t\phi}_{\phantom{t\phi}t\phi}=\frac{A^{\prime}_{\pm}\Sigma^{\prime}}{2\Sigma},\quad R^{r\theta}_{\phantom{r\theta}r\theta}=R^{r\phi}_{\phantom{r\phi}r\phi}=\frac{A^{\prime}_{\pm}\Sigma^{\prime}+2A_{\pm}\Sigma^{\prime\prime}}{2\Sigma}. \tag{29}\]

Using the non-zero Riemann tensor components from Eq. (29), the Kretschmann scalar \(K=R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}\) can be constructed as a semi-positive sum of quadratic terms [41; 2]:

\[K=4\left(R^{tr}_{\phantom{tr}tr}\right)^{2}+4\left(R^{t\theta}_{\phantom{t\theta}t\theta}\right)^{2}+4\left(R^{t\phi}_{\phantom{t\phi}t\phi}\right)^{2}+4\left(R^{r\theta}_{\phantom{r\theta}r\theta}\right)^{2}+4\left(R^{r\phi}_{\phantom{r\phi}r\phi}\right)^{2}+4\left(R^{\theta\phi}_{\phantom{\theta\phi}\theta\phi}\right)^{2}. \tag{30}\]

Imposing the spherical symmetry conditions, this can be written in the reduced form

\[K=4\left(R^{tr}_{\phantom{tr}tr}\right)^{2}+8\left(R^{t\theta}_{\phantom{t\theta}t\theta}\right)^{2}+8\left(R^{r\theta}_{\phantom{r\theta}r\theta}\right)^{2}+4\left(R^{\theta\phi}_{\phantom{\theta\phi}\theta\phi}\right)^{2}. \tag{31}\]

The Riemann tensor components in Eq. (29) show that the Kretschmann scalar must be defined piecewise, due to its dependence on the metric function \(A_{\pm}(x)\). Thus, the Kretschmann scalar is

\[K_{+}\left(x\right)=\frac{\left(\Sigma^{2}A^{\prime\prime}_{+}\right)^{2}+2\left(\Sigma\Sigma^{\prime}A^{\prime}_{+}\right)^{2}+2\Sigma^{2}\left(\Sigma^{\prime}A^{\prime}_{+}+2A_{+}\Sigma^{\prime\prime}\right)^{2}+4\left(1-A_{+}{\Sigma^{\prime}}^{2}\right)^{2}}{\Sigma^{4}}\qquad x\geq 0,\]
\[K_{-}\left(x\right)=\frac{\left(\Sigma^{2}A^{\prime\prime}_{-}\right)^{2}+2\left(\Sigma\Sigma^{\prime}A^{\prime}_{-}\right)^{2}+2\Sigma^{2}\left(\Sigma^{\prime}A^{\prime}_{-}+2A_{-}\Sigma^{\prime\prime}\right)^{2}+4\left(1-A_{-}{\Sigma^{\prime}}^{2}\right)^{2}}{\Sigma^{4}}\qquad x\leq 0, \tag{32}\]

with the limit

\[K\left(x\to 0\right)=\frac{4\left(3a^{2}-8am+12m^{2}\right)}{a^{6}}. \tag{33}\]

Note in Eq. (33) that the Kretschmann scalar is regular in the limit \(x\to 0\) and that, therefore, no singularity is present. Likewise, in the limit \(x\rightarrow\pm\infty\), the scalar goes to zero. Figure 6(b) plots the Kretschmann scalar for throat radii inside and outside the horizon. For \(a=1.8m\), within the horizon, Eq. (33) gives a finite value at the origin. Similarly, the curves for radii outside the horizon also exhibit finite values at the origin.

Figure 5: Curves of the potential Eq. (26) for throat radii inside the horizon are shown in (a). In (b), curves are shown for radii outside the horizon.
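This regularity is easy to probe numerically. The sketch below (illustrative values \(m=1\), \(a=1.8\), with derivatives of \(A_{+}\) taken by finite differences) evaluates Eq. (32) at the origin and reproduces the closed form of Eq. (33):

```python
# Numeric check of Eq. (33): the Kretschmann scalar of Eq. (32), built from
# the x >= 0 branch, stays finite at the throat. Illustrative values m = 1,
# a = 1.8 (inside the horizon).
import numpy as np

m, a, eps = 1.0, 1.8, 1e-4

def A(x):
    return 1 + (4*m/(np.pi*a**3))*(x*a + (x**2+a**2)*(np.arctan(x/a)-np.pi/2))

def K(x):                                     # Eq. (32), x >= 0 branch
    S = np.sqrt(x**2 + a**2)
    Sp, Spp = x/S, a**2/S**3                  # Sigma', Sigma''
    Ap = (A(x+eps) - A(x-eps))/(2*eps)        # numerical A'
    App = (A(x+eps) - 2*A(x) + A(x-eps))/eps**2   # numerical A''
    return ((S**2*App)**2 + 2*(S*Sp*Ap)**2
            + 2*S**2*(Sp*Ap + 2*A(x)*Spp)**2
            + 4*(1 - A(x)*Sp**2)**2)/S**4

print(K(0.0))                                 # ~ 0.861
print(4*(3*a**2 - 8*a*m + 12*m**2)/a**6)      # Eq. (33): ~ 0.861
```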
### Energy conditions

Analyzing the energy conditions requires starting from Einstein's equation [42], previously defined in Eq. (2). This gives the non-zero stress-energy tensor components [43] as

\[{T^{\mu}}_{\nu}=\mathrm{diag}\left[\rho^{\phi},-p_{1}^{\phi},-p_{2}^{\phi},-p_{2}^{\phi}\right], \tag{34}\]

where \(\rho^{\phi}\) is the scalar field energy density, \(p_{1}^{\phi}\) the radial pressure, and \(p_{2}^{\phi}\) the tangential pressure. Using the stress-energy tensor diagonal component expressions in Eqs. (5-6) for the \(k\)-essence configuration \(n=\frac{1}{3}\) from Eq. (23) and the associated potential Eq. (25),

\[\rho_{\pm}^{\phi} = -\frac{F_{0}}{2}\left[-\eta A_{\pm}\left(\phi_{\pm}^{\prime}\right)^{2}\right]^{\frac{1}{3}}+V_{\pm}\left(x\right)=-\frac{3A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+V_{\pm}\left(x\right), \tag{35}\]
\[p_{1\pm}^{\phi} = -T_{1}^{1}=\frac{A_{\pm}\Sigma^{\prime\prime}}{\Sigma}-V_{\pm}\left(x\right), \tag{36}\]
\[p_{2\pm}^{\phi} = -T_{2}^{2}=-T_{0}^{0}=-\rho_{\pm}^{\phi}=\frac{3A_{\pm}\Sigma^{\prime\prime}}{\Sigma}-V_{\pm}\left(x\right). \tag{37}\]

The stress-energy tensor diagonal components so defined are only valid outside the horizon, where \(A_{\pm}>0\), with metric signature \(\left(+,-,-,-\right)\), \(t\) timelike and \(x\) spacelike. Inside the horizon, \(t\) becomes spacelike and \(x\) timelike. The signature changes to \(\left(-,+,-,-\right)\) with \(A_{\pm}<0\), reversing the roles of the coordinates. The stress-energy tensor components must then be rewritten as

\[{T^{\mu}}_{\nu}=\mathrm{diag}\left[-p_{1}^{\phi},\rho^{\phi},-p_{2}^{\phi},-p_{2}^{\phi}\right], \tag{38}\]

and, therefore, the equations for the energy density, radial pressure, and tangential pressure must be rewritten as

\[\rho_{\pm}^{\phi} = -\frac{A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+V_{\pm}\left(x\right), \tag{39}\]
\[p_{1\pm}^{\phi} = \frac{3A_{\pm}\Sigma^{\prime\prime}}{\Sigma}-V_{\pm}\left(x\right), \tag{40}\]
\[p_{2\pm}^{\phi} = -T_{2}^{2}=-T_{0}^{0}=-\left(-p_{1\pm}^{\phi}\right)=\frac{3A_{\pm}\Sigma^{\prime\prime}}{\Sigma}-V_{\pm}\left(x\right). \tag{41}\]

The constructed geometric quantities depend on the metric function \(A_{\pm}(x)\), so they are defined piecewise. With the energy density and pressure components defined, the energy conditions for the black-bounce solutions can now be examined [36].

Figure 6: The plot on the right shows the Kretschmann scalar for selected throat radii, with \(a=1.8m\) inside the horizon (blue curve) and \(a=3m\), \(4m\) outside the horizon (red and purple curves, respectively). On the left, we show the Kretschmann scalar for throat values inside and outside the horizon for the general expression Eq. (17).

The commonly used energy conditions are inequalities relating the energy density and pressures [36]:

\[NEC_{1,2} = WEC_{1,2}=SEC_{1,2}\Longleftrightarrow\rho_{\pm}^{\phi}+p_{(1,2)\pm}^{\phi}\geq 0, \tag{42}\]
\[SEC_{3} \Longleftrightarrow\rho_{\pm}^{\phi}+p_{1\pm}^{\phi}+2p_{2\pm}^{\phi}\geq 0, \tag{43}\]
\[DEC_{1,2} \Longleftrightarrow\rho_{\pm}^{\phi}+p_{(1,2)\pm}^{\phi}\geq 0\qquad\text{and}\qquad\rho_{\pm}^{\phi}-p_{(1,2)\pm}^{\phi}\geq 0, \tag{44}\]
\[DEC_{3} = WEC_{3}\Longleftrightarrow\rho_{\pm}^{\phi}\geq 0. \tag{45}\]

The energy conditions can be explicitly expressed in terms of the metric functions by substituting the stress-energy tensor components from Eqs. (35-37) into the defining inequalities Eqs. (42-45). This gives the energy conditions in the region outside the event horizon, where \(A_{\pm}>0\) and \(t\) is timelike, as:

\[NEC_{1}^{\phi} = WEC_{1}^{\phi}=SEC_{1}^{\phi}\Longleftrightarrow-\frac{2A_{\pm}\Sigma^{\prime\prime}}{\Sigma}\geq 0, \tag{46}\]
\[NEC_{2}^{\phi} = WEC_{2}^{\phi}=SEC_{2}^{\phi}\Longleftrightarrow 0\geq 0, \tag{47}\]
\[SEC_{3}^{\phi} \Longleftrightarrow\frac{4\Sigma^{\prime\prime}A_{\pm}}{\Sigma}-2V_{\pm}\left(x\right)\geq 0, \tag{48}\]
\[DEC_{1}^{\phi} \Longleftrightarrow-\frac{4\Sigma^{\prime\prime}A_{\pm}}{\Sigma}+2V_{\pm}\left(x\right)\geq 0, \tag{49}\]
\[DEC_{2}^{\phi} \Longleftrightarrow-\frac{6\Sigma^{\prime\prime}A_{\pm}}{\Sigma}+2V_{\pm}\left(x\right)\geq 0, \tag{50}\]
\[DEC_{3}^{\phi} = WEC_{3}^{\phi}\Longleftrightarrow-\frac{3A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+V_{\pm}\left(x\right)\geq 0. \tag{51}\]
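Since \(\Sigma^{\prime\prime}/\Sigma=a^{2}/\Sigma^{4}>0\), Eq. (46) already shows analytically that \(NEC_{1}^{\phi}\) fails wherever \(A_{\pm}>0\). The short numeric sketch below (illustrative values \(m=1\), \(a=4\), a horizonless wormhole branch) makes this explicit for the outside-horizon set just derived:

```python
# Numeric illustration of Eq. (46) outside the horizon (A_+ > 0):
# rho + p_1 = -2*A*Sigma''/Sigma is negative everywhere, so NEC_1^phi
# (and hence DEC_1^phi) is violated. Illustrative values m = 1, a = 4.
import numpy as np

m, a = 1.0, 4.0

def A(x):
    return 1 + (4*m/(np.pi*a**3))*(x*a + (x**2+a**2)*(np.arctan(x/a)-np.pi/2))

for x in [0.0, 1.0, 5.0, 20.0]:
    S2 = x**2 + a**2
    nec1 = -2*A(x)*a**2/S2**2                 # rho + p_1, Eq. (46)
    print(f"x = {x:5.1f}:  A = {A(x):.4f},  rho + p_1 = {nec1:.3e}")
    # every line prints a negative value: NEC_1^phi is violated
```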
This gives the energy conditions for \(A_{\pm}<0\) as: \[NEC_{1}^{\phi} = WEC_{1}^{\phi}=SEC_{1}^{\phi}\Longleftrightarrow\frac{2A_{\pm}\Sigma^{\prime\prime}}{\Sigma}\geq 0, \tag{52}\] \[NEC_{2}^{\phi} = WEC_{2}^{\phi}=SEC_{2}^{\phi}\Longleftrightarrow\frac{2A_{\pm}\Sigma^{\prime\prime}}{\Sigma}\geq 0,\] (53) \[SEC_{3}^{\phi} \Longleftrightarrow\frac{8A_{\pm}\Sigma^{\prime\prime}}{\Sigma}-2V_{\pm}\left(x\right)\geq 0,\] (54) \[DEC_{1}^{\phi} \Longleftrightarrow-\frac{4A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+2V_{\pm}\left(x\right)\geq 0,\] (55) \[DEC_{2}^{\phi} \Longleftrightarrow-\frac{4A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+2V_{\pm}\left(x\right)\geq 0,\] (56) \[DEC_{3}^{\phi} = WEC_{3}^{\phi}\Longleftrightarrow-\frac{A_{\pm}\Sigma^{\prime\prime}}{\Sigma}+V_{\pm}\left(x\right)\geq 0. \tag{57}\] Equations (46) and (52) demonstrate that the null energy condition (\(NEC_{1}^{\phi}\)) is violated both inside and outside the event horizon. Likewise, \(NEC_{2}^{\phi}\), given by Eq. (47), is satisfied outside the horizon but violated inside, according to Eq. (53). Since \(DEC_{2}^{\phi}\) is connected to \(NEC_{2}^{\phi}\), it is also violated within the horizon, through Eq. (56). Similarly, \(DEC_{1}^{\phi}\) is violated both outside and inside the horizon, which is tied to the violation of \(NEC_{1}^{\phi}\). Complementarily, Fig. 7(a) exhibits the violation of \(DEC_{2}^{\phi}\) for all radii outside the horizon. However, \(DEC_{3}^{\phi}\) is violated outside but satisfied inside the horizon (Fig. 7(b)). Finally, \(SEC_{3}^{\phi}\) is violated both inside and outside, as shown in Fig. 8. ## V Conclusion The present analysis utilizes the \(k\)-essence field equations describing a phantom scalar field to construct black-bounce solutions that are not possible with an ordinary scalar field. It should be noted that k-essence does not constitute a modified theory of gravity; rather, it introduces a scalar field through a non-standard kinetic term. The analysis begins with the areal metric function \(\Sigma^{2}=x^{2}+a^{2}\), containing a throat radius \(a\), as in the original black-bounce proposals [1]. The corresponding metric function in \(k\)-essence theory is derived by applying boundary conditions to obtain an asymptotically flat spacetime. This defines the full metric and enables the study of the black-bounce structures. The analysis first attempts to satisfy the equations of motion with only a kinetic term for the scalar field. However, this is insufficient, requiring the introduction of a scalar potential as well. Analytical expressions for the scalar field and the necessary potential are derived, with the full set of equations satisfied. The possibility of using alternative black-bounce throat metric functions, as studied in [3], is also examined but leads to algebraically intractable solutions. With the derived analytical metric function and the known black-bounce throat function, the Kretschmann scalar is verified to be regular at the origin for radii inside and outside the horizon. The mixed stress-energy tensor components are defined on each side of the horizon, with the roles of \(t\) and \(x\) reversed. Analysis of the energy conditions shows violation of the null energy condition (\(NEC_{1}^{\phi}\)) inside and outside the horizon, consistent with other black-bounce solutions. Violation of the null energy condition is the main ingredient for building regular black-bounce geometries.
As is well known, in general relativity the strong energy condition (\(SEC_{3}\)) is typically violated within the event horizon for regular black hole solutions, while the weak energy condition (\(WEC\)) can be violated throughout spacetime in some cases [1, 4]. However, the solution presented in this work exhibits different behavior, with the strong energy condition (\(SEC_{3}^{\phi}\)) being violated both outside and inside the event horizon, as shown in Fig. 8. Meanwhile, the weak energy condition (\(WEC_{3}^{\phi}\)) is violated outside the horizon but satisfied inside, as depicted in Fig. 7(b). An interesting observation is that the visual form taken by the potential in Fig. 5 may indicate a possible stability of the solutions when subjected to radial perturbations, opening the possibility of normal and quasi-normal modes [37, 38, 39, 40]. Figure 8: Strong energy condition (\(SEC\)) plot combining energy density and all pressure components, for various radii inside and outside the horizon. Figure 7: DEC plot relating energy density and tangential pressure, for radii inside and outside the event horizon. ###### Acknowledgements. We thank CNPq, CAPES and FAPES for financial support. The authors thank M. E. Rodrigues, M. V. S. Silva and E. L. Martins for their fruitful discussions. ## Appendix A Thin shell in black-bounce solution This appendix demonstrates the sign of the surface mass at the point where the metric function was matched (Eq. 22). For the line element contained in Eq. (28), the coordinate transformation \(r^{2}=x^{2}+a^{2}\) will be considered, which yields an equivalent line element and was adopted in [44; 45]. To ease comparison, this appendix uses the same metric signature as the works cited above; appendix equations are numbered (A1), (A2), ... to avoid clashing with the main text. In this way, the line element Eq. (28) gets rewritten as \[ds^{2}=A_{\pm}\left(r\right)dt^{2}-\frac{dr^{2}}{A_{\pm}\left(r\right)\left(1-\frac{a^{2}}{r^{2}}\right)}-r^{2}d\Omega^{2}. \tag{A1}\] The metric function \(A_{\pm}\left(r\right)\) is defined in terms of the new coordinate as: \[A_{\pm}\left(r\right)=1\pm\frac{4m}{\pi a^{3}}\left[a\left(\sqrt{r^{2}-a^{2}}\right)+r^{2}\left(\arctan\left(\frac{\sqrt{r^{2}-a^{2}}}{a}\right)\mp\frac{\pi}{2}\right)\right]. \tag{A2}\] In the original metric (Eq. 28), the coordinates \(x\) and \(t\) are defined over the entire space \(x\in(-\infty,+\infty)\) and \(t\in(-\infty,+\infty)\) [1]. In the new coordinates (Eq. A1), the temporal part \(t\) retains the same range, but the radial domain is modified to \(r\in(a,+\infty)\). The line element describing the thin shell is given by \[ds^{2}=d\tau^{2}-R^{2}\left(\tau\right)d\Omega^{2}, \tag{A3}\] where the parameter \(\tau\) corresponds to the proper time for an observer on the shell. To compute the extrinsic curvature, the 4-velocity vector \(U^{\mu}=\left(\frac{dt}{d\tau},\frac{dR(\tau)}{d\tau},0,0\right)\) and the normal vector \(n^{\mu}\) to the hypersurface are first defined. The 4-velocity vector can be expressed in terms of the metric components in Eq. (A1) as \[U^{\mu}_{\pm}=\pm\left[\sqrt{\frac{\left(1+g_{11}\dot{R}^{2}\right)}{g_{00}}},\dot{R},0,0\right], \tag{A4}\] where \(\dot{R}=\frac{dR}{d\tau}\) and \(U_{\mu}U^{\mu}=1\). In the same way, we define the normal vector to the surface. For this, a parameterization in terms of the intrinsic coordinates \(\xi^{i}=(\tau,\theta,\phi)\) of Eq. (A3) is needed.
Therefore, the parameterization is defined as \(f(x^{\mu}(\xi^{i}))=r-R(\tau)=0\), and the unit normal 4-vector is given by the expression \[n_{\mu}=\frac{\nabla_{\mu}f}{||\nabla f||}=\pm\left|g^{\alpha\beta}\frac{\partial f}{\partial x^{\alpha}}\frac{\partial f}{\partial x^{\beta}}\right|^{-\frac{1}{2}}\frac{\partial f}{\partial x^{\mu}}. \tag{A5}\] The normal vector is unitary, \(n_{\mu}n^{\mu}=-1\), and orthogonal to the vectors tangent to the surface, \(n_{\mu}e^{\mu}_{i}=n_{\mu}\left(\frac{\partial x^{\mu}}{\partial\xi^{i}}\right)=0\). Therefore, the normal vector written in terms of the components of the metric Eq. (A1) is given by \[n^{\mu}_{\pm}=\pm\left[\dot{R}\sqrt{-\frac{g_{11}}{g_{00}}},\sqrt{-g^{11}+\dot{R}^{2}},0,0\right]. \tag{A6}\] With the constructed normal vector \(n^{\mu}\) and 4-velocity vector \(U^{\mu}\), the extrinsic curvature can be defined as \[K^{\pm}_{ij}=-n_{\mu}\left[\frac{\partial^{2}x^{\mu}}{\partial\xi^{i}\partial\xi^{j}}+\Gamma^{\mu\pm}_{\alpha\beta}\frac{\partial x^{\alpha}}{\partial\xi^{i}}\frac{\partial x^{\beta}}{\partial\xi^{j}}\right]. \tag{A7}\] The \(\theta\theta\) component of the extrinsic curvature is computed, as it is related to the surface energy density. Its explicit form is given by \[K^{\theta\pm}_{\theta}=\pm\frac{1}{R}\left[A_{\pm}\left(1-\frac{a^{2}}{R^{2}}\right)+\dot{R}^{2}\right]^{\frac{1}{2}}. \tag{A8}\] ### Lanczos Equation The discontinuity across the thin shell is characterized by the difference in extrinsic curvature outside and inside, \(k_{ij}=K^{+}_{ij}-K^{-}_{ij}\). The Einstein equations applied across the shell yield the Lanczos equation: \[S_{j}^{i}=-\frac{1}{8\pi}\left(k_{j}^{i}-\delta_{j}^{i}k_{k}^{k}\right), \tag{A9}\] where \(S_{j}^{i}\) are the non-zero components of the surface stress-energy tensor, \(S_{j}^{i}=\text{diag}(-\sigma,\mathcal{P},\mathcal{P})\). Here, \(\sigma\) is the surface energy density and \(\mathcal{P}\) is the pressure. The \(\tau\tau\) component of the Lanczos equation yields the surface energy density: \[\sigma=-\frac{1}{4\pi}k_{\theta}^{\theta}=-\frac{1}{2\pi R}\left[A_{\pm}\left(1-\frac{a^{2}}{R^{2}}\right)+\dot{R}^{2}\right]^{\frac{1}{2}}. \tag{A10}\] At the junction point \(x=0\), the metric function takes the Simpson-Visser form \(A_{\pm}=\left(1-\frac{2m}{R}\right)\) [1]. For a static shell with \(\dot{R}=0\), the energy density in Eq. (A10) becomes \[\sigma=-\frac{1}{2\pi R}\left[\left(1-\frac{2m}{R}\right)\left(1-\frac{a^{2}}{R^{2}}\right)\right]^{\frac{1}{2}}. \tag{A11}\] Therefore, analyzing the expression for the static energy density in Eq. (A11), we see that the positivity of the product inside the square root imposes two conditions: \(R>a\), and the location of the shell relative to the event horizon. Note that for any shell radius greater than the throat radius \(a\), the second factor inside the square root is always positive. The first factor, however, depends on whether the shell is inside (\(A_{\pm}<0\)) or outside (\(A_{\pm}>0\)) the event horizon. This implies that only traversable wormhole solutions are allowed. The static energy density in Eq. (A11) can be analyzed generally, without requiring evaluation specifically at the junction surface. Notably, the metric function Eq. (A2) is positive for throat radii outside the horizon, \(a>2m\), and negative inside, \(a<2m\) (Fig. 1(b)). With the surface mass defined as \(m_{s}=4\pi R^{2}\sigma\) and the energy density \(\sigma\) negative, the surface mass is also negative.
This signifies violation of the energy conditions at the junction surface.
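The curvature results above lend themselves to a quick symbolic cross-check. The following minimal sympy sketch (our addition, not part of the original derivation) rebuilds the Kretschmann scalar \(K_{+}(x)\) of Eq. (32) from \(\Sigma=\sqrt{x^{2}+a^{2}}\) and the metric function \(A_{+}\) of Eq. (A2) rewritten in the \(x\) coordinate, where \(\sqrt{r^{2}-a^{2}}=x\) for \(x\geq 0\), and confirms both the regular throat limit of Eq. (33) and the vanishing of the scalar at infinity.

```python
import sympy as sp

x, a, m = sp.symbols('x a m', positive=True)

# Black-bounce throat function and its derivatives
Sigma = sp.sqrt(x**2 + a**2)
Sp, Spp = sp.diff(Sigma, x), sp.diff(Sigma, x, 2)

# Metric function A_+(x) for x >= 0: Eq. (A2) with the upper signs,
# using r^2 = x^2 + a^2 so that sqrt(r^2 - a^2) = x
A = 1 + (4*m / (sp.pi * a**3)) * (a*x + (x**2 + a**2) * (sp.atan(x/a) - sp.pi/2))
Ap, App = sp.diff(A, x), sp.diff(A, x, 2)

# Kretschmann scalar K_+(x) of Eq. (32)
K = ((Sigma**2 * App)**2
     + 2 * (Sigma * Sp * Ap)**2
     + 2 * Sigma**2 * (Sp * Ap + 2 * A * Spp)**2
     + 4 * (1 - A * Sp**2)**2) / Sigma**4

# Regularity at the throat: the x -> 0 limit must reproduce Eq. (33)
K0 = sp.limit(K, x, 0)
assert sp.simplify(K0 - 4*(3*a**2 - 8*a*m + 12*m**2)/a**6) == 0

# Asymptotic flatness: the scalar vanishes as x -> infinity
assert sp.limit(K, x, sp.oo) == 0
print(sp.factor(K0))
```

As a further sanity check, the same expression gives \(A_{+}(0)=1-2m/a\), consistent with the Simpson-Visser form used at the junction in Eq. (A11).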
2306.17651
Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view
From an image of a person, we can easily infer the natural 3D pose and shape of the person even if ambiguity exists. This is because we have a mental model that allows us to imagine a person's appearance at different viewing directions from a given image and utilize the consistency between them for inference. However, existing human mesh recovery methods only consider the direction in which the image was taken due to their structural limitations. Hence, we propose "Implicit 3D Human Mesh Recovery (ImpHMR)" that can implicitly imagine a person in 3D space at the feature-level via Neural Feature Fields. In ImpHMR, feature fields are generated by CNN-based image encoder for a given image. Then, the 2D feature map is volume-rendered from the feature field for a given viewing direction, and the pose and shape parameters are regressed from the feature. To utilize consistency with pose and shape from unseen-view, if there are 3D labels, the model predicts results including the silhouette from an arbitrary direction and makes it equal to the rotated ground-truth. In the case of only 2D labels, we perform self-supervised learning through the constraint that the pose and shape parameters inferred from different directions should be the same. Extensive evaluations show the efficacy of the proposed method.
Hanbyel Cho, Yooshin Cho, Jaesung Ahn, Junmo Kim
2023-06-30T13:37:24Z
http://arxiv.org/abs/2306.17651v2
# Implicit 3D Human Mesh Recovery using Consistency with Pose and Shape from Unseen-view ###### Abstract From an image of a person, we can easily infer the natural 3D pose and shape of the person even if ambiguity exists. This is because we have a mental model that allows us to imagine a person's appearance at different viewing directions from a given image and utilize the consistency between them for inference. However, existing human mesh recovery methods only consider the direction in which the image was taken due to their structural limitations. Hence, we propose "**Implicit 3D Human Mesh Recovery (ImpHMR)**" that can implicitly imagine a person in 3D space at the feature-level via Neural Feature Fields. In ImpHMR, feature fields are generated by a CNN-based image encoder for a given image. Then, the 2D feature map is volume-rendered from the feature field for a given viewing direction, and the pose and shape parameters are regressed from the feature. To utilize consistency with pose and shape from unseen-view, if there are 3D labels, the model predicts results including the silhouette from an arbitrary direction and makes it equal to the rotated ground-truth. In the case of only 2D labels, we perform self-supervised learning through the constraint that the pose and shape parameters inferred from different directions should be the same. Extensive evaluations show the efficacy of the proposed method. ## 1 Introduction Human Mesh Recovery (HMR) is a task that regresses the parameters of a three-dimensional (3D) human body model (e.g., SMPL [37], SMPL-X [45], and GHUM [61]) from RGB images. Along with 3D joint-based methods [7, 35, 49], HMR, a fundamental topic in computer vision, has many downstream tasks such as AR/VR and computer graphics. In recent years, there has been rapid progress in HMR, particularly in regression-based approaches [6, 26, 23, 26, 27, 28, 53, 59, 66]. However, despite these achievements, the existing algorithms still differ from the way humans perform the task, so most of them do not show robust performance against the inherent ambiguity of the task. Consider the image of a baseball player running, as shown in Fig. 1. For the given single image, we can easily infer that the person's right elbow and left leg are extended backward in a 3D space, despite the presence of inherent ambiguity (e.g., depth and occlusion). This is because we have a mental model that allows us to imagine a person's appearance at different viewing directions from a given image and utilize the consistency between them for inference. Recently, many state-of-the-art studies have successfully utilized knowledge similar to that used by humans, such as human dynamics [21] and temporal information [8, 24, 38, 59]. However, to the best of our knowledge, there have been no studies that propose methods considering 3D space for HMR, similar to the way we infer pose and shape through appearance checks between different views in 3D space. To overcome this issue, we propose "Implicit 3D Human Mesh Recovery (ImpHMR)" that can implicitly imagine a human placed in a 3D space via Neural Feature Fields [43]. Our assumption is that if the model is trained to infer a hu Figure 1: **Mental model of human that infers pose and shape from a single image.** From an image of a person, we infer pose and shape robustly by imagining the person's appearance not only from the direction in which the image was taken, but also from other viewing directions (e.g., left and right sides).
man's pose and shape at arbitrary viewing directions in a 3D space from a single image, then the model learns better spatial prior knowledge about human appearance; consequently, the performance in the canonical viewing direction in which the image was taken is improved. To achieve this, we incorporate Neural Feature Fields into regression-based HMR methods. In ImpHMR, feature fields are generated using a CNN-based image encoder for a given image to construct a person in 3D space, as shown in Fig. 2. A feature field represented by a Multi-Layer Perceptron (MLP) is a continuous function that maps the position of a point in 3D space and a ray direction to a feature vector and volume density. In a feature field, which is an implicit representation, every continuous point in space can have its own feature and volume density. Hence, the feature field is more expressive than an explicit representation [63] and more suitable for representing human appearance from different viewing directions in 3D space. To infer the pose and shape parameters from the Feature Field, the 2D feature map is generated by volume rendering for a given viewing direction, and the parameters are regressed from the rendered feature. Unlike previous methods, our model can look at a person from an _arbitrary viewing direction_ by controlling the viewing direction determined by the camera extrinsics (_i.e._, camera pose). Therefore, to utilize consistency with pose and shape from unseen-view, if there are 3D labels, ImpHMR predicts results, including the silhouette used as _geometric guidance_, from an arbitrary direction and makes them equal to the rotated ground-truth. In addition, in the case of only 2D labels, we perform self-supervised learning through the constraint that SMPL parameters inferred from different directions should be the same. These constraints help feature fields represent a better 3D space by disentangling human appearance and viewing direction; as a result, SMPL regression from the canonical viewing direction in which the image was taken is improved. To verify the efficacy of our method, we conduct experiments on 3DPW, LSP, COCO, and 3DPW-OCC. The contributions of our work can be summarized as follows: * We propose a novel HMR model called "ImpHMR" that can implicitly imagine a human in 3D space from a given 2D observation via Neural Feature Fields. * To utilize consistency with pose and shape from unseen-view, we propose arbitrary view imagination loss and appearance consistency loss. * We propose the geometric guidance branch so that the model can learn better geometric information. * ImpHMR has \(2\sim 3\) _times faster_ fps than current SOTAs thanks to efficient spatial representation in feature fields. * We confirm that having the model imagine a person in 3D space and checking consistency between human appearance from different viewing directions improves the HMR performance in the canonical viewing direction in which the image was taken. ## 2 Related Work ### Human Mesh Recovery Human mesh recovery works have been conducted based on two approaches: _optimization-based_ approaches [3, 30] and _regression-based_ approaches [20, 44, 48]. Recent works tend to focus on regression-based approaches. **Optimization-based Approaches.** Early works in this field have mainly focused on optimization-based approaches fitting parametric human body models. SMPLify [3] fits the parametric model SMPL [37] to minimize errors between the projection of recovered meshes and 2D/3D evidence, such as silhouettes or keypoints.
In addition, prior terms are adopted to penalize unrealistic shape and pose. In subsequent studies, 2D/3D information was utilized in the fitting procedure, and optimization with more expressive models in multi-view settings has been suggested [19, 65, 68, 30]. Recently, a hybrid approach, which combines optimization- and regression-based approaches, has been proposed; it provides more accurate pseudo-ground-truth 3D labels (_e.g._, SPIN [26] and EFT [18]) for 2D images. Despite the accurate results generated via optimization-based approaches, the fitting processes remain slow and sensitive to initialization. **Regression-based Approaches.** To avoid the issues of optimization-based methods, recent works have adopted regression-based approaches and utilized the powerful learning capability of deep neural networks [6, 10, 15, 20, 44, 48, 9, 26]. Deep networks were directly used to regress model parameters from a single RGB image and supervised with 2D/3D annotations, such as 3D shape ground truth, keypoints, silhouettes, and parts segmentation. Regression-based methods have made significant advances by adopting network architectures suitable for learning different types of supervision signals [29, 42, 46, 47, 48, 50, 56, 62, 64, 44]. Zhang et al. [66] proposed a pyramidal mesh alignment feedback that allows images and meshes to be well aligned, noting that there is no forward feedback when conventional regressors infer SMPL parameters iteratively. In addition, Li et al. [31] proposed a hybrid approach with joint estimation using inverse kinematics, while [25, 67] and [23, 53] proposed methods for occluded and multi-person scenarios, respectively. Furthermore, recent studies have successfully utilized knowledge similar to that used by humans, such as human dynamics [21] and temporal information [8, 24, 38, 59]. However, to the best of our knowledge, there have been no studies that propose methods considering 3D space for HMR, similar to the way we infer pose and shape through appearance checks between different views in 3D space. ### Implicit Neural Representations **Neural Radiance Fields.** Previously, in 3D reconstruction, differentiable rendering techniques have been adopted to overcome the requirement for 3D supervision [36, 4]. A radiance field is a continuous function whose input is a set of a 3D location and a 2D ray direction, and its output is an RGB color value and a volume density [40, 5]. To exploit the effective non-linear mapping capability of deep neural networks, Mildenhall et al. [41] proposed to learn Neural Radiance Fields (NeRFs) by parameterizing them with a Multi-Layer Perceptron (MLP) and successfully combining them with volume rendering for novel view synthesis. **Neural Feature Fields.** Since the success of NeRFs [41], generative models for neural radiance fields have been proposed [43, 51]. To get better representations of objects, Niemeyer et al. [43] proposed Generative Neural Feature Fields (GIRAFFE), which replace the color output with a generic feature vector. In addition, neural feature fields condition the MLP on latent vectors of the shape and appearance of objects. Therefore, unlike NeRFs, which fit the MLP to multi-view images of a single scene, neural feature fields have the capability to generate novel scenes. In this study, we adopt Neural Feature Fields to design a mental model that imagines a human in a 3D space from a single image. ## 3 Methodology The overall framework of the proposed method is shown in Fig. 2.
In this section, we provide a detailed explanation of the proposed method. First, we recapitulate the outline of Neural Feature Fields [43] and the SMPL body model [37]. Then, we describe the model architecture and training objective of the proposed method. ### Neural Feature Fields A Neural Feature Field [43] is a continuous function \(h\) that maps a 3D point \(\mathbf{x}\in\mathbb{R}^{3}\) and a ray direction \(\mathbf{r}\in\mathbb{S}^{2}\) to a volume density \(\sigma\in\mathbb{R}^{+}\) and an \(M_{f}\)-dimensional feature vector \(\mathbf{f}\in\mathbb{R}^{M_{f}}\). When \(h\) is parameterized by a deep neural network, the low-dimensional inputs \(\mathbf{x}\) and \(\mathbf{r}\) are first mapped to higher-dimensional features through the positional encoding [41, 55] so that they can be mapped to feature vectors \(\mathbf{f}\) capable of representing complex scenes. Concretely, each element of \(\mathbf{x}\) and \(\mathbf{r}\) is mapped to a high-dimensional vector through the positional encoding, as follows: \[\gamma(t,L)= \tag{1}\] \[(\sin(2^{0}t\pi),\cos(2^{0}t\pi),\ldots,\sin(2^{L}t\pi),\cos(2^{L}t\pi))\] where the scalar value \(t\) is an element of \(\mathbf{x}\) and \(\mathbf{r}\), and \(L\) is the number of frequency octaves. Unlike Neural Radiance Fields (NeRFs) [41], which output an RGB color value, Neural Feature Fields have the potential to be utilized in various downstream tasks because they output a feature vector for a given 3D point and a ray direction. From the generative perspective, Niemeyer et al. [43] proposed a novel generative model for Neural Feature Fields. In their model, called GIRAFFE, the object representations are represented by Neural Feature Fields (denoted as \(h\)) parameterized by a Multi-Layer Perceptron (MLP). In order to express different objects (in our case, _people with different poses and shapes_), the MLP is conditioned on latent vectors representing the object's shape (\(\mathbf{z}_{s}\)) and appearance (\(\mathbf{z}_{a}\)) as follows: \[h:\mathbb{R}^{L_{\mathbf{x}}}\times\mathbb{R}^{L_{\mathbf{r}}}\times\mathbb{R}^{M_{s}}\times\mathbb{R}^{M_{a}} \rightarrow\mathbb{R}^{+}\times\mathbb{R}^{M_{f}} \tag{2}\] \[(\gamma(\mathbf{x}),\gamma(\mathbf{r}),\mathbf{z}_{s},\mathbf{z}_{a}) \mapsto(\sigma,\mathbf{f})\] Figure 2: **Overview of ImpHMR architecture.** Given an image of a person, ImpHMR can implicitly imagine the person in 3D space and infer SMPL parameters viewed from an arbitrary viewing direction \(\phi\) through the _Feature Fields Module_. The model infers parameters from arbitrary directions during training to have a better 3D prior about the person; consequently, regression performance in the _Canonical Viewing Direction_ is improved. For simplicity, we omit the notation \(\phi\) and write the loss functions in Sec. 3.4 abstractly according to the form of the output. where \(L_{\mathbf{x}}\) and \(L_{\mathbf{r}}\) denote the output dimensions of the positional encodings; \(M_{s}\) and \(M_{a}\) denote the dimensions of \(\mathbf{z}_{s}\) and \(\mathbf{z}_{a}\), respectively; \(\sigma\) and \(\mathbf{f}\) denote a volume density and a feature vector. Finally, the model generates a realistic and controllable image through volume and neural rendering from inferred feature fields representing a specific scene. In this work, we incorporate such a representation into conventional regression-based human mesh recovery so that the algorithm can implicitly imagine a person placed in a three-dimensional space. ### SMPL Body Model SMPL [37] is a parametric human body model.
It provides a function \(\mathcal{M}(\boldsymbol{\theta},\boldsymbol{\beta})\) that takes pose and shape parameters (denoted as \(\boldsymbol{\theta}\in\mathbb{R}^{72}\) and \(\boldsymbol{\beta}\in\mathbb{R}^{10}\) respectively) as inputs and outputs a body mesh \(M\in\mathbb{R}^{6890\times 3}\). The pose parameters consist of a global body rotation and 23 relative joint rotations. The shape parameters are the first 10 coefficients in the PCA shape space. For a given mesh, 3D joints \(J\) can be obtained by linear combination of the mesh vertices, \(J=WM\), with pre-trained linear regressor \(W\). ### Model Architecture The intuition behind our method is that if the model is trained to infer a human's pose and shape at arbitrary camera viewing directions in 3D space, the model learns spatial prior knowledge about human's appearance; consequently, the performance in the _canonical viewing direction_ in which the image was taken is improved. To achieve this, as depicted in Fig. 2, our model mostly follows the HMR paradigm [20] where the input is an image of a person and output is the set of SMPL body model parameters, but there is a major difference in the _Feature Fields Module_. In this section, we first describe the operation of the Feature Fields Module consisting of _Feature Fields Generation_ and _Volume Rendering_, and then explain _Parameter Regression_ and _Geometric Guidance Branch_ using the module. Feature Fields Generation.The goal of the Feature Fields Module is to construct the person in 3D space at the feature-level so that the model can look at the person from an arbitrary viewing direction. Given an input image \(I\), we first encode the image using the CNN Encoder \(g\) (_i.e_., ResNet-50 [12]_before_ the global average pooling) and obtain the feature vector \(\mathbf{z}=g(I)\in\mathbb{R}^{2048\times 7\times 7}\). The encoded feature vector \(\mathbf{z}\) may contain both information about the foreground and background of the given image. Thus, we use the _Foreground Attention_[60]\(\mathcal{A}\) and obtain the human-related feature vector \(\mathbf{z}_{\text{fg}}=GAP(\mathcal{A}(\mathbf{z}))\in\mathbb{R}^{2048}\), where \(GAP(\cdot)\) denotes global average pooling. Finally, the MLP (denoted as \(h\)) representing feature fields is conditioned on the latent vector \(\mathbf{z}_{\text{fg}}\), and implicitly expresses the human in a 3D space. Volume Rendering.In the feature field, we can look at the person represented by the feature field from an arbitrary viewing direction by controlling the camera pose (_i.e_., camera extrinsic). For the camera pose, since the ambiguity of the human pose occurs in the horizontal direction, we fix the elevation of the camera pose to \(0^{\circ}\) and control only the azimuth (denoted as \(\phi\)). For simplicity, we denote the camera pose as \(\phi\). Also, we define the direction in which the image was taken as _Canonical Viewing Direction_ (\(\phi=0\)). To infer the human pose and shape viewed from a viewing direction \(\phi\), the 2D feature map \(\mathbf{f}_{\phi}\in\mathbb{R}^{2048\times H\times W}\) should be obtained from the feature field by volume rendering [41], where \(H\) and \(W\) denote the spatial resolutions of the feature. Given the camera pose \(\phi\), let \(\{\mathbf{x}_{i,j,n}\}_{n=1}^{N_{s}}\) be sample points on the ray direction \(\mathbf{r}_{i,j}\) for the \((i,j)\) location of the 2D feature map, where \(N_{s}\) is the number of sample points. We omit \(i\) and \(j\) for simplicity. Then, as shown in Fig. 
3, we can obtain a feature vector \(\mathbf{f}_{n}\in\mathbb{R}^{2048}\) and a volume density \(\sigma_{n}\) for each 3D point \(\mathbf{x}\) as follows: \[(\sigma_{n},\mathbf{f}_{n})=h(\gamma(\mathbf{x}_{n}),\gamma(\mathbf{r}), \mathbf{z}_{\text{fg}}). \tag{3}\] where \(\gamma\) denotes the positional encoding. Finally, using _Numerical Integration_ as in [43], volume rendered feature vector \(\mathbf{f}_{\text{rend}}\in\mathbb{R}^{2048}\) is obtained as follows: \[\mathbf{f}_{\text{rend}}=\sum_{n=1}^{N_{s}}\tau_{n}\alpha_{n}\mathbf{f}_{n} \quad\tau_{n}=\prod_{k=1}^{n-1}(1-\alpha_{k})\quad\alpha_{n}=1-e^{-\sigma_{n} \delta_{n}} \tag{4}\] where \(\tau_{n}\) is the transmittance, \(\alpha_{n}\) the alpha value for \(\mathbf{x}_{n}\), and \(\delta_{n}=\left|\left|\mathbf{x}_{n+1}-\mathbf{x}_{n}\right|\right|_{2}\) the distance between neighboring sample points. We repeat this process for each spatial location \((i,j)\) and obtain the 2D feature map \(\mathbf{f}_{\phi}\in\mathbb{R}^{2048\times H\times W}\), as in [43]. Finally, to use the feature map for SMPL parameter regression, we generate feature vector \(\mathbf{z}_{\phi}=AGG(\mathbf{f}_{\phi})\in\mathbb{R}^{2048}\), where \(AGG(\cdot)\) denotes aggregation layer consisting of single depthwise convolution. Parameter Regression.We have obtained the feature vector \(\mathbf{z}_{\phi}\) that contains the information about the person Figure 3: **Volume rendering procedure in a neural feature field.** To extract a 2D feature map from the Feature Field, sample points on the ray direction \(\mathbf{r}_{i,j}\). From the volume density \(\sigma\) and feature vector \(\mathbf{f}\), the 2D feature map is obtained by Numerical Integration. viewed from the camera pose \(\phi\) through _Feature Fields Module_. From the feature vector \(\mathbf{z}_{\phi}\), ImpHMR predicts the SMPL model \(\Theta_{\phi}=\{\mathbf{\theta}_{\phi},\mathbf{\beta}_{\phi},\mathbf{\pi}_{\phi}\}\) using the regressor \(\mathcal{R}(\cdot)\) as \(\Theta_{\phi}=\mathcal{R}(\mathbf{z}_{\phi})\), where each element of \(\Theta_{\phi}\) denotes the pose, shape, and camera parameters inferred from the viewing direction \(\phi\) respectively. Note that, when \(\phi=0\) (in short, \(\phi_{0}\)), ImpHMR outputs \(\Theta_{\phi_{0}}\) that is the inference result viewed from the direction in which the image was taken, as in conventional regression-based methods, and otherwise outputs the inference result viewed from a viewing direction \(\phi\), as shown in Fig. 4. Therefore, after the training, we use the \(\phi\) fixed to \(0\) for testing. From the predicted parameters \(\Theta\), we can generate the body mesh with vertices \(M=\mathcal{M}(\mathbf{\theta},\mathbf{\beta})\in\mathbb{R}^{6890\times 3}\). Subsequently, using the pre-trained linear regressor, 3D joints \(J\in\mathbb{R}^{N_{j}\times 3}\) (\(\mathcal{J}_{3D}\) in Fig. 2) can be regressed from the mesh vertices \(M\), where \(N_{j}\) is the number of joints. Furthermore, 2D keypoints \(K\in\mathbb{R}^{N_{j}\times 2}\) (\(\mathcal{J}_{2D}\) in Fig. 2) are obtained as \(K=\mathbf{\Pi}(J)\), where \(\mathbf{\Pi}(\cdot)\) denotes the projection function from weak-perspective camera parameters \(\mathbf{\pi}\in[s,t]\) (\(s\) and \(t\) denote the scale and translation parameters, respectively). **Geometric Guidance Branch.** ImpHMR is trained to regress the rotated ground-truth viewed from an arbitrary viewing direction \(\phi\) to learn spatial prior of human's appearance (see Sec. 3.4). 
However, unlike GIRAFFE, which generates images, our model regresses parameters (_i.e_., SMPL), so there might not be enough information for the model to learn the geometry of 3D space. Thus, as shown in Fig. 4, we have the model reconstruct the silhouette \(S_{\phi}\) (viewed at direction \(\phi\)) from the 2D feature map \(\mathbf{f}_{\phi}\) using deconvolution \(\mathcal{D}(\cdot)\) as \(S_{\phi}=\mathcal{D}(\mathbf{f}_{\phi})\). To explicitly give geometric supervision for _unseen-view_, we generate the G.T. silhouette from the G.T. SMPL mesh rotated by the viewing direction using NMR [22], as shown in Fig. 5. Note that, the geometric guidance branch is used _only_ for training. ### Training Objective The final goal of our method is to improve the regression performance in the _canonical viewing direction_ (\(\phi_{0}\)) by having the model learn spatial prior about the person in 3D space. In this section, we describe the following three objectives for training: _Canonical View Regression_, _Arbitrary View Imagination_, and _Appearance Consistency Loss_. **Canonical View Regression Loss.** This is the constraint for inference from _canonical viewing direction_ (\(\phi_{0}\)) just like previous methods [26, 20]. 2D keypoints \(K_{\phi_{0}}\) and 3D joints \(J_{\phi_{0}}\) are obtained from inferred SMPL parameters (_i.e_., \(\mathbf{\theta}_{\phi_{0}}\), \(\mathbf{\beta}_{\phi_{0}}\), and \(\mathbf{\pi}_{\phi_{0}}\)), making them close to their G.T. as follows: \[\begin{split}\mathcal{L}_{reg}&=\lambda_{2d}||K_{ \phi_{0}}-\hat{K}||+\lambda_{3d}||J_{\phi_{0}}-\hat{J}||\\ &\quad\quad+\lambda_{pose}||\mathbf{\theta}_{\phi_{0}}-\hat{\mathbf{\theta }}||+\lambda_{shape}||\mathbf{\beta}_{\phi_{0}}-\hat{\mathbf{\beta}}||,\end{split} \tag{5}\] where \(||\cdot||\) is the squared L2 norm; \(\hat{K}\), \(\hat{J}\), \(\hat{\mathbf{\theta}}\), and \(\hat{\mathbf{\beta}}\) denote the ground-truth 2D keypoints, 3D joints, and SMPL pose and shapes, respectively following the notation of [66]. **Arbitrary View Imagination Loss.** To leverage the consistency of pose and shape from unseen-views, we train the model to infer human's appearance viewed at arbitrary directions. Thus, if _3D labels exist_, we use the constraint that the predicted result from an arbitrary viewing direction \(\phi\) (sampled from the distribution of camera pose \(p_{cam}\)) should be equal to the ground-truth rotated by \(-\phi\) as follows: \[\begin{split}\mathcal{L}_{imag}&=\mathbb{E}_{\phi \sim p_{cam}}[\lambda_{3d}||J_{\phi}-\hat{J}_{-\phi}||+\lambda_{silh.}||S_{ \phi}-\hat{S}_{-\phi}||\\ &\quad\quad+\lambda_{pose}||\mathbf{\theta}_{\phi}-\hat{\mathbf{\theta }}_{-\phi}||+\lambda_{shape}||\mathbf{\beta}_{\phi}-\hat{\mathbf{\beta}}||,\end{split} \tag{6}\] where \(\hat{J}_{-\phi}\) is ground-truth 3D joints rotated by \(-\phi\) in horizontal direction, \(\hat{S}_{-\phi}\) is G.T. silhouette viewed at \(-\phi\), and \(\hat{\mathbf{\theta}}_{-\phi}\) is ground-truth pose rotated by \(-\phi\) only for the global orientation, and \(p_{cam}\sim\mathcal{U}[0,2\pi]\). Using the constraint, we can disentangle human appearance and viewing direction, resulting in better spatial prior about humans in 3D space. Note that we rotate the ground-truth by \(-\phi\) because the viewing direction and shown person's appearance are rotated oppositely. In addition, for the ground-truth shape parameters \(\hat{\mathbf{\beta}}\), we do not apply the rotation because it is independent of the viewing direction. 
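To make Eq. (6) concrete, the sketch below shows one way the rotated targets might be formed; this is our illustration rather than the authors' released code, so the helper names (`yaw_matrix`, `imagination_targets`), the yaw-axis sign convention, and the dummy tensors are all assumptions. Ground-truth 3D joints and the global orientation of \(\hat{\mathbf{\theta}}\) are rotated by \(-\phi\) about the vertical axis and compared with the predictions from viewing direction \(\phi\).

```python
import torch

def yaw_matrix(phi):
    # Rotation by angle phi about the vertical (y) axis; phi is a 0-dim tensor
    c, s, z, o = torch.cos(phi), torch.sin(phi), torch.zeros(()), torch.ones(())
    return torch.stack([torch.stack([c, z, s]),
                        torch.stack([z, o, z]),
                        torch.stack([-s, z, c])])

def aa_to_mat(aa):
    # Rodrigues formula: axis-angle (3,) -> rotation matrix (3, 3)
    angle = aa.norm().clamp(min=1e-8)
    k, z = aa / angle, torch.zeros(())
    K = torch.stack([torch.stack([z, -k[2], k[1]]),
                     torch.stack([k[2], z, -k[0]]),
                     torch.stack([-k[1], k[0], z])])
    return torch.eye(3) + torch.sin(angle) * K + (1 - torch.cos(angle)) * (K @ K)

def mat_to_aa(R):
    # Inverse Rodrigues (generic case, rotation angle in (0, pi))
    cos = ((R.trace() - 1) / 2).clamp(-1 + 1e-7, 1 - 1e-7)
    angle = torch.arccos(cos)
    axis = torch.stack([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2 * torch.sin(angle))

def imagination_targets(joints_gt, orient_gt, phi):
    """Rotate GT joints (N_j, 3) and global orientation (3,) by -phi (yaw)."""
    R = yaw_matrix(-phi)
    return joints_gt @ R.T, mat_to_aa(R @ aa_to_mat(orient_gt))

# Toy usage: phi ~ U[0, 2*pi), dummy GT and predictions for N_j = 24 joints
phi = torch.rand(()) * 2 * torch.pi
joints_gt, orient_gt = torch.randn(24, 3), torch.randn(3)
joints_pred, orient_pred = torch.randn(24, 3), torch.randn(3)
joints_hat, orient_hat = imagination_targets(joints_gt, orient_gt, phi)
loss_imag_3d = ((joints_pred - joints_hat) ** 2).sum()    # lambda_3d term
loss_imag_pose = ((orient_pred - orient_hat) ** 2).sum()  # global-orientation part
```

The shape term \(||\boldsymbol{\beta}_{\phi}-\hat{\boldsymbol{\beta}}||\) needs no such rotation, matching the remark above that the shape parameters are independent of the viewing direction.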
Figure 4: **SMPL parameter and silhouette regression with controlling camera viewing direction. Top:** regression from the _Canonical Viewing Direction_ (\(\phi=0\)), as in conventional methods. **Bottom:** regression from an arbitrary viewing direction. Figure 5: **Generating ground-truth silhouettes viewed from unseen-view.** G.T. silhouette from an arbitrary viewing direction is generated by rotating the mesh of SMPL G.T. and rendering it. Appearance Consistency Loss.ImpHMR can make predictions in various viewing directions from a single image. By utilizing this capability of the model, we perform _self-supervised_ learning through the constraint that SMPL parameters inferred from different directions (sampled from \(p_{cam}\)) should be the same if there are _only 2D labels_ as: \[\begin{split}\mathcal{L}_{cons}=\mathbb{E}_{\phi_{1},\phi_{2}\sim p _{cam}}[\lambda_{pose}||\mathbf{\theta}^{\prime}_{\phi_{1}}-\mathbf{\theta}_{\phi_{2}} ||\\ +\lambda_{shape}||\mathbf{\beta}_{\phi_{1}}-\mathbf{\beta}_{\phi_{2}}||], \end{split} \tag{7}\] where \(\mathbf{\theta}^{\prime}_{\phi_{1}}\) denotes the modified parameters where only the global orientation of the inferred pose parameters \(\mathbf{\theta}_{\phi_{1}}\) is changed by \(\phi_{2}-\phi_{1}\) amount. Finally, our overall loss function is \(\mathcal{L}_{all}=\mathcal{L}_{reg}+\mathcal{L}_{imag}+\mathcal{L}_{cons}\). We selectively use each loss function depending on whether 3D labels are available or not and our model is trained _end-to-end_ manner. ## 4 Experiments ### Datasets and Evaluation Metrics Following previous works [20, 25, 26], we use a mixture of 2D and 3D datasets. We use MPI-INF-3DHP [39] and Human3.6M [14] with ground-truth SMPL as our 3D datasets for training. Also, MPII [1], COCO [34], and LSPET [17] with the pseudo-ground-truth SMPL provided by [18] are used as 2D datasets. As in PARE [25], we divide the training process into two phases to reduce the overall training time. We first train our model on COCO for ablation studies, and then obtain the final performance using a mixture of all datasets for comparison with SOTA methods. For evaluation, we use 3DPW [58] and 3DPW-OCC [67] for quantitative evaluation. Our method is evaluated using mean per joint position error (MPJPE), Procrustes-aligned mean per joint position error (PA-MPJPE), and per-vertex error (PVE) metrics. For qualitative evaluation, we evaluate the quality of the inferred mesh on 3DPW, LSP [16], and COCO validation sets. More description about datasets is in the supplementary material. ### Experimental Results In this section, we validate the effectiveness of the proposed method. First, we compare the performance of ImpHMR with previous SOTA methods. Then, we confirm whether ImpHMR has the ability to infer human appearance viewed from different viewing directions in 3D space. Finally, the efficacy of each of the methods is validated. Comparison with State-of-the-Art.First, we evaluate the human mesh recovery performance of ImpHMR. Table 1 shows the quantitative results of previous state-of-the-art and our method on 3DPW test split. As shown in Tab. 1, our method (denoted as "ImpHMR (Ours)") shows superior performance compared to other methods for all metrics in both temporal- and frame-based approaches. In particular, ImpHMR shows an -4.4mm (8.1%) performance improvement in PA-MPJPE metric compared to HMR-EFT [18]. 
Also, it shows a -1.1mm (1.3%), -2.5mm (4.8%), and -3.3mm (3.3%) improvement in MPJPE, PA-MPJPE, and PVE, respectively, compared to PARE [25], the most previous best performing model. Also, we report the results of the model trained using 3DPW train split (denoted as "ImpHMR (Ours) w. 3DPW" in Tab. 1) to see the performance of the model when using the ground-truth SMPL labels. Compared to when the dataset is not used, ImpHMR shows a significant performance improvement and outperforms all methods by a large margin. We perform an evaluation on 3DPW-OCC [67], an \begin{table} \begin{tabular}{l l|r|r|r} \hline \hline & & \multicolumn{3}{c}{3DPW} \\ \cline{3-5} & **Method** & MPJPE \(\downarrow\) & PA-MPJPE \(\downarrow\) & PVE \(\downarrow\) \\ \hline \multirow{7}{*}{\begin{tabular}{} \end{tabular} } & HMMR [21] & 116.5 & 72.6 & 139.3 \\ & DSD [54] & - & 69.5 & - \\ & Arnab _et al._[2] & - & 72.2 & - \\ & Doersch _et al._[11] & - & 74.7 & - \\ & VIBE [24] & 93.5 & 56.5 & 113.4 \\ & TCMR [8] & 95.0 & 55.8 & 111.3 \\ & MPS-Net [59] & 91.6 & 54.0 & 109.6 \\ \hline \multirow{7}{*}{\begin{tabular}{} \end{tabular} } & HMR [20] & 130.0 & 76.7 & - \\ & GraphCMR [27] & - & 70.2 & - \\ & SPIN [26] & 96.9 & 59.2 & 116.4 \\ & PyMAF [66] & 92.8 & 58.9 & 110.1 \\ & DL-MeshNet [42] & 100.0 & 60.0 & - \\ & ROMP [53] & 89.3 & 53.5 & 105.6 \\ & HMR-EFT [18] & - & 54.2 & - \\ & PARE [25] & 82.9 & 52.3 & 99.7 \\ \hline \multirow{7}{*}{ \begin{tabular}{} \end{tabular} } & ImpHMR (Ours) & **81.8** & **49.8** & **96.4** \\ & ImpHMR (Ours) w. 3DPW & 74.3 & 45.4 & 87.1 \\ \hline \hline \end{tabular} \end{table} Table 1: **Results on 3DPW. Best in bold, second-best underlined. Values are in mm. “ImpHMR (Ours)” and “ImpHMR (Ours) w. 3DPW” denote the model trained _w/o_ and \(w\). 3DPW train set, respectively.** Figure 6: **Qualitative results. Qualitative comparison of the proposed method with SPIN [26] and HMR-EFT [18] on COCO validation set and 3DPW test split.** occlusion-specific dataset, to verify the performance of ImpHMR in the presence of ambiguity (_e.g_., occlusion). Table 2 shows the result. For fair comparison, all methods in the table are trained on the same datasets (_i.e_., Human3.6M [14], COCO [34], and, 3DOH [67]). As shown in Tab. 2, ImpHMR outperforms the occlusion-specific methods Zhang _et al_. [67] and PARE [25], including HMR-EFT [18], by a large margin. This demonstrates that the structure and learning method of ImpHMR is suitable for modeling situations in which ambiguity is present. For qualitative comparisons, we compare our method with SPIN [26] and HMR-EFT [18]. As shown in the Fig. 6, ImpHMR outputs a mesh that is well aligned with the image even when a person with extreme poses or ambiguity exists. Results from Different Viewing Directions.In this section, we verify that ImpHMR successfully imagines a person in 3D space. To do this, we report the mesh reconstruction results from different viewing directions \(\phi\) (_i.e_., \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\)) for a given image of a person. As shown in Fig. 7, we confirm that ImpHMR can imagine a person's appearance not only from the canonical viewing direction, but also from the image from the left, right, and back of the person. Note that the inference results are _not by rotating the mesh_ inferred from the canonical viewing direction, but the results of _directly inferring a person viewed from different directions in 3D space_. 
A viewing direction can be an arbitrary angle; herein, we report only the results from the \(4\) different angles. Additionally, the last row of Fig. 7 is the person's silhouettes in Sample \(3\) viewed from various directions by using \(\mathcal{D}(\cdot)\), and it can be seen that the inferred silhouettes are similar to what a human imagines. This demonstrates that spatially meaningful information is contained in a volume-rendered 2D feature map \(\mathbf{f}_{\phi}\) by an appropriate guide of Geometric Guidance Branch. To quantitatively verify the 3D spatial construction capability of ImpHMR, we measure the **E**ntanglement between the **S**hape and the **V**iewing direction (in short, ESV). If the 3D space is well constructed in Neural Feature Fields, the body shape should be consistent despite changes in viewing direction. Therefore, we change the viewing direction from \(0^{\circ}\) to \(360^{\circ}\) at \(1^{\circ}\) intervals and define the average of the standard deviations of the inferred shape parameters as ESV, which is the degree of entanglement. We measure the ESV for each sample image in Fig. 7 and the _entire_ LSP [16] dataset. As shown in Tab. 3, we can notice that the degree of entanglement decreases as the proposed constraints are added. This indicates that ImpHMR successfully disentangles body shape and viewing direction, as a result imagining a person in a 3D space well. A detailed description of ESV is in the supplementary material. \begin{table} \begin{tabular}{l l|r|r|r} \hline \hline & **Method** & MPJPE \(\downarrow\) & PA-MPJPE \(\downarrow\) & PVE \(\downarrow\) \\ \hline \multirow{3}{*}{\(\mathcal{L}_{reg}\)} & Baseline & 101.6 & 58.3 & 117.2 \\ \cline{2-5} & \(\mathcal{L}_{reg}\) & 96.5 & 58.9 & 116.0 \\ \cline{1-1} & \(\mathcal{L}_{reg}+\mathcal{L}_{imag}\) & 94.9 & 57.9 & 114.2 \\ \cline{1-1} & \(\mathcal{L}_{reg}+\mathcal{L}_{imag}+\mathcal{L}_{cons}\) & 93.5 & 57.5 & 113.1 \\ \cline{1-1} \cline{2-5} & \(\mathcal{L}_{reg}+\mathcal{L}_{imag}+\mathcal{L}_{cons}\) & 92.7 & 57.0 & 112.1 \\ \hline \hline \end{tabular} \end{table} Table 4: **Effectiveness of each proposed method.** The results are evaluated on the 3DPW dataset. Values for all metrics are in mm. w/o \(\mathcal{D}(\cdot)\) denotes the method trained without Geometric Guidance Branch. All methods are trained using COCO dataset. \begin{table} \begin{tabular}{l|r|r|r|r} \hline \hline **Method** & MPIPE \(\downarrow\) & PA-MPJPE \(\downarrow\) & PVE \(\downarrow\) \\ \hline Zhang _et al_. [67] & - & 72.2 & - \\ HMR-EFT [18] & 94.4 & 60.9 & 111.3 \\ PARE [25] & 90.5 & 56.6 & 107.9 \\ \hline ImpHMR (Ours) & **86.5** & **54.4** & **104.7** \\ \hline \hline \end{tabular} \end{table} Table 2: **Results on 3DPW-OCC.** For fair comparison, all methods are trained using the same datasets (_i.e_., Human3.6M, COCO, and 3DOH). Best in bold. Figure 7: **Inferred SMPL mesh and silhouettes viewed from different viewing directions.** Results inferred _by changing the viewing direction_ clockwise by \(90^{\circ}\) from canonical viewing direction. Note that the inference results are _not by rotating the mesh_ inferred from the canonical viewing direction, but _directly inferring a person viewed from different directions in 3D space_. Ablation Studies.To verify the efficacy of each of the proposed methods, we evaluate the performance change by adding each method. For a fair comparison, we train a baseline model (denoted as "Baseline" in Tab. 
4) that has the same model architecture and the number of parameters as ImpHMR (except \(AGG(\cdot)\)) but does not perform feature fields generation and volume rendering. As shown in Tab. 4, we can notice that all methods provide a positive contribution. Compared to Baseline, it can be seen that there is an improvement even when just generating a feature vector through volume rendering within feature fields (denoted as \(\mathcal{L}_{reg}\)). This verifies the inference method of ImpHMR is suitable for HMR tasks. In addition, as shown in Tab. 3 and Tab. 4, by adding the proposed constraints, including silhouette loss with _geometric guidance branch_, the better the model disentangles the person's appearance and viewing direction in 3D space, and the performance increases accordingly. Through this, we can confirm that our assumption about the proposed method is valid. Table 5 shows the ablation of the model architecture (_i.e_., aggregation layer \(AGG\) and foreground attention \(\mathcal{A}\)). We use three types of \(AGG\): global average pooling (GAP), convolution (Conv.), and depth-wise convolution (DWConv.), and report the performance of combinations with \(\mathcal{A}\). As can be seen in Tab. 5, \(AGG\) shows good performance in the order of DWConv, GAP, and Conv. We can notice that foreground attention has a positive effect except for Conv. We finally adopted the best-performing set of Tab. 5 (e). ImpHMR uses neural feature fields, an implicit representation, to imagine a person in 3D. However, as a means of expressing 3D space, there is also an explicit representation such as the voxel-based method (_e.g_., PTN [63]). To explore the suitability of explicit representation, we check the performance of the baseline in which Feature Fields Module of ImpHMR is replaced by a voxel-based representation. For the volumetric representation, we use Perspective Transformer Nets [63] (PTN). For a fair comparison, we set the voxel resolution to \(4\times 4\times 4\), the same as ImpHMR. Since PTN can perspective project features for a given camera extrinsic, we train the baseline using the same constraints (_i.e_., \(\mathcal{L}_{reg}\), \(\mathcal{L}_{imag}\), and \(\mathcal{L}_{cons}\)) as in ImpHMR. Figure 8 shows the inference results of the baseline, and we can notice that it fails to model 3D space. This indicates that implicit representation is more suitable for modeling a person in 3D. Table 6 compares the inference speed between ImpHMR and the current SOTA methods. For fair evaluation, frames per second (fps) is calculated by averaging the time it took for each model to infer \(10000\) times of an input image of \(224\times 224\) size on RTX 2080Ti GPU. As shown in Tab. 6, we can notice that ImpHMR is slightly slower than HMR-EFT, but still has _real-time_ performance. Especially, ImpHMR has \(2\sim 3\)_times faster_ fps than PyMAF and PARE, which are current SOTA methods. This is because ImpHMR is capable of efficient spatial representation within neural feature fields compared to the latest SOTA methods that utilize spatial information. ## 5 Conclusion and Future Works We have introduced a novel HMR model called "ImpHMR" that can implicitly imagine a human in 3D space from a given 2D observation via neural feature fields. To utilize consistency with pose and shape from unseen-views, we propose arbitrary view imagination loss and appearance consistency loss. Also, we propose geometric guidance branch that helps the model can learn better geometric information. 
ImpHMR has \(2\sim 3\) times faster fps than current SOTAs thanks to efficient spatial representation in feature fields. Also, extensive evaluation proves that our method is valid. For future works, we can make a more occlusion-robust model by carefully modeling volume density. **Acknowledgements.** This work was conducted by Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD). \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline **Method** & Res.1 & Res.2 & Res.4 & Res.6 \\ \hline HMR-EFT [18] & 115.5 & - & - & - \\ PyMAF [66] & 33.6 & - & - & - \\ PARE [25] & 27.5 & - & - & - \\ \hline ImpHMR (Ours) & 88.8 & 88.4 & 87.1 & 78.2 \\ \hline \hline \end{tabular} \end{table} Table 6: **Comparison of inference speed.** The numbers are in _frames per second_ (**fps**). The Res. denotes the spatial resolution of a 2D feature map in volume rendering for our method. Thanks to efficient spatial representation in feature fields, ImpHMR shows about \(2\sim 3\)_times faster_ fps compared to PyMAF and PARE. Figure 8: **Inferred SMPL mesh from different viewing directions with _Explicit_ representation.** Results inferred by changing the direction clockwise by \(90^{\circ}\) from canonical viewing direction. \begin{table} \begin{tabular}{l l|c|c|c} \hline \hline **Method** & \multicolumn{2}{c|}{MPIPE \(\downarrow\)} & \multicolumn{2}{c|}{PA-MPPE \(\downarrow\)} & \multicolumn{1}{c}{PVE \(\downarrow\)} \\ \hline \multicolumn{4}{c}{HMR-EFT [18]} & 99.0 & 59.9 & - \\ \hline \multicolumn{4}{c}{\(AGG\)**Aggregation**} & \(\mathcal{A}\)**Attention** & & & \\ \hline \multicolumn{4}{c}{(a) GAP} & with \(\mathcal{A}\) & 95.3 & 52.6 & 114.5 \\ \multicolumn{4}{c}{(b) GAP} & without \(\mathcal{A}\) & 95.8 & 57.9 & 114.9 \\ \hline \multicolumn{4}{c}{(c) Conv.} & with \(\mathcal{A}\) & 96.0 & 58.0 & 114.6 \\ \multicolumn{4}{c}{(d) Conv.} & without \(\mathcal{A}\) & 95.2 & 57.7 & 114.9 \\ \hline \multicolumn{4}{c}{(e) DWConv.} & with \(\mathcal{A}\) & **92.7** & **57.0** & **112.1** \\ \multicolumn{4}{c}{(f) DWConv.} & without \(\mathcal{A}\) & 94.7 & 57.9 & 113.8 \\ \hline \hline \end{tabular} \end{table} Table 5: **Ablation study of the model architecture.** “\(AGG\) Aggregation” denotes the type of aggregation layer in volume rendering. “\(\mathcal{A}\) Attention” denotes whether foreground attention is used. All methods are trained using COCO dataset.
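As a closing illustration of the Feature Fields Module described in Sec. 3, the following minimal Python sketch (our paraphrase under stated assumptions, not the released implementation) implements the positional encoding \(\gamma(t,L)\) of Eq. (1) and the per-ray numerical integration of Eq. (4). Note it concatenates all sines before all cosines rather than interleaving them, an ordering choice that does not affect downstream layers.

```python
import torch

def positional_encoding(t, L):
    # gamma(t, L) of Eq. (1): frequencies 2^0 ... 2^L, output dim 2 * (L + 1)
    freqs = (2.0 ** torch.arange(L + 1)) * torch.pi * t.unsqueeze(-1)
    return torch.cat([torch.sin(freqs), torch.cos(freqs)], dim=-1)

def render_feature(sigma, feats, deltas):
    """Numerical integration of Eq. (4) along a single ray.

    sigma:  (N_s,)   volume densities at the sample points
    feats:  (N_s, C) per-sample feature vectors
    deltas: (N_s,)   distances between neighboring sample points
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                  # alpha_n
    # tau_n = prod_{k < n} (1 - alpha_k), with tau_1 = 1 (empty product)
    tau = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    return (tau * alpha).unsqueeze(-1).mul(feats).sum(dim=0)  # f_rend

# Toy usage: N_s = 64 samples on one ray, 2048-dim features as in the paper
sigma = torch.rand(64)
feats = torch.randn(64, 2048)
deltas = torch.full((64,), 0.05)
f_rend = render_feature(sigma, feats, deltas)
print(positional_encoding(torch.tensor([0.3]), L=9).shape, f_rend.shape)
```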
2308.16785
Agent Teaming Situation Awareness (ATSA): A Situation Awareness Framework for Human-AI Teaming
The rapid advancements in artificial intelligence (AI) have led to a growing trend of human-AI teaming (HAT) in various fields. As machines continue to evolve from mere automation to a state of autonomy, they are increasingly exhibiting unexpected behaviors and human-like cognitive/intelligent capabilities, including situation awareness (SA). This shift has the potential to enhance the performance of mixed human-AI teams over all-human teams, underscoring the need for a better understanding of the dynamic SA interactions between humans and machines. To this end, we provide a review of leading SA theoretical models and a new framework for SA in the HAT context based on the key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior, and involves bidirectional, and dynamic interaction. The framework is based on the individual and team SA models and elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual cycles are adopted for the individual (including both human and AI) and the whole team, which is tailored to the unique requirements of the HAT context. ATSA emphasizes cohesive and effective HAT through structures and components, including teaming understanding, teaming control, and the world, as well as adhesive transactive part. We further propose several future research directions to expand on the distinctive contributions of ATSA and address the specific and pressing next steps.
Qi Gao, Wei Xu, Mowei Shen, Zaifeng Gao
2023-08-31T15:02:01Z
http://arxiv.org/abs/2308.16785v2
# Agent Teaming Situation Awareness (ATSA): A Situation Awareness Framework for Human-AI Teaming ###### Abstract This research was sponsored by the National Natural Science Foundation of China (32271090), the Key Program of the Natural Science Foundation of Zhejiang Province (LZ20C090001), and the Fundamental Research Funds for the Central Universities. Correspondence concerning this article should be addressed to Zaifeng Gao, Department of Psychological and Behavioral Sciences, Zhejiang University, 866 Yuhangtang Road, Hangzhou, China. Email: [email protected] ###### Abstract The rapid advancements in artificial intelligence (AI) have led to a growing trend of human-AI teaming (HAT) in various fields. As machines continue to evolve from mere automation to a state of autonomy, they are increasingly exhibiting unexpected behaviors and human-like cognitive/intelligent capabilities, including situation awareness (SA). This shift has the potential to enhance the performance of mixed human-AI teams over all-human teams, underscoring the need for a better understanding of the dynamic SA interactions between humans and machines. To this end, we provide a review of leading SA theoretical models and a new framework for SA in the HAT context based on the key features and processes of HAT. The Agent Teaming Situation Awareness (ATSA) framework unifies human and AI behavior, and involves bidirectional, and dynamic interaction. The framework is based on the individual and team SA models and elaborates on the cognitive mechanisms for modeling HAT. Similar perceptual cycles are adopted for the individual (including both human and AI) and the whole team, which is tailored to the unique requirements of the HAT context. ATSA emphasizes cohesive and effective HAT through structures and components, including teaming understanding, teaming control, and the world, as well as adhesive transactive part. We further propose several future research directions to expand on the distinctive contributions of ATSA and address the specific and pressing next steps. _Keywords:_ artificial intelligence, human-AI collaboration, human-AI cooperation, perceptual cycle, team cognition Situation awareness (SA), defined as the perception, comprehension, and projection of the environment elements within a volume of time and space (Endsley, 1988), has been identified as one of the key cognitive factors leading to safe and effective human-machine interaction, including the air and road traffic domains (e.g., Gugerty & Tirre, 1996; Redding, 1992) and the health care domain (Schulz et al., 2016). As we continue to transition towards human interaction with AI systems, machines are evolving from mere automation to a state of autonomy, exhibiting unexpected machine behaviors and human-like cognitive/intelligent capabilities, including SA (Damacharla et al., 2018; Rahwan et al., 2019; Xu et al., 2022). When AI becomes a teammate of humans, the human-AI mixed team may gain potential performance over all-human teams, while adverse SA effects within the human-AI team have also been highlighted (McNeese, Demir, et al., 2021; McNeese, Schelble, et al., 2021). This distinction is primarily contingent upon the team process, thereby impelling us to delve into an investigation of dynamic human-AI teaming (HAT). Considerable work has been done on how to achieve and maintain proper team SA within human-human teams, but the research focusing on HAT is still in its infancy.
Researchers have urged the field to proactively explore human cognitive mechanisms for modeling HAT (Liu et al., 2018; Xu et al., 2022), yet the foundation for modeling SA remains limited. In this paper, a new framework for SA in the HAT context is proposed based on the key features and processes of HAT. In the following sections, we first introduce team cognition in HAT and then elaborate on team SA. The third part of the paper establishes the Agent Teaming Situation Awareness (ATSA) framework. Finally, we discuss the application and summary of the framework.

## 1 Human-AI Teaming

The prevalence of HAT is irresistible, as it has been demonstrated to enhance performance in comparison to human-only or machine-only teams under various scenarios, especially in open-ended tasks with high uncertainty (Chen & Barnes, 2014; Cummings, 2014; Demir et al., 2017). Besides, AI possesses unique social advantages over a human as a teammate; for instance, AI can be designed not to judge. Therefore, assistive AI aiding special populations is in its prime, including autistic children (Jain et al., 2020), patients with movement impairments (Budarick et al., 2020), and various professionals (Ziane et al., 2021). Human and AI can both be regarded as agents with intelligence and partial automation (J. D. Lee & Kirlik, 2013). Varying the behavioral pattern of AI can even blur its distinction from humans in a non-verbal Turing test (Ciardo et al., 2022). Therefore, AI has ushered in a new era of human-machine interaction, marked by a transition from human-automation interaction to human-autonomy interaction. The former is characterized by a subordinate relationship, while the latter resembles a teammate relationship. Automation fulfils its mandate within the confines of the program, regardless of external context volatility, while autonomy is capable of analyzing information and making decisions adaptive to the situation through learning and generalization (Lyons et al., 2021; Vagia et al., 2016; Xu et al., 2022). The autonomous characteristic of AI allows it to work together with humans as a team through Coordination, Cooperation, and Collaboration (3Cs; J. Lee et al., 2023). A "team" is made of "two or more individuals that adaptively and dynamically interact through specified roles as they work towards shared and valued goals" (Salas et al., 2017). Distinguished from "groups", where members are largely independent and do not necessarily identify shared constructs, "teams" emphasize the interconnectedness of members. To achieve effective team control, team members have to arrange the timing of tasks and resources (coordination), use negotiation to resolve conflicts (cooperation), and make many decisions jointly over an extended time, developing shared rules, norms, and agreements (collaboration). Effective team coordination, cooperation, and collaboration are supported by team cognition. Team members must harmonize cognitive and behavioral components to achieve their goal, which parallels the myriad of coordinated neuronal impulses that must coalesce to produce synchronized performance (Morrow & Fiore, 2013). Of all the cognitive components, SA is one of the most critical factors to be addressed in HAT (Chen & Barnes, 2014; Endsley, 2017), and it is capable of integrating the most relevant cognitive components through its process modeling, such as the shared mental model and shared goals (Lyons et al., 2021).
So far, however, SA has only been recognized as a human metric in SA-related models and HAT-related summaries (e.g., Barnes et al., 2019; Damacharla et al., 2018; Endsley, 2017), with machines treated merely as external situation factors for humans to be aware of. In truth, AI systems need to form SA as well, thereby ensuring better coordination, collaboration, and adaptation abilities (National Academies of Sciences, Engineering, and Medicine, 2022). A sound SA framework for HAT should provide both the explanatory breadth and the depth to tap into the HAT issues reviewed here. To summarize, AI transacts its actions and cognitions, which embodies its autonomy, to form team-wide control and understanding together with its human teammate counterparts. Although some algorithm scientists have adopted the SA concept to improve AI design (e.g. Murray Perera, 2021; Thombre et al., 2020), we argue that a top-down design process, directed by a HAT framework, is necessary. While the SA process of AI shares some similarities with that of humans, its adaptability, programmatic nature, and lack of social interaction can all pose challenges for team SA in HAT, making it inappropriate to simply replicate human teaming strategies. To our knowledge, no such framework has been proposed that unifies human and machine behavior. As Woods (1998) stated, "Designs are hypotheses about how artifacts shape cognition and collaboration"; a theory-derived SA framework accompanied by an elaborated cognitive process will fuel this hypothesis process, leading to efficient design. To develop a comprehensive framework facilitating better HAT, a review of leading SA theoretical models is performed in the following section.

## 2 Team Situation Awareness

A concise definition of SA is "knowing what is going on around" (Lundberg, 2015), while the definition enshrined in most articles on SA is "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future" (Endsley, 1988). The latter is based on the most renowned theory, the _Three-Level Model_ of SA, which dominates the theoretical landscape together with the Perceptual Cycle Model (Smith & Hancock, 1995) in explaining both individual and team SA. Although these two models long served to explain team SA from the individual decomposition perspective, a different system perspective, represented by the _Distributed SA_ model (Stanton et al., 2006), ushered in a new era of SA research, with the underlying theoretical emphasis shifting from individual cognition to the joint cognitive system, addressing the cognition embodied by both humans and artifacts. To fully understand the decomposition of team SA, we hereinafter first introduce the two major theories describing individual SA, and then theories of team SA from both the individual and the system perspectives.

### Individual SA

For individual SA, two theoretical models, the Three-Level Model and the Perceptual Cycle Model, have survived the ebb tide (Stanton et al., 2017; Wickens, 2008). By nature, the two models apply different cognitive theories to explain SA. The three-level model takes information processing theory as its foundation, while the perceptual cycle model directly extends Neisser's perceptual cycle theory (Neisser, 1976). As mentioned earlier, the three-level model forms its core from perception, comprehension, and projection.
Specifically, the concept lies in perceiving critical factors in the environment (level 1), understanding these factors when integrating them with goals (level 2), and predicting upcoming status changes in the environment (level 3). However, the development from lower to higher levels is not limited to a linear process; the higher levels also direct lower-level SA. In this model, SA is the product of individual, environmental, and system factors, consisting of a combination of pre-attentive processes, focused attention, working memory, and long-term memory, coupled with alternating goal- and data-driven processing (Endsley, 2021). The perceptual cycle model describes SA in a more macroscopic manner, asserting that SA lies in the interaction between the individual and the external world. The process of acquiring and maintaining SA is cyclical and encompasses three main phases: schema, perceptual exploration, and environmental information. The internal mental models (or schemata) of current situations form anticipations of upcoming events and direct the course of perceptual exploration, including checking whether the evolving environment matches the mental model, which may require further search and explanation of the current situation, thus modifying the existing model (Smith & Hancock, 1995). Although intuitively one may feel that the two models differ starkly, Lundberg (2015) pioneered a seminal integration of the two. The three-level model depicts SA as the core of a dynamic decision-making loop comprising the state of the environment, SA, decision, and action execution (Endsley, 1995). If drawn in a circle, the similarity between the two models appears (see Figure 1), as the SA section in the decision-making loop can be mapped to the mental model section of the perceptual cycle. This view provides the field with a promising direction for unifying the SA models.

Figure 1: Two individual SA models, redrawn to demonstrate the similarities between them. (a) Three-level model in dynamic decision-making (Endsley, 1995), redrawn by Lundberg (2015). (b) Perceptual cycle model for SA (Smith & Hancock, 1995), redrawn.

### 2.2 Team SA

As machines evolve into agents with intellectual ability, i.e., from automation into autonomy, the connotation of SA in HAT extends to team elements, beyond the original environmental elements. Each individual SA model was developed into a team SA model separately. In Endsley's model, team SA involves all team members knowing the relevant information for their individual roles, and this includes individual SA concerning only the self and shared SA focusing on the subset of information needs held in common (Endsley & Jones, 2001). Four factors need to be identified in support of good shared SA and team SA: 1) requirements, describing what information should be shared across team members; 2) devices, describing the media or platforms that sharing SA relies on; 3) mechanisms, describing the importance of shared mental models; 4) processes, describing how effective team SA is achieved, for example through clear delineation of tasks, goals, and roles. This theory primarily focuses on the breakdown of team SA outcomes, with little emphasis on the development and maintenance process of team SA. Team process, in contrast, is the core of the direct extension of the perceptual cycle model to team SA proposed by Salas et al. (1995).
In their model, team SA results from the interaction between individual SA, including information processing functions and preexisting knowledge and dispositions, and various team processes, rather than being simply the sum of the SA that team members hold. The preexisting requirements for the teamwork, the characteristics of team members, and the team processes interact and affect one another. Therefore, individual and team sub-elements intertwine with each other, resulting in team SA, and team SA in turn modifies these elements via the team situation assessment process. A cyclical process of developing individual SA, sharing SA with other team members, and then modifying both team and individual SA based on other team members' SA was addressed, inherited and adapted from the perceptual cycle theory. Nevertheless, the two team SA models share a commonality: they both reduce the team to its human members alone, without taking artefacts like devices or automated machines into a whole SA system. Holding this systemic view, distributed SA argues that SA is held by and distributed between the agents and artefacts comprising the system and can be viewed as an emergent property of collaborative systems (Salmon et al., 2017). Stanton et al. (2006) contended that each agent supplements other agents' SA with their own unique but compatible (not shared) views of the situation through interaction. Given that each agent possesses their own subgoals and subtasks, they do not require identical information to be shared, but rather only the information transacted within the team that can be 'decoded' by adjacent members. Accordingly, the SA partition exchanged among team members is referred to as 'transactive SA'. These two characteristics, namely compatible rather than shared SA and a systemic rather than purely cognitive view, comprise the key differences between distributed SA theory and the previous individual-decomposition team SA theories. We particularly value these notions in the HAT context because 1) the systemic view is useful and suitable for AI team members with both artifact and human properties, and 2) compatible SA and transactive SA are more appropriate for complex, dynamic, and occasionally ambiguous teaming processes.

### SA in HAT - Human-Human vs. Human-AI

Due to the rapid development of technology, AI has become more capable as a teammate as opposed to simply a tool, highlighting the superiority of distributed SA theory. Many researchers have undertaken great efforts to tackle the SA problem in HAT, yet most still adopt the individual cognitive perspective, which considers the SA of either the human side or the AI side and misses the other. For example, the SA requirements for HAT raised by the National Academies of Sciences, Engineering, and Medicine (2022) focus only on the human side, encompassing the situation, the task environment, and teammate- and self-awareness. Besides, several other frameworks have been raised as design blueprints for AI. One of them is the cooperative intelligence framework integrating self and situational awareness for a single vehicle with global SA provided by multi-vehicle cooperative sensing (Cheng et al., 2019). Another similar case covers cooperative perception (or sensing, in the former model) and intention awareness (Kim et al., 2015). However, such attempts all eliminate the human role in non-fully autonomous AIs. Mostafa et al. (2014) proposed an SA framework comprising a human supervisor, system software, and a software agent.
This framework endows system software with SA capability based on the three-level model, and places the human and the software agent on an equal footing. Nevertheless, the resemblance between human and AI is still not reflected, and no interaction between human and AI is elaborated in this framework. Chen et al. (2014) proposed the Situation Awareness-Based Agent Transparency (SAT) model to address interaction scenarios involving an AI. The SAT model extends the three-level model by incorporating the 3P factors (Purpose, Process, and Performance; J. D. Lee & See, 2004) and the BDI agent framework (Beliefs, Desires, Intentions; Rao & Georgeff, 1995). The three levels of the SAT model are: Level 1, goals and actions demonstrating what the agent is trying to achieve; Level 2, the reasoning process that helps a human understand why the agent is doing so; Level 3, projections indicating what a human should expect to happen. The original model, however, only skimmed SA transparency on the operator side, neglecting the AI side analogous to human SA. Therefore, the Dynamic SAT model was then developed, encompassing bidirectional, continuously iterating, and mutually interdependent interactions (Chen et al., 2018). In the renewed model, system participants (including humans and AI) share the three levels to achieve the goals of their team tasks through feedforward and feedback loops. However, it fails to articulate that the foundation of the transparency benefit in HAT is commonly activated mental models (Lyons, 2013). Besides, transparency is suggested to be not the only factor for successful HAT (Damacharla et al., 2018; Fischer, 2018; O'Neill et al., 2020). Despite the emergence of other new SA concepts in HAT, such as supportive SA (Kidalukmana et al., 2020), mutual SA (X. Yuan et al., 2016), or cloud-enabled SA (Golestan et al., 2016), no systematic framework has been developed thus far. This overview highlights the urgent need for a model of SA in HAT that involves unified, bidirectional, and dynamic interaction, ultimately promoting cohesive collaboration. To this end, based on the fundamental characteristics of HAT, we propose the Agent Teaming Situation Awareness (hereinafter, ATSA) framework, which builds upon previous SA models to guide effective HAT. It is worth noting that 'teaming', not 'team', is expressed in the continuous tense to underscore the dynamism and timeliness of the interaction and collaboration between human and AI.

## 3 The Agent Teaming Situation Awareness Framework (ATSA)

We concur with distributed SA theory that SA should be regarded as both a cognitive construct and a system construct. In the newly proposed ATSA framework (see Figure 2), both human and AI cognitive constructs are represented by perceptual cycles, as this intuitively reflects both the SA product and the process by which it arises (Stanton et al., 2017). Different from distributed SA theory, we employ cyclic representations through both the individual and the team lens. We propose that the system construct of SA can also be represented by a perceptual cycle similar to the cognitive construct: teaming understanding (TU, corresponding to the 'mental model' in the individual cycle), teaming control (TC, corresponding to 'action' in the individual cycle), and the world (all agents sharing the same external environment). Teaming collaboration is achieved from iterative teaming cycles over a broader time scale (Figure 3).
Moreover, these two constructs in ATSA are linked through the transactive part, which is adapted from the 'transactive SA' concept in distributed SA theory. In this manner, the two constructs correspond with each other in a harmonious way.

Figure 2: The Agent Teaming Situation Awareness framework (ATSA). Two individual cycles (the human cycle on the left and the AI cycle on the right) and a teaming cycle (the area with pink fill) compose the overall frame.

The subsequent sections offer an extensive overview of the two crucial cycles within the ATSA framework, the individual construct and the teaming construct. The individual cycle is a modified version of the original perceptual cycle theory, specifically tailored to the unique requirements of the HAT context and to cutting-edge cognitive findings. Here, we delve into the specific modifications made to the cycle and how they contribute to the overall effectiveness of the ATSA framework. The teaming cycle, on the other hand, is emphasized and elaborated on, as it is the part that most directly ensures cohesive and effective HAT. We explore the structures and various components of TU, TC, and the world in the teaming cycle, and also elaborate on the adhesive transactive part. By providing an in-depth analysis of both the individual and teaming cycles, this section aims to offer a profound understanding of the ATSA framework.

Figure 3: ATSA expanded for teaming collaboration. Through team co-learning and AI evolution, the team can manage better teaming collaboration as time progresses.

### Individual Cycle

In each individual cycle, the agent, whether human or AI, shares most of the world with its teammates, as demonstrated by the overlapped part in the middle of the framework. The exploration of the external world modifies the mental model, which is transacted through a transactive part within the team and formed into teaming understanding (hereafter, TU). Subsequently, planning is generated on the basis of TU and used to direct function allocation. According to the function allocation, each individual agent acts on its individual task, through which teaming control (hereafter, TC) is achieved. TC offers two routes for the team to interact with the world: sampling the world and modifying the world, thus closing the cycle. Unlike the perceptual cycle model modified by Smith & Hancock (1995) from Neisser (1976), we keep the link between action and the world. In Neisser's original model, perceptual exploration serves both to sample the world and to modify it. Since this omitted control process contributes to the modification of the internal mental model, and the control process is another critical issue for HAT (e.g. Huang et al., 2022; Marcano et al., 2020), we restore this path for SA in our framework. The necessity of incorporating the control process alongside the cognition process is supported by the extended control model, which builds on the perceptual cycle model just as the extended SA model does (Hollnagel & Woods, 2005). Different from the original perceptual cycle model, we do not perpetuate the phenotype and genotype decomposition of the human mental model; rather, we postulate that the individual SA product is embodied in the activated mental model, with higher information priority and availability than other information in the mind.
With the activated mental model, we extend the original mental model, which refers to the abstract long-term knowledge structures humans employ to describe, explain, and predict the world (Johnson-Laird, 1983), to a working memory structure. SA was long believed to reside in working memory (Johannsdottir & Herdman, 2010), but recent studies have clarified that SA relies on a more integrated interaction between the working memory and long-term memory systems, rather than on working memory alone (Endsley, 2015; Wickens, 2015). This accords with cognitive research on working memory, which has increasingly been referred to as activated long-term memory (Cowan, 1988). In this view, the ensemble of components in working memory is in a heightened state of availability for use in ongoing information processing (Logie et al., 2020), rather than being a structure separate from long-term memory (Baddeley, 2012). We parallel the activated mental model concept with working memory as activated long-term memory, and propose that only the active part of the generalized mental model interacts with TU directly. As for the AI cycle, several aspects differ from the human cycle. Firstly, since both software and hardware are designed by humans, and flexibility exists even after the AI is in operation (e.g. Akula et al., 2022; Li, 2017; L. Yuan et al., 2022), we adopt dashed lines for the mental model and action of the AI to signify the adjustability of these two parts. Besides, the area of each rugby-ball shape represents the corresponding capability. For AI, the division between mental model and action ability highlights the difference between the autonomy level and the automation level. "Automation" focuses on the extent to which machines can contribute to a task, emphasizing acting ability; under different automation conditions, tasks that would otherwise be executed by humans are partially or completely delegated to machines. In contrast, "autonomy" highlights the independence and dynamism of machines in completing the task, even when the solution was not programmed, relying on human-like information analysis ability (Endsley, 2017; Simmler & Frischknecht, 2021; Xu, 2020). Though the viewpoint of autonomy analogous to a human is stressed, ATSA is a human-centered framework in nature, which is embodied in two ways, corresponding to TU and TC respectively (Xu, 2020; Shneiderman, 2020a). First, plastic AI is designed to empower humans, rather than replace them. AI should proactively align its mental model with humans (value alignment; L. Yuan et al., 2022) and form TU. This is in line with Shneiderman's view, except that he argues diametrically from the other extreme that portrayals of AI as tool-like appliances are unacceptable (Shneiderman, 2020a, 2020b). Second, ultimate human control must be ensured. This requires that AI systems keep humans' access to the final decision, explain enough about the actions they take, and provide humans with meaningful control (Beckers et al., 2019; de Sio et al., 2022). Notably, two similar facets were summarized by Rieth and Hagemann (2022), stressing the consideration of human needs and human-strength-based function allocation, which can also be derived from the TU and TC components.

### Teaming Cycle

#### 3.2.1 Teaming Understanding

TU is the mental product of team SA. In ATSA, TU is deconstructed into three orthogonal dimensions (see Figure 4): content, process, and state.
Content, explaining what is in TU, pertains to the outcome manifestation arising from the amalgamation of SA elements. Process, elucidating the temporal sequencing of contents, provides insight into the developmental stage of the product. Finally, state, capturing how the contents are represented, encapsulates the priority assigned to the products in mind. Though some researchers believe the elements of SA are highly domain-specific (M. R. Endsley, 2021), we posit that common categories undergird the HAT process: team, task, and communication, addressing who, what, and how, respectively. Whereas a conventional human-human team necessitates each team member to possess a task model and a team model (Converse et al., 1993; Scheutz et al., 2017), in HAT the communication model assumes particular significance, given that the interaction between humans and AI may not be as natural as human-human or even AI-AI interaction. The shared mental model and shared SA fields have both sought to encapsulate the content that HAT requires, and we argue that information from working memory and long-term memory is interwoven, rendering it unsuitable to disentangle the two sources. Van den Bosch et al. (2019) delineate six mental model challenges for human-AI co-learning, and the National Academies of Sciences, Engineering, and Medicine (2021) put forward that models of teammates (including the self) and a model of the world are necessary. We modified and integrated these models into the aforementioned three common categories so that they fit cohesively within one framework.

Figure 4: Three dimensions of TU: state, process, and content.

To form TU, via the so-called value alignment process (L. Yuan et al., 2022), the human-AI team must arrive at a consensus on these three components. Regarding the _team_ component, establishing a shared set of teamwork agreements and interdependencies is of paramount consideration, for example, agreements on how agents dynamically update those agreements through continuous interaction. Besides, an essential prerequisite for effective team performance is awareness of self and others in the team (Andrews et al., 2022; Sycara & Sukthankar, 2006), including beliefs, desires, and intentions (BDI; Rao & Georgeff, 1995). Belief comprises knowledge, characteristics, capabilities, and status. Capabilities encompass cognition (e.g. memory, attention), emotion (e.g. empathy, emotional intelligence), and physical capability (e.g. action limits). Status includes cognitive resources (e.g. vigilance, engagement), emotional states (e.g. happy, nervous, fearful), and physical availability (e.g. drunk, hurt). As for the _task_ component, the team should be aware of the regularities among task conditions, actions, and outcomes, as this enables planning based on rational reasoning. Moreover, the current task content and conditions should be agreed upon within TU, as they are decisive for the current frame of interaction. What is important for the next frame of interaction is the prediction of the outcome, which also requires common understanding. Finally, the _communication_ component is vital to the team process that culminates in TU. It encompasses not only a shared vocabulary of concepts and relations, but also the functionalities, instructions, and training needed to communicate and explain experiences to other agents. The second dimension of TU is process, which describes the information analysis process from a cognitive perspective.
Endsley's three-level model can be adopted to address this dimension. Though there are some misconceptions claiming that the three levels are strictly linear (e.g. Salmon et al., 2012; Sorensen et al., 2011), Endsley has clarified the SA process as an ascending but non-linear development path (Endsley, 2015). The third dimension of TU is the information state. As previously mentioned, there is no strict distinction between information in working memory and long-term memory; it is merely a matter of priority. Information in activated long-term memory has a higher access priority than that in long-term memory, while attention and control play a critical role in shuttling items into and out of the focus of attention (FOA), giving those items a higher priority still (Cowan, 1988). For example, whereas activated memory items in the FOA function as an attentional template and directly affect perception, accessory items in activated long-term memory do not (Olivers, 2011; Oberauer, 2012). Therefore, an additional FOA is included in the mental model to represent the information with the highest priority.

#### 3.2.2 Teaming Control

TC is the behavioral achievement of team SA. In ATSA, two hierarchical dimensions compose TC: the complementing flow and function allocation, addressing _how_ and _what_ to control, respectively. The complementing flow depicts the relationship between the agents, determining the assignment of task control authority, which is further reflected in the allocation of functions and tasks. The complementing flow describes the top-level design for maximizing hybrid intelligence in HAT, based on a synthesized analysis of the constraints, costs, quality, and availability of human engagement (Kamar, 2016). This hybrid intelligence must be propelled by leveraging the complementary advantages of AI and human intelligence (Dellermann et al., 2019). In a team where agents try to compensate for each other's weaknesses, the flow might run from human towards AI, from AI towards human, or peer to peer, where both agents complement each other (Zahedi & Kambhampati, 2021). When AI complements humans, complicated human cognitive dynamics are involved, introducing the need for AI interpretability to meet human motivations and expectations or to calibrate trust. When humans complement AI, they usually monitor the AI to prevent mistakes, failures, and limitations, thus improving the AI system's performance. Under this complementing flow, human interpretability, such as behavioral and physiological metrics indicating human status, becomes vital; for example, the out-of-the-loop problem can bring huge damage in life-critical systems such as high-level autonomous driving (Endsley, 2021). In peer-to-peer teaming, AI and humans may interchangeably enter each other's territory for action, decision, or coordination. Bidirectional communication and feedback between the two entities help achieve more effective teaming. Though these three kinds of complementing flow diverge in some respects, they are all closely related to how TU is achieved and how TU is conveyed across team members. Moreover, note that "human complements AI" does not conflict with human-centered AI, as the latter emphasizes that the ultimate decision authority belongs to the human, and the AI should defer control authority to the human's ultimate decision.
Control authority can be transitioned when needed, for example when the human monitors an AI malfunction for risk management (Bellet et al., 2019), which is well studied in autonomous vehicle takeover issues (Maggi et al., 2022). This human-centered view is in line with the human-directed execution principle proposed by Battiste et al. (2018). The complementing flow is further embodied and crystallized by function allocation, regarding how system functions and tasks should be distributed across humans and AI (Roth et al., 2019). Static function allocation prevailed in the early days, when the automated function could be neither too critical nor too central to human activities. These traditional approaches, for example "Men-are-better-at/Machines-are-better-at" (MABA-MABA; Fitts, 1951) or the Level of Automation (LOA) framework (Parasuraman et al., 2000), list pre-determined strength-based human and machine work (Rieth & Hagemann, 2022), yet without considering dynamically changing contexts and finer-grained needs (Jamieson & Skraaning, 2018; Johnson et al., 2018; Roth et al., 2019), underscoring the necessity of dynamic function allocation. Dynamic function allocation, or adaptive automation, proposes a shift from a strict LOA perspective to a cooperation-modes perspective. Hoc (2013) asserted that HAT generally falls into five modes (a toy sketch of shifting among them is given after this list):

1. Perception mode: the AI is designed only to enhance human perception.
2. Mutual control mode: the AI criticizes human behavior relative to a standard.
3. Shared control mode: the agents act at the same time on the same variable.
4. Function delegation mode: one of the task functions, rather than all functions, is delegated, with the rest remaining under human control.
5. Full automation: humans do not need to interfere at all.

Throughout the interaction, the function allocation can shift among these five modes. Moreover, an action-complementing mechanism should be designed into human-AI systems to prevent excessive control (e.g., a suicidal pilot or a drunk driver) via the combination of mutual control mode, shared control mode, and function delegation mode.
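To make the mode taxonomy concrete, the toy sketch below shows dynamic function allocation as context-driven mode shifting. It is purely our own illustration in Python, not an algorithm from Hoc (2013) or any cited system, and the trigger signals (`hazard_detected`, `driver_engaged`, `ai_confidence`) are hypothetical inputs chosen for the example.

```python
from enum import Enum, auto

class CooperationMode(Enum):
    """Hoc's (2013) five HAT cooperation modes."""
    PERCEPTION = auto()           # AI only enhances human perception
    MUTUAL_CONTROL = auto()       # AI critiques human behavior vs. a standard
    SHARED_CONTROL = auto()       # both act on the same variable simultaneously
    FUNCTION_DELEGATION = auto()  # one task function is delegated to the AI
    FULL_AUTOMATION = auto()      # no human interference needed

def allocate(hazard_detected: bool, driver_engaged: bool,
             ai_confidence: float) -> CooperationMode:
    """Toy dynamic function allocation: the mode is a function of the
    evolving context rather than a fixed level of automation."""
    if hazard_detected and not driver_engaged:
        # Excessive-control safeguard: act jointly on the same variable.
        return CooperationMode.SHARED_CONTROL
    if hazard_detected:
        return CooperationMode.MUTUAL_CONTROL   # warn/critique the human
    if ai_confidence > 0.95 and not driver_engaged:
        return CooperationMode.FULL_AUTOMATION
    if ai_confidence > 0.7:
        return CooperationMode.FUNCTION_DELEGATION
    return CooperationMode.PERCEPTION

# The same team moves through different modes as the situation evolves.
print(allocate(hazard_detected=False, driver_engaged=True,  ai_confidence=0.8))
print(allocate(hazard_detected=True,  driver_engaged=False, ai_confidence=0.8))
```

The point of the sketch is only that allocation becomes a function of the evolving context, not a pre-determined MABA-MABA table.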
#### 3.2.3 World

The world is both the constraint on and the behavioral product of team SA; the former is the start point of SA and the latter its end point. Little has been said directly about the external world regarding HAT, yet its presence is felt in almost all SA or control models through sensation input or feedback (e.g. Salas et al., 1995; Woods & Hollnagel, 2006). Both constraints and behavioral products include various non-adaptive environmental contexts (natural, social, and technical) and adaptive system devices (hardware, software, and interfaces). Note that there is a non-overlapped area for each individual agent, indicated by a dashed line outside the teaming world part. This area demonstrates the world sampled or modified by that single agent, which can, at least partially, reflect the "control level" of the agent as expounded by Shneiderman (2020a, 2020b). In his two-dimensional human-centered AI framework, the human and AI control levels are two orthogonal dimensions, suggesting that even a highly autonomous system can still keep people at a high control level.

### Transactive Part

The transactive part is the aggregation of elements transacted between the team part and the individual part, which ensures that SA is both team-owned and individually owned. Therefore, the interaction between the individual part and the teaming part is accomplished through the transaction process (a kind of team process) and the transactive part. Access to information relies on knowing who knows what, or transactive memory (Wegner, 1987). Transactive SA was first proposed by Stanton, Salmon, and colleagues in distributed SA theory (Salmon et al., 2017; Stanton et al., 2006, 2017). Via transactions in awareness that arise from the sharing of information, distributed SA is acquired and maintained. We draw on this line and postulate that TU and the individual mental model communicate, interact, and transact with each other through the transactive part, on which the team process can exert its influence. The integration of the transactive parts forms the teaming part, and the teaming part is what is relevant to achieving team goals, while the individual part may encompass some non-team-goal-relevant elements. The same is true for TC and individual action. Excessive individual control should be regulated by the interlock of the individual transactive part, hence maintaining TC at an acceptable level. One obstacle to transaction is a phenomenon that Steiner (1972) described as "process loss", a decrement in coordination that results in performance below team potential. While initially discussed in the context of human teams, this concept applies equally in the HAT context. Several studies (e.g. You et al., 2022; Morrow & Fiore, 2013; Baker et al., 2019; Schneider et al., 2021) have addressed the problem through designs and methods that mitigate process loss in HAT via devices in the _world_. Cognitive interfaces have been extensively studied to enhance human cognition of the AI teammate, for example in transparency-related studies (You et al., 2022). Morrow and Fiore (2013) elaborated upon the value of external representations in the world, such as process mapping, which provides a visual representation of workflow. Collaborative problem conceptualization often requires reshaping, or even discarding, puzzle pieces from the information in the process map, and negotiating the construction of the map itself can lead to the development of TU. Moreover, the Real-time Flow, Event, and Coordination Tool (REFLECT; Baker et al., 2019) has been developed to capture data for mapping communication flows within the team, including who speaks to whom throughout team interactions. However, this effort is limited in its ability to capture the full range of team processes, including implicit and explicit communication, negotiable and directive requests, and other factors (Schneider et al., 2021).

### Teaming Cooperation and Collaboration as Temporal Extension

According to J. Lee et al. (2023), the three teaming forms (coordination, cooperation, and collaboration) are connected by adaptive cycles of different time scales and resiliences. A single ATSA frame reflects coordination, which is brittle, meaning that it is vulnerable to unexpected threats and has low adaptive capability. Multiple ATSA frame iterations constitute cooperation and collaboration (see Figure 3), with higher resilience and longer time constants. Collaboration is often used interchangeably with cooperation, but collaboration is a more resilient and longer co-learning and evolving process in which teammates share and update long-term knowledge, rules, and goals to make decisions jointly (McNamara, 2012). In collaboration, the complementing flow becomes peer-to-peer teaming, with the AI requiring little supervision (Mainprice et al., 2014) and engaging in active coordination with human peers to exchange ideas, resolve differences, execute tasks, and achieve goals (Rule & Forlizzi, 2012).
Attitudes (e.g. trust, disposition to collaborate) develop along the cooperation and collaboration process. Compared to cooperation, collaboration can be viewed as a long-term relationship between human and AI, as most aspects of collaboration are uncertain, while cooperation is determined by fixed shared values and goals. During the cultivation of this long-term relationship, both TU and TC, and therefore individual mental models and actions, change dynamically over time.

### Discussion

Overall, ATSA is an SA framework for the HAT context. It takes into account the unique characteristics of both human-human teams and human-AI teams, while also recognizing the similarities between humans and AI as autonomous and intelligent agents. Table 1 compares extant theories that can be used to elaborate SA in HAT.

Table 1: Comparison of extant theories for elaborating SA in HAT.

| | Team SA by Endsley | Distributed SA | ATSA |
|---|---|---|---|
| Target teams | All-human teams; intelligent machines are devices required for team SA | Hybrid, all-human, or all-machine teams; especially designed for complex systems | HAT, with different characteristics described for human and AI |

ATSA provides a solid theoretical basis for designing HAT systems. The particular insights it provides are elaborated as follows.

1. Three end components. ATSA divides the entire HAT succinctly into three components, _Teaming Understanding_, _World_, and _Teaming Control_, covering all aspects of the interaction process. What really matters are TU and TC: no matter how the process changes or deteriorates, team performance will not be influenced as long as TU and TC are maintained well.
2. Different priorities for SA. ATSA separates the priority status of information in the individual mental model and in TU, with different priorities contributing differently to team performance, making it necessary to distinguish the key SA components and ensure them appropriate priorities under particular scenarios. Meanwhile, there is a significant difference between AI and humans regarding information priority, since AI can treat all information equally or adjust priorities more flexibly, while humans are restricted by cognitive limits (e.g. Gao et al., 2022). Therefore, the interaction entailed by this difference is worth exploring.
3. The transactive part matters. Since the _mental model_ and _action_ are both passed through the transactive part and integrated into the teaming part, and the team works together through the teaming part to complete the plan and function allocation, the most critical factor for the team process is the transactive part rather than the individual part.
4. Evolutionary AI. ATSA maps the AI automation level and autonomy level to _action_ capability and _mental model_ capability, respectively, which are more plastic than those of humans. In this way, HAT research on function allocation may become clearer when tapping into questions related to automation and autonomy. Besides, unlike humans' innate, fixed cognitive and physical mechanisms (e.g., selective attention), the AI algorithm that determines the cognitive mechanism can evolve actively through iteration (e.g. Gupta et al., 2021), and both hardware and software can be adjusted passively by the engineer.
Therefore, what the flexibility of an AI teammate will bring to team dynamics remains an open question.

5. Teaming collaboration dynamics. ATSA effectively addresses cooperation and collaboration issues by extending the time horizon. As AI evolves and teams engage in co-learning, both TU and TC are able to update continuously, leading to the emergence of novel and complex teaming dynamics, such as attitudes, strategies, and long-term relationships, which play a pivotal role in fostering effective and cohesive collaboration.

Though ATSA is capable of tackling many issues in HAT, some facets remain to be improved. One criticism of this model might be that ATSA applies a single-human, single-AI perspective (Renner and Johansson, 2006; Salmon et al., 2018). We are gratified to note that ATSA is endowed with good extensibility, as we may add as many agents as we want by converting the 2D view to a 3D view (see Figure 5). This allows ATSA to account for complex systems with multiple humans or AIs, while retaining its elegant essence. Another criticism might be that ATSA is unable to interpret issues like trust (e.g. Biondi et al., 2019; de Visser et al., 2018; Rebensky et al., 2021; Xing et al., 2021). Though ATSA is a cognitive model in nature, we argue that social factors like trust are able to act upon the cognition process (i.e. social cognition) and the cooperation/collaboration process. In this way, trust can influence not only TU but also the consequent TC (e.g. Adams et al., 2003; Dzindolet et al., 2003). Therefore, social dynamics may be extended into the model in the future.

## 4 Future Work and Application Case of ATSA

### Next Steps

#### 4.1.1 Teaming SA Measurements

We posit that teaming SA is a system-level emergent property that cannot be segregated into individual components and measured separately. Prior research has attempted to do so by taking the sum of individual SA as the team SA result (e.g., Gombolay et al., 2017). However, we emphasize the dynamic nature of SA in ATSA; consequently, a final test after the task's completion is not a reliable measure of teaming SA. Instead, we recommend a direct in-process approach, which requires urgent attention so that ATSA-related empirical research can be conducted.

Figure 5: ATSA 3D conversion. Practitioners can insert as many agents as necessary.

Prior research on direct in-process measurement of team SA provides some insights into measuring teaming SA. Two approaches may be useful. (1) Team probe-recall techniques (Bolstad et al., 2005) involve using the Situation Awareness Global Assessment Technique (SAGAT) in a team environment, where the task is frozen and participants have to recall information related to perception, comprehension, and prediction. Although this approach only captures a small part of teaming SA (the process dimension of TU), we suggest extending checklists for information matrices within the three dimensions (contents, processes, states) into the probe-recall as a comprehensive measurement for teaming SA. Furthermore, real-time neural decoding of the individual focus of attention (FOA) may also be useful to reveal agreements or conflicts in TU. (2) The Co-ordinated Assessment of Situation Awareness of Teams (CAST; Gorman, 2006) employs a scenario involving "roadblocks" and assesses team SA by evaluating how the team perceives and responds to these obstacles through coordinated actions, resulting in a measure of team SA.
This introduces a process-performance-based team SA measurement that synthesizes both cognition and action, which approximates ATSA to a large extent. However, it still fails to measure the concrete dynamics happening within the coordination cycles. A better solution might be to establish TU and TC networks according to the roadblock task settings and to evaluate the networks correspondingly.

#### 4.1.2 Proper Teaming SA within Cooperation and Collaboration Dynamics

Most SA-related studies focus on enhancing team SA, under the assumption that higher SA leads to improved team performance. This belief is prevalent among researchers, with some even claiming that SA can be indirectly measured through performance (Endsley, 2021). However, the relationship between SA and team performance may not be strictly positive, particularly in the context of teaming SA in HAT. We argue that only critical information relevant to the current task in the FOA and activated long-term memory, along with prediction information related to subsequent tasks in activated long-term memory but not the FOA, is sufficient and necessary for achieving higher team performance. In the HAT context, a positive relationship may hold for system-level teaming SA, but not for cognitive-level individual SA. Peer-to-peer teaming requires both higher-level teaming SA and individual SA, whereas non-peer-to-peer teaming might not require this level, as it involves the consideration of various factors including costs, constraints, quality, and availability (Zahedi & Kambhampati, 2021). When the AI complements the human or vice versa, the teaming SA requirements remain the same as in peer-to-peer teaming, while the subordinate member may not require as much individual SA as before. Instead, they must find a balance among multiple factors such as workload, time cost, and attention. Moreover, when more complex factors such as trust and team harmony are introduced in cooperation and collaboration, both teaming and individual SA requirements may change, at least on the 'team' component. Therefore, it becomes crucial to explore in depth how teaming SA can be calibrated in the HAT context.

### Usage of ATSA in an Autonomous Vehicle Case

Take HAT in an autonomous driving accident scenario as an example, where the framework should be applied at every point of change, including normal ADAS mode, hazard appearance, warning, human takeover, and after human takeover. When the AI, for instance an adaptive cruise control system, is controlling the vehicle and the human is immersed in a non-driving-related task, for example watching movies, TC is under function delegation mode. Though both the driver and the AI reach TU on the team and task elements, they cannot reach agreement on the communication element if the AI is not aware of the human's engagement of the visual and auditory modalities. Once the AI notices the hazard and tries to warn the driver through visual and auditory signals, what it does is transact its activated mental model to the driver. However, the driving-related information priority for the driver is now much lower than that of the contents of the movie, and cognitive limits further restrict him/her from capturing the signal (inattentional blindness; Mack, 2003). When the driver finally becomes aware of the AI warning, the AI may have already developed TU to the comprehension or even prediction level, and this TU transacts back to the driver through multiple kinds of in-car devices, such as head-up displays or voice assistants.
The driver successfully takes over control, while TC remains in shared control mode in that the AI and the driver control some specific parameters (e.g., the hand wheel) together, as humans need a recovery phase when getting back into the loop (e.g., Russell et al., 2016). Once both the human and the AI confirm that the hazard has been successfully passed, i.e., they reach TU on the task elements, the TC mode shifts back to function delegation mode. Through this analysis, we can easily identify several important issues discussed in the HAT field, such as transparency, intention, and workload (e.g. Biondi et al., 2019; O'Neill et al., 2020). In ATSA, transparency is not only a design principle for the adaptive system devices or displays in the _World_ part, but also a principle for mental model and action transaction for all agents. The reason why intention has become a research hotspot is that it ties TU and TC most closely. As for workload, together with intention, it is among the most rapidly changing components during the HAT process, compared with other SA components such as task or communication. Moreover, ATSA can be practically applied to guide engineering implementation. Here we propose several implementation directions that leverage ATSA's unique capabilities. Overall, ATSA is able to guide an interaction or collaboration workflow for a hybrid human-AI team, albeit in a highly context-specific manner. Firstly, an ATSA-based algorithm can serve as a threshold mechanism to facilitate dynamic function allocation between the human and the autonomy in terms of the mental model and action levels of both entities. Secondly, algorithms can be designed to enhance teaming SA by computing TU and TC conflict levels and thus resolving the corresponding conflicts; a minimal sketch of this idea is given below.
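As one hedged illustration of the second direction, the sketch below scores TU conflict as disagreement between the human's and the AI's activated mental-model elements and applies a threshold to trigger reallocation. All names and numbers here (`AgentView`, the 0.4 threshold, the belief scores) are hypothetical choices of ours for illustration; ATSA itself prescribes no specific algorithm.

```python
from dataclasses import dataclass

@dataclass
class AgentView:
    """One agent's activated mental model: element name -> belief strength
    in [0, 1]. The element names are illustrative only."""
    beliefs: dict[str, float]

def tu_conflict(human: AgentView, ai: AgentView) -> float:
    """Mean absolute disagreement over the union of activated elements.
    Elements one agent lacks entirely count as maximal disagreement."""
    keys = set(human.beliefs) | set(ai.beliefs)
    if not keys:
        return 0.0
    diffs = [abs(human.beliefs.get(k, 0.0) - ai.beliefs.get(k, 0.0)) for k in keys]
    return sum(diffs) / len(diffs)

def reallocate(conflict: float, threshold: float = 0.4) -> str:
    """Threshold rule: high TU conflict hands more control back to the human."""
    return "shared_control" if conflict >= threshold else "function_delegation"

# Example: the AI projects a hazard the distracted driver has not perceived.
human = AgentView({"lane_position": 0.9})
ai = AgentView({"lane_position": 0.95, "pedestrian_ahead": 0.8})
c = tu_conflict(human, ai)
print(f"TU conflict = {c:.2f} -> {reallocate(c)}")
```

In this toy run the missing "pedestrian_ahead" element dominates the conflict score, so control shifts towards the shared mode, mirroring the takeover scenario analyzed above.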
## 5 Concluding Comments

In summary, we conducted a comprehensive review of SA-related frameworks in HAT and conceptualized a novel, multi-directional, dynamic, and theory-based framework (ATSA) for human-AI interaction and collaboration. ATSA is a cyclical SA process that contains three interrelated components: the world, teaming understanding, and teaming control. Despite the dissimilarities between human and AI, both parties transact their individual mental models and actions to form teaming understanding and teaming control, which in turn affect and modify the individual aspects. We anticipate that ATSA will contribute to both the theoretical foundations and the application domain of HAT. We propose several future research directions to expand on the distinctive contributions of ATSA and address the specific and pressing next steps. Firstly, it is necessary to conduct empirical research to validate the effectiveness of ATSA and its unique contributions in real-world HAT, particularly regarding the different priorities of SA and the transactive part. This could involve conducting experiments or simulations to assess the impact of different levels of TU and TC on team performance and exploring how different SA components contribute to team performance under various scenarios; analytic network process tools might help in analyzing such scenarios. The premise of this empirical research is to establish direct in-process approaches to measuring teaming SA, which requires practical and convenient methods that produce satisfactory measurement results. Lastly, it is crucial to explore the optimal level of teaming SA in HAT, and how it can be calibrated in the context of cooperation and collaboration dynamics. This could involve developing new benchmarks for evaluating teaming SA in different contexts with mathematical approaches. Addressing these research directions will help advance our understanding of ATSA and therefore of HAT systems, leading to more effective and cohesive collaboration between humans and AI.
2305.19462
Optimized Constellation Design for Two User Binary Sensor Networks Using NOMA
Data Fusion of wireless sensors is a common technique employed in many communication systems. This work focuses on incorporating the principles of non-orthogonal-multiple-access (NOMA) to optimize error performance directly in the choice of constellation design. More specifically, the problem of two sensor data fusion of a binary uniform source sent over a Gaussian multiple access channel via symmetric binary constellations is investigated. A so-called planar upper bound on the error probability is analytically derived. A constellation design is then obtained by establishing in closed form its rotation parameter that minimizes the upper bound. Simulation results show that the resulting constellations achieve a near identical performance as experimentally determined optimal constellations.
Luca Sardellitti, Glen Takahara, Fady Alajaji
2023-05-31T00:11:46Z
http://arxiv.org/abs/2305.19462v1
# Optimized Constellation Design for Two User Binary Sensor Networks Using NOMA

###### Abstract

Data fusion of wireless sensors is a common technique employed in many communication systems. This work focuses on incorporating the principles of non-orthogonal multiple access (NOMA) to optimize error performance directly in the choice of constellation design. More specifically, the problem of two-sensor data fusion of a binary uniform source sent over a Gaussian multiple access channel via symmetric binary constellations is investigated. A so-called planar upper bound on the error probability is analytically derived. A constellation design is then obtained by establishing in closed form its rotation parameter that minimizes the upper bound. Simulation results show that the resulting constellations achieve near-identical performance to experimentally determined optimal constellations.

_Index Terms_: Uplink NOMA, wireless sensor networks, data fusion, inter-constellation rotation design, error probability.

## I Introduction

There has been increased interest recently in the spectrally efficient non-orthogonal multiple access (NOMA) technique, where multiple users send different data (via superposition) across a common channel (e.g., see [1, 2, 3, 4, 5, 6] and the references therein). In this work, the concept of sensor networks is combined with NOMA for the design of optimal symmetric binary constellations. Previous works such as [7] used functional processing of multiple sensor nodes' data before sending it over the channel to perform predictive error correction. Another common error mitigation approach is called data fusion: multiple sensors simultaneously observe the same data source and then send their data over a shared channel, where a receiver is designed to decode the original source data. Previous research in this area has focused on optimizing the data fusion algorithm at the receiving node while working with a fixed modulation scheme for sending the sensors' data. For example, [8] fixes the constellation type to be binary phase-shift keying (BPSK), and [9] uses local likelihood-ratio-based on-off keying. This work instead focuses on finding the optimal constellation design so that the signals from the sensors can better reinforce each other and achieve a minimal error probability. This is similar in concept to [5] and [10], where the optimal constellation rotation angle is investigated. This work, however, considers a different problem, since [5] and [10] deal with decoding two different sources sent over a multiple access channel, while the focus herein is on data fusion, where a single source is decoded from two signals that are correlated with the source and sent via symmetric binary NOMA constellations over the multiple access channel. When analyzing constellation design, various configurations of parameters such as sensor correlation, noise, and power constraints can affect the solution of the problem. The main contribution of this work is an upper bound on the optimal bit error rate (or error probability) which, when minimized to design the NOMA system's rotation parameter, is shown to be experimentally close to the simulated optimal error performance. The problem setup, including the mathematical model, is described in Section II. In Section III, the system's error behaviour is analyzed in detail and an upper bound on the error probability based on planar decoding regions is established.
Section IV compares numerical results from the upper bound to the simulated optimal error performance. In Section V, it is shown that at high SNRs the error upper bound approaches the optimal error performance. Finally, future work directions are stated in Section VI. ## II System Model ### _Source and Sensors_ Let \(X\) be a memoryless binary data source that is to be observed by a sensor network. For simplicity, the source is assumed to be uniformly distributed; i.e., \(\text{Pr}(X=i)=1/2\), \(i=0,1\). There are two sensors, \(X_{1}\) and \(X_{2}\), observing the source \(X\), which are modelled as passing \(X\) through two memoryless binary symmetric channels: \[X_{s}=X\oplus Z_{s},\qquad\qquad s=1,2, \tag{1}\] where \(Z_{1}\) and \(Z_{2}\) are independent Bernoulli noise processes with means (or channel crossover probabilities) \(\epsilon_{1}\) and \(\epsilon_{2}\), respectively. To avoid uninteresting cases, sensor 1 is assumed to have stronger correlation to the original source \(X\) than sensor 2: \(0<\epsilon_{1}<\epsilon_{2}<0.5\). Also let \(P_{1}\) and \(P_{2}\) denote the power constraints of the sensors (each sensor has its own power allotment, as opposed to having a common power constraint on the entire network). The sensors, unable to communicate with each other, encode their data independently using symmetric binary constellations. The constellations for the sensors are parameterized as follows: \(\mathcal{C}_{1}=\{c_{0,1},c_{1,1}\}=\{-\sqrt{P_{1}},\ \sqrt{P_{1}}\}\) and \(\mathcal{C}_{2}=\{c_{0,2},c_{1,2}\}=\{-\sqrt{P_{2}}e^{j\theta},\ \sqrt{P_{2}}e^{j\theta}\}\), where \(j\) is the imaginary unit and \(\theta\in[0,\frac{\pi}{2}]\) is the rotation between the two constellations. For \(i\in\{0,1\}\) and \(s\in\{1,2\}\), \(c_{i,s}\in\mathcal{C}_{s}\) denotes the constellation point assigned to \(X_{s}=i\). ### _Channel Model_ The two sensor signals are superimposed and sent through a Gaussian multiple access channel (GMAC) with noise power \(N_{0}\). The received signal \(R\) is given by \[R=S_{1}+S_{2}+Z \tag{2}\] where \(S_{1}\in\mathcal{C}_{1}\), \(S_{2}\in\mathcal{C}_{2}\) are each sensor's chosen constellation point, and \(Z\) is a complex (bivariate) Gaussian noise variable with independent zero mean components of equal variance given by \(\frac{N_{0}}{2}\). It is also assumed that \(Z\) is independent of the sensor signals \(S_{1}\) and \(S_{2}\). Due to the superposition of the sensor signals, the overall signal \(S_{1}+S_{2}\) sent over the channel can be represented as a point in the combined constellation of \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), given by \[\mathcal{C}=\{c_{1}+c_{2}\mid c_{1}\in\mathcal{C}_{1},\;c_{2}\in\mathcal{C}_{2}\}. \tag{3}\] This work's objective is then to design optimal symmetric binary NOMA constellations in the sense of achieving the smallest possible error probability. Since sensor 1's constellation \(\mathcal{C}_{1}\) is fixed as BPSK, it is sufficient to optimize constellation \(\mathcal{C}_{2}\) for sensor 2, which is equivalent to finding the optimal rotation angle \(\theta\). This problem is tackled by establishing and optimizing a tight upper bound on the error probability. ## III Error Analysis and Constellation Design ### _Decoding Regions_ To recover the original data source, maximum-likelihood (ML) decoding is used (which is optimal as the source is uniform).
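As a concrete illustration of the model in (1)-(3), the short sketch below simulates a single channel use; the specific parameter values are assumptions chosen only for illustration, and the decoding of the received signal is derived next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: powers, noise level, crossover
# probabilities with eps1 < eps2, and the rotation of sensor 2's constellation.
P1, P2, N0 = 1.0, 1.5, 1.0
eps1, eps2 = 0.15, 0.17
theta = np.pi / 2

# Source bit and the two noisy sensor observations, per (1).
x = int(rng.integers(0, 2))
x1 = x ^ int(rng.random() < eps1)
x2 = x ^ int(rng.random() < eps2)

# Symmetric binary constellations: C1 is BPSK, C2 is BPSK rotated by theta.
c1 = np.sqrt(P1) * np.array([-1.0, 1.0])
c2 = np.sqrt(P2) * np.exp(1j * theta) * np.array([-1.0, 1.0])

# Received GMAC signal, per (2): the superimposed constellation point plus
# complex Gaussian noise with independent N(0, N0/2) real/imaginary parts.
z = np.sqrt(N0 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
r = c1[x1] + c2[x2] + z
```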
For a GMAC received signal \(r\in\mathbb{C}\), the decoded symbol is determined as follows: \[\hat{x}(r) =\operatorname*{arg\,max}_{i\in\{0,1\}}f(R=r\mid X=i)\] \[=\operatorname*{arg\,max}_{i\in\{0,1\}}\ \sum_{(l,m)\in\{0,1\}^{2}}p_{lm|i}e^{-\frac{|r-a_{lm}|^{2}}{N_{0}}} \tag{4}\] where \(f\) is the conditional probability density function (pdf), \(p_{lm|i}\triangleq\text{Pr}(X_{1}=l,X_{2}=m|X=i)\), and \(a_{lm}\in\mathcal{C}\) denotes the superimposed constellation symbol associated with \(X_{1}=l\) and \(X_{2}=m\). Note that the conditional probabilities \(p_{lm|i}\) can be expressed in terms of the sensor crossover probabilities: \[p_{11|0}=p_{00|1}=\epsilon_{1}\epsilon_{2},\quad p_{00|0}=p_{11|1}=(1-\epsilon_{1})(1-\epsilon_{2})\] \[p_{01|0}=p_{10|1}=(1-\epsilon_{1})\epsilon_{2},\quad p_{10|0}=p_{01|1}=\epsilon_{1}(1-\epsilon_{2}).\] To decode \(r\), the complex plane is partitioned into two decoding (or decision) regions, \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}=\mathcal{D}_{0}^{c}\), where \(\mathcal{D}_{i}=\{r\mid\hat{x}(r)=i\}\), \(i=0,1\). By taking advantage of symmetries, \(\mathcal{D}_{1}\) can be expressed explicitly as a function of the rotation angle \(\theta\) between constellations \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) as follows: \[\mathcal{D}_{1}(\theta) =\left\{r\,\Big{|}\sum_{(l,m)\in\{0,1\}^{2}}p_{lm|1}e^{-\frac{|r-a_{lm}|^{2}}{N_{0}}}>\sum_{(l,m)\in\{0,1\}^{2}}p_{lm|0}e^{-\frac{|r-a_{lm}|^{2}}{N_{0}}}\right\}\] \[=\left\{r\,\Big{|}\,(p_{11|1}-p_{11|0})e^{\frac{-|a_{11}|^{2}}{N_{0}}}\Big{(}e^{\frac{2\text{Re}(ra_{11}^{*})}{N_{0}}}-e^{\frac{-2\text{Re}(ra_{11}^{*})}{N_{0}}}\Big{)}>(p_{01|0}-p_{01|1})e^{\frac{-|a_{01}|^{2}}{N_{0}}}\Big{(}e^{\frac{2\text{Re}(ra_{01}^{*})}{N_{0}}}-e^{\frac{-2\text{Re}(ra_{01}^{*})}{N_{0}}}\Big{)}\right\}\] \[=\left\{r\,\Big{|}\,K_{1}(\theta)\sinh\Big{(}\frac{2\text{Re}(rc_{1,1}^{*})+2\text{Re}(rc_{1,2}^{*})}{N_{0}}\Big{)}>K_{0}(\theta)\sinh\Big{(}\frac{-2\text{Re}(rc_{1,1}^{*})+2\text{Re}(rc_{1,2}^{*})}{N_{0}}\Big{)}\right\}\] \[=\left\{r\,\Big{|}\,\tanh A(r)>\tanh B(r,\theta)\frac{K_{0}(\theta)-K_{1}(\theta)}{K_{0}(\theta)+K_{1}(\theta)}\right\}\] where \({}^{*}\) denotes complex conjugation, \[A(r)=\frac{2\text{Re}(rc_{1,1}^{*})}{N_{0}},\quad B(r,\theta)=\frac{2\text{Re}(rc_{1,2}^{*})}{N_{0}},\] \[K_{0}(\theta)=(\epsilon_{2}-\epsilon_{1})e^{\frac{-|a_{01}|^{2}}{N_{0}}},\quad K_{1}(\theta)=(1-\epsilon_{1}-\epsilon_{2})e^{\frac{-|a_{11}|^{2}}{N_{0}}},\] and the identity \(|a+b|^{2}=|a|^{2}+2\text{Re}(ab^{*})+|b|^{2}\) was used along with the facts that \(a_{11}=-a_{00}\) and \(a_{01}=-a_{10}\); the final equality follows from the addition formula \(\sinh(u\pm v)=\sinh u\cosh v\pm\cosh u\sinh v\) after dividing both sides by \(\cosh A(r)\cosh B(r,\theta)\big{(}K_{0}(\theta)+K_{1}(\theta)\big{)}>0\). Note also that the constellation points \(c_{1,2}\), \(a_{01}\) and \(a_{11}\) are all a function of \(\theta\). An example of regions \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\) is depicted in Fig. 1 for \(N_{0}=1\), \(P_{1}=1\), \(P_{2}=1.5\), \(\epsilon_{1}=0.15\), \(\epsilon_{2}=0.17\) and \(\theta=\frac{\pi}{2}\). ### _Error Upper Bound for Planar Decision Regions_ The system's error probability (under optimal ML decoding and uniform source \(X\)) can be expressed as \[P_{e}=\text{Pr}(R\in\mathcal{D}_{0}\mid X=1). \tag{5}\] Note that \(P_{e}=P_{e}(\theta)\), i.e., it is a function of \(\theta\), and the boundary between regions \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\) is given by \[\tanh A(r)=\tanh B(r,\theta)\frac{K_{0}(\theta)-K_{1}(\theta)}{K_{0}(\theta)+K_{1}(\theta)}.
\tag{6}\] An upper bound on \(P_{e}(\theta)\) in (5) is next obtained by restricting the decision regions to be _planar_, i.e., they are the left and right half planes: \[\mathcal{D}_{1}^{c}=\mathcal{D}_{0}=\mathcal{D}_{0,\text{planar}}\triangleq\{r\mid\text{Re}(r)\leq 0\}.\] Fig. 1: Decision regions for \(\theta=\frac{\pi}{2}\) (yellow region is \(\mathcal{D}_{1}\)). The red points represent the superimposed constellation points \(a_{lm}\in\mathcal{C}\). Thus \[P_{e}(\theta) =\text{Pr}(R\in\mathcal{D}_{0}\mid X=1)\] \[\leq\text{Pr}(R\in\mathcal{D}_{0,\text{planar}}\mid X=1)\] \[=\text{Pr}(\text{Re}(R)\leq 0\mid X=1)\] \[=\sum_{(l,m)\in\{0,1\}^{2}}p_{lm|1}\text{Pr}(\text{Re}(R)\leq 0|X_{1}=l,X_{2}=m)\] \[=\epsilon_{1}+(1-\epsilon_{1}-\epsilon_{2})Q\left(\frac{\sqrt{P_{1}}+\sqrt{P_{2}}\cos(\theta)}{\sigma}\right)\] \[\qquad\qquad+(\epsilon_{2}-\epsilon_{1})Q\left(\frac{\sqrt{P_{1}}-\sqrt{P_{2}}\cos(\theta)}{\sigma}\right)\] \[\triangleq P_{e}^{\text{ub}}(\theta), \tag{7}\] where \(\sigma=\sqrt{N_{0}/2}\) denotes the noise standard deviation, \(Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}\,dt\) is the Gaussian tail function and the penultimate equality holds since for a fixed \((X_{1},X_{2})=(l,m)\), the real part of the received signal, \(\text{Re}(R)\), is normal (Gaussian) with mean \(\text{Re}(a_{lm})\) and variance \(N_{0}/2\). ### _Existence of Planar Decision Regions_ With the error upper bound \(P_{e}^{\text{ub}}(\theta)\) obtained in (7) for planar decision regions, the objective is to minimize it over the constellation angle \(\theta\) and hence construct an optimized constellation \(\mathcal{C}_{2}\). But it is first shown that such planar decision regions indeed exist; that is, there is always a value of \(\theta\) that will result in the regions \(\mathcal{D}_{0}\) and \(\mathcal{D}_{1}\) being the left and right half planes, respectively. In order for the decision boundary to lie exactly on the imaginary axis, it is required that (6) hold for all \(r\) such that \(\text{Re}(r)=0\). Since \(\tanh A(r)=0\) for \(\text{Re}(r)=0\), and \(\tanh B(r,\theta)\neq 0\) for \(\text{Im}(r)\neq 0\) and \(\theta\neq 0\), (6) yields that \[\frac{K_{0}(\theta)-K_{1}(\theta)}{K_{0}(\theta)+K_{1}(\theta)}=0\] \[\implies K_{0}(\theta)=K_{1}(\theta)\] \[\implies (1-\epsilon_{1}-\epsilon_{2})e^{\frac{-|a_{11}|^{2}}{N_{0}}}=(\epsilon_{2}-\epsilon_{1})e^{\frac{-|a_{01}|^{2}}{N_{0}}}\] \[\implies \theta=\cos^{-1}(pcf) \tag{8}\] for \(0\leq pcf\leq 1\), where \[pcf\triangleq\frac{N_{0}}{4\sqrt{P_{1}P_{2}}}\ln\left(\frac{1-\epsilon_{2}-\epsilon_{1}}{\epsilon_{2}-\epsilon_{1}}\right) \tag{9}\] is called the _power-correlation-factor_. We remark that the \(pcf\) given in (9) may be larger than 1, in which case (8) does not hold. When \(pcf\leq 1\), it is easy to verify that \(\theta\) in (8) yields that the decision regions are exactly the left and right halves of the plane (note that \(pcf>0\) since \(0<\epsilon_{1}<\epsilon_{2}<1/2\)). When \(pcf>1\), it is next shown that for \(\theta=0\), the resulting decision regions are also the left and right half planes. First note that when \(\theta=0\), the constants \(K_{0}\) and \(K_{1}\) reduce to \[K_{0}(0) =(\epsilon_{2}-\epsilon_{1})e^{\frac{-P_{1}-P_{2}+2\sqrt{P_{1}P_{2}}}{N_{0}}} \tag{10}\] \[K_{1}(0) =(1-\epsilon_{2}-\epsilon_{1})e^{\frac{-P_{1}-P_{2}-2\sqrt{P_{1}P_{2}}}{N_{0}}}.
\tag{11}\] Now in light of (10) and (11), the condition \(pcf>1\) implies \[\frac{N_{0}}{4\sqrt{P_{1}P_{2}}}\ln\Big{(}\frac{1-\epsilon_{2}-\epsilon_{1}}{\epsilon_{2}-\epsilon_{1}}\Big{)}>1\] \[\implies \frac{1-\epsilon_{2}-\epsilon_{1}}{\epsilon_{2}-\epsilon_{1}}>e^{\frac{4\sqrt{P_{1}P_{2}}}{N_{0}}}\] \[\implies (1-\epsilon_{2}-\epsilon_{1})e^{\frac{-2\sqrt{P_{1}P_{2}}}{N_{0}}}>(\epsilon_{2}-\epsilon_{1})e^{\frac{2\sqrt{P_{1}P_{2}}}{N_{0}}}\] \[\implies K_{0}(0)<K_{1}(0)\] \[\implies \frac{K_{0}(0)-K_{1}(0)}{K_{0}(0)+K_{1}(0)}<0.\] Finally, noting that \(\text{sgn}(\tanh A(r))=\text{sgn}(\tanh B(r,0))\) for any \(r\), it directly follows that the resulting decoding regions are the left and right halves of the complex plane. ### _Optimizing the Planar Error Bound_ To optimize (7) over \(\theta\), first observe that the only dependence on \(\theta\) is through a cosine function, so the function is periodic and even. Hence it is sufficient to minimize it over the interval \(\theta\in[0,\pi]\). Since the expression is a bounded, differentiable function of \(\theta\), finding the smallest critical point or endpoint will solve the minimization problem. Solving for the critical points is done as follows, recalling that the derivative of the Gaussian tail function is \(\frac{d}{dx}Q(x)=-f_{\mathcal{N}}(x)\), where \(f_{\mathcal{N}}\) is the pdf of a standard normal random variable: \[\frac{d}{d\theta}P_{e}^{\text{ub}}(\theta)\] \[\quad=(1-\epsilon_{1}-\epsilon_{2})f_{\mathcal{N}}\Big{(}\frac{\sqrt{P_{1}}+\sqrt{P_{2}}\cos(\theta)}{\sigma}\Big{)}\Big{(}\frac{\sin\theta\sqrt{P_{2}}}{\sigma}\Big{)}\] \[\quad-(\epsilon_{2}-\epsilon_{1})f_{\mathcal{N}}\Big{(}\frac{\sqrt{P_{1}}-\sqrt{P_{2}}\cos(\theta)}{\sigma}\Big{)}\Big{(}\frac{\sin\theta\sqrt{P_{2}}}{\sigma}\Big{)}. \tag{12}\] From (12), it is clear that \(\sin(\theta)=0\) results in a critical point. This is equivalent to \(\theta\in\{0,\pi\}\), which takes care of the endpoints as well. Now, assuming \(\sin(\theta)\neq 0\) and solving for when the above derivative is zero yields: \[\frac{(1-\epsilon_{1}-\epsilon_{2})}{(\epsilon_{2}-\epsilon_{1})}e^{-\frac{(\sqrt{P_{1}}+\sqrt{P_{2}}\cos(\theta))^{2}}{2\sigma^{2}}}=e^{-\frac{(\sqrt{P_{1}}-\sqrt{P_{2}}\cos(\theta))^{2}}{2\sigma^{2}}}\] \[\implies (1-\epsilon_{1}-\epsilon_{2})e^{\frac{-2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}=(\epsilon_{2}-\epsilon_{1})e^{\frac{2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}\] \[\implies \theta=\cos^{-1}(pcf) \tag{13}\] where the above critical point exists if and only if \(pcf\leq 1\). Hence the error bound expression has 2 or 3 critical points depending on the value of \(pcf\). It is however easy to verify that \(P_{e}^{\text{ub}}(0)<P_{e}^{\text{ub}}(\pi)\) using the fact that \(\cos(0)=-\cos(\pi)=1\): \[P_{e}^{\text{ub}}(0)-P_{e}^{\text{ub}}(\pi) =(1-\epsilon_{1}-\epsilon_{2})(Q_{1}-Q_{2})\] \[\qquad+(\epsilon_{2}-\epsilon_{1})(Q_{2}-Q_{1})\] \[=(1-2\epsilon_{2})(Q_{1}-Q_{2})<0\] where \(Q_{1}\triangleq Q(\frac{\sqrt{P_{1}}+\sqrt{P_{2}}}{\sigma})\), \(Q_{2}\triangleq Q(\frac{\sqrt{P_{1}}-\sqrt{P_{2}}}{\sigma})\) and the inequality follows since \(Q_{1}<Q_{2}\) (as the \(Q\)-function is decreasing). All that remains is to compare \(P_{e}^{\text{ub}}(0)\) and \(P_{e}^{\text{ub}}(\cos^{-1}(pcf))\) (if the latter critical point exists). It suffices to show in this case that the derivative of the error expression in (12) is less than zero for each \(\theta\in(0,\cos^{-1}(pcf))\), so that the bound is strictly decreasing up to this critical point, which is then the global minimum over \([0,\pi]\).
First note that \(\theta\in(0,\cos^{-1}(pcf))\implies\cos(\theta)\in(pcf,1)\), and hence, recalling the expression of \(pcf\) in (9), the following holds for these \(\theta\) values: \[(1-\epsilon_{1}-\epsilon_{2})e^{\frac{-2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}-(\epsilon_{2}-\epsilon_{1})e^{\frac{2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}<(1-\epsilon_{1}-\epsilon_{2})e^{\frac{-2\,pcf\sqrt{P_{1}P_{2}}}{N_{0}}}-(\epsilon_{2}-\epsilon_{1})e^{\frac{2\,pcf\sqrt{P_{1}P_{2}}}{N_{0}}}=0. \tag{14}\] Thus, for \(\theta\in(0,\cos^{-1}(pcf))\), \(dP_{e}^{\text{ub}}(\theta)/d\theta\) in (12) satisfies \[\frac{d}{d\theta}P_{e}^{\text{ub}}(\theta)=\frac{\sin\theta\sqrt{P_{2}}}{\sigma\sqrt{2\pi}}\Big{(}(1-\epsilon_{1}-\epsilon_{2})e^{-\frac{(\sqrt{P_{1}}+\sqrt{P_{2}}\cos(\theta))^{2}}{N_{0}}}-(\epsilon_{2}-\epsilon_{1})e^{-\frac{(\sqrt{P_{1}}-\sqrt{P_{2}}\cos(\theta))^{2}}{N_{0}}}\Big{)}\] \[=\frac{\sin\theta\sqrt{P_{2}}}{\sigma\sqrt{2\pi}}e^{-\frac{P_{1}+P_{2}\cos^{2}(\theta)}{N_{0}}}\Big{(}(1-\epsilon_{1}-\epsilon_{2})e^{-\frac{2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}-(\epsilon_{2}-\epsilon_{1})e^{\frac{2\cos(\theta)\sqrt{P_{1}P_{2}}}{N_{0}}}\Big{)}<0\] where the last inequality holds by (14) and the fact that \(\sin(\theta)>0\) for all \(\theta\in(0,\cos^{-1}(pcf))\). Hence the smallest critical point (and minimum) of the planar error bound is \(\theta=\cos^{-1}(pcf)\), or \(\theta=0\) if \(pcf>1\). ### _Least Planar Upper Bound_ The implication of the above results is that the smallest planar upper bound over the constellation rotation parameter \(\theta\) is achieved at \[\theta_{ub}^{*}=\cos^{-1}(\min\{pcf,1\}) \tag{15}\] and, upon substituting (15) into (7), is given by \[P_{e}^{\text{ub}}(\theta_{ub}^{*})=\epsilon_{1}+(1-\epsilon_{1}-\epsilon_{2})Q\left(\frac{\sqrt{P_{1}}+\sqrt{P_{2}}\min\{pcf,1\}}{\sigma}\right)+(\epsilon_{2}-\epsilon_{1})Q\left(\frac{\sqrt{P_{1}}-\sqrt{P_{2}}\min\{pcf,1\}}{\sigma}\right). \tag{16}\] Note that the \(\theta\) in (15) minimizing \(P_{e}^{\text{ub}}(\theta)\) is identical to the one derived in Section III-C yielding planar decision regions. ## IV Numerical and Simulation Results It is next demonstrated that this upper bound performs very well experimentally at any signal-to-noise ratio (SNR). This is achieved by comparing the optimal upper bound in (16) with an experimentally determined optimal error probability. The system's SNR is calculated as a geometric average, i.e., \(\text{SNR}=\frac{\sqrt{P_{1}P_{2}}}{N_{0}}\), and it is reported in dB (i.e., \(\text{SNR (dB)}=10\log_{10}(\text{SNR})\)). The process for generating the experimental data is as follows. For each SNR and \(\theta\) value, \(n=30\) trials, each consisting of \(N=100{,}000\) independent source bits being sent through the channel, were simulated. The SNR values, ranging from -10 to 20 dB, are listed in Table I, and 100 \(\theta\) values, equally spaced in \([0,\frac{\pi}{2}]\), were simulated. The ML decoding rule was applied to the simulated data and the error rate for each trial was calculated for each \(\theta\). The error probability was estimated for each \(\theta\) by averaging the 30 trial error rates. As shown in Table I, at low SNRs, the \(\theta_{ub}^{*}\) values minimizing the error upper bound \(P_{e}^{\text{ub}}(\theta)\) and the \(\theta_{exp}^{*}\) values minimizing the simulated true error probability \(P_{e}(\theta)\) are nearly identical.
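The comparison just described is easy to reproduce in miniature. The sketch below implements the ML rule (4), the planar bound (7), and the optimized rotation (15); the parameter values are assumptions, and the trial sizes are reduced relative to the experiment above purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
Q = norm.sf  # Gaussian tail function Q(x)

def pe_upper_bound(theta, P1, P2, N0, e1, e2):
    """Planar upper bound (7) on the error probability."""
    s = np.sqrt(N0 / 2)
    return (e1 + (1 - e1 - e2) * Q((np.sqrt(P1) + np.sqrt(P2) * np.cos(theta)) / s)
            + (e2 - e1) * Q((np.sqrt(P1) - np.sqrt(P2) * np.cos(theta)) / s))

def simulate_pe(theta, P1, P2, N0, e1, e2, n_bits=100_000):
    """Monte Carlo estimate of the error probability under ML decoding (4)."""
    x = rng.integers(0, 2, n_bits)
    x1 = x ^ (rng.random(n_bits) < e1)   # sensor observations, per (1)
    x2 = x ^ (rng.random(n_bits) < e2)
    s1 = np.sqrt(P1) * (2 * x1 - 1)
    s2 = np.sqrt(P2) * np.exp(1j * theta) * (2 * x2 - 1)
    z = np.sqrt(N0 / 2) * (rng.standard_normal(n_bits)
                           + 1j * rng.standard_normal(n_bits))
    r = s1 + s2 + z                      # GMAC output, per (2)
    # Superimposed constellation points a_lm and weights p_{lm|i}.
    a = lambda l, m: (np.sqrt(P1) * (2 * l - 1)
                      + np.sqrt(P2) * np.exp(1j * theta) * (2 * m - 1))
    p = lambda l, m, i: (e1 if l != i else 1 - e1) * (e2 if m != i else 1 - e2)
    lik = lambda i: sum(p(l, m, i) * np.exp(-np.abs(r - a(l, m)) ** 2 / N0)
                        for l in (0, 1) for m in (0, 1))
    return np.mean((lik(1) > lik(0)) != (x == 1))

P1, P2, N0, e1, e2 = 1.0, 1.5, 1.0, 0.15, 0.17
pcf = N0 / (4 * np.sqrt(P1 * P2)) * np.log((1 - e1 - e2) / (e2 - e1))
theta_star = np.arccos(min(pcf, 1.0))                    # optimized rotation (15)
print(pe_upper_bound(theta_star, P1, P2, N0, e1, e2))    # least planar bound (16)
print(simulate_pe(theta_star, P1, P2, N0, e1, e2))       # simulated ML error
```

Note that when \(pcf\leq 1\), the decision regions at \(\theta=\theta_{ub}^{*}\) are exactly the half planes (Section III-C), so the planar bound is tight at this rotation and the two printed values should agree up to simulation noise.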
At high SNRs, however, the two sets of \(\theta^{*}\) values no longer agree; this discrepancy is attributed to the system's high SNR behavior (analyzed in Section V), where it is shown that both \(P_{e}^{\text{ub}}(\theta)\) and \(P_{e}(\theta)\) approach the _same constant_ (\(\epsilon_{1}\)) that is independent of \(\theta\). This explains why the optimal \(\theta_{exp}^{*}\) values are hard to obtain accurately at high SNRs. However, this is of little practical concern since the optimal error probability is not sensitive to \(\theta\) in the high SNR regime. Furthermore, the optimal \(\theta_{ub}^{*}\) values in Table I are accurate for large SNRs and approach the theoretical limit of \(\pi/2\) as shown in Section V. Finally, note that for any SNR, it is the resulting optimal error values that are of primary interest, and these can be seen in Fig. 2. The error bars show the \(95\%\) confidence intervals for the optimal bit error rate and are calculated using the standard deviation of the minimum values over each of the \(n\) trials. From Fig. 2 note that, for the entire SNR range, the error probability graphs are nearly identical and the experimental error bars overlap with the upper bound at most data points. It is also observed that at high SNRs, although the optimal \(\theta\) values do not line up very well, the resulting optimal error values are still very close together. ## V High SNR Analysis For this analysis, high SNR is defined as the regime \(N_{0}\to 0\). This is a reasonable convention since the individual sensor SNRs should grow at similar rates, and one sensor should not have infinitely more power than the other. ### _Upper Bound Analysis_ Taking the limit as \(N_{0}\to 0\) of the power-correlation-factor's expression in (9) directly gives that \[\lim_{N_{0}\to 0}pcf=\lim_{N_{0}\to 0}\frac{N_{0}}{4\sqrt{P_{1}P_{2}}}\ln\Big{(}\frac{1-\epsilon_{2}-\epsilon_{1}}{\epsilon_{2}-\epsilon_{1}}\Big{)}=0.\] This directly implies using (15) that \(\lim_{N_{0}\to 0}\theta^{*}_{\text{ub}}=\frac{\pi}{2}\), which in turn yields using (16) that \[\lim_{N_{0}\to 0}P^{\text{ub}}_{e}(\theta^{*}_{ub})=\epsilon_{1}\] since \(\lim_{x\rightarrow\infty}Q(x)=0\). The above results show that the optimal upper bound constellation approaches _orthogonal signaling_, and that \(P^{\text{ub}}_{e}(\theta^{*}_{ub})\) approaches the error probability incurred by just sending the signal with more correlation to the original source (i.e., just sending \(X_{1}\)). ### _Asymptotic Optimality_ It is next pointed out that the upper bound \(P^{\text{ub}}_{e}(\theta^{*}_{ub})\) approaches the true minimum error probability \(P_{e}(\theta^{*})\) in the high SNR regime. Due to space limitations, this result is not proved rigorously; it is instead explained via an intuitive argument. First note that for any fixed set of parameters (including \(\theta\)), the decision regions in the high SNR regime approach the nearest neighbour decoding regions for the sensor with more correlation to the original source. That is, the region \(\mathcal{D}_{1}\) approaches the union of the two nearest neighbour decoding regions for the points associated with \(X_{1}=1\); this is illustrated in Fig. 3. Note that the boundary of these limiting decision regions is a combination of straight lines. It can be verified that the two vertical lines occur at \(x=\pm\sqrt{P_{2}}\cos(\theta)\) and the diagonal line is described by the equation \(y=\frac{\sqrt{P_{1}}-\sqrt{P_{2}}\cos(\theta)}{\sqrt{P_{2}}\sin(\theta)}x\).
The only exception to this is at \(\theta=0\), where there is no diagonal line, and instead there is a third vertical line at \(x=0\). Also observe that there is no dependence on \(\epsilon_{1}\) or \(\epsilon_{2}\) in the high SNR decision regions. As the SNR grows without bound, the probability that the received signal falls outside the sent constellation point's decision region approaches zero. Hence the decoding rule at high SNRs becomes \(\hat{X}=X_{1}\). This immediately implies that the error probability at any \(\theta\) approaches \(P(X_{1}\neq X)=\epsilon_{1}\). Hence \[\lim_{N_{0}\to 0}P_{e}(\theta^{*})=\epsilon_{1}=\lim_{N_{0}\to 0}P^{\text{ub}}_{e}(\theta^{*}_{ub})\] and \(P^{\text{ub}}_{e}(\theta^{*}_{ub})\) is asymptotically optimal. ## VI Future Work Given the experimental results showing strong agreement between the optimal upper bound and the overall optimal error performance, the first natural extension of this work is to prove that this upper bound is indeed the optimal value, or to characterize the conditions under which it is not, should any edge cases not considered herein exist. Furthermore, the source distribution was taken to be uniform and the binary constellations were restricted to be symmetric. This problem could be generalized to non-uniform binary sources, or extended with additional constellation parameters to allow non-symmetric constellations, potentially yielding better error performance.
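Finally, the high SNR behaviour established in Section V is easy to check numerically. The sketch below (same assumed parameters as the earlier snippets) evaluates the optimized rotation (15) and the bound (16) as \(N_{0}\) shrinks; \(\theta_{ub}^{*}\) approaches \(\pi/2\) and the bound approaches \(\epsilon_{1}=0.15\), as predicted.

```python
import numpy as np
from scipy.stats import norm

P1, P2, e1, e2 = 1.0, 1.5, 0.15, 0.17   # assumed example parameters
for N0 in (1.0, 0.1, 0.01):
    s = np.sqrt(N0 / 2)
    pcf = N0 / (4 * np.sqrt(P1 * P2)) * np.log((1 - e1 - e2) / (e2 - e1))
    m = min(pcf, 1.0)
    theta_star = np.arccos(m)            # optimized rotation (15)
    bound = (e1 + (1 - e1 - e2) * norm.sf((np.sqrt(P1) + np.sqrt(P2) * m) / s)
             + (e2 - e1) * norm.sf((np.sqrt(P1) - np.sqrt(P2) * m) / s))
    print(f"N0={N0}: theta*={theta_star:.3f} rad, bound={bound:.4f}")
# As N0 -> 0: theta* -> pi/2 (orthogonal signaling) and bound -> eps1 = 0.15.
```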
2309.06697
Designing Voice Interfaces to Support Mindfulness-Based Pain Management
Objective: Chronic pain is a critical public health issue affecting approximately 20% of the adult population in the United States. Given the opioid crisis, there has been an urgent focus on non-addictive pain management methods including Mindfulness-Based Stress Reduction (MBSR). Prior work has successfully used MBSR for pain management. However, ensuring longitudinal engagement with MBSR practices remains a serious challenge. In this work, we explore the utility of a voice interface to support MBSR home practice. Methods: We interviewed ten mindfulness program facilitators to understand how such a technology might fit in the context of the MBSR class and identify potential usability issues with our prototype. We then used directed content analysis to identify key themes and sub-themes within the interview data. Results: Our findings show that facilitators supported the use of the voice interface for MBSR, particularly for individuals with limited motor function. Facilitators also highlighted unique affordances of voice interfaces, including perceived social presence, to support sustained engagement. Conclusion: We demonstrate the acceptability of a voice interface to support home practice for MBSR participants among trained mindfulness facilitators. Based on our findings, we outline design recommendations for technologies aiming to provide longitudinal support for mindfulness-based interventions. Future work should further these efforts towards making non-addictive pain management interventions accessible and efficacious for a wide audience of users.
Sanjana Mendu, Sebrina L. Doyle Fosco, Stephanie T. Lanza, Saeed Abdullah
2023-09-13T03:39:52Z
http://arxiv.org/abs/2309.06697v1
# Designing Voice Interfaces to Support Mindfulness-Based Pain Management ###### Abstract **Objective:** Chronic pain is a critical public health issue affecting approximately 20% of the adult population in the United States. Given the opioid crisis, there has been an urgent focus on non-addictive pain management methods including Mindfulness-Based Stress Reduction (MBSR). Prior work has successfully used MBSR for pain management. However, ensuring longitudinal engagement with MBSR practices remains a serious challenge. In this work, we explore the utility of a voice interface to support MBSR home practice. **Methods:** We interviewed ten mindfulness program facilitators to understand how such a technology might fit in the context of the MBSR class and identify potential usability issues with our prototype. We then used directed content analysis to identify key themes and sub-themes within the interview data. **Results:** Our findings show that facilitators supported the use of the voice interface for MBSR, particularly for individuals with limited motor function. Facilitators also highlighted unique affordances of voice interfaces, including perceived social presence, to support sustained engagement. **Conclusion:** We demonstrate the acceptability of a voice interface to support home practice for MBSR participants among trained mindfulness facilitators. Based on our findings, we outline design recommendations for technologies aiming to provide longitudinal support for mindfulness-based interventions. Future work should further these efforts towards making non-addictive pain management interventions accessible and efficacious for a wide audience of users. mindfulness-based stress reduction, voice interface, smart speaker, chronic pain ## Introduction Chronic pain is a serious public health issue affecting approximately 20% of the US adult population [(89)]. Chronic pain is also the leading cause of disability in the United States and can result in impaired physical and mental functioning as well as reduced quality of life [(74)]. Although many treatments, both pharmacological and nonpharmacological, are available for managing chronic pain [(75)], an estimated 5 to 8 million Americans are prescribed opioids for long-term pain management [(67)]. Opioid pain medication is particularly effective in short-term pain management, but poses serious health risks since it can lead to misuse and addiction. The widespread use of opioid pain medication has contributed to a national epidemic of addiction in the US; more than 80,000 deaths occurred in 2021 alone due to opioid pain medication overdose [(56)]. As such, there is an urgent need to develop and deploy non-addictive chronic pain management methods. Reports from the National Pain Strategy (NPS) [(31)] and Institute of Medicine [(41)] emphasize the need for evidence-based strategies that address the biopsychosocial nature of this problem. Furthermore, recent guidelines from the Centers for Disease Control and Prevention (CDC) on chronic pain included a recommendation on the preferred use of nonopioid treatment over opioid therapy [(28)]. These initiatives speak to the importance of advancing work on improving the accessibility of noninvasive, nonpharmacological treatment of chronic pain. ### Mindfulness-Based Strategies for Chronic Pain Management Toward this goal, a number of recent studies have used mindfulness-based interventions for chronic pain management [(80; 83; 68; 25; 53)].
Recent studies have identified Mindfulness-Based Stress Reduction (MBSR) to be a promising alternative for long-term pain management [(38)]. MBSR is a particularly popular approach since it was developed specifically for patients with chronic pain who did not have their needs met by the traditional medical establishment [(43)]. MBSR was designed to increase psychological distress tolerance [(59)], in part by enabling individuals to disentangle the emotional and fear-based aspects of pain from the physical sensations [(50)]. While external sensations of pain may remain unchanged, the accompanying negative emotional and cognitive processes (e.g., hurt, suffering) of the pain experience can be reduced through this disentanglement [(42)]. This is particularly helpful in the context of chronic pain, due to the persistent and often lifelong nature of the experience. A significant body of work supports the efficacy of MBSR in reducing the adverse impact of chronic pain stemming from migraines [(85; 72)], lower back injuries [(5; 77; 24)], and other conditions [(45; 68)]. MBSR has also been shown to significantly reduce perception of pain intensity and functional limitations [(83; 25; 53)]. Despite these promising findings, a number of challenges can hinder the use of MBSR practices for chronic pain management. Because the efficacy of MBSR partially depends on engaging in the practices, long-term and regular home practice is essential for effective pain management, and thus for reduction in opioid use. However, this can be particularly challenging for those new to mindfulness, leading to problems with treatment compliance and impacting outcomes for those engaged in the program [(68)]. These challenges are particularly impactful for individuals living with chronic pain due to the heightened physical and emotional discomfort that can arise in the initial stages of the intervention [(51)]. Given the dose-response relationship between duration of MBSR practice and its degree of effectiveness, low adherence can reduce its usefulness for chronic pain management [(20)]. In other words, long-term engagement with MBSR practices is essential for effective pain management and subsequently reducing the risk of opioid dependence. ### Voice Interfaces to Support Engagement with Mindfulness Practice A large body of work has examined the utility of technology in supporting the practice of mindfulness [(47; 48; 79)]. Prior work has leveraged a diverse range of technological systems, including smartphone apps [(46)], web interfaces [(14; 80)], and other technology-based systems (e.g., biofeedback, virtual reality) [(19; 36; 76)]. While these systems have been relatively successful in supporting mindfulness outcomes, their visual and tactile interactions may present challenges for individuals with limited motor functioning [(6; 81)]. Given that individuals with chronic pain might have limited mobility, touch- or click-driven interfaces can pose serious accessibility challenges for them. Furthermore, long-term engagement with these technologies tends to be quite low. Voice interfaces have recently emerged as a promising tool to promote user engagement with health interventions. In this work, we define "voice interfaces" as systems for which interactions are primarily voice-based, such that the user talks to the device and the device responds with a synthesized voice (e.g., Amazon Alexa [(3)], Google Assistant [(34)]). Voice interfaces offer several advantages, including convenience, simplicity and confidentiality [(82)].
A number of studies have found that such conversational technology can promote social rapport [(49)] and establish a trusting relationship with human participants [(65)]. A growing body of work has documented the incremental effects resulting from the use of these technologies in healthcare settings, particularly for management of chronic conditions [(63; 13)]. A recent meta-analysis found that voice-driven technologies are effective in promoting adherence to health-promoting behaviors, such as medication taking and disease prevention [(44)]. Prior research concerning the application of voice interfaces in health contexts has highlighted health tracking and monitoring, assistance in locating health providers, and collecting data to aid in self-driven decision making [(58; 8)] as promising applications. Meta-analyses of customer reviews revealed that users are primarily interested in utilizing voice interface technology for self-management, as a memory aid, or for overcoming accessibility issues [(27; 64)]. The Amazon Alexa platform has been leveraged to assess deaf speech [(15)], provide task support for individuals with cognitive disabilities [(21)], and increase physical activity among overweight or obese cancer survivors [(37)]. Voice interfaces have further been shown to uniquely support positive health outcomes within vulnerable contexts, such as metastatic breast cancer [(66; 35)] and social anxiety [(84)]. These results highlight the potential for voice interfaces to support behavior change and improve clinical outcomes, particularly for vulnerable populations. Prior work has further pointed to the utility of voice interfaces specifically for individuals living with chronic pain [(16; 30)]. While some researchers have utilized conversational technologies to support mindfulness interventions [(40; 73)], few have focused on voice interfaces. One study from Naylor et al. [(57)] shows the potential for voice interfaces. Using a cognitive behavioral therapy (CBT) approach, they used a phone-based interactive voice response system to support self-tracking of pain control and mood as well as provide guided assistance in practicing therapeutic skills. However, at this time, there is no existing work which uses a voice interface to deliver evidence-based mindfulness interventions specifically designed for individuals with chronic pain. While a number of mindfulness applications exist currently for the Amazon Alexa ecosystem [(69)], none are oriented specifically for chronic pain management. Moreover, these applications are not informed by clinically-validated research efforts. As a result, there is no empirical evidence of their effectiveness. ### Current Study In this work, we developed a voice interface to deliver on-demand mindfulness practices. Specifically, we leveraged the Amazon Alexa ecosystem to support personalized delivery of MBSR practices designed for individuals living with chronic pain. By extending the current capabilities of Alexa, we designed a personalized and engaging virtual mindfulness practice support tool. This approach has the potential to support high adherence to MBSR practices over a long period of time due to the potential for relationship building between users and the technology. Furthermore, using an interactive voice interface may eliminate a key barrier to the success of MBSR by improving the accessibility of home practice, particularly for individuals living with chronic pain.
Finally, this approach is highly scalable and can improve access to mindfulness practices for underserved populations, including those in rural communities who face disproportionately negative consequences due to opioid prescriptions for pain management. The goal of the current study is to understand the acceptability of this voice interface among mindfulness experts to establish the utility of this technology in supporting MBSR home practice. Specifically, we conducted semi-structured interviews with ten certified mindfulness facilitators to understand how such a technology might fit in the context of the MBSR class and identify potential usability issues with our prototype. By drawing on their rich understanding of the principles of MBSR and participant challenges informed by years of practice and experience (62), we have gathered a wide array of meaningful insights on how voice interfaces can effectively support mindfulness practice and what is being taught in MBSR training. Based on our findings, we have outlined a number of design recommendations for future developers of voice interfaces to support evidence-based mindfulness interventions. ## Method ### System Design For this study, we developed a voice interface to deliver MBSR practices. Introduction to the MBSR practices was adapted to be short, interactive, and appropriate for the Amazon Alexa dialogue system. This content generation was led by the second author, who completed a 6-day MBSR intensive teacher training. Requirements for participation in the teacher training included (1) one year of personal mindfulness practice, (2) participation in a previous MBSR training, and (3) participation in a 5-day silent retreat. The second author is also a certified facilitator and master trainer for two other mindfulness-based programs. She recorded all practices available on the application. The voice interface was designed to act as a virtual coach that could quickly and easily provide access to supporting resources for MBSR home practice, as well as foster engagement through establishing social rapport. To achieve an interactive voice-driven dialogue flow, the Alexa Skills Kit SDK (4) was used to implement the voice interface and dialogue models in the prototype. To design the dialogue flow for the voice interface, turn-taking points were identified and branching logic was established to reflect user inputs and the selected intervention strategy. The user flow diagram of the voice interface prototype is shown in Figure 1. When the user starts the interaction, they first receive a friendly verbal greeting. This greeting welcomes the user to the interaction and provides a brief list of basic functions available through the skill (i.e., "recommend a session" based on progression through the MBSR curriculum or "play a guided or unguided session"). If the user requests a recommendation, they are then asked to specify which week of MBSR they are currently on. Suggestions for practice for each of the eight weeks were derived from the MBSR Authorized Curriculum Guide (70). Based on the user's response, Alexa provides a tailored recommendation reflecting the home practice assigned for that week. Alternatively, if the user requests a guided or unguided session, they are then asked how long they would like to practice for. If the length they have requested is not available, they are subsequently informed and asked to modify their request. Once the session length is confirmed, the available practices are specified if the user requested a guided session.
Upon confirming the desired practice, the user is asked whether they would like ambient sounds to play in the background during their practice. Finally, once all variable parameters have been specified, the voice interface plays the appropriate pre-recorded practice. While novice users are expected to specify these parameters one at a time, the prototype is configured to allow experienced users to make more complex requests containing multiple parameter specifications in one command (e.g., "I would like a 30-minute body scan without ambient noise"). This function is intended to reduce redundancy across longitudinal use and streamline access to embedded content (a simplified sketch of this dialogue flow is given at the end of this section). The co-author with MBSR expertise scripted and recorded all guided practices used in the prototype. Figure 1: User Flow Diagram. The interactive dialogue structure includes branching logic based on user input and spoken utterances (i.e., dialogue interactions) implemented using the Amazon Alexa SDK (4). ### Participants We recruited 10 mindfulness program facilitators (7 female, 3 male; ages 34-75; see Table 1) via e-mail and over social media through the second author's professional networks. Due to their rich understanding of the principles of MBSR and participant challenges informed by years of practice and experience (62), mindfulness program facilitators are an invaluable resource in informing the development of this technology. All participants had received formal training in facilitation of MBSR or related mindfulness-based interventions (e.g., Learning2Breathe (17)) and had previously participated in MBSR. No monetary compensation was provided for participation; however, participants who did not have access to a physical Amazon Alexa device (i.e., smart speaker) were mailed a 2nd generation Amazon Echo Dot. ### Study Procedure Participants were asked to complete an online survey containing basic demographic questions and six open-ended questions regarding their current facilitation practices, prior experience with technologies to support mindfulness practice, and attitudes regarding the use of voice interfaces to support MBSR. These questions were based on prior work on eliciting feedback on system prototypes from domain experts (7). The online survey remained open to data collection for three months (from June to August 2021) and was designed to take approximately 30 minutes to complete (5 minutes per question). Participants were directed to explore the prototype's functionality on a physical smart speaker device for as long as they wished. Participants who did not have access to a physical device were mailed an Amazon Echo Dot. Participants were asked to set up the device in a location of their choosing and install the prototype application on the device. Basic written instructions regarding the setup process were provided via email and the first author remotely provided technical support as needed. The interaction task was intentionally left unstructured (i.e., no instruction regarding tasks beyond setup) to capture the intuitiveness of the conversational flow to a naive user. Data regarding the length of interaction and specific dialogue selections were not collected. Following the interaction, participants were asked to evaluate the usability of the prototype via the System Usability Scale (SUS) (18), followed by a series of open-ended questions regarding the perceived efficacy and impact of the tool as well as suggestions for improving the current prototype.
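To make the dialogue flow described under System Design concrete, the following is a minimal, platform-agnostic rendering of Figure 1's branching logic. The prototype itself was built with the Alexa Skills Kit SDK (4); the function names, prompts, and available session lengths here are illustrative assumptions only, not the deployed skill.

```python
# Minimal sketch of the Figure 1 dialogue flow. All names, prompts, and
# available durations below are illustrative assumptions, not the deployed skill.
GUIDED_SESSIONS = {10: ["body scan"], 20: ["body scan", "sitting meditation"],
                   30: ["body scan", "sitting meditation", "mindful yoga"]}

def recommend(week: int) -> str:
    # Tailored weekly recommendation; the prototype derives this mapping from
    # the MBSR Authorized Curriculum Guide (placeholder values shown here).
    plan = {1: "a 30-minute body scan", 2: "a 30-minute body scan"}
    return f"For week {week}, try {plan.get(week, 'a guided sitting meditation')}."

def request_session(guided: bool, minutes: int, ambient: bool) -> str:
    if minutes not in GUIDED_SESSIONS:
        # As in the prototype: inform the user and ask them to modify the
        # request when the requested length is unavailable.
        return f"I don't have a {minutes}-minute session. Please pick another length."
    practice = GUIDED_SESSIONS[minutes][0] if guided else "unguided practice"
    sounds = "with" if ambient else "without"
    return f"Playing a {minutes}-minute {practice} {sounds} ambient sounds."

# A novice specifies parameters turn by turn; an experienced user's one-shot
# utterance ("a 30-minute body scan without ambient noise") binds all three.
print(recommend(2))
print(request_session(guided=True, minutes=30, ambient=False))
```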
The SUS is an empirically validated scale (10) designed to measure users' subjective ratings of a system's usability, and is composed of 10 statements that are scored on a 5-point scale of strength of agreement (0 = "Completely disagree" to 4 = "Completely agree"). Final scores for the SUS can range from 0 to 100, where higher scores indicate better usability. A SUS score of 70 is generally considered to be an indicator of good usability, while a score below 50 indicates significant usability concerns (9). After the survey, we conducted one-on-one follow-up interviews with participants. Interviews followed a semi-structured protocol (see Supplementary Material) using the same open-ended questions from the survey as a starting point. Interviews were conducted online via Zoom and lasted approximately 60 minutes. ### Analysis We used directed content analysis (39) to identify key themes and sub-themes within the interview data. Interviews were audio-recorded and transcribed for coding. We began by deriving an initial set of codes corresponding to individual questions from the semi-structured interview protocol (see Supplementary Material). The first author then manually coded facilitators' responses (i.e., open-ended survey question responses, interview transcripts) with support from the second author. The initial set of codes was iteratively expanded to accommodate emerging themes in the data which were not captured within the interview protocol. The final codebook was then circulated among members of the study team to establish consensus. Data saturation was established by observing the repetition of existing codes across different participants' transcripts, as well as the lack of emerging codes across later iterations of coding. ## Results Based on our analysis, we report findings on facilitators' current techniques for supporting home practice, their perceptions and expectations of technology to support mindfulness practice, their responses to the voice interface prototype, and potential implications for their class participants, particularly those living with chronic pain. ### Identified Needs in Current MBSR Practices The need to support regular home practice was a common theme in the interviews. Facilitators described different challenges and their strategies for supporting practice outside of the formal sessions. Based on facilitators' feedback, we have highlighted identified needs and the potential of technologies to address these needs in the following section. #### Supporting Engagement The structure of MBSR requires engagement with different practices across weeks. Identifying appropriate content and practices for a given week can be confusing for MBSR participants. Facilitators reported using handouts and email reminders. However, there is a clear gap when it comes to effectively communicating how individuals should engage with MBSR practices, as P8 pointed out: _"[A] lot of people do [get confused about what practices they should do], even when [we give] them a handout, even when it's written down, even with the reminder email"_ (P8). This lack of knowledge can deter individuals from engaging with regular home practices.
| PID | Gender | Age | MBSR Exp (in yrs.) | MBSR and Related Exp (in yrs.) |
| --- | --- | --- | --- | --- |
| P1 | F | 72 | 10 | 10 |
| P2 | M | 34 | 0 | 6 |
| P3 | F | 46 | 6 | 11 |
| P4 | F | 62 | 3 | 3 |
| P5 | F | 47 | 7 | 12 |
| P6 | F | 50 | 0 | 5 |
| P7 | F | 56 | 1 | 1 |
| P8 | M | 39 | 12 | 12 |
| P9 | M | 71 | 5 | 5 |
| P10 | F | 75 | 23 | 23 |

Table 1: Participant Demographics. All facilitators are Caucasian/White. Two facilitators (P2, P6) were not trained in MBSR but were trained in adjacent mindfulness programs.

Facilitators also primarily used email to provide links to different MBSR resources. However, one participant noted that doing so could create additional barriers for MBSR participants to engage with in-the-moment practices. P7 commented that _"any time you put any kind of barrier of 'I have to search for an email that has the meditation that I'm supposed to do', that's just another thing to stop them from doing it"_ (P7). #### Tracking Practices and Experiences The facilitators expressed interest in collecting both quantitative and qualitative measures to assess attributes of practices as well as track personal experiences. P1 noted that the quantitative data could help facilitators to address non-engagement issues: _"If you haven't done it, you know, what's getting in the way? Is it because you're so busy? Or is it because there might be a little bit of resistance?"_ (P1). P2 also commented on the usefulness of engagement data to identify potential obstacles: _"Ideally, it [will be] cool to be able to see that, on average, students practice this many times a week [...] Because from an instructor's perspective, you want to make sure that [...] they're not experiencing obstacles to keep them from practicing"_ (P2). There was also a consistent focus on collecting qualitative data to better understand an MBSR participant's experiences throughout the learning steps. For example, P2 commented that _"I think [it] is also important to collect [data] which is sort of around what was the meditation experience like for you? What did you notice during that experience? I think that information is really helpful too"_ (P2). Facilitators also expressed interest in receiving a summary of logs describing the frequency and types of practices their class participants engaged in throughout the course. P7 was particularly excited about this potential feature _"because people may be more inclined to use it if they know their information is sent to [the facilitator]"_ (P7). P8 advocated for this extension as well, arguing that it would more clearly distinguish the experience afforded by the voice interface from that of traditional offerings: _"I think you might get some more legs for innovation if you integrated performance feedback, such as it tallies up how many times they did recordings and gives that back to them"_ (P8). In parallel with excitement about the potential positive implications of collecting such data, facilitators emphasized the importance of ensuring that tracking does not lead to competitive comparison or counterproductive behaviors: _"I love [the] idea of building a group, [but] what I don't want to have happen is [class participants to think], 'Oh, they meditated a lot more than I did this week.'"_ (P2).
Thus, while the collection of qualitative and quantitative data could support MBSR facilitation and participant engagement, there is a need to consider potential harms if this data is shared in a group setting. #### Availability and Rigor of Supporting Resources Facilitators talked about different forms of external resources they provided their class participants to facilitate home practice. Ideally, for MBSR, facilitators create their own set of recordings for guided home practice. However, facilitators often found it challenging to create their own recordings. Four facilitators in our study reported that they had not yet created those recordings due to time constraints and instead relied on resources published by reputable sources (e.g., Jon Kabat-Zinn, Tara Brach, Jack Kornfield). However, there were concerns about the quality of available resources. While channels on YouTube might allow easy access to these resources, the use of commercials can be interruptive to MBSR practices, as P4 noted: _"I try to stay away from YouTube videos. Main reason is because [of] commercials"_ (P4). P9 questioned the general quality of online resources: _"So many recordings online are really meant to be superficial [...] And because a lot of people will be attracted for various reasons, [...] it's really important to have the depth of teaching there"_ (P9). P9 also raised concerns about the financial burden of accessing these resources due to a recent surge in monetization: _"[Online resources are] becoming monetized now [...] money has a way of taking precedent, and when it takes precedent, it just ruins everything"_ (P9). Beyond publicly available online resources, facilitators also reported _"having a couple of trusted sources for these recorded meditations and [linking] clients to those resources"_ (P4). Two facilitators reported contributing to a shared repository of facilitator recordings and were thus able to leverage the full set of recordings through an institutionally crowdsourced effort. This strategy was advantageous since these facilitators' students had access to multiple options for any given practice. That said, having access to these resources does not guarantee user engagement, as P2 pointed out: _"I will have a lot of people say to me, 'Yeah, we have these audio files but... '"_ (P2). As such, there is a need for accessible and low-cost content that can support effective user engagement. #### Support for Community Interactions Facilitators noted the importance of the _sangha_ (community) in MBSR. P2 commented _"from a very historical perspective, there is a deep sense of community and interpersonal relationships that were really important to practicing meditation"_ (P2). The cultivation of trust and interpersonal relationships within the sangha of a facilitator's classroom is important for MBSR outcomes and in-class inquiry. P6 noted that, without building a foundation of trust, some MBSR participants might not feel comfortable sharing their experiences and thus may not engage in the depth of inquiry that will afford them the best outcomes. Facilitators also pointed to the utility of community interaction in promoting accountability for practices outside of class. P3 shared that _"consistently people say it's easier to practice when you're in the group [...] and have that accountability"_ (P3). P7 further argued that _"If you don't have an expectation [that you should practice], you're probably not going to do whatever it is that's on your plate"_ (P7).
These findings highlight the need for both the in-person sessions and home practice. ### Potential of Using Voice Interfaces to Support MBSR Practices This section reports on facilitators' perspectives on the potential and implications of using voice interfaces to support MBSR practices. Attitudes towards online and remote technologies in the context of mindfulness are discussed. Additionally, the unique affordances of voice interfaces for supporting MBSR home practice, such as improved accessibility, perceived social presence, and support for in-the-moment practices, are considered. ### Affordances of Technology-Supported Mindfulness Practices Facilitators highlighted the utility of technology in reducing time and cost burdens related to practicing mindfulness. Technology-driven alternatives to in-person mindfulness offerings allowed users the agency to practice when they wished, as opposed to being restricted by class schedules. P1 expressed that the flexibility afforded by online offerings presented a unique opportunity for practices to be completed at the users' preferred time and place: _"I can [participate in yoga class] in my pajamas [...] Whereas with the class offerings, I had to make myself available when the class is being taught"_ (P1). P3 also noted the reduced logistical burden following the adoption of remote technology: _"You decreased time stress because you cut out the commute for people to attend class [...] they were doing the practices in their class and then they walked out into their regular life"_ (P3). Facilitators pointed to an increasing interest from students in using technologies to support mindfulness practices, as P1 noted: _"I know that sometimes they would even ask for apps and which ones we might suggest. I think some of them would like it"_ (P1). Online technologies can be particularly useful in supporting in-the-moment practices following an individual's need and availability. P1 pointed out how such technologies can enable mindfulness practices beyond the traditional MBSR class schedules for certain groups: _"If I were a night owl, [technology] would be another helpful option for someone who might use yoga and meditation as a way to begin to relax to sleep later"_ (P1). Similarly, P7 noted how technology can enable and support in-home practices: _"If you're meditating and practicing in your home environment, absolutely, that becomes your normal. And it should because that's where you're gonna be expected to do it going forward"_ (P7). ### Improving Accessibility Facilitators highlighted that individuals living with chronic pain often have unique accessibility needs. P6 noted that _"people who experience chronic pain not only may have general mobility issues but even just using a phone, like the fine motor aspect of opening the app or whatever may be difficult"_ (P6). As a result, it can be particularly challenging for them to use touch- or click-driven interfaces. Voice interactions are uniquely poised to address this accessibility challenge. P5 commented that _"for people who are in very severe pain and may be bedridden, maybe they're not able to lift their arms or wiggle their fingers [...] yet they can usually open their mouth and then Alexa can respond with this skill"_ (P5).
Facilitators highlighted the potential use of voice interfaces for specific vulnerable populations with limited motor function, including individuals with Amyotrophic Lateral Sclerosis (ALS). P6 described her experience with her father's ALS diagnosis and emphasized the potential of this technology for individuals with limited motor function but full capacity of verbalization. P4 also noted the implications of the voice interface for her end-of-life clients who are bedridden but can still vocalize their requests. P9 noted how the accessibility of voice interfaces might lead to effective integration into traditional medical care: _"I think it's a great idea because I could see this easily plugged in in hospitals"_ (P9). ### Supporting Home Practices Facilitators expressed excitement about the potential for the interface to support home practice for MBSR participants. They particularly appreciated the ease of access to meditations and believed that the simplicity of asking for meditations via voice command would effectively reduce obstacles to home practice for MBSR participants. P2 commented that _"just the accessibility [barrier] stops people from practicing [...] It would be great if our students could just talk to Alexa and get their meditations that way because it's always so hard to get people to practice outside of class"_ (P2). P9 noted how improved accessibility can support in-the-moment practices: _"[If] somebody wants to meditate, they can meditate any time, day or night, they have the Alexa methee"_ (P9). P5 also noted how the accessibility of the voice interface can complement current learning steps: _"I think it will be really a huge difference for people taking the classes and afterward to do the daily work"_ (P5). Similarly, P6 commented: _"I think this is a really interesting way to help support home practice"_ (P6). While facilitators agreed that the voice interface would be useful for meditation beginners, they expressed mixed views on its utility for experienced meditation practitioners. P1 speculated that voice interfaces to support MBSR practices might be mostly useful for beginners: _"If I were new, it would probably be helpful. But I've been practicing for 15 years so I don't know if I would use technology to help me"_ (P1). However, even though P5 is an experienced practitioner, she noted that guided practices delivered through the voice interface could be very useful: _"I've been meditating for [...] 30 years. I can really just kind of turn my brain off and let that take over and follow that [...] I wasn't aware of what a little spent [the guided practices] would be. And so I think that it'll be hugely beneficial for people who are well experienced with MBSR as well"_ (P5). ### Unique Affordances of Voice Interfaces Facilitators consistently pointed out the unique affordances of voice interfaces to support MBSR practices, including ease of use, high interactivity, and perceived social presence. P5 commented: _"It's just so easy! [...] You don't have to even hardly think about it. You just have to open your mouth and there it is. It's just amazing"_ (P5). Similarly, P6 also highlighted the ease of access provided by a voice interface compared with the current methods to provide MBSR resources: _"I definitely think it's easier [than using audio files]. I didn't need to have my phone or my computer open. All I needed to do was sit and voice my want of whatever type of meditation"_ (P6).
Such ease of access can be a critical factor in supporting MBSR practices as P2 noted: _"People are just looking for super accessible ways. And so that's why I really like using this system. It's just always there"_ (P2). Facilitators also noted the highly interactive nature of the voice interface, particularly its potential utility in sustaining home practices. P2 commented: _"It would be great if our students could just talk to Alexa and get their meditations that way because it's always so hard to get people to practice outside of class"_ (P2). P5 agreed that the voice interface incentivized her to practice in a way that she would not otherwise have been motivated to with other technological platforms with passive content delivery: "_If all I have to do is say 'Hey Alexa, play a meditation' and she does, then that's great [...] I probably won't go through the effort of YouTube_" (P5). Furthermore, facilitators thought the voice interface could effectively complement current MBSR teaching processes. P6 commented: "_I just am so excited about the idea of this being applied to MBSR and to facilitate home practice [...] We invite our participants to do this, but are we giving them enough support and direction to be able to do that on their own, and this actually helps that process, right? So it kind of closes the gap_" (P6). Participants also pointed to the perceived social presence of the voice interface as a positive attribute to support MBSR practices. For example, P5 commented: "_My husband travels a lot for work [...] and so he's just been gone a lot. And so here I am, kind of talking to Alexa and it's just been really fun to have this voice in my life [...] I think that just hearing her talk and being able to talk and [have] somebody [who] talks back when you speak out loud, I think that that's gonna be really, really beneficial for people trying to get through the program_" (P5). ### Identified Design Challenges for a Voice Interface Supporting MBSR Home Practice We used the System Usability Scale (SUS) [(18)] to assess the developed prototype. The average SUS score for the prototype voice interface was 54.8 (SD = 25.6) out of 100. More importantly, the SUS scores had a wide range, with a low score of 7.5 and a high score of 97.5. These scores reflect the varying levels of acceptance and expectations from the facilitators for the developed voice interface prototype. We used the interview data to gain further insights into challenges and opportunities for designing a voice interface to support MBSR practices. #### Flexibility in Matching User Utterances and Intentions Facilitators consistently noted the need for flexibility in handling user utterances for different voice interactions. Not recognizing user intent due to alternative words or minor discrepancies can lead to frustration over time. P1 commented: "_One thing that frustrated me sometimes is the need to say exactly the right phrase_" (P1). Addressing this usability issue will require moving away from rigid phrase matching to identify user intentions. For example, P9 recommended having a list of alternative keywords that might be used to invoke different practices: "_like the auto fill function, but with voice. Now if somebody's in chronic pain, 'Do that meditation thing!' And they won't remember the exact title, so there has to be like a keyword function in there. So 'The meditation thing! Oh, no, no. Not that one, the one that's lying down.'
To give them a lot of leeway in terms of being able to get to the meditation or the exercise that they want_" (P9). #### Documentation and Support It is critical to provide documentation and initial support to communicate the capability and features of the voice interface. Three facilitators wanted written documentation outlining the structure of the interaction. P6 mentioned the importance of having step-by-step instructions specifically for older users: "_for somebody like my mom who really likes to have step-by-step directions written out for her. She's nervous she's going to mess things up, like break the technology [...] So having just a way to show how 'If you want this, say this. If you want that, say this. If you want to exit, say this', just so [the user] understands how to get where they want to go_" (P6). It is also necessary to provide support for the initial setup of voice interfaces. P6 commented that: "_I just fear for those who really don't understand technology and don't understand apps and this and that, how much support they would need to be able to get this set up? I think once they're set up, they would be able to do it just fine. It's just the initial piece of how to work it_" (P6). For four facilitators, the invocation phrase ("Alexa, open mindful pain management") was also challenging to remember, which resulted in difficulty getting the prototype to respond. Overall, it is essential to provide both written and voice clarifications to reduce obstacles and avoid confusion. #### Error Handling Successful user interactions require graceful error handling. While a voice interface to support MBSR home practice might not be able to address all user requests, facilitators suggested adopting a flexible approach by providing the closest available option to users: "_Let's say I say '3 minutes' [of meditation] but there's not a 3 minute one, so she doesn't recognize it. [In that case, she should ask:] 'I have a 5-minute meditation available. Would you like to do that?' or if I said 45 minutes, which is longer than the longest one you have, 'Oh I have a 30 minute one available. Would you like that one?' [...] if it's a number that she doesn't have, routing her to the closest number_" (P3). It is also important to communicate the rationale for not exactly matching the user's initial request in these cases. Otherwise, it can lead to confusion and frustration. P10 noted feeling unsure about why her requests were not being understood by the device: "_I was asking it a question and then it was getting confusing and it was either shutting off or was it giving me something else. And so I was left feeling, as the user, that unless I had asked for whatever one in some sort of sequence which I was unaware, I wasn't gonna get it_" (P10). This misunderstanding can be avoided by asking for user confirmation before beginning a practice and clearly communicating the system's understanding of the user's request. Although this communication may result in occasional redundancy, the resulting clarity might outweigh the potential user burden. Facilitators also wanted a more proactive recovery from errors. Early termination of an interaction may point to potential errors, and a voice interface should proactively offer related options in such cases. P9 provided an example of proactive interaction: "_Alexa, stop! Stop it! And then what I would recommend is Alexa says 'Oh, okay. Would you like a different exercise or would you like me to stop altogether?' 'No, the lying down one, the lying down.'
'Okay, I think you mean the body scan meditation. Is that correct?'_" (P9). Such proactive error recovery can help to reduce user frustration. #### Options to Use Different Voices For a voice interface aiming to support longitudinal engagement in home practice, facilitators indicated that it may be useful to provide alternative voices. P1 suggested "_to have a variety of different voices, because all of us have preferences. Some voices we like, some we don't like, some are neutral. [...] So I would probably suggest having multiple options, female voice, male voice, just a variety_" (P1). Similarly, P2 also noted the importance of different options and variety: "_It's like, 'Oh, yes, I do want this voice because it's comforting, and I'm familiar with it', and other people might be like, 'Well, I've already heard this voice every week in a classroom setting, and so maybe I need another one'. So yeah, I think choice and variety will be something to think about_" (P2). P1 also wanted to make alternative voices ubiquitously available throughout the voice interface for MBSR: "_I think the [voice] option should always be there. I understand it makes more work, but I still think the option should always be there every day, not just in the beginning_" (P1). #### Supporting Customizability Facilitators consistently noted the potential and advantage of being able to customize the voice interface. For example, P6 commented that "_you can just customize it in so many different ways which is really exciting to me_" (P6). Specifically, there was interest in adding custom recordings to better support facilitators: "_I think a lot of MBSR instructors could potentially be interested in, when they're teaching, if they can load their recordings into appropriate slots_" (P3). Facilitators recommended having a wide variety of practice lengths available to optimize the ability to engage in home practice. P3 also encouraged the inclusion of a practice for working through difficulty given our interest in supporting individuals living with chronic pain. P8 suggested creating a modular script for guided practice that could be started and stopped at differing intervals to curate varied lengths of sessions. Future work should look into designing voice interfaces that can empower MBSR facilitators and participants to customize home practices in a way that adheres to the MBSR philosophy while supporting personalized needs. #### Inclusion of Inquiry-Based Approaches In addition to the current practices, facilitators expressed interest in the inclusion of some form of inquiry within the voice interface response. P4 suggested posing brief questions after the practice concludes such as "_How did your body feel after that?_" and "_What did you bring up with that?_". P10 proposed asking the user a question if they attempt to end the practice prematurely in order to disincentivize MBSR participants from taking the easy way out: "_If you're noticing that you want to stop Alexa now or you're noticing how much time you have left or you want to ask a question of how much time you have, just see if you could notice that without acting on it_" (P10). ## Discussion Our findings highlight key design principles of voice interfaces to support mindfulness interventions. Voice interfaces are particularly well-suited to improve the accessibility of clinically-informed mindfulness practices for individuals living with chronic pain. However, a number of design and implementation challenges remain.
Specifically, effective support for MBSR using voice interfaces will require maintaining practice fidelity. In this section, recommendations are provided for future voice interface designers and developers aiming to provide mindfulness interventions for vulnerable communities, including individuals living with chronic pain. #### Support Data Collection but Avoid 'Competition' Facilitators expressed the need for data collection to provide effective support for MBSR practitioners as well as to sustain engagement with MBSR practices. This aligns with findings from prior work, which highlights the diversity of approaches used by facilitators to capture home practice towards promoting adherence [(61; 48)]. Even after the recent shift to an online format, facilitators struggled to receive meaningful participant feedback that would allow them to refine their teaching strategies. Interaction data collected through the interface could provide insights into personalized trends, which could lead to actionable suggestions and support from facilitators [(1)]. However, it is critical to ensure that such tracking is not shared in a way that promotes comparison or competition. Recent work has identified how behavioral tracking can lead to unintended negative consequences [(29)]. To avoid such problems, designers should allow facilitators to decide how and when to provide quantitative metrics of home practice to MBSR participants. By delegating this responsibility to the facilitator, the technology will better support rather than detract from the facilitator's personal teaching philosophy [(62)]. Furthermore, the technology can thus draw upon facilitators' expertise and extensive knowledge of the MBSR curriculum without pre-determining the appropriate strategy from a limited sample of perspectives. #### Support Facilitator-Participant Interaction Additionally, extending the accountability engendered by the classroom community emerged as an important factor underlying consistent home practice. This is particularly important given that community interaction is absent when MBSR participants practice at home, which can lead to a lack of engagement over time [(2)]. Efforts to support direct and accessible pathways for open communication between facilitators and their class participants are necessary to promote the acceptability of this technology in this context [(62)]. As such, future work should consider supporting interactions by which MBSR participants can engage with each other and the facilitator outside of sessions. For example, knowing when someone else in their class is engaging in a mindfulness practice may encourage them to engage as well. #### Ensure Ease of Access Ease of access is a critical factor in ensuring engagement with MBSR practices. Facilitators repeatedly noted how their class participants often forgot which practices were assigned for home practice, which caused them to forgo practicing altogether. These findings are consistent with prior work, which points to the role of both perceived (e.g., cognitive dissonance, forgetfulness) and pragmatic barriers (e.g., lack of time, physical discomfort) in reducing the amount of time individuals practice at home [(11; 22)]. Facilitators also observed that MBSR participants were reluctant to practice at home if supporting resources were not readily available.
A voice interface can effectively address these barriers by streamlining access to different practices, thus supporting in-the-moment practices and potentially contributing to sustained engagement with MBSR [(52)]. Designing effective reminders and notifications through the voice interface could further mitigate existing barriers to engagement with home practice and improve the overall MBSR participant experience. #### Foster Social Presence A voice interface aiming to sustain user engagement should focus on fostering social presence. Voice interactions provide a unique affordance when it comes to perceived social presence [(55)]. Facilitators also noted how voice interactions can lead to perceived social presence, which could subsequently improve engagement [(52)]. For individuals with chronic pain, this aspect of voice interface technology is critically important since physical pain may inhibit their ability to engage with others during flare-ups [(11)]. Future work in this domain should explore how voice interfaces can support MBSR practices as well as improve perceived social presence through different interactions. #### Provide User Guide and Documentation Designers should provide an adequate user guide and documentation to ensure successful interactions with the voice interface. Specifically, it is critical to convey the utterances and phrases that might be used to invoke different functionalities of the voice interface, as it may be challenging for users to discover features and capabilities of a voice interface without explicit guidance [(23; 87)]. The complete dialogue flow should be communicated to the user succinctly and thoroughly in advance of any interactions. This can be achieved via written documentation in conjunction with verbal explanation within the voice interface [(26)]. While verbal explanation should be provided without an explicit request the first time the user interacts with the voice interface, users should be able to request the verbal explanation at any time via voice command. The written documentation should serve as an adjunct to the in-situ verbal explanation and provide a complete picture of every possible interaction scenario. It should further be organized such that the user can readily look up and identify their concerns, rather than being a necessary precursor to interacting with the voice interface. #### Handle Errors Gracefully Facilitators noted that individuals struggling with physical discomfort and subsequent lower cognitive functioning (e.g., individuals with chronic pain) may experience difficulty remembering voice commands and phrases, particularly in moments with intense discomfort [(60)]. It is thus critical to be flexible in handling user inputs. Rigorous testing must be conducted to identify variations in phrasing (e.g. "standing mindful movement", "standing movement", "standing mindfulness") to avoid errors when a user deviates slightly from the expected response. In other words, the voice interface should be able to recover from most errors and identify user intents effectively. In case of unavoidable errors, the voice interface must clearly articulate the reason as well as suggest relevant options to users. Prior work supports the efficacy of these strategies in preventing negative user experiences due to command misrecognition in voice interfaces [(54)]. Handling errors gracefully and robustly is critical to facilitate positive interactions with individuals during their moments of need.
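To make the flexible matching and closest-option fallback recommendations above concrete, the following minimal Python sketch shows one way such logic could be approximated. It is illustrative only: the practice catalog, names, durations, and helper functions are hypothetical and do not reflect the prototype's actual implementation.

```python
# A minimal sketch of fuzzy utterance matching with a closest-duration
# fallback. The practice names and lengths below are hypothetical examples.
import difflib

PRACTICES = {
    "body scan meditation": [10, 20, 30],      # available lengths in minutes
    "sitting meditation": [5, 10, 20],
    "standing mindful movement": [15, 30],
}

def match_practice(utterance):
    """Map a free-form utterance to the closest known practice name, or None."""
    hits = difflib.get_close_matches(utterance.lower(), list(PRACTICES),
                                     n=1, cutoff=0.5)
    return hits[0] if hits else None

def closest_duration(practice, requested_min):
    """Offer the nearest available length when the exact one is missing."""
    return min(PRACTICES[practice], key=lambda d: abs(d - requested_min))

# e.g., a user in pain asking for a 3-minute "standing movement":
name = match_practice("standing movement")     # -> "standing mindful movement"
if name is not None:
    offer = closest_duration(name, 3)          # -> 15
    print(f"I have a {offer}-minute {name} available. Would you like that?")
```

Edit-distance heuristics of this kind are only one option; production voice platforms typically resolve intents with their own trained models, but the same confirm-before-acting pattern applies.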
#### Provide Different Voices As a standard feature, the voice interface should support different voices to meet individual preferences (e.g., voices representing different genders and personalities). Allowing users to customize speech characteristics of voice interfaces has been shown to promote trust, attraction, and favorability [(88; 78)]. Facilitators pointed out the utility of having a diverse set of voices to choose from when offering supportive resources to their class participants. Because these preferences might vary for different practices, a voice interface should allow users to pick their preferred voice for any given interaction. Both Alexa and Google provide different voices and personas. For example, both platforms allow users to choose between masculine and feminine voice options, with Google offering multiple within-gender variations of vocal characteristics [(71; 33)]. Prior work highlights the impact of these gender differences on users' perceptions of voice interfaces, particularly in health-related contexts [(32)]. Future work should look into how the use of different voice attributes can address user needs across diverse contexts. #### Enable Customization Consistent with prior work [(62)], facilitators repeatedly noted the need for editing and tailoring different activities and interactions. To adequately address these needs, a voice interface should enable facilitators to add and adapt MBSR content as necessary. Furthermore, it should support the facilitators in uploading and using their own personal guided audio-recordings of practices for their class participants. This recommendation aligns with guidelines for MBSR facilitator training, which explicitly encourage the development of individualized guided recordings by each facilitator for their own instruction purposes [(86)]. This would require supporting modular scripts and audio recordings through the voice interface. Furthermore, interactions and content should be adaptive to support the personalized needs of a class participant. A voice interface should also be able to remember user preferences and prior activities to offer easy customization for subsequent sessions. While the value of persistent memory across user interactions has been established in the context of human-robot interaction [(12)], limited work has focused on its affordances for voice interfaces. For example, memory of a user's activities would allow the system to automatically understand the progress of an individual through the MBSR curriculum and suggest relevant practices accordingly. Future work should explore the impact of accounting for user preferences and historical interactions on the acceptability and efficacy of voice interfaces for supporting MBSR home practice. ### Limitations & Future Work The study has a number of limitations. First, interview data was collected from a small number of trained mindfulness facilitators. While this sample size is consistent with prior work that also used one-on-one semi-structured interviews with stakeholders to understand perceptions and requirements of technology within a healthcare setting [(7; 62)], future work should explore data from a larger sample of experts and supplement interview discussions with quantitative measures of usability (e.g., backend usage logs). Additionally, these findings are limited to the perspective of facilitators, which may not reflect that of the intended users (i.e., individuals living with chronic pain).
While this approach provides useful insights towards integrating our prototype into existing MBSR program facilitation, it is critically important that future studies collect interaction and acceptance data from individuals living with chronic pain. Future work should also investigate the role of healthcare providers in facilitating the use of mindfulness practices taught in MBSR by integrating digital health tools and technologies into existing clinical practices. Finally, our prototype was adapted specifically to the MBSR curriculum due to the credentials and training of the research team and the accessibility of relevant content experts. As such, our findings do not generalize to other mindfulness-based interventions. However, the flexible structure of our prototype could easily be adapted to support other mindfulness-based interventions (e.g., Mindfulness-Based Cognitive Therapy, Acceptance and Commitment Therapy). Future work should investigate the utility of voice interfaces in the context of other mindfulness-based interventions. ## Conclusion With the rising need for non-addictive pain management methods to address chronic pain, efforts to support longitudinal engagement with interventions like MBSR are critically important. In response to this challenge, we developed a voice interface to facilitate MBSR home practice. Findings from our preliminary evaluation show that facilitators supported the use of the voice interface for MBSR practices, particularly for individuals with limited motor function. Facilitators also highlighted unique affordances of voice interfaces, including perceived social presence, to support sustained engagement. Based on these findings, we have outlined design recommendations for technologies aiming to provide longitudinal support for mindfulness-based interventions. Future development should further these efforts towards making non-addictive pain management interventions accessible and efficacious for a wide audience of users.
**Declaration of conflicting interests:** The author(s) declare that there is no conflict of interest.
**Funding:** This work was not funded by any ongoing grants.
**Ethical approval:** All procedures performed in studies involving human participants were in accordance with the ethical standards of the local institutional review board. The requirement for written informed consent was waived due to the minimal harms posed by the study procedures.
**Guarantor:** SM.
**Contributorship:** SA, SDF, and STL researched the literature and conceived the study. SDF was involved in participant recruitment, along with SM. SM led the data collection, including conducting participant interviews. SM and SDF performed the data analysis and SA helped with interpretation. SM wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript.
**Acknowledgements:** We would like to thank Ryan O'Neill for his assistance in developing the initial prototype for this work.
2309.08862
Retention of CO Ice and Gas Within 486958 Arrokoth
Kuiper Belt Objects (KBOs) represent some of the most ancient remnants of our solar system, having evaded significant thermal or evolutionary processing. This makes them important targets for exploration as they offer a unique opportunity to scrutinize materials that are remnants of the epoch of planet formation. Moreover, with recent and upcoming observations of KBOs, there is a growing interest in understanding the extent to which these objects can preserve their most primitive, hypervolatile ices. Here, we present a theoretical framework that revisits this issue for small, cold classical KBOs like Arrokoth. Our analytical approach is consistent with prior studies but assumes an extreme cold end-member thermophysical regime for Arrokoth, enabling us to capture the essential physics without computationally expensive simulations. Under reasonable assumptions for interior temperatures, thermal conductivities, and permeabilities, we demonstrate that Arrokoth can retain its original CO stock for Gyrs if it was assembled long after the decay of radionuclides. The sublimation of CO ice generates an effective CO `atmosphere' within Arrokoth's porous matrix, which remains in near vapor-pressure equilibrium with the ice layer just below, thereby limiting CO loss. According to our findings, Arrokoth expels no more than $\approx 10^{22}$ particles s$^{-1}$, in agreement with upper limits inferred from \textit{New Horizons}' 2019 flyby observations. While our framework challenges recent predictions, it can serve as a benchmark for existing numerical models and be applied to future KBO observations from next-generation telescopes.
Samuel P. D. Birch, Orkan M. Umurhan
2023-09-16T03:54:48Z
http://arxiv.org/abs/2309.08862v2
# Retention of CO Ice and Gas Within 486958 Arrokoth ###### Abstract Kuiper Belt Objects (KBOs) represent some of the most ancient remnants of our solar system, having evaded significant thermal or evolutionary processing. This makes them important targets for exploration as they offer a unique opportunity to scrutinize materials that are remnants of the epoch of planet formation. Moreover, with recent and upcoming observations of KBOs, there is a growing interest in understanding the extent to which these objects can preserve their most primitive, hypervolatile ices. Here, we present a theoretical framework that revisits this issue for small, cold classical KBOs like Arrokoth. Our analytical approach is consistent with prior studies but assumes an extreme cold end-member thermophysical regime for Arrokoth, enabling us to capture the essential physics without computationally expensive simulations. Under reasonable assumptions for interior temperatures, thermal conductivities, and permeabilities, we demonstrate that Arrokoth can retain its original CO stock for Gyrs if it was assembled long after the decay of radionuclides. The sublimation of CO ice generates an effective CO 'atmosphere' within Arrokoth's porous matrix, which remains in near vapor-pressure equilibrium with the ice layer just below, thereby limiting CO loss. According to our findings, Arrokoth expels no more than \(\approx 10^{22}\) particles s\({}^{-1}\), in agreement with upper limits inferred from _New Horizons_' 2019 flyby observations. While our framework challenges recent predictions, it can serve as a benchmark for existing numerical models and be applied to future KBO observations from next-generation telescopes. Samuel P.D. Birch and Orkan M. Umurhan ## 1 Introduction Comets and Kuiper Belt Objects (KBOs) are a diverse population of small icy bodies that contain varying amounts of primitive refractory and volatile materials within their interiors. This diversity is a result of the range of temperature environments that they inhabit, from the inner solar system where most ices sublimate quickly, to the outer solar system where most ices remain frozen since the planet formation era. As a member of the family of cold classical KBOs, 486958 Arrokoth (hereafter Arrokoth) was observed by the _New Horizons_ spacecraft (Figure 1A; Stern et al., 2019) and provides a unique window into the earliest stages of the solar system. Arrokoth is one of the most primitive objects in our solar system, having never been significantly heated within the inner solar system. Its bi-lobate structure, composition, shape, and dynamical family suggest that it has likely remained in its current orbit since its formation (Spencer et al., 2020; Grundy et al., 2020; Keane et al., 2022; McKinnon et al., 2020). Arrokoth may have formed well into the solar nebula's Class II epoch, more than 4 million years after CAIs (Bierson and Nimmo, 2019). Its chemistry may therefore most closely resemble that of the protostellar and protoplanetary disk environment from which it formed, which is rich in hypervolatile ices like CO (Chiar et al., 1994; Pontoppidan, 2006; Caselli et al., 1999; McClure et al., 2023).
Many numerical models have been developed to understand the activity of small bodies like comets and centaurs and how various ices and gases may be retained (Festou et al., 2004; Gkotsinas et al., 2022; Jindal et al., 2022; Loveless et al., 2022; Lisse et al., 2021; Davidsson, 2021, 2023; Merk and Prialnik, 2006; Steckloff et al., 2021; Bouziani and Jewitt, 2022; Malamud et al., 2022; Parhi and Prialnik, 2023, to name only a few). Such models are required because comets and centaurs can have multiple heat sources that drive the transport of sublimated vapor through a porous matrix that initially retains numerous ices with varying vapor pressures (Festou et al., 2004). This leads to a highly non-linear system. Sophisticated models like NIMBUS (Davidsson et al., 2022) provide great detail, having been tested and calibrated on fine-resolution data acquired by _Rosetta_ at comet 67P/Churyumov-Gerasimenko. Often, these models show that comets (and centaurs) should be strongly active at large heliocentric distances (20\(-\)30 AU) if they retained significant hypervolatile ices. However, despite CO production potentially outpacing CO\({}_{2}\) in some objects (e.g., 1P/Halley; Woods et al. (1986)), particularly at greater heliocentric distances (Mumma and Charnley, 2011; Womack et al., 2017, 2020; Chandler et al., 2020; Pinto et al., 2022), the vast majority of small bodies (outside objects like C/2016 R2 (PanSTARRS) and 29P/Schwassmann-Wachmann; Roth et al., 2023; Cordiner et al., 2022) do not show substantially elevated CO activity levels. Primordial CO in these closer-in objects may therefore be largely lost (e.g., Parhi and Prialnik, 2023), with modern day CO released due to the sublimation of entrapping, less volatile ices, possibly as an nm-scale intimate mixture within amorphous H\({}_{2}\)O or CO\({}_{2}\), or via photodissociation of other more abundant ices like CO\({}_{2}\) (e.g., Rubin et al., 2020; Luspay-Kuti et al., 2022). KBOs, like comets and centaurs, are highly porous bodies (Keane et al., 2022) and likely have both volatile and refractory materials intimately mixed within their interiors. However, KBOs may even retain their most primitive volatiles, as they have not experienced significant solar heating at large heliocentric distances. Observations of old debris disks (e.g., in Fomalhaut and HD181327) indicate the presence of non-primordial (i.e., secondarily sourced) CO gas, interpreted as possibly originating from extrasolar Kuiper belt and/or exocometary objects (Matra et al., 2017; Kral et al., 2017; Wyatt, 2020).
Figure 1: A: Captured by the Multicolor Visible Imaging Camera (MVIC) component of the Ralph instrument onboard _New Horizons_, this image was taken on January 1, 2019, 7 minutes prior to the spacecraft's closest approach, which was at a distance of 6700 kilometers from the surface. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute; B: Orbitally averaged temperature at the seasonal skin depth \(r_{t}\), which was computed according to the approach detailed in Umurhan et al. (2022). The physical scale of Arrokoth is shown in kilometers, while the orientation is comparable to that of panel A, looking down on the south pole.
Theoretical modeling of grain growth during the planetesimal formation phase of our solar system strongly suggests CO as a major constituent of all planetesimals formed at these distant locations (e.g., Estrada and Cuzzi, 2022, and references therein), and that planetesimals are ideal locations to sequester such volatiles (e.g., Krijt et al., 2020). Upcoming JWST observations of KBOs in our own solar system will soon reveal the existence or absence of CO and other volatiles associated with these bodies. It is therefore surprising that CO was not detected within the limits of the _New Horizons_ spacecraft (Stern et al., 2019), leading to suggestions that CO may be depleted in even distant, cold small bodies, prior to their injection into the inner solar system (Lisse et al., 2021; Parhi and Prialnik, 2023). However, we must consider that _New Horizons_ provided only coarse upper limits, which could allow for scenarios where Arrokoth may be weakly outgassing CO at levels that could still be detected with more sensitive instrumentation, as recently suggested by the calculations presented in Kral et al. (2021). Another confounding observation is that methanol ice was also detected on Arrokoth's surface, but no evidence of water ice was found (Grundy et al., 2020). Is CO involved? Taken together, these puzzling observations make it unclear to what extent the most volatile ices have been retained within KBOs, how various thermal evolutionary processes in the distant solar system since their formation may have depleted their initial inventory, and how representative modern outgassing measurements are of their bulk ice inventories. To address these questions, we have crafted an analytical framework that is in harmony with extensive previous research (Section 2), operating within an extreme end-member scenario where Arrokoth is incredibly frigid (\(<\)40K, see Figure 1B; Umurhan et al., 2022). Instead of relying on computationally intensive models, we have opted for a simple analytic framework that captures the essential physics, whereby the numerous non-linear feedbacks and other short timescale physics are of lesser importance (Section 3). We have subsequently made estimates concerning the viability of CO ice within Arrokoth \(-\) on the assumption that certain planetesimals of the cold classical Kuiper Belt, like Arrokoth, were formed after the decay of radionuclides \(-\) and have demonstrated that significant volumes of its original CO ice and gas can still be retained within its interior up to the present day (Section 4). Our methodology showcases that Arrokoth, and conceivably other KBOs, can retain a CO 'atmosphere' within their porous interiors, with only weak outgassing that falls below current detection thresholds. Furthermore, the solutions obtained from our analytical framework can subsequently be used to verify and benchmark more complex models (Section 5), and can be expanded to future KBO observations conducted by next-generation space- and ground-based telescopes. ## 2 Analytical model for sublimation-driven gas transport in Arrokoth-like objects We are considering a spherical body with a radius of \(r=r_{s}\), and we assume that the seasonal thermal skin depth is located at \(r=r_{t}\) (see Figure 2). At this location, we take the temperature to be the average of the extreme high (\(T=T_{\rm max}\)) and low (\(T=T_{\rm min}\)) temperatures of the surface, which is given by \(T_{t}\equiv T(r=r_{t})=0.5(T_{\rm max}+T_{\rm min})\).
We keep this temperature fixed, and use it as the initial upper boundary condition for modeling the evolution of Arrokoth's bulk interior. Based on our spherical approximation of Arrokoth's shape and its fixed orbit, we can assert that this simplifying assumption is valid (see Figure 1B; Umurhan et al., 2022). Assuming that Arrokoth's structure below \(r_{t}\) is a rubble pile with a global porosity of \(\Psi\), we consider the solid portion of the body as composed of non-volatile weakly self-adhering material (amorphous H\({}_{2}\)O ice) and a single volatile ice species (CO), which constitutes a fraction \(f_{i}\) of the total mass density. More complex interior structures can be considered in follow-up work. Our model assumes a thin impermeable sheet at \(r=r_{b}\) for the volatile-ice refractory matrix, from which the ice sublimates steadily. This is a slight underestimation of the sublimating surface area compared to other models (Davidsson, 2021). At \(t=0\), \(r_{b}=r_{t}\). As the ice sublimates, the sublimation front moves deeper into the interior, and the gas flows upward through an ice-free porous matrix above \(r_{b}\), characterized by a pore size of \(r_{p}\). Notably, the material below \(r_{b}\) always retains its initial CO ice inventory, as illustrated in Figure 2. We postulate that the CO gas sublimated between \(r_{t}\) and \(r_{b}\) flows upwards through the porous matrix, driven by the pressure gradients that arise from the sublimation process itself (as depicted in Figure 2). All gas above \(r_{t}\) is assumed to be rapidly lost. As the sublimation front \(r_{b}\) migrates downwards, its rate slows progressively as a decreasing amount of energy is conducted downwards. To simplify our analysis, we assume that the solid matrix above \(r_{b}\) is unaffected by the sublimation of ices, so the pore sizes remain fixed through time. Although processes such as sublimation/re-condensation, hydrostatic closure with increasing depth, fracturing (El-Maarry et al., 2015), or removal of overburden material via large-scale collapse (i.e., sinkholes; Vincent et al., 2015) can affect the grain pore sizes, we expect the strength of analogous cometary-like material (Groussin et al., 2019), which we assume is similar for Arrokoth, to render such processes unimportant for our initial work here (see Section A). The dynamic evolution of pore radii may be an outcome of the flow of gases through the interior for warmer or smaller icy small bodies (e.g., Parhi and Prialnik, 2023), which we leave as a topic for future studies. _Key to our subsequent analyses_: we assume that the evolution of \(r_{b}\) is a quasi-static process, where \(r_{b}\) evolves on a timescale (\(\tau_{sub}\sim r_{b}/\dot{r}_{b}\)) much longer than both the thermal (\(\tau_{t}\)) and dynamical (\(\tau_{d}\)) readjustment timescales for the region below \(r=r_{t}\) and above \(r=r_{b}\).
Figure 2: Model setup: A porous rubble pile, with the solid matrix comprised of intimately mixed CO ice and refractory amorphous H\({}_{2}\)O ice, with pore radii \(r_{p}\). The top-most layer (brown) is thermally processed within a single orbit, with any volume of the assumed CO (ice and gas) absent. Material below the sublimation front \(r_{b}\) (dark blue) retains its initial CO ice volume. As the sublimation front migrates downward through time (right), CO within the amorphous H\({}_{2}\)O ice matrix sublimates. The produced gas (light blue) fills the pore space and migrates radially upward, away from the sublimation front.
In Section 3, we will explore the conditions under which this assumption may fail; nevertheless, we confirm there that this condition is met for an object like Arrokoth, allowing us to use our simple analytic treatment instead of more computationally expensive numerical models. Thus, not unlike in previous work (e.g., Davidsson, 2021; Davidsson et al., 2022; Bouziani and Jewitt, 2022; Parhi and Prialnik, 2023), the long-term migration rate of the sublimation front \(r_{b}\) is controlled by three fundamental concepts: downward heat conduction, upward gas flow, and the conservation of mass. ### Thermal evolution To describe the thermal evolution of the interior of Arrokoth, we use \[0=\frac{1}{r^{2}}\frac{\partial}{\partial r}K_{\rm eff}r^{2}\frac{\partial T}{\partial r}-\dot{\mathcal{Q}}\delta(r-r_{b}), \tag{1}\] where \(r\) denotes the radial distance from the core (\(r=0\)) to the surface (\(r=r_{s}\)) of Arrokoth (refer to Figure 2). The temperature \(T\) depends on \(r\) and is determined by the effective conductivity (\(K_{\rm eff}\)), which in general includes heat transfer through the solid matrix and radiative conduction across pore spaces, the latter being negligible for the cold conditions we consider here. We also neglect heat transfer through the gas phase (see Section A). Several previous theoretical, observational, and experimental studies (Groussin et al., 2019; Umurhan et al., 2022; Bouziani and Jewitt, 2022; Davidsson et al., 2022; Parhi and Prialnik, 2023) provide a range of plausible effective conductivities. We assume that the solution to the heat equation is constrained by a fixed temperature, \(T_{t}\), at the location of the seasonal thermal skin depth. The heat loss term, \(\dot{\mathcal{Q}}\delta(r-r_{b})\) (Eq. A3), is localized at \(r=r_{b}\) by the Dirac delta function, with the coefficient \(\dot{\mathcal{Q}}\) quantifying the energy consumed in driving sublimation. This is formally treated as a boundary condition at \(r=r_{b}\), i.e., \[K_{\rm eff}\partial_{r}T\big{|}_{r=r_{b}}=\dot{\mathcal{Q}}=\mathcal{L}\dot{\Sigma}(r=r_{b}), \tag{2}\] where \(\mathcal{L}\) represents the enthalpy of sublimation per unit mass of CO ice, and \(\dot{\Sigma}\) is the net mass loss rate per unit area (specifically, in units of kg\(\cdot\)m\({}^{-2}\cdot\)s\({}^{-1}\)). To model \(\dot{\Sigma}\) at \(r=r_{b}\), we use a modified version of the method proposed by Lebofsky (1975), where gas production is proportional to the difference between the ambient gas pressure just above \(r=r_{b}\) and the vapor pressure generated by ice sublimation at \(r=r_{b}\). In other words \[\dot{\Sigma}=\frac{v_{k}}{{c_{s}}^{2}}\Delta P=\frac{v_{k}}{{c_{s}}^{2}}\Bigg{(}P_{vap}\Big{(}T_{b}\Big{)}-P_{b}\Bigg{)}, \tag{3}\] where the sublimated gas molecules have a mean-averaged kinetic speed \(v_{k}\equiv\sqrt{8/\pi}c_{s}\), \(c_{s}\) is the sound speed, \(R_{g}=8315\) J/kg/K is the gas constant, and \(\mu_{a}\) is the average atomic weight of the gas molecules. \(P_{b}\) represents the ambient gas pressure at \(r=r_{b}\), and \(P_{vap}(T_{b})\) is the saturation vapor pressure at temperature \(T_{b}\equiv T(r=r_{b})\). \(\dot{\Sigma}\) is generally expressed as the product of \(v_{k}\) and the net sublimating gas density, \(\Delta\rho\), the latter of which has been re-expressed in Eq. (3) in terms of \(c_{s}^{2}\) according to the ideal gas law, i.e., \(\Delta P=c_{s}^{2}\Delta\rho\).
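For concreteness, the short Python sketch below evaluates the net sublimation flux of Eq. (3) for representative CO parameters. It is an illustrative sketch under stated assumptions, not part of the original analysis pipeline: the Arrhenius vapor-pressure fit anticipates Eq. (24) in Section 4, and all numerical values are representative Table 1 entries.

```python
# A short numerical illustration of the net sublimation flux in Eq. (3).
# All values are illustrative (CO molecular weight 28, Table 1 parameters).
import math

R_G, MU_A = 8315.0, 28.0            # gas constant and CO molecular weight

def c_s2(T):
    """Isothermal sound speed squared, c_s^2 = R_g T / mu_a [m^2 s^-2]."""
    return R_G * T / MU_A

def v_k(T):
    """Mean kinetic speed, v_k = sqrt(8/pi) c_s [m/s]."""
    return math.sqrt(8.0 / math.pi * c_s2(T))

def p_vap(T):
    """CO saturation vapor pressure (the Arrhenius fit of Eq. 24) [Pa]."""
    return 6e-5 * math.exp(982.0 / 30.0 - 982.0 / T)

def sigma_dot(T_b, P_b):
    """Net mass-loss rate per unit area, Eq. (3) [kg m^-2 s^-1]."""
    return v_k(T_b) / c_s2(T_b) * (p_vap(T_b) - P_b)

print(v_k(36.0))                         # ~165 m/s, Table 1's reference value
print(sigma_dot(36.0, P_b=0.0))          # free-sublimation upper bound
print(sigma_dot(36.0, P_b=p_vap(36.0)))  # -> 0 in vapor-pressure equilibrium
```

The last line illustrates the central point of the formulation: when the ambient pressure \(P_{b}\) approaches \(P_{vap}(T_{b})\), the net flux vanishes.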
We further make the assumption that there are no internal heat sources, such as from the radiogenic decay of \({}^{26}\)Al or the exothermic transition of amorphous to crystalline water ice. This assumption is reasonable for cold-classical KBOs like Arrokoth (refer to Section 4) because they may have formed late from the protoplanetary disk (e.g., Bierson and Nimmo, 2019) and may have never been heated above \(\sim 60\) K (Umurhan et al., 2022). As our work is focused on building the initial framework for the evolution of CO within cold KBOs, the potentially non-negligible impact of both short- and long-lived radionuclides (e.g., Davidsson, 2021) will be treated in follow-up numerical studies. We use Eq. (1) under the implicit assumption that \(\rho_{tot}C_{p}(dT/dt)=0\), where \(\rho_{tot}\) represents Arrokoth's bulk density and \(C_{p}\) is the specific heat at constant pressure of the solid ice constituent of the CO-depleted layer above \(r_{b}\). This assumption indicates that Arrokoth attains a thermal steady-state from its initial cold formation temperature much faster than it sublimates its CO, which results from being in the extreme \(\tau_{t}\ll\tau_{sub}\) limit. Although numerical modeling that retains the \(\rho_{tot}C_{p}(dT/dt)\) term always produces correct thermal profiles, any errors incurred by its neglect, as done here, will be insignificant for very cold bodies such as Arrokoth, where the \(\tau_{t}\ll\tau_{sub}\) condition is met. This is elaborated in Section 3. It is worth reiterating that the boundary condition in Eq. (2) specifies that all the thermal flux reaching the sublimating front is completely used up there. In more general treatments that include the \(\rho_{tot}C_{p}(dT/dt)\) term in the heat equation, it is necessary to consider the difference in thermal fluxes on both sides of the front to accurately account for the energy that drives sublimation. Our approach to the thermal boundary condition may seem to provide an excess of this energy, but this is not the case because our method assumes that \(\tau_{t}\ll\tau_{sub}\), which means that the system has had enough time to reach its quasi-static state. In Section A, we demonstrate that in this quasi-steady state, the temperature configuration below the front is simply \(T(r<r_{b})=T_{b}\), which, in turn, implies that the thermal flux immediately beneath the front is zero. Therefore, in this extreme timescale limit, all the incoming thermal flux from above the front is consumed to drive sublimation there. The reason for this is apparent: there is enough time to raise the temperature of the interior to its natural spatially uniform value \(T_{b}\) before there is any significant movement in the position of \(r_{b}\) caused by sublimation. The Darcy flow law, corrected for Knudsen diffusion (Ziarani & Aguilera, 2012), governs the spherically symmetric gas flow through the porous matrix as \[0=-k_{a}\frac{\partial P}{\partial r}-\mu u, \tag{4}\] where a pressure gradient \(\frac{\partial P}{\partial r}\) drives the gas flow of viscosity \(\mu\) and radial velocity \(u\) through a matrix with an effective permeability \(k_{a}\). We disregard gravitational effects in our formulation since we observe that \(\rho_{tot}g\) is generally much smaller than \(\partial P/\partial r\) for small bodies with sizes less than 100 km.
The formula given by Ziarani & Aguilera (2012) modifies the effective permeability based on the empirically-derived Knudsen number \[k_{a}=k_{\infty}(1+4\bar{c}\text{Kn}), \tag{5}\] where \(\bar{c}\) is an \(\mathcal{O}\left(1\right)\) constant, and \(\text{Kn}=\lambda/r_{p}\) is the Knudsen number defined in terms of the mean free path of vapor molecules, \(\lambda=1/n\sigma=m{c_{s}}^{2}\big{/}P\sigma\), with \(\sigma\) being the cross-section of collision of gas molecules with number density \(n\). At very small Knudsen numbers, gas diffusion through pore spaces is dominated by molecule-molecule collisions, and the effective 'liquid' permeability reduces to the Darcy limit \(k_{\infty}={r_{p}}^{2}/32\) (Bouziani & Jewitt, 2022). However, for larger Knudsen numbers, where the mean free path is comparable to or larger than the pore radius, gas diffusion is instead dominated by molecule-matrix collisions. Equation (4) requires a boundary condition, for which we assume that the ambient gas pressure is zero at the surface, i.e., \(P(r=r_{s})=0\). We also assume that the interior gas pressures (\(\approx 10^{-4}\) Pa) are weaker than the bonding strengths (\(\sigma_{b}\)) between individual \(\mu\)m-scale grains (\(\approx 1\) kPa, Gundlach et al., 2018) and/or their mm-scale aggregates (\(\approx 1\) Pa, Blum et al., 2014). This justifies the use of Eq. (4). For further details, see Section A. Our approach for the Kn\(\gg 1\) regime is analogous to the Skorov & Rickman (1995) formulation for high-Kn flow, which can be found in Eqs. (2) & (46) of Davidsson (2021). Generally, the Skorov & Rickman (1995) formulation depends on pores that are tubes with length (\(L_{p}\)), width (\(r_{p}\)), and tortuosity (\(\xi\)). For the present investigation, we adopt the assumption that \(\xi=1\) and \(L_{p}=r_{p}\). Further investigation of the overall dependencies on these parameters, utilizing our quasi-static model, remains a task for future studies. ### Mass conservation Finally, mass conservation within the object is maintained using the equation \[\frac{1}{r^{2}}\partial_{r}r^{2}\rho u=\dot{\Sigma}\delta(r-r_{b}), \tag{6}\] where \(\dot{\Sigma}\) is the instantaneous mass loss rate defined in Eq. (3), and \(\rho\) is the gas density. The solution of Eq. (6) yields \[\rho u=\frac{{r_{b}}^{2}}{r^{2}}\frac{v_{k}}{c_{s}{}^{2}}\Bigg{(}P_{vap}\Big{(}T_{b}\Big{)}-P_{b}\Bigg{)}, \tag{7}\] which, together with Eq. (4), establishes a solution for \(P\) and subsequently \(P_{b}\) (see details in Section A). Lastly, the rate at which the sublimation front propagates through the volatile ice proportion of the total ice matrix is specified as \[\rho_{ice}\dot{r}_{b}=\frac{v_{k}}{{c_{s}}^{2}}\Bigg{(}P_{vap}\Big{(}T_{b}\Big{)}-P_{b}\Bigg{)}, \tag{8}\] where \(\dot{r}_{b}\equiv dr_{b}/dt\). We account for the reduced porosity and dust-to-ice fraction in the partial ice density \(\rho_{ice}\). Net sublimation will occur when \(P_{b}<P_{vap}(T_{b})\). For an Arrokoth-sized object with low internal temperatures (Figure 1B; Umurhan et al., 2022), \(P_{vap}(T_{b})-P_{b}\) is typically \(\mathcal{O}\left(10^{-5}\right)\) Pa or smaller. Significant sublimation requires long timescales for these small deviations from the saturation vapor pressure, which may pose challenges for efficient time-stepping in numerical models that include the time derivative in the thermal energy equation. This highlights the usefulness of our analytic approach (Section 5).
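The Knudsen correction of Eq. (5) is simple enough to illustrate directly. The sketch below is a minimal, hedged implementation assuming \(\bar{c}=1\) and illustrative pressure and pore-size values; it is meant only to show why Arrokoth's interior sits deep in the Knudsen regime.

```python
# A minimal sketch of the Knudsen-corrected permeability of Eqs. (4)-(5).
# c_bar = 1 and the pressure/pore values are illustrative assumptions.
import math

M_CO = 4.68e-26     # CO molecular mass [kg] (Table 1)
SIGMA = 4.3e-19     # CO collisional cross-section [m^2] (Table 1)

def mean_free_path(P, c_s2):
    """lambda = 1/(n sigma) = m c_s^2 / (P sigma) [m]."""
    return M_CO * c_s2 / (P * SIGMA)

def permeability(r_p, P, c_s2, c_bar=1.0):
    """k_a = k_inf (1 + 4 c_bar Kn), with the Darcy limit k_inf = r_p^2/32."""
    Kn = mean_free_path(P, c_s2) / r_p
    return r_p**2 / 32.0 * (1.0 + 4.0 * c_bar * Kn)

# At Arrokoth-like interior pressures (~1e-4 Pa) the mean free path (~10 m)
# vastly exceeds any plausible pore radius, so Kn >> 1 and transport is in
# the Knudsen (molecule-wall) regime:
c_s2 = 8315.0 * 36.0 / 28.0          # c_s^2 at 36 K [m^2 s^-2]
print(mean_free_path(1e-4, c_s2))    # ~12 m
print(permeability(1e-3, 1e-4, c_s2))
```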
Our approach differs from recent works such as Bouziani & Jewitt (2022) and Lisse et al. (2021) (see Section B), who assumed \(P_{b}=0\) in Eq. (7). However, our formulation is broadly consistent with standard thermophysical models (e.g., Festou et al., 2004; Davidsson, 2021, among many others) that have built on decades of prior research. ### Implementation Combining Eqs. (1\(-\)8), and imposing \(P=0\) at \(r=r_{s}\), yields a single ordinary differential equation for the time-dependent rate at which \(r_{b}\) propagates into the interior. The methodology for solving this equation is detailed in Section A. The temperature solution across the entire domain is given by Eqs. (A18-A19) and Eq. (A23). The resulting approximate differential equation describing the sublimation front's propagation rate, normalized by Arrokoth's radius, \(\zeta\), follows as \[\zeta(1-\zeta)\dot{\zeta}=\frac{1}{6\tau_{s}(T_{b})},\qquad\zeta\equiv\frac{r_{b}(t)}{r_{s}}, \tag{9}\] where \[\tau_{s}(T_{b})\equiv\frac{4{r_{s}}^{2}\rho_{\rm ice}v_{k}}{9r_{p}\bar{c}P_{\rm vap}(T_{b})}\Bigg{/}\left[1+\frac{P_{\rm vap}(T_{b})}{\tilde{P}}\right], \tag{10}\] is the timescale of the volatile ice layer's lifetime, and \[\tilde{P}=\frac{8m{c_{s}}^{2}\bar{c}}{\sigma r_{p}}=\frac{24{c_{s}}^{2}\bar{c}}{v_{k}r_{p}}\mu. \tag{11}\] When writing Eqs. (10-11), we explicitly used the Maxwell kinetic theory approximation for CO gas viscosity, where \(\mu\approx mv_{k}/3\sigma\). The timescale depends on \(T_{b}\), which is a diagnostic function of \(\zeta\), given by \[T_{t}-T_{b}=\overline{\Delta T}\left[\frac{P_{\rm vap}(T_{b})}{P_{\rm vap}(T_{t})}\right]\left[1+\frac{P_{\rm vap}(T_{b})}{\tilde{P}}\right]\left(\frac{1-\zeta\psi^{-1}}{1-\zeta}\right), \tag{12}\] with \[\overline{\Delta T}\equiv\frac{3r_{p}\bar{c}\mathcal{L}}{8v_{k}K_{\rm eff}}P_{\rm vap}\left(T_{t}\right),\qquad\psi\equiv r_{t}/r_{s}. \tag{13}\] In Eqs. (12-13), we use the characteristic difference temperature scaling \(\overline{\Delta T}\) at the top of the volatile ice layer and introduce the ratio \(\psi\) (\(<1\)) representing the base of the seasonal thermal wave penetration depth relative to the total radius of Arrokoth. Although there is some dependence on \(\zeta\) near \(\zeta=\psi\) in Eq. (12), it is negligible elsewhere. Henceforth, we assume \((1-\zeta\psi^{-1})\big{/}(1-\zeta)\to 1\), which makes it easier to derive an analytical solution since \(\tau_{s}(T_{b})\) becomes independent of \(\zeta\). As a result, Eq. (9) directly integrates into an implicit relationship for \(\zeta\), where we take \(\zeta(t=0)=1\): \[3\zeta^{2}-2\zeta^{3}=1-\frac{t}{\tau_{s}(T_{b})}. \tag{14}\] The analytical solutions provided in Eqs. (9-14) are straightforward to apply if the following condition is met: \[P_{\rm{dar}}\equiv\frac{2r_{s}v_{k}\mu}{k_{\infty}}\zeta(1-\zeta)=\frac{64r_{s}v_{k}\mu}{r_{p}^{2}}\zeta(1-\zeta)\gg\left\{\tilde{P},P_{\rm{vap}}\left(T_{t}\right)\right\}. \tag{15}\] In our particular scenario, it is necessary that \(P_{\rm{dar}}\geq\mathcal{O}\left(10^{7}\,\rm{Pa}\right)\). This condition is usually satisfied when CO is retained within a KBO like Arrokoth (see Section A). ## 3 Validity Criteria for Quasi-Static Evolution Theory We assumed that the movement of gas and heat can be treated as quasi-static processes in response to the slow downward evolution of the CO ice front. This enabled us to obtain simple analytical solutions without relying on computationally intensive models.
Although this assumption may not apply to all small, porous bodies and ices in the solar system, and numerical models may be needed to address such non-linear problems (e.g., NIMBUS, as described in Davidsson, 2021), we expect it to be valid for cold bodies like Arrokoth. In what follows, we present an analysis of the conditions under which this assumption holds. In order for the quasi-static theory discussed in Section 2 to be valid, both \(\tau_{sub}\gg\tau_{t}\) and \(\tau_{sub}\gg\tau_{d}\) must hold. For a spherically symmetric object undergoing thermal diffusion, the slowest thermal relaxation time in a spherical shell bounded by \(r_{b}\) and \(r_{t}\) can be approximated by (e.g., Coradini et al., 1997) \[\tau_{t}\approx\frac{\rho_{tot}C_{p}}{K_{\rm{eff}}}\frac{\Delta r^{2}}{\pi^{2}}=\frac{\rho_{tot}C_{p}}{K_{\rm{eff}}}\frac{r_{s}^{2}}{\pi^{2}}\left(1-\frac{r_{b}}{r_{s}}\right)^{2}, \tag{16}\] where \(\Delta r\) is defined as \(r_{t}-r_{b}\) (which is approximately equal to \(r_{s}-r_{b}\)). For our calculations, we utilize the general formulation presented in Shulman (2004) for amorphous H\({}_{2}\)O ice and take values within the temperature range \(T_{t}\) as listed in Table 1. The thermal relaxation time \(\tau_{t}\) typically ranges from 30 Kyr to 30 Myr, depending on various parameters such as \(K_{\rm{eff}}\) and \(r_{s}\) (e.g., see recent discussions in Davidsson, 2021; Davidsson et al., 2022; Parhi and Prialnik, 2023, and references therein). Likewise, in the case of gas density variations in a porous matrix with a pore size of \(r_{p}\) and thermal velocities of \(v_{k}\), the dynamical relaxation time takes the same form as before, but with a diffusion coefficient \(D\) roughly equivalent to \(\sim 3r_{p}v_{k}/8\) (like the Skorov and Rickman (1995) formulation in Davidsson, 2021, with \(L_{p}=r_{p}\) and \(\xi=1\)). Hence, we obtain \[\tau_{d}\approx\frac{1}{D}\frac{\Delta r^{2}}{\pi^{2}}. \tag{17}\] For small objects, it is generally true that \(\tau_{d}\ll\tau_{t}\) in the diffusive limit, thereby allowing the gas flow problem to be treated as a quasi-static process, as is done in this work and other studies (e.g., Davidsson, 2021). Specifically, for pore sizes ranging from \(0.01-1\) mm, thermal velocities of \(v_{k}=\mathcal{O}\left(150\,\text{m/s}\right)\), and length scales of \(\Delta r=\mathcal{O}\left(1\,\text{km}\right)\), \(\tau_{d}\) varies from \(\approx 10-1000\) years depending on assumed values of \(r_{p}\). We propose that \(r_{b}/\dot{r}_{b}\) serves as a suitable approximate estimate for \(\tau_{sub}\). By utilizing the relationship for \(\dot{\zeta}\) from Eq. (9), we obtain \[\tau_{sub}\equiv\zeta\Big{/}\dot{\zeta}=6\tau_{s}\zeta^{2}(1-\zeta). \tag{18}\] To establish the condition \(\tau_{sub}\gg\tau_{t}\), we use the definition for \(\tau_{s}\) from Eq. (10) and perform some re-arrangement to obtain a requirement on \(P_{\text{vap}}(T_{b})\) for the quasi-static evolution solutions to be valid. This condition is given by \[P_{\text{vap}}(T_{b})\left(1+\frac{P_{\text{vap}}(T_{b})}{\tilde{P}}\right)\ll\frac{24\pi^{2}}{9\bar{c}}\left(\frac{\rho_{\text{ice}}}{\rho_{\text{tot}}}\right)\left(\frac{K_{\text{eff}}v_{k}}{r_{p}C_{p}}\right)\frac{r_{b}^{2}/r_{s}^{2}}{1-r_{b}/r_{s}}. \tag{19}\] The condition \(\text{Kn}\gg 0.1\) is the same as \(\tilde{P}\gg P_{\text{vap}}(T_{b})\), which implies that the quantity in the parenthesis on the left-hand side (LHS) of Eq. (19) can be replaced by \(1\) in the Knudsen limit.
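A quick numerical check of the timescale ordering that underpins Eqs. (16)-(19) is sketched below. The parameter values are illustrative choices consistent with Table 1, not fitted results, and the shell thickness is an assumed round number.

```python
# A rough check of the ordering tau_d << tau_t (<< tau_sub) via Eqs. (16)-(17).
import math

def tau_thermal(rho_tot, C_p, K_eff, delta_r):
    """Slowest thermal relaxation time of the shell, Eq. (16) [s]."""
    return rho_tot * C_p / K_eff * delta_r**2 / math.pi**2

def tau_dynamic(r_p, v_k, delta_r):
    """Gas readjustment time, Eq. (17), with D ~ 3 r_p v_k / 8 [s]."""
    return delta_r**2 / (3.0 * r_p * v_k / 8.0 * math.pi**2)

YEAR = 3.156e7
dr = 1.0e3                                  # assumed ~1 km shell thickness
t_t = tau_thermal(500.0, 300.0, 1e-3, dr)   # ~0.5 Myr for these inputs,
                                            # within the 30 Kyr-30 Myr range
t_d = tau_dynamic(1e-4, 165.0, dr)          # orders of magnitude shorter
print(t_t / YEAR, t_d / YEAR)               # so the gas flow is quasi-static
```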
This Knudsen-limit assumption is valid for objects such as Arrokoth, where the interior temperatures are low and CO vapor pressures are small (Section 4). It is important to note that, for any value of \(P_{\text{vap}}(T_{b})\), there exists a small enough \(r_{b}/r_{s}<r_{b,min}/r_{s}\) where this approximation fails. However, for the conditions relevant to Arrokoth, we find that this breakdown only occurs for \(r_{b}/r_{s}\leq 0.05\), where \(\tau_{sub}\) is \(\mathcal{O}\left(10^{8}\right)\) years or more (see the next two sections & Figure 5A). We finally note that the estimated sublimation timescales go from being order of magnitude correct to being, instead, lower bounds whenever the timescale ordering transitions into \(\tau_{t}\gg\tau_{sub}\), which may occur for conductivities as low as \(10^{-5}\) Wm\({}^{-1}\)K\({}^{-1}\), as suggested for TNOs (Lellouch et al., 2013).

\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Parameter** & **Variable** & **Range of Values** & **Source/Comments** \\ \hline
Arrokoth Radius & \(r_{s}\) & \(4.5-5.0\) km & Keane et al. (2022) \\
Arrokoth Surface Area & \(A_{s}\) & \(1000\) km\({}^{2}\) & Keane et al. (2022) \\
Arrokoth Porosity & \(\Psi\) & \(0.6-0.8\) & Keane et al. (2022) \\
Orbitally Averaged Temperature & \(T_{t}\) & \(30-40\) K & Umurhan et al. (2022) \\
Specific Heat of Amorphous H\({}_{2}\)O ice & \(C_{p}(T_{t})\) & \(235-342\) J/kg/K & Shulman (2004) \\
Activation Temperature (CO) & \(T_{a}\) & \(982\) K & near \(T=30-35\) K, Grundy et al. (2023) \\
Molecular Mass (CO) & \(\mu_{a}\) & \(4.68\times 10^{-26}\) kg & NIST \\
Gas Viscosity (CO) & \(\mu\) & \(5\times 10^{-6}\) Pa s & NIST \\
Gas Collisional Cross-Section (CO) & \(\sigma\) & \(4.3\times 10^{-19}\) m\({}^{2}\) & NIST \\
Latent Heat of Sublimation (CO) & \(\mathcal{L}\) & \(308\) kJ/kg & Grundy et al. (2023) \\
Arrokoth Total Density & \(\rho_{tot}\) & \(500\) kg/m\({}^{3}\) & high end cited in Keane et al. (2022) \\
Partial Ice Density & \(\rho_{\text{ice}}=\rho_{tot}/3\) & \(175\) kg/m\({}^{3}\) & Patzold et al. (2019); Estrada \& Cuzzi (2022) \\
Pore Radius & \(r_{p}\) & \(10\,\mu\)m\(-1\) mm & Merouane et al. (2016) \& Umurhan et al. (2022) \\
Effective Conductivity (\(r>r_{b}\)) & \(K_{\text{eff}}\) & \(10^{-4}-10^{-1}\) W/m/K & Groussin et al. (2019); Bouziani \& Jewitt (2022); Lellouch et al. (2013); Umurhan et al. (2022) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Input parameters adopted for application to Arrokoth

### Validity at Arrokoth To confirm the validity of our quasi-static assumption for objects like Arrokoth, we can make use of established thermophysical properties to estimate important parameters. Firstly, we can estimate \(\tilde{P}\) by inputting representative values relevant to our study, yielding \[\tilde{P}\approx 7.8\left(\frac{v_{k}}{165\text{ m/s}}\right)\left(\frac{1\text{ mm}}{r_{p}}\right)\text{Pa}, \tag{20}\] where the reference value for \(v_{k}\) is chosen for the corresponding reference temperature of about 36 K. Secondly, assuming that \(\tilde{P}\gg P_{\text{vap}}(T_{b})\), we obtain a validity condition for pressures at \(r_{b}\) from Eq. (19), given by
(19), given by \[P_{\text{vap}}(T_{b})\ll P_{\text{val}}\equiv\frac{r_{b}^{2}\big{/}r_{s}^{2}}{1-r_{b}/r_{s}}P_{\varphi}, \tag{21}\] with \[P_{\varphi}\equiv 14.5\left(\frac{\rho_{ice}}{\rho_{tot}}\right)\left(\frac{1\text{ mm}}{r_{p}}\right)\left(\frac{K_{\text{eff}}}{0.001\text{ W/m/K}}\right)\left(\frac{v_{k}}{165\text{ m/s}}\right)\left(\frac{300\text{ J/kg/K}}{C_{p}}\right)\text{ Pa}, \tag{22}\] where the specific heat value is chosen based on \(C_{p}(T=36\)K\()\) (Shulman, 2004). The pressure validity bound \(P_{\text{val}}\) is equal to \(P_{\varphi}\) when \(r_{b}/r_{s}=1/\varphi=\varphi-1\approx 0.618\), where \(\varphi=(1+\sqrt{5})/2\). Henceforth, we use \(P_{\varphi}\) as a convenient reference for the pressure bound to ensure that the pressure at \(r_{b}\) satisfies the validity condition for quasi-static evolution solutions. We can define \(r_{b,\text{min}}\) as the depth at which \(P_{vap}(T_{b})=P_{\text{val}}\). If \(P_{vap}(T_{b})\big{/}P_{\varphi}\ll 1\), then we have \[\frac{r_{b,\text{min}}}{r_{s}}\approx\sqrt{P_{vap}(T_{b})\big{/}P_{\varphi}}. \tag{23}\] This implies that when \(r_{b}<r_{b,\text{min}}\), the sublimation front recedes on a timescale shorter than the thermal relaxation time. This behavior can be attributed to the spherical geometry: as the front gets closer to the core, the thermal energy flux conducted inward from \(r=r_{t}\) increases as the inverse square of the front radius, thus leading to an increase in the sublimation rate. Though our quasi-static evolutionary theory breaks down at a certain value of \(r_{b}\), for Arrokoth, this breakdown occurs at very small values of \(r_{b}/r_{s}\) (see next section).

## 4 Estimating the longevity and outgassing rates of CO at Arrokoth

Our hypothesis is that although _New Horizons_ reported upper limits on outgassing rates during its flyby (\(\dot{N}<\dot{N}_{\text{max}}=3\times 10^{24}\) H atoms/s; Stern et al., 2019), significant amounts of CO gas and ice may still exist within Arrokoth that can outgas below detection limits. To investigate this hypothesis, we employ the system of equations outlined in Section 2, assuming that Arrokoth was initially seeded with substantial amounts of CO and that we remain in our quasi-static limit (Section 3). The input parameters are sourced from previous studies that analyzed _New Horizons_ data of Arrokoth and are summarized in Table 1. We use conservative values for \(r_{s}\) based on the best fit to Arrokoth's shortest dimension (of both lobes Wenu and Weeyo; Keane et al., 2022, Figure 1A). For the temperature range at the base of the seasonal thermal wave skin depth, we adopt the temperature ranges quoted in Grundy et al. (2020) and Umurhan et al. (2022), shown in Figure 1B. The temperatures on the low end correspond to values found at Arrokoth's poles, which generally correspond to our above assumed values of \(r_{s}\), while the higher temperatures correspond to the equatorial zones, where effective radii are larger (10 km for Wenu and \(7-8\) km for Weeyo; Keane et al., 2022, see Figure 1B). Based on brightness temperature measurements by _New Horizons_' REX instrument, best fits to Arrokoth's thermal inertia (Bird et al., 2022; Umurhan et al., 2022), and estimates of Arrokoth's bulk porosity (Keane et al., 2022; McKinnon et al., 2020), we derive a range of effective thermal conductivities \(10^{-4}-10^{-2}\) W/m/K for Arrokoth. This range is similar to cometary \(K_{\rm eff}\) values (Groussin et al., 2019) and those assumed in other studies (e.g., Bouziani and Jewitt, 2022).
The smaller end of the assumed \(K_{\rm eff}\) range follows the finding that TNOs have small thermal inertias (Lellouch et al., 2013). However, some measurements of comets have reported conductivities as high as \(10^{-1}\) W/m/K (Groussin et al., 2019), which we adopt as our upper limit for completeness. These larger conductivities have only a minor impact on the retention of CO within Arrokoth-like objects (Figure 3). Objects like Arrokoth typically have a small seasonal skin depth (\(r_{t}\)) that ranges from a few to tens of meters (Groussin et al., 2019). Based on estimates for Arrokoth's thermal skin depth of \(<30\) meters (Umurhan et al., 2022), we assume that the ratio of \(r_{t}\) to \(r_{s}\), denoted as \(\psi\), is approximately 0.998. The range of possible pore sizes, which we assume to be proportional to the amorphous H\({}_{2}\)O ice aggregate grain sizes, is still uncertain. Grain sizes measured in situ by _Rosetta_ at 67P range from \(0.01-1\) mm (Merouane et al., 2016). Theoretical estimates provided by Umurhan et al. (2022) align with these measurements. Therefore, we set \(r_{p}\) to range from 0.01 to 1 mm (Table 1). Larger grains would not be consistent with the geophysical measurements of Arrokoth (Umurhan et al., 2022) and may violate our validity criterion (Section 3 & Figure 5). On the other hand, smaller grains will result in retention timescales longer than the age of the solar system for any reasonable temperature \(T_{t}\) (see Figure 4). Importantly, Grundy et al. (2023) present novel laboratory measurements of CO vapor pressure that are significantly lower, by a factor of \(5\times\), compared to those collected in Fray and Schmitt (2009) and utilized in previous research analogous to ours (Bouziani and Jewitt, 2022; Lisse et al., 2021; Parhi and Prialnik, 2023). Hence, we utilize an approximate Arrhenius form for \(P_{\it vap}\) that aligns with our temperature range of interest, based on the updated CO vapor pressure laboratory measurements (Grundy et al., 2023). The form is given by \[P_{\it vap}(T)=P\left(T_{\rm ref}\right)\exp\left(\frac{T_{a}}{T_{\rm ref}}-\frac{T_{a}}{T}\right), \tag{24}\] where \(T_{\rm ref}=30\) K, \(P(T_{\rm ref})\approx 6\times 10^{-5}\) Pa, and an activation temperature of \(T_{a}\approx 982\) K. Notably, for the highest temperatures considered (\(T=40\) K), the updated vapor pressure value is found to be \(P_{\it vap}(T=40\)K\()\approx 0.22\) Pa. We then estimate the total particle loss rate from Arrokoth as \[\dot{N}=\frac{\dot{\Sigma}A_{b}}{m}=\frac{\dot{\Sigma}A_{s}\zeta^{2}}{m}, \tag{25}\] where \(A_{b}\) represents the total area of the sublimating surface, which we assume to be equal to Arrokoth's total surface area (Table 1) reduced by a factor of \(r_{b}^{2}/r_{s}^{2}\), and \(m\) is the molecular mass of CO. Within our temperature range, we operate in the particle diffusion limit (i.e., \(\tilde{P}\gg P_{\mathit{vap}}(T_{t})\)), and we can simplify the expression for \(\dot{\Sigma}\) given in Eq. (7) (see Section A). This simplification leads to the total particle loss rate given by \[\dot{N}(\zeta)\approx A_{s}\frac{3\bar{c}r_{p}}{8mv_{k}r_{s}}\cdot\frac{\zeta}{1-\zeta}P_{\mathit{vap}}(T_{b}), \tag{26}\] where we evaluate \(\zeta(t)\) using the solution provided in Eq. (14). We explore three different \(r_{p}\) values and examine solutions for \(\zeta\) within plausible ranges of \(K\) and \(T_{t}\) after 4.55 Gyr (Figure 3A). We keep in mind the widely accepted timescale of 4.568 Gya for CAI emplacement (Dunham et al., 2022, and references therein).
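As a quick numerical illustration of Eqs. (24) and (26), the sketch below (ours; parameter values are assumptions from Table 1, and the chosen \(\zeta\) and \(T_{b}\) are arbitrary examples) evaluates the updated vapor pressure and the implied CO loss rate.

```python
import numpy as np

# Assumed parameters (Table 1; CO at Arrokoth-like conditions).
T_ref, P_ref, T_a = 30.0, 6e-5, 982.0   # K, Pa, K  (Eq. 24)
A_s   = 1.0e9     # surface area [m^2] (1000 km^2)
r_s   = 4.75e3    # radius [m]
r_p   = 1e-4      # pore radius [m] (0.1 mm)
m     = 4.68e-26  # CO molecular mass [kg]
v_k   = 150.0     # mean kinetic speed [m/s]
c_bar = 1.0       # Klinkenberg empirical constant

def P_vap(T):
    """Arrhenius CO vapor pressure, Eq. (24)."""
    return P_ref * np.exp(T_a / T_ref - T_a / T)

def N_dot(zeta, T_b):
    """Total particle loss rate, Eq. (26), in molecules/s."""
    return A_s * 3.0 * c_bar * r_p / (8.0 * m * v_k * r_s) \
           * zeta / (1.0 - zeta) * P_vap(T_b)

print(f"P_vap(40 K) = {P_vap(40.0):.2f} Pa")   # ~0.2 Pa, as quoted above
print(f"N_dot(zeta=0.5, T_b=35 K) = {N_dot(0.5, 35.0):.1e} /s")
# Far below the New Horizons upper limit of ~3e24 particles/s.
```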
Our analysis shows that for \(r_{p}=0.1\) mm (or smaller; Bouziani & Jewitt, 2022), all primitive CO in Arrokoth can survive up to at least \(T_{t}\approx 38\) K, and can persist for the lifetime of the solar system (Figure 3A/Figure 4), within the plausible ranges of \(K_{\rm eff}\) and \(T_{t}\) considered.

Figure 3: Evolution of the CO sublimation front depth and CO gas production rate for Arrokoth. (A) The relative depth of the sublimation front (\(\zeta\)) after 4.55 Gyr for a range of assumed pore radii, initial temperatures, and conductivities (see Table 1). The ratio \(r_{b}/r_{s}=1\) represents an Arrokoth that has not undergone sublimation (at \(t=0\)), while smaller values (blues) indicate greater sublimation; (B) The associated gas production rate (\(\dot{N}\)) after 4.55 Gyr, which is dependent on the volume of CO ice remaining within Arrokoth (panel A).

Lower conductivities lead to even longer timescales. This finding contradicts recent studies (Lisse et al., 2021; Parhi and Prialnik, 2023) (Section 5), but should not be surprising, given the low conductivity, our assumed lack of internal heat sources, and small pore radii in Arrokoth. Such conditions lead to a positive feedback loop: as temperature falls deeper inside the object, the rate of gas transport decreases, which, in turn, causes a further drop in interior temperatures. If Arrokoth is relatively insulating, with an effective thermal conductivity of \(K_{\rm eff}\lessapprox 5\times 10^{-3}\) Wm\({}^{-1}\)K\({}^{-1}\), any initial CO inventory will be preserved almost indefinitely (Figure 3A/Figure 4). This is due to the slow movement of the sublimated gas, which effectively stays near vapor pressure equilibrium with ice at the front. Only for the largest pore sizes and highest conductivities does it become difficult to retain CO to the present day (Figure 4), particularly when \(r_{p}=1\) mm and \(T_{t}\gtrapprox 34\) K (Figure 3A). This is because gas more readily streams out of a matrix with such large pore sizes, interrupting the favorable feedback mechanism that arises for more reasonable pore radii. Nevertheless, CO can still be retained in the deeper regions of such objects for the entirety of the solar system's existence. For all three values of \(r_{p}\) considered, the maximum rate of CO molecules produced via sublimation after 4.55 Gyr (\(\dot{N}\leq 2\times 10^{22}\) s\({}^{-1}\)) is also well below the _New Horizons_ upper limit \(\dot{N}_{\rm max}\). We observe that as \(T_{t}\) is varied and all other quantities are held fixed, the following trends emerge for \(\dot{N}\) after 4.55 Gyr (Figure 3B): (1) because the intrinsic sublimation rate is high for higher values of \(T_{t}\), the CO sublimation front advances deeper into the interior, resulting in an effectively reduced surface area and a correspondingly low value of \(\dot{N}\). (2) As \(T_{t}\) is steadily reduced, the depth to which the front has advanced is less, the emitting surface area (\(A_{b}=\zeta^{2}A_{s}\)) is larger, and \(\dot{N}\) increases. (3) When \(T_{t}\) is further lowered, the sublimation rate from the emitting front reduces substantially due to the Arrhenius dependence of \(P_{\rm vap}(T_{t})\). Despite the fact that the total emitting surface area is much larger, since the sublimation front has barely advanced, the net \(\dot{N}\) decreases.

Figure 4: Lifetime of CO within a porous, Arrokoth-sized object as a function of the orbitally averaged temperature at \(r_{t}\) (\(T_{t}\)), the conductivity (\(K\)) and pore radius (\(r_{p}\)).
Estimates of Arrokoth's orbitally averaged temperature range from 30\(-\)40 K (Umurhan et al., 2022), with the most probable temperature near 34 K. For most scenarios, CO is retained within Arrokoth over the lifetime of the solar system (marked by a solid white line).

When examining the solutions generated for our input parameters, it is important to verify that they fall within the bounds of validity for our quasi-static theory, as summarized in Eq. (21) for the \(P_{vap}(T_{b})\ll\tilde{P}\) limit that is relevant here. Figure 5B shows that the ratio \(P_{\rm vap}(T_{b})/P_{\rm val}\ll 1\) when estimated at \(r_{b}/r_{s}=1/\varphi\approx 0.618\). We have argued earlier that assessing \(P_{\rm vap}(T_{b})/P_{\rm val}\) is satisfactory to estimate general validity, and we therefore find that the criterion for validity is met across the entire parameter range considered. Figure 5A displays the limiting value \(r_{b,\rm min}\), which represents the value of \(r_{b}\) below which the quasi-static theory fails to hold (Section 3.1). Since we are operating in the \(P_{\rm vap}(T_{b})/P_{\varphi}\ll 1\) regime, \(r_{b,\rm min}\) can be well approximated by the expression given in Eq. (23). Our results reveal that only for the highest values of \(T_{t}\) and lowest values of \(K_{\rm eff}\) does \(r_{b,\rm min}/r_{s}\) approach 0.05. Overall, throughout our analysis, \(r_{b,\rm min}/r_{s}\) consistently stays well below 0.01. Only in the extreme scenario of the hottest and most conductive end-member does our theory break down within the final few percent radii of the object, which the sublimation front very rarely reaches anyway (Figure 3). Judging from the trends in Figure 5B, only when \(K_{\rm eff}\to 10^{-5}\) Wm\({}^{-1}\)K\({}^{-1}\) (as in the extreme bookend case examined in Kral et al., 2021) do we expect our lifetime predictions to possibly fail. Such low conductivities would mean \(\tau_{t}\geq\tau_{sub}\), in which case our lifetime estimates should be interpreted as lower bounds instead. Nevertheless, our calculated timescales at these low conductivities remain longer than the age of the solar system, as in Kral et al. (2021). Finally, it is important to note that we assume that the sublimation of CO is dictated by CO-CO bonding energies expressed via \(T_{a}\). We assume that planetesimals like Arrokoth are comprised of mm-scale aggregates of \(\mu\)m-sized refractory grains (whether silicate-based, water ice, or a mixture) that are either coated with CO or are intermingled with similar \(\mu\)m-sized CO grains. This picture has its basis in various global solar nebula models of grain growth in the outer solar system near and out beyond the CO ice-line (like those discussed in Birnstiel et al., 2012; Estrada et al., 2016; Estrada and Cuzzi, 2022). However, if the component \(\mu\)m-sized grains are comprised instead of nm-scale intimately mixed CO-H\({}_{2}\)O complexes, then the CO sublimation rates will change significantly due to differences in CO-H\({}_{2}\)O binding energies that arise from mono-layer scale adsorption of CO molecules upon H\({}_{2}\)O-ice substrates (e.g., Kouchi, 1990; Sandford and Allamandola, 1990, and several others since).
For example, Sandford and Allamandola (1990) report experimental results showing that CO-H\({}_{2}\)O binding energies correspond to \(T_{a}\approx 1740\pm 50\) K, which is not only far higher than that of CO-CO bonding (\(T_{a}\approx 940\) K) but would also lead to CO retention timescales significantly longer than those calculated here. We leave the treatment of this possibility to a more comprehensive follow-up examination. Nevertheless, we consider the existence of nm-scale intimately mixed CO-H\({}_{2}\)O grains to be unlikely because it would mean that both CO and H\({}_{2}\)O molecules condensed out of the solar nebular gas at the same location at the same time within the protoplanetary disk, despite their different condensation temperatures. Such a scenario would require highly variable spatio-temporal temperature structures within the solar nebula.1 In all likelihood, \(\mu\)m-scale homogeneous grains formed at different disk locations, and only subsequently would turbulent disk transport result in grain species mixing across the disk (e.g., as framed in Estrada et al., 2016, and others).

Footnote 1: We note that the experiments of Sandford and Allamandola (1990) involved the creation of intimately mixed CO-H\({}_{2}\)O ice mixtures by spraying pre-mixed CO-H\({}_{2}\)O gas onto the substrate inside their cryochamber. It is fair to argue that this experimental setup would represent an astrophysical scenario in which both molecular species are simultaneously condensing.

## 5 Implications, Relation to Other Recent Studies, & Discussion

We demonstrated that primitive CO (and other hypervolatile) gas and ice reservoirs may exist within the interiors of Arrokoth and similar KBOs. Such reservoirs could have significant implications for how CO and amorphous H\({}_{2}\)O ice interacted within the protoplanetary disk, and whether a gradual leakage of CO from Arrokoth's interior may alter Arrokoth's surface in the present day (e.g., Grundy et al., 2020). Confirmation of our theoretical predictions could be obtained by observing CO around other KBOs using upcoming space- and ground-based telescopes. Such observations would serve as a crucial validation of our work and allow for more detailed calculations of the internal evolution of these objects. Throughout our analysis, we have been mindful of the limitations of our analytical treatment, which is only valid under certain conditions due to the cold temperatures within Arrokoth's interior (Figure 1B). Notably, our assumptions may not hold for Jupiter Family Comets in the inner solar system. However, in the remote outer solar system, the assumptions we used allowed us to simplify a complex non-linear system involving a series of partial differential equations into a single ordinary differential equation. We are not introducing any new physics compared to detailed models like NIMBUS (Davidsson, 2021) or those of Festou et al. (2004); instead, we offer a simpler approach to obtain similar solutions, appropriate in the cold limit, thereby avoiding the need for computationally expensive models. In fact, we anticipate that our analytical solutions could be used to validate such models under the extreme thermophysical conditions that we are dealing with here.
Further investigations of KBOs could explore how internal heat sources with long heating timescales, such as short- and long-lived radiogenic nuclides or the crystallization of amorphous H\({}_{2}\)O ice, would influence our predictions (Malamud et al., 2022; Parhi and Prialnik, 2023). In our application for a KBO like Arrokoth, we assumed that active radionuclides like \({}^{26}\)Al were absent at the time of its assembly because it and other KBOs quite possibly formed more than 4 Myr (several half-lives of \({}^{26}\)Al) after solar system formation (Bierson and Nimmo, 2019). Similarly, Arrokoth is far too distant, and hence too cold, for amorphous H\({}_{2}\)O ice to crystallize. KBOs, and similar objects, also have complex shapes and variations in their spin-orbit evolution that could cause asymmetries and dichotomies in the sublimation front. Arrokoth itself has a complex, bi-lobate shape (Figure 1), and it is likely that such effects would be common on other KBOs (Jutzi and Benz, 2017; Showalter et al., 2021). Investigating the combined effects of such factors, though unlikely to significantly change the overall conclusions outlined in Section 4, would therefore permit a more general application to the broader family of KBOs. Our work differs from other recent studies (Parhi and Prialnik, 2023; Lisse et al., 2021, 2022; Kral et al., 2021) that investigated similar questions. Kral et al. (2021) explored the idea of retaining volatile ices within KBOs for billions of years. They employed an analytical approach similar to ours and arrived at a similar prediction that volatiles might persist in KBOs until the present time. However, our conclusions are based on different considerations. According to Kral et al. (2021), the rate of volatile loss is primarily governed by the time it takes for solar radiation absorbed at the surface to penetrate and drive sublimation deep within the interior. They further explain that \(\tau_{t}\gg\tau_{sub}\) is generally true for KBOs owing to their very small thermal inertias, for which they require very low values of \(K_{\rm eff}=\mathcal{O}\,(10^{-5})\) Wm\({}^{-1}\)K\({}^{-1}\), corresponding to the very lowest values of thermal inertia for TNOs reported in Lellouch et al. (2013), which may not be representative of the bulk interior of such objects.2 With such low thermal inertia, the thermal relaxation timescale may indeed be comparable to or even greater than \(\tau_{sub}\), leading to the dominance of \(\tau_{t}\) in controlling the loss rate. Under these conditions, our lifetime estimates serve as conservative lower bounds, or equivalently, our predicted loss rates can be viewed as upper bounds.

Footnote 2: For example, comet 67P is thought to have a stratified interior with substantially higher effective values of thermal inertia (Groussin et al., 2019).

However, we disagree with the suggestion that \(\tau_{t}\gg\tau_{sub}\) is as general for KBOs as Kral et al. (2021) claim. Our findings indicate that, for the range of reasonable \(r_{p}\) values we have explored, along with \(K_{\rm eff}=\mathcal{O}\,(10^{-4})\) Wm\({}^{-1}\)K\({}^{-1}\), the situation is quite the opposite: \(\tau_{t}\) is considerably smaller than \(\tau_{sub}\). In fact, when we examine the estimates for \(\tau_{t}\) and \(\tau_{sub}\) provided in Appendix A of Kral et al.
(2021), we observe that their calculations suggest gas diffusion times (\(\tau_{d}\sim 10^{4}\) yr) that are even longer than their estimated sublimation timescales (\(\sim 10^{3}\) yr). The discrepancy in the two sublimation timescale estimates stems from the physical assumptions underlying their treatment of \(\tau_{sub}\), which assumes that sublimation occurs in proportion to the enhanced surface area of an interstitial volatile ice block with effective pore spacing \(r_{p}\) (also see Prialnik et al., 2004). However, this treatment overlooks how a gaseous subsurface atmosphere with pressure \(P\), which itself steadily diffuses outward at a comparatively slow rate, acts to reduce the net sublimation from a volatile ice block. Specifically, their estimate for \(\tau_{sub}\) fails to consider that the rate of volatile sublimation is also dependent on \(P_{vap}-P\), as expressed in the right-hand side of Eq. (3). As a result, their sublimation model primarily attributes the slow liberation of gas molecules to the extended time required for the thermal energy driving sublimation to reach the depths where the sublimating ice is located. This stands in contrast to our work, where the long timescales result from sublimated gas in the body's interior being in near vapor pressure equilibrium with the volatile ice at depth, significantly suppressing sublimation rates. In the case of Parhi and Prialnik (2023), who predict that CO should be severely depleted in under 100 Myr for spherical 5 km KBOs at Arrokoth's heliocentric distance, and in just under 400 Myr for \(\sim 10\) km spherical KBOs (see their Table 3), these differences arise for three primary reasons. First, they adopt an orbitally averaged temperature of \(T_{t}=42\) K, above any such temperature that is reasonable for Arrokoth (e.g., see Figure 1B; Umurhan et al., 2022). Second, the vapor pressure profiles that Parhi and Prialnik (2023) use in their analysis predict values of \(P_{vap}\) for given \(T\) that are \(\approx 100\times\) larger than the updated ones we use (i.e., those of Grundy et al., 2023). (Both studies also adopt characteristic \(K_{\rm eff}\) values that are typical of comets, which may overestimate those of TNOs; Lellouch et al., 2013.) Finally, they note that \({}^{26}\)Al heating is a key agent in driving off CO and, also, is responsible for keeping the interior temperatures very low throughout the CO depletion process (\(\sim 25\) K). Thus, in the framework of Parhi and Prialnik (2023), not only is \(\tau_{sub}\approx\tau_{t}\), which is outside the validity regime of our analysis, but their timescales fall far to the upper right in any of the calculations shown in Figure 4. We anticipate that our results could be recovered by their models if similar assumptions were made to those we make above. Our conclusions also differ from those of Lisse et al. (2021, 2022), who predict the total loss of hypervolatiles in KBOs like Arrokoth. In these studies, it is assumed that the mass loss rate is given by the free-streaming flux of sublimated vapor straight to the surface, without the ambient gas pressure control on sublimation at the front that typifies atmospheres in near-vapor pressure equilibrium, like those of Mars and Pluto. In other words, it is assumed that the sublimated gas does not diffuse through the volatile-depleted refractory matrix above the sublimation front but, instead, flows straight out to the surface. Below, we provide a brief description as to how our work differs from Lisse et al.
(2021, 2022), with extra derivations detailed in Section B. Indeed, a free-stream state is a distinct possibility if the ambient gas pressures in such objects are larger than the grain-grain bonding strengths (\(\sigma_{g}\sim\) kPa) or grain-aggregate to grain-aggregate bonding strengths (\(\sigma_{agg}\sim 1\) Pa, also see discussion in Appendix A), thereby leading to the disintegration of the refractory subsurface porous matrix. However, we find that such high vapor pressures are not possible for bodies like Arrokoth at its current heliocentric distance of 45 A.U., assuming KBOs (or some fraction of them) formed well after radionuclides like \({}^{26}\)Al had long burned out. Nevertheless, assuming a free-stream flux is like setting the ambient gas pressure (\(P_{b}\)) at the front to zero in Eq. (3) and Eqs. (7-8). The corresponding free-streaming mass flux, \(\dot{\Sigma}_{free}\), would then be \[\dot{\Sigma}_{free}\approx\frac{v_{k}}{c_{s}^{2}}P_{vap}(T). \tag{27}\] In our treatment, the low-density gas flows through a porous medium with a corresponding mass-loss rate \(\dot{\Sigma}_{F}\). In Section B we develop a simple calculation that estimates \(\dot{\Sigma}_{F}\) based on Fick's Law, which we show captures the spirit of the calculation done in our study in the Kn \(>1\) molecular flow regime limit. A direct comparison between the estimate for \(\dot{\Sigma}_{F}\) found in Eq. (B28) and Eq. (27) immediately shows that \(\dot{\Sigma}_{F}=(r_{p}/r_{s})\cdot\dot{\Sigma}_{free}\). While in both approaches the gas sublimation rate is controlled by \(P_{vap}(T)\), allowing gas to free-stream toward the surface underestimates the volatile lifetime in a body by many orders of magnitude. In general, we expect that a free-stream description will only be valid when the interior temperatures rise high enough that the internal gas pressures exceed \(\sigma_{agg}\). In those interior regions where \(P>\sigma_{agg}\), the dynamics of the system may also resemble those of terrestrial fluidized beds, where particle-gas momentum exchange physics begin to play a role, thereby requiring a more careful treatment. In that vein, we conclude our analyses by offering speculations on the peculiar nature of two small bodies: 29P/Schwassmann-Wachmann (29P) and C/2016 R2. These objects exhibit exceptional activity at large heliocentric distances and harbor hypervolatiles, such as CO, which they outgas at rates exceeding \(10^{28}\) particles s\({}^{-1}\) (Roth et al., 2023; Cordiner et al., 2022). Although these objects have undergone the amorphous-crystalline ice transition and possess other active volatiles apart from CO, we speculate that our model framework could potentially describe their production rates. As a preliminary step, we make predictions for 29P as we did for Arrokoth, assuming a nucleus diameter of 30 km, \(T_{t}=110\) K, and \(K_{\rm eff}=0.01-0.1\) Wm\({}^{-1}\)K\({}^{-1}\) (Roth et al., 2023), with all other parameters matching those presented in Table 1. Our calculations suggest that if the CO front is situated less than \(\approx 50\) m below the surface, we can estimate similar CO production rates to those previously reported for 29P (Roth et al., 2023). This would imply that 29P entered its current orbit from the outer reaches of the solar system only recently (within the last \(<30\) Myr).
The CO gas contained within its interior would also now be pressurized near the bonding strength of the aggregate grains that we assume make up these objects (\(\approx 3\) Pa), which could fluidize the matrix and potentially drive the substantial outbursts that are characteristic of 29P (see Roth et al., 2023, and references therein). As stated above, we recognize that more sophisticated numerical computations are required to fully investigate such systems, a line of future research that could lead to testable predictions for future spacecraft missions and telescopic observations.

## 6 Acknowledgments

The authors thank Will Grundy for providing us preliminary data regarding updated CO vapor pressure measurements. We also thank Björn Davidsson, Jason Soderblom, Jordan Steckloff, and two anonymous reviewers for helpful guidance and reviewing earlier forms of this work. This research was supported by the Heising-Simons Foundation (51 Pegasi b Fellowship to S.P.D.B.) and the _New Horizons_ Kuiper Belt Extended Mission I (support to O.M.U.).

## Appendix A Additional Mathematical Derivations & Quantification of Assumptions

We base our spherically symmetric model on the premise that sublimation processes occur over much longer timescales compared to both thermal and dynamical readjustment times. Thus, we assume that there exists a time-variable radial location \(r_{b}(t)\) beneath the surface at \(r=r_{s}\) that represents the sublimating surface or 'front' as described in the main text. The front gradually moves downward with a time rate of change \(\dot{r}_{b}\) such that the corresponding timescales, \(r_{b}/\dot{r}_{b}\) (\(\tau_{sub}\)), are significantly longer than the corresponding thermal wave propagation times (\(\tau_{t}\)) and dynamical readjustment times (\(\tau_{d}\)), as discussed in Section 3. Our model adopts an interior structure where CO ice and amorphous H\({}_{2}\)O ice intermix to form a porous and permeable matrix. The size of the void spaces within this matrix is \(r_{p}\), and the porosity of Arrokoth (\(\Psi_{tot}\)) is uniform, ignoring any pore closure with depth. We further assume that all the relevant physics occur for \(r\geq r_{b}\) and that the ice layer at \(r<r_{b}\) instantaneously adjusts to the sublimation dynamics at \(r=r_{b}\). These assumptions may not hold if there are rapid changes to the system. However, we assume Arrokoth's orbital elements or energetic forcings do not change too quickly. Such matters could be addressed in follow-up studies that employ a more comprehensive approach. In this quasi-static evolutionary framework, the time derivatives in the fluid equations of motion are neglected, resulting in a series of steady-state equations for the system \[\frac{1}{r^{2}}\frac{\partial}{\partial r}\rho r^{2}u=\dot{\Sigma}\delta(r-r_{b}), \tag{A1}\] \[0=-\frac{\partial P}{\partial r}-\frac{\mu}{k_{a}}u, \tag{A2}\] \[0=\frac{1}{r^{2}}\frac{\partial}{\partial r}K_{\rm eff}r^{2}\frac{\partial T}{\partial r}+\dot{Q}, \tag{A3}\] where Eqs.
(A1-A3) represent: (1) mass conservation for the gaseous component with density \(\rho(r,t)\), radial gas velocity \(u(r,t)\), and a mass source \(\dot{\Sigma}\) located at the time-variable location \(r=r_{b}\); (2) momentum conservation in a generalized Darcy flow with gas pressure \(P(r,t)\), dynamical molecular viscosity \(\mu\), and generalized matrix permeability \(k_{a}\); and (3) heat balance with temperature \(T(r,t)\), effective conductivity \(K_{\rm eff}\), and a heat sink term \(\dot{Q}\), which we represent in terms of a Dirac delta function loss term localized at \(r=r_{b}\) \[\dot{Q}=-\dot{\mathcal{Q}}\delta(r-r_{b}), \tag{A4}\] to represent the consumption of energy via sublimation. We provide a detailed description of each equation, as well as the boundary conditions adopted, below. We make the additional assumption that there exists an enrichment of CO ice at a greater depth \(r=r_{b}\), extending all the way to the center of Arrokoth at \(r=0\) (Figure 2). As a result, we assume that for all locations above this CO ice front at \(r=r_{b}\), the porosity (\(\Psi\) as referenced in the main text) is uniform but higher than its interior (i.e., \(\Psi>\Psi_{\rm tot}\)). The temperature at the surface of the CO ice layer is designated as \(T_{b}(t)=T(r=r_{b},t)\). All of the physics we consider occur either at \(r=r_{b}\) or above. We expect cold trapping to rapidly close any fluid pathways a short distance below \(r=r_{b}\) and assume that no sublimated CO gas penetrates deeper than \(r=r_{b}\). Hence, by construction, we assume flow only occurs radially outwards from \(r_{b}\). We proceed by developing solutions to Eqs. (A1-A3) sequentially, beginning with the statement of mass conservation. The CO ice surface at \(r=r_{b}\) sublimates with a rate \(\dot{\Sigma}\) (in units kg\(\cdot\)m\({}^{-2}\cdot\)s\({}^{-1}\)) given by the standard formulations of Lebofsky (1975) \[\dot{\Sigma}=\Big{(}P_{\rm vap}(T_{b})-P_{b}\Big{)}\frac{v_{K}}{c_{s}^{2}}, \tag{A5}\] where \(c_{s}\) represents the sound speed with \(c_{s}^{2}=R_{g}T/\mu_{a}\), \(R_{g}=8310\) J/kg/K is the gas constant, and \(\mu_{a}\) is the averaged molecular weight of the gas molecules (for CO, \(\mu_{a}=28\)). The mean kinetic speed of the sublimated gas is \(v_{K}=\sqrt{8/\pi}\,c_{s}\). \(P_{b}\) denotes the ambient gas pressure at \(r=r_{b}\), i.e., \(P_{b}(t)=P(r_{b},t)\), and \(P_{\rm vap}\) is the saturation vapor pressure of CO at temperature \(T=T_{b}\). In the case of Eq. (A1), integration can be immediately performed to derive a constant mass loss rate \(\dot{M}_{0}\) for \(r\geq r_{b}\) \[\frac{\dot{M}_{0}}{4\pi}=\dot{\Sigma}r_{b}^{2}=\rho ur^{2}\ \longrightarrow\ \rho u=\frac{r_{b}^{2}}{r^{2}}\dot{\Sigma}. \tag{A6}\] An immediate consequence of the mass conservation equation is an explicit expression for the mass flux \(\rho u\). To simplify the analytical treatment, we assume that \(c_{s}\) and \(v_{K}\) are constant values that depend only on the temperature \(T_{t}\) at the base of the seasonal thermal skin depth of Arrokoth (further discussed below in relation to Eq. (A3)). For small KBOs like Arrokoth, the error introduced by this approximation is negligible and does not significantly affect the results presented in the main text. However, future studies could more thoroughly investigate this minor effect. Let us now focus on Eq. (A2), which describes the steady-state gas flow through the CO ice-depleted matrix (i.e., \(r>r_{b}\)) using a generalized Darcy-Knudsen model.
To account for the transition from low to high Knudsen number flows, we adopt the empirically motivated Klinkenberg formulation \[k_{a}=k_{\infty}\big{(}1+4\bar{c}\text{Kn}\big{)}, \tag{A7}\] where \(\text{Kn}\equiv\lambda\big{/}r_{p}\); this formulation has been shown to be effective in similar studies (e.g., Ziarani & Aguilera, 2012). Here, Kn represents the Knudsen number defined as the ratio of the gas mean free path, \(\lambda\), to the pore radius, \(r_{p}\), with \(\lambda\) being defined in terms of the gas number density \(n\) and a molecule's cross-section \(\sigma\). The empirical constant \(\bar{c}\) is typically around 1 and is assumed to be so in our study. As suggested by Bouziani & Jewitt (2022, and references therein), we set the Darcy 'liquid' permeability limit as \(k_{\infty}=r_{p}^{2}/32\), where \(r_{p}\) represents the pore radius. It is worth noting that the transition from 'liquid' to 'diffusive' flow usually occurs when \(\text{Kn}>0.1\), but we use the more generalized form since Kn can vary depending on the pore sizes and vapor pressures under consideration. Nonetheless, previous studies related to comets, such as Espinasse et al. (1991), have employed similar approaches that bridge both regimes. Additionally, it is expected that the amorphous H\({}_{2}\)O ice grains comprising Arrokoth's solid matrix possess adequate adhesive bonding such that the pressure of the sublimated CO gas is insufficient to displace them. This is true so long as \(P\ll\sigma_{p}\), where \(\sigma_{p}\) represents the bonding strength between pairs of individual grains or between pairs of grain-aggregates. Theoretical predictions on the strength of grain-grain contacts indicate that such materials are substantially stronger than the pressures of the sublimating gases (e.g., JKR theory; Johnson et al., 1971). Nonetheless, we also rely on strength assessments based on _Rosetta_ and _Philae_ measurements of comet 67P's surface materials (Biele et al., 2022) and years of extensive laboratory studies (summarized in Figure 7 of Groussin et al., 2019). It has been found that grain-grain bonding for individual grains with radii in the few \(\mu\)m scales is \(\sigma_{p}\approx 1\) kPa (e.g., Gundlach et al., 2018), while for mm-scale aggregates of these same \(\mu\)m-scale grains, it is weaker, at \(\sigma_{p}\approx 1\) Pa (e.g., Blum et al., 2014). Assuming the conservative value for \(\sigma_{p}\) (i.e., we assume that the primary constituents of interior particles are grain aggregates), we observe that the interior gas pressures are at least 3 orders of magnitude less for the temperature range of interest here (\(30-40\) K), based on recent CO vapor pressure measurements by Grundy et al. (2023). We therefore consider the sublimated vapor to be incapable of moving grains around and assume that the solid matrix (and in turn the pore radii) remains fixed through time. As the gas diffuses or flows towards the surface, it is valid to use the Darcy-Knudsen flow law, Eq. (A2). Combining the relationship for \(\rho u\) in Eq. (A6) with the Klinkenberg form in Eq. (A7) and the assumption that \(c_{s}\) is independent of temperature \(T\), we can recast Eq. (A2).
This recasting yields a first-order differential equation for \(P\) as a function of space \[\left(P+\frac{1}{2}\tilde{P}\right)\frac{\partial P}{\partial r}=-\frac{r_{b}^{2}v_{K}\mu}{r^{2}k_{\infty}}\Big{(}P_{\rm vap}(T_{b})-P_{b}\Big{)}, \tag{A8}\] where we have defined a transition pressure \[\tilde{P}\equiv\frac{8\bar{c}mc_{s}^{2}}{\sigma r_{p}}. \tag{A9}\] We can integrate Eq. (A8) once, while taking into account the boundary condition that the pressure at \(r=r_{b}\) is equal to an unknown constant \(P_{b}\). This leads us to an implicit relationship for the function \(P(r)\) valid for \(r>r_{b}\), \[P^{2}+\tilde{P}P=\frac{2r_{b}v_{K}\mu}{k_{\infty}}\Big{(}P_{b}-P_{\rm vap}(T_{b})\Big{)}\left(1-\frac{r_{b}}{r}\right)+P_{b}^{2}+\tilde{P}P_{b}. \tag{A10}\] This expression for \(P(r)\) is found in terms of the unknown \(P_{b}\), whose value can be solved if we demand that the pressure is zero at \(r=r_{s}\), i.e., by setting \(P(r=r_{s})=0\) in Eq. (A10). This condition leads to a relationship that must be satisfied between \(P_{b}\) and the other parameters of the system: \[P_{\rm dar}(r_{b})\Big{(}P_{b}-P_{\rm vap}(T_{b})\Big{)}+P_{b}^{2}+\tilde{P}P_{b}=0;\qquad P_{\rm dar}\equiv\frac{2r_{s}v_{K}\mu}{k_{\infty}}\frac{r_{b}}{r_{s}}\left(1-\frac{r_{b}}{r_{s}}\right). \tag{A11}\] Solving the above for \(P_{b}\) leads to \[2P_{b}=-\big{(}P_{\rm dar}+\tilde{P}\big{)}+\left[(P_{\rm dar}+\tilde{P})^{2}+4P_{\rm dar}P_{\rm vap}(T_{b})\right]^{1/2}. \tag{A12}\] In the event that \(P_{\rm dar}\gg\Big{\{}P_{\rm vap}(T_{b}),\tilde{P}\Big{\}}\), a Taylor series analysis shows that \[P_{b}\approx P_{\rm vap}(T_{b})\left[1-\frac{P_{\rm vap}(T_{b})+\tilde{P}}{P_{\rm dar}}\right]. \tag{A13}\] As previously noted, for bodies such as Arrokoth that are thought to contain CO, \(P_{\rm dar}=\mathcal{O}\left(10^{7}\,\mathrm{Pa}\right)\), which is consistently higher than both \(P_{\rm vap}(T_{b})\) (for temperatures \(T\) in the range of \(30-40\) K) and \(\tilde{P}\), which typically is \(\mathcal{O}\left(10^{2}\right)\) Pa for pore radii between \(0.01\) mm and \(1\) mm. We can also determine the transition from the diffusive to the fluid regime (i.e., \(\mathrm{Kn}\approx\mathcal{O}\left(0.1\right)\)) based on the relative magnitudes of \(\tilde{P}\) and \(P_{\rm vap}(T_{b})\). In particular, if \(P_{\rm vap}(T_{b})\gg\tilde{P}\), the flow is in the fluid regime, whereas if \(P_{\rm vap}(T_{b})\ll\tilde{P}\), the flow is in the diffusive regime. For Arrokoth, we find that we are always in the diffusive regime, which implies that gas flow rates are generally suppressed. Models like NIMBUS (Davidsson, 2021) implicitly work within the diffusive regime as well. Next, we proceed to solve the heat equation (Eq. A3) to obtain the temperature \(T_{b}\). We assume that the subsurface low-density gas is in thermodynamic equilibrium with the refractory amorphous H\({}_{2}\)O ice matrix through which it diffuses towards the surface. Therefore, we seek solutions for the temperature of the static matrix structure. To do so, we must first present a model for the effective conductivity of the matrix, which we assume to have the general form derived in Umurhan et al. (2022), given by \[K_{\rm eff}=K_{c}+K_{r}, \tag{A14}\] where \(K_{c}\) is the thermal conductivity of the solid amorphous H\({}_{2}\)O ice matrix and \(K_{r}\) is the radiative conductivity across pore spaces within the matrix.
For the former we adopt \[K_{c}=K_{A}(1-\Psi)h, \tag{A15}\] where \(K_{A}\) is the conductivity of amorphous H\({}_{2}\)O ice and \(h\) is the adhesive fractional contact area given by Johnson-Kendall-Roberts theory (Johnson et al., 1971). We estimate the value of \(h\) to range from 0.01 to 1, while \(K_{A}\) is experimentally found to be approximately 0.01 W/m/K (see discussion after Eq. (38) of Umurhan et al., 2022). This yields an effective range of \(10^{-4}\) W/m/K \(<K_{c}<10^{-2}\) W/m/K, consistent with values used in other studies (e.g., Bouziani and Jewitt, 2022). The radiative conductivity is typically assumed to be \[K_{r}=8\epsilon_{IR}\sigma_{B}r_{p}T^{3}, \tag{A16}\] where \(\sigma_{B}\) is the Stefan-Boltzmann constant and \(\epsilon_{IR}\) is the infrared emissivity, which is typically around 0.9. The radiative conductivity (\(K_{r}\)) is linearly dependent on the pore size \(r_{p}\), as shown in Eq. (A16). However, for our assumed values of \(T\) and \(r_{p}\) (Table 1), we find that \(K_{r}\) is much smaller than \(K_{c}\), so we neglect the \(K_{r}\) dependence and assume that \(K_{\rm eff}=K_{c}\), which we treat as a constant. We assume values of \(K_{\rm eff}\) as large as \(10^{-1}\) W/m/K for completeness, as discussed in the main text. As in previous, analogous studies on comets, such as Espinasse et al. (1991) and Orosei et al. (1995), we neglect heat conduction through the gas under the expected rarefied and dilute conditions of Arrokoth's interior. Here, we provide a brief justification. Using elementary Chapman theory to estimate molecule-molecule energy transfer, we can estimate the thermal conduction in a gas as \[K_{gas}\approx\rho C_{p}\lambda v_{K}=\frac{C_{p}mv_{K}}{\sigma}, \tag{A17}\] where \(\lambda\) is the collisional mean-free path (as defined earlier). Based on the range of values reported by Shulman (2004) for the specific heat capacity \(C_{p}\) in the relevant temperature range (see Table 1), the gas conductivity is expected to be on the order of \(\mathcal{O}\left(10^{-3}\right)\) W/m/K. However, this estimate is only valid if \(\lambda\ll r_{p}\). In the case where \(\lambda\gg r_{p}\) (as is likely the case for small KBOs like Arrokoth), molecules collide more frequently against pore walls, and energy transfer between molecules rarely occurs. For example, at \(T_{b}=35\) K, Eq. (24) gives CO's \(P_{\rm vap}\approx 6\times 10^{-3}\) Pa, and \(\lambda\approx mc_{s}^{2}/(\sigma P_{\rm vap}(T_{b}))\). Using this approximation, we find that \(\lambda\) is on the order of 0.1 m, which is still much greater than the assumed range of \(r_{p}\). Therefore, we can safely neglect heat conduction through the gas. In our model of Arrokoth, which is assumed to be a sphere with a radius \(r=r_{s}\), we consider that the seasonal skin depth occurs at \(r=r_{t}\) (see Figure 2). At this location, we set the temperature to be the average of the extreme high (\(T=T_{\rm max}\)) and low (\(T=T_{\rm min}\)) surface temperatures, such that \(T_{t}\equiv T(r=r_{t})=0.5(T_{\rm max}+T_{\rm min})\). This temperature serves as the boundary condition for the long-term evolution of the interior. We then integrate Eq. (A3) for \(r>r_{b}\), and express the temperature solution \(T(r)\) in terms of an unknown basal thermal flux \(F_{b}\) \[T(r)=\frac{F_{b}}{K_{\rm eff}}\frac{r_{b}^{2}}{r_{t}}\left(1-\frac{r_{t}}{r}\right)+T_{t},\qquad{\rm for}\ \ r>r_{b}, \tag{A18}\] while beneath the sublimation front we have the constant solution \[T(r)=T_{b},\qquad{\rm for}\ \ r\leq r_{b}.
\tag{A19}\] Indeed, the solution for \(r\leq r_{b}\) represents the only possible steady-state thermal solution within the interior that avoids a singularity at \(r=0\); this, in turn, implies that the thermal flux approaches zero as one approaches the sublimation front from below, i.e., \[K_{\rm eff}\frac{\partial T}{\partial r}\bigg{|}_{r\to r_{b}^{-}}=0. \tag{A20}\] Once again, this can be understood as a result of the very long timescales for sublimation compared to the timescales for thermal adjustment of the medium. In other words, since \(\tau_{sub}\gg\tau_{t}\), the region inside \(r<r_{b}\) has had enough time to reach a steady state in which it has received all the thermal energy it can hold. Once this steady state is reached, the thermal flux immediately below the front becomes zero, indicating that no further thermal energy can propagate inside (detailed in Section 3 of the main text). The volumetric energy loss rate \(\dot{Q}\) defined in Eq. (A4) is represented by a Dirac delta function centered at \(r=r_{b}\). Here, \(\dot{\mathcal{Q}}\) is the energy consumption rate per unit area due to sublimation at \(r=r_{b}\), expressed in terms of \(\dot{\Sigma}\) at \(r=r_{b}\) \[\dot{\mathcal{Q}}=\mathcal{L}\dot{\Sigma}, \tag{A21}\] where the enthalpy of sublimation for CO ice is denoted by \(\mathcal{L}\). In our study, we assume that the only available energy to facilitate sublimation comes from thermal conduction, as given in Eq. (2) in the main text. Generally in this kind of treatment, the source energy is defined as the difference between the incoming thermal flux (as \(r\to r_{b}^{+}\)) and the outgoing thermal flux (as \(r\to r_{b}^{-}\)) at the front \[K_{\rm eff}\frac{\partial T}{\partial r}\bigg{|}_{r\to r_{b}^{+}}-K_{\rm eff}\frac{\partial T}{\partial r}\bigg{|}_{r\to r_{b}^{-}}=\dot{\mathcal{Q}}. \tag{A22}\] In light of the fact that the thermal flux beneath the sublimation front is zero (see Eq. (A20)), the aforementioned condition provides an explicit expression that relates the unknown basal thermal flux \(F_{b}\) to the rate of sublimation losses \[F_{b}=\mathcal{L}\frac{v_{K}}{c_{s}^{2}}\Big{[}P_{vap}(T_{b})-P_{b}\Big{]}. \tag{A23}\] This expression relies on knowledge of the temperature \(T_{b}\). To obtain \(T_{b}\), we can substitute the expression for \(F_{b}\) from the previous equation into Eq. (A18). After some algebraic manipulation and utilizing the asymptotic form in Eq. (A13), we arrive at the following equation for \(T_{b}\) \[\frac{\mathcal{L}v_{K}}{K_{\rm eff}c_{s}^{2}}\left[P_{vap}(T_{b})+\tilde{P}\right]\frac{P_{vap}(T_{b})}{P_{\rm{dar}}}r_{b}\left(1-\frac{r_{b}}{r_{t}}\right)=T_{t}-T_{b}, \tag{A24}\] which concludes the complete description of the physical model. Let us now focus on deriving the evolution equation for \(r_{b}\). We start with the expression for the mass flux at \(r=r_{b}\) (Eq. A5) and use the approximate solution in Eq. (A13). We assume that the mass flux \(\dot{\Sigma}\) can be expressed in terms of \(\dot{r}_{b}\) (i.e., the recession speed of the front, taken positive) and the partial mass density of the sublimating ice, denoted as \(\rho_{\rm ice}\).
Thus, we write \[\dot{\Sigma}=\dot{r}_{b}\rho_{\rm ice}=\frac{v_{K}}{c_{s}^{2}}\left[P_{vap}(T_{b})+\tilde{P}\right]\frac{P_{vap}(T_{b})}{P_{\rm{dar}}}. \tag{A25}\] By replacing \(P_{\rm{dar}}\) according to its definition found in Eq. (A11) and re-arranging the resulting expressions, we obtain \[\left(\frac{r_{b}}{r_{s}}\right)\left(1-\frac{r_{b}}{r_{s}}\right)\left(\frac{\dot{r}_{b}}{r_{s}}\right)=\frac{1}{6\tau_{s}(T_{b})}, \tag{A26}\] which is equivalent to Eq. (9). Thus, by combining Eq. (A24) and Eq. (A26), we have fully specified the time evolution of \(r_{b}\). Lastly, let us consider the issue of \(\rho_{\rm ice}\), as the average density of KBOs can exhibit a wide range of values. Comets, which we use as an analogy, are typically assumed to have a density of \(\rho_{tot}\approx 500\) kg/m\({}^{3}\) (e.g., Pätzold et al., 2019). According to Keane et al. (2022), the mean density of Arrokoth falls within the range \(155\) kg/m\({}^{3}<\rho_{\rm{arrokoth}}<600\) kg/m\({}^{3}\), which overlaps with typical cometary values. Based on theoretical predictions of the compositions of planetesimals in the outer solar system at the time of their formation (e.g., from solar nebula particle growth models such as those reported in Estrada and Cuzzi, 2022, and references therein), it is generally believed that the mass budget of a planetesimal is divided roughly equally among silicates, water, and hypervolatiles such as CO, so that hypervolatiles make up about a third of the total. Therefore, we assume \(\rho_{\rm ice}=\rho_{tot}/3\approx 175\) kg/m\({}^{3}\).

## Appendix B Simplified Diffusive Flux Derivation

The central physical effect controlling the loss of gas from the interior is gas diffusion according to Fick's Law. In the diffusion flow limit appropriate for Arrokoth (i.e., Kn \(\gg 0.1\)), the flow out from the interior is controlled by the pore spacing \(r_{p}\) and the mean molecule speed \(v_{k}\). The corresponding mass flux \(\dot{\Sigma}_{F}\) relates to the gas diffusion coefficient \(D=\mathcal{O}\left(r_{p}v_{k}\right)\) according to \[\dot{\Sigma}_{F}=D\frac{d\rho}{dr}, \tag{B27}\] which is basically Fick's Law for gas diffusion through a porous medium. Assuming there are no mass sources other than a sublimating front located a depth \(\Delta r\) beneath the surface, and if we write the difference of the gas densities between the surface and the sublimation front as \(\Delta\rho\), we can estimate \(\dot{\Sigma}_{F}\) by writing \[\dot{\Sigma}_{F}\approx r_{p}v_{k}\frac{\Delta\rho}{\Delta r}\approx r_{p}v_{k}\frac{P_{vap}(\Delta r)}{c_{s}^{2}\Delta r}, \tag{B28}\] where we have approximated \(\Delta\rho\) as the vapor density of the ice at the front (\(\rho_{vap}\), estimated to be \(P_{vap}/c_{s}^{2}\)) minus the gas density at the surface, the latter of which is assumed to be zero. Equating the mass loss rate to a front propagation rate through an ice with density \(\rho_{ice}\), we can further write a simple differential equation for the rate of change of \(\Delta r\) \[\rho_{ice}\frac{d\Delta r}{dt}=\dot{\Sigma}_{F}. \tag{B29}\] On the assumption that \(P_{vap}(\Delta r)\) is constant because the temperature \(T\) with depth does not vary much, the above expression, together with the use of Eq. (B28), may be integrated to derive a lapse time \(\Delta t\) for the front to reach a depth \(\Delta r\), i.e., \[\Delta t=\frac{1}{2}\frac{\rho_{ice}c_{s}^{2}\Delta r^{2}}{r_{p}v_{k}P_{vap}(T)}. \tag{B30}\] Setting \(\Delta r\to r_{s}\) functionally recovers the diffusive limit of our result in Eq.
(10) with only minor \(\mathcal{O}\left(1\right)\) differences.
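To give a rough feel for why diffusion-limited loss is so slow, the following minimal sketch (ours) evaluates the Eq. (B30) lapse time with \(\Delta r\to r_{s}\), under assumed Table 1 values evaluated near \(T=35\) K.

```python
import numpy as np

# Assumed values (Table 1), evaluated near T = 35 K.
rho_ice = 175.0      # partial CO ice density [kg/m^3]
r_s     = 4.75e3     # object radius [m]; set Delta r -> r_s
r_p     = 1e-4       # pore radius [m] (0.1 mm)
v_k     = 150.0      # mean kinetic speed [m/s]
c_s2    = np.pi * v_k**2 / 8.0                     # from v_k = sqrt(8/pi) c_s
P_vap   = 6e-5 * np.exp(982.0/30.0 - 982.0/35.0)   # Eq. (24) at 35 K [Pa]

dt = 0.5 * rho_ice * c_s2 * r_s**2 / (r_p * v_k * P_vap)   # Eq. (B30)
print(f"diffusive front lapse time ~ {dt/3.156e16:.1f} Gyr")
# Longer than the age of the solar system. A free-streaming front
# (Eq. 27) would instead empty the body ~ r_s/(2 r_p) ~ 2e7 times faster,
# hence the orders-of-magnitude disagreement with Lisse et al. (2021, 2022).
```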
2309.16484
Local stability of spheres via the convex hull and the radical Voronoi diagram
Jamming is an emergent phenomenon wherein the local stability of individual particles percolates to form a globally rigid structure. However, the onset of rigidity does not imply that every particle becomes rigid, and indeed some remain locally unstable. These particles, if they become unmoored from their neighbors, are called \textit{rattlers}, and their identification is critical to understanding the rigid backbone of a packing, as these particles cannot bear stress. The accurate identification of rattlers, however, can be a time-consuming process, and the currently accepted method lacks a simple geometric interpretation. In this manuscript, we propose two simpler classifications of rattlers based on the convex hull of contacting neighbors and the maximum inscribed sphere of the radical Voronoi cell, each of which provides geometric insight into the source of their instability. Furthermore, the convex hull formulation can be generalized to explore stability in hyperstatic soft sphere packings, spring networks, non-spherical packings, and mean-field non-central-force potentials.
Peter K. Morse, Eric Corwin
2023-09-28T14:49:30Z
http://arxiv.org/abs/2309.16484v1
# Local stability of spheres via the convex hull and the radical Voronoi diagram

###### Abstract

Jamming is an emergent phenomenon wherein the local stability of individual particles percolates to form a globally rigid structure. However, the onset of rigidity does not imply that every particle becomes rigid, and indeed some remain locally unstable. These particles, if they become unmoored from their neighbors, are called _rattlers_, and their identification is critical to understanding the rigid backbone of a packing, as these particles cannot bear stress. The accurate identification of rattlers, however, can be a time-consuming process, and the currently accepted method lacks a simple geometric interpretation. In this manuscript, we propose two simpler classifications of rattlers based on the convex hull of contacting neighbors and the maximum inscribed sphere of the radical Voronoi cell, each of which provides geometric insight into the source of their instability. Furthermore, the convex hull formulation can be generalized to explore stability in hyperstatic soft sphere packings, spring networks, non-spherical packings, and mean-field non-central-force potentials.

## I Introduction

A rigid structure is one which holds its shape when perturbed infinitesimally. If this structure consists of particles, it is said to be jammed [1; 2; 3; 4; 5; 6; 7]. While the system as a whole may be rigid, local regions of it may still be unconstrained. The particles--or clusters of particles--making up these locally unconstrained regions are generally termed "rattlers" [1; 8; 9] and are removed from the consideration of the structure for many analyses. The rigorous rattler detection scheme in the literature [10] relies on linear programming; it is computationally expensive and lacks a simple geometric interpretation. Another, based on an event-driven packing protocol, gives direct physical meaning to rattler detection by using a stability analysis to systematically prune compressive forces, leaving rattlers fully unconstrained [11]. However, this method scales poorly with system size and dimension, as it requires matrix inversion. These methods are, however, exact, and the resulting stable networks which they find are identical. In light of the complexity of these algorithms, a naive rattler detection scheme via constraint counting has proliferated and been used widely as a proxy, despite its shortcomings. The naive algorithm exploits the fact that the minimum number of constraints necessary to stabilize a particle in \(d\) dimensions is \(d+1\). Thus, the number of contacts on each particle is counted, and those with fewer than \(d+1\) contacts are deemed rattlers. Some (but not all) of these proxy methods apply this criterion recursively, thus more closely approximating the true stable network (a minimal sketch of this recursive proxy is given below). However, this method cannot account for the presence of particles with at least \(d+1\) stable contacting neighbors which are nevertheless not geometrically constrained. Here, we present an alternative scheme for identifying rattlers that is intuitive, efficient, and physically meaningful. In fact, we have been using it for some time without realizing that it was not yet present in the literature [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34].
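For concreteness, here is a minimal sketch (ours, not from Refs. [10; 11]) of the naive recursive constraint-counting proxy just described; the contact-list representation and function name are illustrative assumptions.

```python
import numpy as np

def naive_rattlers(contacts, d):
    """Recursive constraint-counting proxy: repeatedly flag particles
    with fewer than d+1 contacts to non-flagged particles as rattlers.
    `contacts` is a list of sets: contacts[i] holds indices touching i.
    NOTE: this is the proxy criticized in the text; it misses particles
    whose >= d+1 contacts are all cohemispheric."""
    n = len(contacts)
    rattler = np.zeros(n, dtype=bool)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if rattler[i]:
                continue
            live = sum(1 for j in contacts[i] if not rattler[j])
            if live < d + 1:
                rattler[i] = True
                changed = True
    return rattler

# Tiny d = 2 example: pruning particle 3 destabilizes the rest in turn.
print(naive_rattlers([{1, 2}, {0, 2}, {0, 1, 3}, {2}], d=2))
```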
Our method is based on a fundamental link between local rigidity and the local geometry of force-carrying contacts, and implemented through the computation of the convex hull of the set of contacting particles. The stable network obtained by this algorithm is identical to that found in Refs. [10; 11]. The central thrust of our algorithm is based on a comment within Ref. [10], namely that a sphere can only be locally rigid if it has at least \(d+1\) non-cohemispheric contacts. While the authors of Ref. [10] note that simple constructions can be done in low spatial dimensions (a method adopted in Refs. [35; 36]), ours is a dimensionally independent construction: a particle whose center is \(\mathbf{r}_{0}\) is locally stable if the sum of all forces acting on it is zero, and if the surface of the convex hull of the particle's center and the centers of all of its contacting neighbors \(\{\mathbf{r}_{i}\}\) does not include \(\mathbf{r}_{0}\), i.e. \(\mathbf{r}_{0}\notin\partial\text{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\), where \(\partial\text{Conv}\) is the surface of the convex hull (a minimal computational sketch of this geometric test is given after the definitions below). We also prove a related theorem, equivalent to the first, which states that a particle is locally stable if the maximum inscribed sphere of its radical Voronoi cell is unique and identical to the particle itself. The rest of this article is structured as follows. In Sec. II, we provide definitions for the generalized packing models that we can consider and a series of mathematical definitions which will allow us to prove the two main theorems. In Sec. III, we provide a formal proof that each construction finds the correct stable network. In Sec. IV, we address computational complexity, noting that even in the worst case scenario, the convex hull algorithm is faster than the linear programming algorithm in \(d<6\). We conclude in Sec. V by discussing extensions of this construction to other models.

## II Definitions

In the following, bold letters denote vectors in \(\mathbb{R}^{d}\), \(\mathbf{0}\) represents the zero-vector, \(\mathbf{a}\cdot\mathbf{b}\) denotes the dot product between vectors \(\mathbf{a}\) and \(\mathbf{b}\), \(\{\mathbf{r}_{i}\}\) denotes a finite set of points, where each point is represented by a vector from the origin, and \(\{\mathbf{r}_{i}\}\setminus\mathbf{r}_{0}\) denotes the set \(\{\mathbf{r}_{i}\}\) excluding the point \(\mathbf{r}_{0}\). All definitions assume the standard Euclidean distance metric on \(\mathbb{R}^{d}\), where the distance between points \(\mathbf{a}\) and \(\mathbf{b}\) is denoted \(|\mathbf{a}-\mathbf{b}|\). To define our packing, and to aid in later definitions and theorems, we define both open and closed balls. **Definition 1**.: _An open ball of radius \(\sigma\) around \(\mathbf{s}\) is defined as the set of points contained within a distance \(\sigma\) of \(\mathbf{s}\). The notation we will use is \(B_{\sigma}(\mathbf{s})\equiv\{\mathbf{y}:|\mathbf{s}-\mathbf{y}|<\sigma\}\)._ **Definition 2**.: _A closed ball of radius \(\sigma\) around \(\mathbf{s}\) is defined as the set of points contained within and including a distance \(\sigma\) of \(\mathbf{s}\)._
The notation we will use is \(\overline{B}_{\sigma}(\mathbf{s})\equiv\{\mathbf{y}:|\mathbf{s}-\mathbf{y}|\leq\sigma\}\)._

We thus consider particles defined by \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\) with a non-dimensional overlap between particles \(i\) and \(j\) defined as

\[h_{ij}\equiv 1-\frac{|\mathbf{r}_{i}-\mathbf{r}_{j}|}{\sigma_{i}+\sigma_{j}}, \tag{1}\]

subject to an additive potential \(U=\sum_{ij}u(h_{ij})\) where contacts (\(h_{ij}\geq 0\)) coincide with the potential cutoff, i.e. \(u(h_{ij}\leq 0)=0\). This form includes (but is not limited to) standard soft-sphere contact power law potentials where \(u(h_{ij}>0)\propto h_{ij}^{\gamma}\) for \(\gamma>0\) (\(\gamma=2\) for Hookean spheres, and \(\gamma=2.5\) for Hertzian spheres) and hard spheres, where \(u(h_{ij}>0)=\infty\). From this, the force on particle \(i\) from particle \(j\) can be defined as

\[\mathbf{f}_{ij}\equiv-\nabla_{\mathbf{r}_{i}}u(h_{ij})=|\nabla u(h_{ij})|\frac{\mathbf{r}_{i}-\mathbf{r}_{j}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}. \tag{2}\]

Here the only salient feature is that the force points towards the particle center from the point of contact. Unless otherwise mentioned, we consider only packings which are in a local energy minimum, such that the sum of forces acting on each particle is zero. Extensions to non-energy-minimized packings will be considered in Sec. V.

**Definition 3** (Adapted from Ref. [10]).: _A particle is locally stable if the sum of the forces acting on it is zero and the forces acting on it span \(\mathbb{R}^{d}\). Particles which are not locally stable are called unstable._

In an effort to make this work as self-contained as possible, we have compiled a list of the mathematical definitions necessary to follow the theorems and proofs of Sec. III such that only basic knowledge of set theory and linear algebra will be prerequisite. The definitions are adapted from Refs. [37; 38; 39].

**Definition 4**.: _An extreme point \(\mathbf{r}_{0}\) of the finite set \(\{\mathbf{r}_{i}\}\) is a point which can be separated from all other points by a \((d-1)\)-plane. Thus there exists a vector \(\mathbf{a}\in\mathbb{R}^{d}\) with at least one non-zero element and \(b\in\mathbb{R}\) for which \(\mathbf{a}\cdot\mathbf{r}_{0}-b>0\) while \(\mathbf{a}\cdot\mathbf{r}_{j}-b\leq 0\) for all \(\mathbf{r}_{j}\in\{\mathbf{r}_{i}\}\setminus\mathbf{r}_{0}\). An illustration of both extreme and non-extreme points is given in Fig. 1._

Remark: In our proofs, we only need the extreme points of finite sets. The concept of an extreme point can of course be generalized to infinite sets [38], but this makes several of the theorems unwieldy. The definition used here is non-standard but reduces to the common definition in the case of finite sets.

Figure 1: Here we demonstrate the concept of an extreme point by examining three red particles labelled (a-c). While this example is embedded in \(d=2\), the demonstration extends naturally to higher dimensions, replacing lines with \((d-1)\)-planes. (a) No line can be drawn which separates the particle from all other particles, so (a) is not an extreme point. (b) A line can be drawn which separates the particle from all other points, and it is thus an extreme point and will be shown to be on the surface of the convex hull. (c) No line can be drawn which separates the particle from all other particles, so it is not an extreme point.
However, a line exists which contains the particle and which divides space such that all particles exist (inclusively) in one of its half spaces, thus the point is on the surface of the convex hull.

Figure 2: A simple illustration that a convex set containing points \(\mathbf{a}\) and \(\mathbf{b}\) contains all points on a straight line between them.

**Definition 5**.: _A set \(K\subset\mathbb{R}^{d}\) is convex if for all \(\mathbf{a},\mathbf{b}\in K\), \(\mathbf{c}=(1-t)\mathbf{a}+t\mathbf{b}\in K\) for all \(t\in[0,1]\). Put simply, if \(\mathbf{a}\) and \(\mathbf{b}\) are in \(K\), then \(K\) is convex if every point \(\mathbf{c}\) along the straight line between \(\mathbf{a}\) and \(\mathbf{b}\) is also in \(K\). This is illustrated in Fig. 2._

**Definition 6**.: _From Ref. [39], a compact convex set \(K\subset\mathbb{R}^{d}\) is a convex polytope if the extreme points of \(K\) form a finite set. In this work, all instances of the word polytope are implied to be convex._

**Definition 7**.: _The surface \(\partial K\) of a polytope \(K\) is defined as the infinite set of points \(\mathbf{s}\in K\) for which there exists \(\mathbf{s}_{\mathrm{out}}\in B_{\sigma}(\mathbf{s})\) where \(\mathbf{s}_{\mathrm{out}}\notin K\) for all \(\sigma\)._

**Definition 8**.: _The convex hull of a set of points \(\mathrm{Conv}(\{\mathbf{r}_{i}\})\) is the unique closed \(d\)-dimensional polytope containing all points \(\{\mathbf{r}_{i}\}\) whose vertices are members of \(\{\mathbf{r}_{i}\}\). The surface of the convex hull is denoted \(\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\) and is shown visually in Fig. 3._

**Definition 9**.: _For a sphere given by \(\overline{B}_{\sigma}(\mathbf{r})\), the points \(\{\mathbf{b}_{i}\}\subset\partial\overline{B}_{\sigma}(\mathbf{r})\) are cohemispheric if there exists \(\mathbf{a}\in\mathbb{R}^{d}\) with at least one non-zero element, where \(\mathbf{a}\cdot(\mathbf{b}_{i}-\mathbf{r})\geq 0\) for all \(i\). Similarly, forces \(\{\mathbf{f}_{i}\}\) are cohemispheric if \(\mathbf{a}\cdot\mathbf{f}_{i}\geq 0\) for all \(i\). If no such \(\mathbf{a}\) exists, the points or forces are non-cohemispheric._

**Definition 10**.: _For a polytope \(K\), the maximum inscribed sphere \(M(K)\) is the largest closed ball fully contained in \(K\). That is, \(M(K)=\max_{\sigma}[\overline{B}_{\sigma}(\mathbf{r}):\overline{B}_{\sigma}(\mathbf{r})\subset K]\). An illustration of the concept, including generic, degenerate, and highly symmetric cases is given in Fig. 4. We use MIS as an abbreviation when not referring to a specific \(M(K)\)._

**Definition 11**.: _In a packing of particles with positions \(\{\mathbf{r}_{i}\}\), the Voronoi cell of particle 0 is the set \(V(\mathbf{r}_{0})=\{\mathbf{y}:|\mathbf{y}-\mathbf{r}_{0}|\leq|\mathbf{y}-\mathbf{r}_{i}|\ \forall i\}\)._

**Definition 12**.: _The power of a point \(\mathbf{c}\in\mathbb{R}^{d}\) with respect to a sphere with center \(\mathbf{r}\) and radius \(\sigma\) is given by \(\Pi_{\mathbf{r},\sigma}(\mathbf{c})=|\mathbf{r}-\mathbf{c}|^{2}-\sigma^{2}\).
Points on the interior of the sphere have negative power, points on the surface of the sphere have zero power, and points outside of the sphere have positive power._

**Definition 13**.: _In a packing of particles with positions \(\{\mathbf{r}_{i}\}\) and radii \(\sigma_{i}\), the radical Voronoi cell of particle 0 is the set \(R(\mathbf{r}_{0})=\{\mathbf{y}:\Pi_{\mathbf{r}_{0},\sigma_{0}}(\mathbf{y})\leq\Pi_{\mathbf{r}_{i},\sigma_{i}}(\mathbf{y})\ \forall i\}\)._

Trivially, we see that if all particles are the same size (i.e. \(\sigma_{i}=\sigma\) for all \(i\)), then the radical Voronoi cell reduces to that of the standard Voronoi cell. Both the radical Voronoi cell and, by extension, the Voronoi cell are convex polytopes, and it follows from the definitions that these cells tessellate space, i.e. there is no point in space which is not contained in the radical Voronoi cell of a particle, and the only points which can be contained in multiple radical Voronoi cells are on the shared surfaces of two or more cells.

## III Proofs of the stability theorems

In this section, we provide proofs of the two main stability theorems, labelled Theorem 6 (Sec. III.1) and Theorem 10 (Sec. III.2). While Sec. III.1 is entirely self contained, Sec. III.2 uses theorems from Sec. III.1. Some of the theorems are elementary or have been proven by simpler means elsewhere, but we formulate our own versions here, as we believe that they help to build the physical intuition for the main theorems.

### Stability via the convex hull

**Theorem 1** (The Krein-Milman theorem [40]).: _A compact convex subset of a Hausdorff locally convex topological vector space is equal to the closed convex hull of its extreme points._

The proof of this theorem is given in Ref. [40]. For the purposes of this work, we will use the fact that the standard vector space on \(\mathbb{R}^{d}\) with the Euclidean distance metric and standard inner product is a Hausdorff locally convex topological vector space. For clarification of these terms, we suggest any standard textbook on topology (for example, Ref. [38]).

Figure 3: Here we demonstrate the convex hull (orange) of a set of points. Points on the surface of the convex hull are colored black, while points not on the surface of the convex hull are in teal.

Figure 4: The maximum inscribed sphere (teal) of a convex polytope (purple) in \(d=2\). Contact points between the MIS and the polytope are shown with stars. (a) The generic case with no symmetries has \(d+1\) contact points between the polytope and the MIS. (b) When two of the contacting surfaces are parallel, it is possible to have an MIS with only 2 contacts in any dimension. (c) Highly symmetric polytopes (regular ones, as shown here, or those near jamming) may have an MIS with more than \(d+1\) contacts with the polytope.

**Corollary 1.1**.: _The convex hull of a set of points \(\mathrm{Conv}(\{\mathbf{r}_{i}\})\) is equal to the closed \(d\)-dimensional polytope whose vertices are the extreme points of \(\{\mathbf{r}_{i}\}\)._

Proof.: Given that the standard vector space on \(\mathbb{R}^{d}\) is a Hausdorff locally convex topological vector space, the Krein-Milman theorem states that a closed convex polytope is the convex hull of its extreme points, which for a convex polytope are its vertices.
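Definition 9 can also be checked numerically without reference to a specific hemisphere: a set of non-zero forces is non-cohemispheric exactly when the forces positively span \(\mathbb{R}^{d}\), which holds if and only if the origin lies strictly inside \(\mathrm{Conv}(\{\mathbf{f}_{i}\})\). A minimal sketch of this test, assuming a recent scipy and a tolerance appropriate to the force scale:

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def non_cohemispheric(forces, tol=1e-12):
    """True iff no halfspace {x : a . x >= 0} contains every force
    (Definition 9), i.e. the origin is strictly inside conv({f_i})."""
    F = np.asarray(forces, dtype=float)
    try:
        hull = ConvexHull(F)
    except QhullError:
        # Degenerate hull: the forces span a lower-dimensional
        # subspace and are therefore cohemispheric.
        return False
    # Facet equations satisfy normal . x + offset <= 0 inside the
    # hull, so the origin is strictly interior iff every offset < 0.
    return bool(np.all(hull.equations[:, -1] < -tol))
```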
**Corollary 1.2**.: _If \(\mathbf{r}_{0}\) is an extreme point of \(\{\mathbf{r}_{i}\}\) then \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\)._

Proof.: To prove that \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\), we must show that there exists a point \(\mathbf{s}_{\mathrm{out}}\in B_{\sigma}(\mathbf{r}_{0})\) such that \(\mathbf{s}_{\mathrm{out}}\notin\mathrm{Conv}(\{\mathbf{r}_{i}\})\). Because \(\mathbf{r}_{0}\) is an extreme point, there exists \(\mathbf{a}\in\mathbb{R}^{d}\) with at least one non-zero element and \(b\in\mathbb{R}\) such that \(\mathbf{a}\cdot\mathbf{r}_{0}-b>0\) while \(\mathbf{a}\cdot\mathbf{r}_{j}-b\leq 0\) for all \(\mathbf{r}_{j}\in\{\mathbf{r}_{i}\}\backslash\mathbf{r}_{0}\). We can thus construct \(\mathbf{s}_{\mathrm{out}}=\mathbf{r}_{0}+\frac{\sigma\mathbf{a}}{2|\mathbf{a}|}\), for which \(|\mathbf{r}_{0}-\mathbf{s}_{\mathrm{out}}|=\frac{\sigma}{2}\), and thus \(\mathbf{s}_{\mathrm{out}}\in B_{\sigma}(\mathbf{r}_{0})\). By construction, \(\mathbf{s}_{\mathrm{out}}\) is an extreme point of the set \(\{\mathbf{s}_{\mathrm{out}},\mathbf{r}_{i}\}\), and thus by Corollary 1.1, \(\mathbf{s}_{\mathrm{out}}\notin\mathrm{Conv}(\{\mathbf{r}_{i}\})\). This statement is true for any value of \(\sigma\), and thus \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\).

**Theorem 2**.: _A full dimensional convex polytope is equivalently defined by either its vertices (V-Representation) or the intersection of half-planes representing its surface (H-Representation)._

The proof of this theorem is contained in standard texts on convex polytopes, for example following the proofs of Theorems 3.1.1 and 3.1.2 of Ref. [39] or Theorem 1.1 of Ref. [37]. The theorem only applies to full dimensional polytopes (i.e. ones which are \(d\)-dimensional objects), but if the polytope is a \(d^{\prime}\) dimensional object, where \(d^{\prime}\neq d\), it is sufficient for our purposes to consider the V-Representation and the H-Representation in \(\mathbb{R}^{d^{\prime}}\), in which the polytope is full dimensional.

**Corollary 2.1**.: _A point \(\mathbf{r}_{0}\) contained on a \((d-1)\)-plane that defines a halfspace containing all \(\mathbf{r}_{i}\) is contained in \(\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\). That is, if there exists \(\mathbf{a}\in\mathbb{R}^{d}\) with at least one non-zero element and \(b\in\mathbb{R}\) such that \(\mathbf{a}\cdot\mathbf{r}_{0}-b=0\) and \(\mathbf{a}\cdot\mathbf{r}_{j}-b\leq 0\) for all \(\mathbf{r}_{j}\in\{\mathbf{r}_{i}\}\setminus\mathbf{r}_{0}\), then \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\)._

Proof.: There are two cases here which need to be proven. If \(\mathbf{r}_{0}\) is an extreme point, then \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\) by Corollary 1.2. If \(\mathbf{r}_{0}\) is not an extreme point, then the half-plane representation described here is equivalent to that defining the H-Representation of a convex polytope, and thus \(\mathbf{r}_{0}\in\mathrm{Conv}(\{\mathbf{r}_{i}\})\) by Theorem 2. The further statement that \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\{\mathbf{r}_{i}\})\) comes directly from the definition of a halfspace.

**Theorem 3**.: _Any set of \(d\) or fewer points on the surface of a sphere are cohemispheric.
That is, for a sphere centered at \(\mathbf{r}_{0}\) with radius \(\sigma_{0}\) and points \(\{\mathbf{c}_{i}\}\) satisfying \(|\mathbf{c}_{i}-\mathbf{r}_{0}|=\sigma_{0}\), there exists a vector \(\mathbf{a}\in\mathbb{R}^{d}\) such that \(\mathbf{a}\cdot(\mathbf{c}_{i}-\mathbf{r}_{0})\geq 0\) for all \(i\)._

Proof.: Here, we can relax the condition \(|\mathbf{c}_{i}-\mathbf{r}_{0}|=\sigma_{0}\) and prove a more general theorem. A hyperplane in \(\mathbb{R}^{d}\) can always be formed which passes through the \(d\) contact points. That is, there exist \(\mathbf{a}^{\prime}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) such that \(\mathbf{a}^{\prime}\cdot\mathbf{c}_{i}=b\) for all \(\mathbf{c}_{i}\). Note that if we construct a matrix \(C\) with rows \(\mathbf{c}_{i}\), then this hyperplane is not unique if \(\det(C)=0\), but any of the infinitely many solutions will suffice. We can define \(b^{\prime}\in\mathbb{R}\) by \(\mathbf{a}^{\prime}\cdot\mathbf{r}_{0}=b^{\prime}\), then \(\mathbf{a}^{\prime}\cdot(\mathbf{c}_{i}-\mathbf{r}_{0})=b-b^{\prime}\). If \(b^{\prime}\leq b\), then \(b-b^{\prime}\geq 0\), and we can take \(\mathbf{a}=\mathbf{a}^{\prime}\), whereupon the theorem is proven. If \(b^{\prime}>b\), then we can take \(\mathbf{a}=-\mathbf{a}^{\prime}\), whereupon the theorem is proven.

From this, we note that the minimal number of points \(\mathbf{c}_{i}\) for which this theorem no longer holds is \(d+1\). This is not to say that _any_ \(d+1\) points on the surface are non-cohemispheric (see, for example, Fig. 5a), but to state that _the minimal_ number of points on a sphere which are non-cohemispheric is \(d+1\).

**Theorem 4**.: _Given \(\mathbf{f}_{i1}\in\{\mathbf{f}_{i}\}\) and \(\mathbf{a}\in\mathbb{R}^{d}\) where \(\mathbf{f}_{i1}\neq\mathbf{0}\), \(\mathbf{a}\neq\mathbf{0}\), and \(\mathbf{a}\cdot\mathbf{f}_{i1}\neq 0\), \(\sum_{i}\mathbf{f}_{i}=\mathbf{0}\) only if there exists \(\mathbf{f}_{i2}\in\{\mathbf{f}_{i}\}\setminus\mathbf{f}_{i1}\) such that \(\mathrm{sign}(\mathbf{a}\cdot\mathbf{f}_{i1})=-\mathrm{sign}(\mathbf{a}\cdot\mathbf{f}_{i2})\)._

Proof.: Here we project \(\mathbf{a}\) onto the sum yielding \(\sum_{i}(\mathbf{a}\cdot\mathbf{f}_{i})=\mathbf{a}\cdot\mathbf{f}_{i1}+\sum_{i\neq i1}(\mathbf{a}\cdot\mathbf{f}_{i})=0\). This last equality can only be true if there is at least one element of the sum which is of the opposite sign of \(\mathbf{a}\cdot\mathbf{f}_{i1}\), implying that there exists \(\mathbf{f}_{i2}\in\{\mathbf{f}_{i}\}\setminus\mathbf{f}_{i1}\) such that \(\mathrm{sign}(\mathbf{a}\cdot\mathbf{f}_{i1})=-\mathrm{sign}(\mathbf{a}\cdot\mathbf{f}_{i2})\).

This theorem is meant to be a vector extension of the trivial theorem that a sum of numbers can only be zero if either all elements are zero, or if it contains both positive and negative elements. Setting aside the null case, this theorem simply states that a sum of vectors with at least one non-zero element can only be zero if it contains positive and negative elements when projected onto (almost) any axis. A mild caveat must be added, namely that the projection is not onto a vector normal to a chosen non-zero vector in the set. This caveat is only a formality as the projecting vector \(\mathbf{a}\) is arbitrary.

**Corollary 4.1**.: _A particle with zero net force and at least \(d+1\) non-cohemispheric non-zero forces is locally stable._

Remark: This theorem applies more generally to both point particles and any shape of particle with forces pointing towards its center of mass.
Such a particle will be stable to translations, but not to rotations.

Proof.: We label the set of non-zero forces \(\{\mathbf{f}_{i}\}\) and the particle center by \(\mathbf{r}\). Note that the minimum number of vectors needed to span \(\mathbb{R}^{d}\) is \(d\), so a particle is unstable with fewer than \(d\) forces acting upon it. Furthermore, a particle with \(d\) forces acting upon it is unstable by Theorem 3, as these forces are necessarily cohemispheric, and thus there exists \(\mathbf{a}\in\mathbb{R}^{d}\) such that \(\mathbf{a}\cdot\mathbf{f}_{i}\geq 0\) for all \(i\). By Theorem 4, \(\sum\mathbf{f}_{i}\neq\mathbf{0}\) unless \(\mathbf{f}_{i}=\mathbf{0}\) for all \(i\), and thus a particle with \(d\) non-zero forces acting upon it is unstable. By definition, if there are \(d+1\) non-cohemispheric non-zero forces, then no \(\mathbf{a}\) exists for which \(\mathbf{a}\cdot\mathbf{f}_{i}\geq 0\) for all \(i\). Thus, for all \(\mathbf{a}\in\mathbb{R}^{d}\) with at least one non-zero element, Theorem 4 states that there will be positive and negative projections, and thus the net force can sum to zero without all forces being trivially zero, and thus the particle is locally stable.

**Theorem 5**.: _A particle with center \(\mathbf{r}_{0}\) and with contacting particles centered at \(\{\mathbf{r}_{i}\}\) is unstable if \(\mathbf{r}_{0}\in\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\)._

Proof.: We have two instances to prove. If \(\mathbf{r}_{0}\) is an extreme point, then by Definition 4, there exist \(\mathbf{a}\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}\) where \(\mathbf{a}\cdot\mathbf{r}_{0}-b>0\) while \(\mathbf{a}\cdot\mathbf{r}_{j}-b\leq 0\) for all \(\mathbf{r}_{j}\in\{\mathbf{r}_{i}\}\setminus\mathbf{r}_{0}\). The contact forces on \(\mathbf{r}_{0}\) are all of the form \(\mathbf{f}_{j}=c_{j}(\mathbf{r}_{0}-\mathbf{r}_{j})\) with \(c_{j}\in\mathbb{R}\) and \(c_{j}\geq 0\). Thus \(\sum_{j}\mathbf{f}_{j}=\sum_{j}c_{j}(\mathbf{r}_{0}-\mathbf{r}_{j})\). Taking the projection on \(\mathbf{a}\), we have \(\sum_{j}\mathbf{a}\cdot\mathbf{f}_{j}=\sum_{j}c_{j}(\mathbf{a}\cdot\mathbf{r}_{0}-\mathbf{a}\cdot\mathbf{r}_{j})\). Since \(\mathbf{a}\cdot\mathbf{r}_{0}-\mathbf{a}\cdot\mathbf{r}_{j}=(\mathbf{a}\cdot\mathbf{r}_{0}-b)-(\mathbf{a}\cdot\mathbf{r}_{j}-b)>0\), the non-zero terms are all positive, meaning that the sum cannot be zero unless all \(c_{j}\) are zero. Thus, by Theorem 4, either \(\sum_{j}\mathbf{f}_{j}\neq\mathbf{0}\), or \(\mathbf{f}_{j}=\mathbf{0}\) for all \(j\). Either condition means that the particle is unstable. If \(\mathbf{r}_{0}\) were not an extreme point, then the sum \(\sum_{j}\mathbf{a}\cdot\mathbf{f}_{j}\) could only be \(0\) if \(\mathbf{a}\cdot\mathbf{r}_{j}-b=0\) for all \(j\). These forces would then all be cohemispheric, and thus by Theorem 4, either \(\sum_{j}\mathbf{f}_{j}\neq\mathbf{0}\), or \(\mathbf{f}_{j}=\mathbf{0}\) for all \(j\), and thus the particle is unstable.

Here we note that this is a sufficient condition for instability, and not a necessary one. If \(\mathbf{r}_{0}\) is out of force balance with neighboring contacts \(\{\mathbf{r}_{i}\}\), but \(\mathbf{r}_{0}\notin\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\), then \(\mathbf{r}_{0}\) is still unstable.
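Theorem 5 translates directly into a numerical test with an off-the-shelf convex hull routine: \(\mathbf{r}_{0}\) is on \(\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\) unless it sits strictly inside every facet halfspace of the hull. A minimal sketch, assuming scipy; a degenerate (lower-dimensional) hull places every point on the surface and hence signals instability:

```python
import numpy as np
from scipy.spatial import ConvexHull, QhullError

def on_hull_surface(r0, neighbors, tol=1e-10):
    """Theorem 5: is r0 on the surface of Conv(r0, {r_i})?"""
    r0 = np.asarray(r0, dtype=float)
    pts = np.vstack([r0[None, :], np.asarray(neighbors, dtype=float)])
    try:
        hull = ConvexHull(pts)
    except QhullError:
        return True  # degenerate hull: every point is on its surface
    # Facet equations satisfy normal . x + offset <= 0 inside the hull;
    # r0 is strictly interior iff all facet values are negative.
    vals = hull.equations[:, :-1] @ r0 + hull.equations[:, -1]
    return bool(np.any(vals > -tol))
```

Coupled with a check that the contact forces sum to zero, `not on_hull_surface(...)` is exactly the geometric half of the criterion in Theorem 6 below.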
**Theorem 6**.: _A particle with center \(\mathbf{r}_{0}\) and with a non-empty set of stable contacting particles centered at \(\{\mathbf{r}_{i}\}\) is locally stable if and only if \(\mathbf{r}_{0}\notin\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\) and the sum of forces acting on the particle is zero._

Proof.: The statement that \(\mathbf{r}_{0}\) is locally stable if \(\mathbf{r}_{0}\notin\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\), and the sum of all forces acting on the particle from \(\{\mathbf{r}_{i}\}\) is zero, follows from a recursive application of Definition 3 and Theorem 5. Next, we must prove that \(\mathbf{r}_{0}\notin\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\) with stable contacts \(\{\mathbf{r}_{i}\}\) and zero net force implies that \(\mathbf{r}_{0}\) is locally stable and thus has a set of stable forces acting on the particle centered at \(\mathbf{r}_{0}\) which both span \(\mathbb{R}^{d}\) and sum to zero. Because \(\mathbf{r}_{0}\notin\partial\mathrm{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{i}\})\), we know that \(\mathbf{r}_{0}\) is neither an extreme point of the convex hull, nor is it on the surface. Thus no \(\mathbf{a}\) exists for which the contact forces, labelled \(\{\mathbf{f}_{i}\}\), have the property \(\mathbf{a}\cdot\mathbf{f}_{i}\geq 0\) for all \(i\). These forces are thus non-cohemispheric, and so from Theorem 3, there must be at least \(d+1\) of them. And because this particle has zero net force acting upon it, by Corollary 4.1, the particle is locally stable.

An illustration of this theorem is given in Fig. 5.

### Stability via the radical Voronoi cell

**Theorem 7**.: _If \(i\) and \(j\) are hard particles with centers \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\) and radii \(\sigma_{i}\) and \(\sigma_{j}\) and \(h_{ij}=0\), then \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\) contains exactly one point \(\mathbf{c}_{ij}\) where \(\mathbf{c}_{ij}\in\partial R(\mathbf{r}_{i})\) and \(\mathbf{c}_{ij}\in\partial R(\mathbf{r}_{j})\)._

Proof.: We define

\[\mathbf{c}_{ij}=\mathbf{r}_{i}+\sigma_{i}\frac{\mathbf{r}_{j}-\mathbf{r}_{i}}{|\mathbf{r}_{j}-\mathbf{r}_{i}|} \tag{3}\]

and note that \(|\mathbf{c}_{ij}-\mathbf{r}_{i}|=\sigma_{i}\) so that \(\mathbf{c}_{ij}\in\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\) and \(|\mathbf{c}_{ij}-\mathbf{r}_{j}|=|\mathbf{r}_{j}-\mathbf{r}_{i}|-\sigma_{i}\). We then note that \(h_{ij}=0\) implies \(\sigma_{j}=|\mathbf{r}_{j}-\mathbf{r}_{i}|-\sigma_{i}\), and thus \(|\mathbf{c}_{ij}-\mathbf{r}_{j}|=\sigma_{j}\) and so \(\mathbf{c}_{ij}\in\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\). To show that the intersection contains only one point, we assume that \(\mathbf{c}^{\prime}_{ij}\in\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\) so that \(|\mathbf{c}^{\prime}_{ij}-\mathbf{r}_{i}|\leq\sigma_{i}\) and \(|\mathbf{c}^{\prime}_{ij}-\mathbf{r}_{j}|\leq\sigma_{j}\), but \(\mathbf{c}^{\prime}_{ij}\neq\mathbf{c}_{ij}\) (as in Fig. 6). By the triangle inequality, \(|\mathbf{r}_{j}-\mathbf{r}_{i}|\leq|\mathbf{r}_{i}-\mathbf{c}^{\prime}_{ij}|+|\mathbf{r}_{j}-\mathbf{c}^{\prime}_{ij}|\), which becomes the degenerate statement \(\sigma_{i}+\sigma_{j}\leq\sigma_{i}+\sigma_{j}\).
The degeneracy implies a triangle of zero area, so that \(\mathbf{c}^{\prime}_{ij}\) lies on the line between \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\), and by simple algebra, we find that \(\mathbf{c}^{\prime}_{ij}=\mathbf{c}_{ij}\). This is a contradiction, and thus the intersection \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\) contains only one point. To show that \(\mathbf{c}_{ij}\in R(\mathbf{r}_{i})\) and \(\mathbf{c}_{ij}\in R(\mathbf{r}_{j})\), we calculate the power of \(\mathbf{c}_{ij}\) with respect to each sphere. Here we find that \(\Pi_{\mathbf{r}_{i},\sigma_{i}}(\mathbf{c}_{ij})=\Pi_{\mathbf{r}_{j},\sigma_{j}}(\mathbf{c}_{ij})=0\). The only lower power would be negative (interior of a sphere), and because these are hard spheres, that is not possible. Thus, \(\mathbf{c}_{ij}\in R(\mathbf{r}_{i})\) and \(\mathbf{c}_{ij}\in R(\mathbf{r}_{j})\).

Figure 5: We test whether the blue particle is stable by looking at the convex hull (red) of its own center and the centers of its stable neighboring particles (in black). Note that there may be other contacts with the blue particle which have been determined to be unstable and are thus not shown. In (a) the blue particle is unstable, because its center lies on the surface of the convex hull. In (b) the blue particle is stable, because its center is not on the surface of the convex hull.

Figure 6: The radical Voronoi diagram is shown between contacting particles \(i\) and \(j\) with contact point \(\mathbf{c}_{ij}\). A second contact point between the two particles \(\mathbf{c}^{\prime}_{ij}\) is assumed, so that we can show \(\mathbf{c}^{\prime}_{ij}=\mathbf{c}_{ij}\) via the triangle inequality.

**Corollary 7.1**.: _In a hard particle system, \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\) contains only the contact points between particle \(i\) and its contacting neighbors, centered at \(\{\mathbf{r}_{j}\}\)._

Proof.: We know from Theorem 7 that \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\) contains the contact points between particle \(i\) and its contacting neighbors, so we need now only show that it contains no other points. Suppose \(\mathbf{b}\in\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\) and that \(\mathbf{b}\neq\mathbf{c}_{ij}\) from Eq. (3) for any \(j\). Points on \(\partial R(\mathbf{r}_{i})\) have equal power with respect to at least one other sphere, which we will generically call \(\mathbf{r}_{j}\). We have so far covered the case of zero power, and now consider points with negative power. As per Definition 12, points of negative power are on the interior of both spheres, i.e. \(\mathbf{b}\in B_{\sigma_{i}}(\mathbf{r}_{i})\cap B_{\sigma_{j}}(\mathbf{r}_{j})\), but because \(i\) and \(j\) are hard spheres \(B_{\sigma_{i}}(\mathbf{r}_{i})\cap B_{\sigma_{j}}(\mathbf{r}_{j})=\emptyset\). Thus points of negative power are not in the intersection \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\). Points of positive power are not contained within \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\) and are thus not in the intersection \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\). Therefore, \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\partial R(\mathbf{r}_{i})\) contains only the contact points between particle \(i\) and its contacting neighbors, centered at \(\{\mathbf{r}_{j}\}\).
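Both ingredients of this construction — the tangency point of Eq. (3) and the power of Definition 12 — are one-line computations; a minimal sketch:

```python
import numpy as np

def contact_point(ri, si, rj):
    """Eq. (3): the unique tangency point of hard particles i and j."""
    ri = np.asarray(ri, dtype=float)
    u = np.asarray(rj, dtype=float) - ri
    return ri + si * u / np.linalg.norm(u)

def power(c, r, sigma):
    """Definition 12: the power of point c w.r.t. the sphere (r, sigma)."""
    d = np.asarray(c, dtype=float) - np.asarray(r, dtype=float)
    return float(d @ d) - sigma ** 2
```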
**Theorem 8**.: _In a convex region \(K\), if \(\overline{B}_{\sigma}(\mathbf{a})\subset K\) and \(\overline{B}_{\sigma}(\mathbf{b})\subset K\), then \(\overline{B}_{\sigma}(\mathbf{c})\subset K\) for all \(\mathbf{c}=(1-t)\mathbf{a}+t\mathbf{b}\) where \(t\in[0,1]\)._

Proof.: From Definition 5, this property is true for every individual point within the closed ball (each point \(\mathbf{c}+\mathbf{v}\) with \(|\mathbf{v}|\leq\sigma\) lies on the straight line between \(\mathbf{a}+\mathbf{v}\in K\) and \(\mathbf{b}+\mathbf{v}\in K\)), so it is true for the closed ball itself. An illustration of the concept is given in Fig. 7, where every ball contained on the line between \(\mathbf{a}\) and \(\mathbf{b}\) is contained in the convex region if the closed balls centered at \(\mathbf{a}\) and \(\mathbf{b}\) are contained in the region.

Remark: We note that a further generalization of Theorem 8 is true when we have balls of different radii at the endpoints, \(\overline{B}_{\sigma_{a}}(\mathbf{a})\) and \(\overline{B}_{\sigma_{b}}(\mathbf{b})\), where then the interpolated ball has radius \(\sigma_{c}=(1-t)\sigma_{a}+t\sigma_{b}\). This generalization is, however, not necessary for our purposes and would potentially obscure the results.

**Theorem 9**.: _If \(M(R(\mathbf{r}_{0}))\) is not unique in a hard particle system, then the particle centered at \(\mathbf{r}_{0}\) is not locally stable._

Proof.: We assume \(M(R(\mathbf{r}_{0}))\) is not unique, such that \(\overline{B}_{\sigma}(\mathbf{r}_{1})\subset R(\mathbf{r}_{0})\) and \(\overline{B}_{\sigma}(\mathbf{r}_{2})\subset R(\mathbf{r}_{0})\) with \(\mathbf{r}_{1}\neq\mathbf{r}_{2}\) and there is no solution to \(\overline{B}_{\sigma^{\prime}}(\mathbf{r}_{3})\subset R(\mathbf{r}_{0})\) where \(\sigma^{\prime}>\sigma\). We then assume that the particle centered at \(\mathbf{r}_{0}\) is locally stable and try to find a contradiction. If the particle is stable, there exist at least \(d+1\) non-cohemispheric points \(\mathbf{c}_{ij}\) given by Eq. (3) which, by Theorem 7, have the property \(\mathbf{c}_{ij}\in\overline{B}_{\sigma_{0}}(\mathbf{r}_{0})\cap\partial R(\mathbf{r}_{0})\). Because the particle centered at \(\mathbf{r}_{0}\) is fully locally constrained, there exist no dilations or translations which maintain the hard sphere condition. We now have two scenarios to consider, which each contain a contradiction: \(\sigma<\sigma_{0}\) and \(\sigma\geq\sigma_{0}\). If \(\sigma<\sigma_{0}\), then neither \(\overline{B}_{\sigma}(\mathbf{r}_{1})\) nor \(\overline{B}_{\sigma}(\mathbf{r}_{2})\) represents the MIS, because \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{0})\subset R(\mathbf{r}_{0})\) has a larger radius. If \(\sigma\geq\sigma_{0}\), then \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{1})\subset\overline{B}_{\sigma}(\mathbf{r}_{1})\subset R(\mathbf{r}_{0})\). Theorem 8 (together with \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{0})\subset R(\mathbf{r}_{0})\)) states that all closed balls of radius \(\sigma_{0}\) on the straight line between \(\mathbf{r}_{0}\) and \(\mathbf{r}_{1}\) are also contained in \(R(\mathbf{r}_{0})\). However, because the particle centered at \(\mathbf{r}_{0}\) with radius \(\sigma_{0}\) is stable, no nontrivial translation \(T\) exists such that \(T(\overline{B}_{\sigma_{0}}(\mathbf{r}_{0}))\subset R(\mathbf{r}_{0})\). Because no case relating \(\sigma\) and \(\sigma_{0}\) exists without a contradiction, this implies that if \(M(R(\mathbf{r}_{0}))\) is not unique in a hard particle system, then the particle centered at \(\mathbf{r}_{0}\) is not locally stable.

A packing with highly degenerate (non-unique) maximum inscribed spheres is illustrated in Fig. 8, where clearly the particles are not stable.
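Theorem 9, and Theorem 10 below, suggest a practical test: assemble the radical Voronoi cell of a particle from its power bisectors and compare the cell's maximum inscribed sphere to the particle itself. The MIS of a polytope in H-representation is the classic Chebyshev-center linear program. The sketch below is our own illustration, assuming scipy; for brevity it builds the cell from all other particles rather than a neighbor list, assumes the cell is bounded, and does not verify uniqueness of the MIS (a single LP solve returns only one optimum):

```python
import numpy as np
from scipy.optimize import linprog

def radical_cell(i, centers, radii):
    """H-representation {y : A y <= b} of the radical Voronoi cell of
    particle i (Definition 13), one power-bisector halfspace per j.
    centers: sequence of numpy d-vectors; radii: sequence of floats."""
    ri, si = centers[i], radii[i]
    A, b = [], []
    for j, (rj, sj) in enumerate(zip(centers, radii)):
        if j != i:
            A.append(2.0 * (rj - ri))
            b.append(rj @ rj - ri @ ri - sj ** 2 + si ** 2)
    return np.array(A), np.array(b)

def max_inscribed_sphere(A, b):
    """Chebyshev center: maximize r subject to A y + |a_k| r <= b."""
    d = A.shape[1]
    c = np.zeros(d + 1)
    c[-1] = -1.0  # linprog minimizes, so minimize -r
    A_ub = np.hstack([A, np.linalg.norm(A, axis=1)[:, None]])
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * d + [(0.0, None)])
    return res.x[:d], res.x[-1]

def mis_matches_particle(i, centers, radii, tol=1e-8):
    """Necessary part of Theorem 10: the MIS coincides with the
    particle itself (uniqueness of the MIS is not verified here)."""
    A, b = radical_cell(i, centers, radii)
    y, r = max_inscribed_sphere(A, b)
    return np.linalg.norm(y - centers[i]) < tol and abs(r - radii[i]) < tol
```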
**Corollary 9.1**.: _If \(M(K)\) is unique for a polytope \(K\), then \(M(K)\cap\partial K\) contains at least \(d+1\) non-cohemispheric points._

Proof.: If \(M(K)\) is unique, then no nontrivial translation \(T\) exists such that \(T(M(K))\subset K\). Thus \(M(K)\) is fully constrained by the boundary \(\partial K\). By Corollary 4.1, if we impose a fictive force on \(M(K)\) from each point of contact \(\{\mathbf{c}_{i}\}\) between \(M(K)\) and \(\partial K\), then there must be at least \(d+1\) non-cohemispheric \(\mathbf{c}_{i}\) for \(M(K)\) to be stable. Thus \(M(K)\cap\partial K\) contains at least \(d+1\) non-cohemispheric points.

Figure 7: An illustration of the fact that if two closed balls exist within a convex region (centered at \(\mathbf{a}\) and \(\mathbf{b}\) respectively), every closed ball on the line between the two is also contained in the region.

Figure 8: An example of a radical Voronoi diagram (orange lines) for a set of particles (blue) which is highly degenerate. Here, because the MIS of each particle is not unique despite having radii equal to that of the particles, none of the particles are locally stable.

**Theorem 10**.: _In a packing of hard particles, a particle with center \(\mathbf{r}_{0}\) and radius \(\sigma_{0}\) is locally stable if and only if \(M(R(\mathbf{r}_{0}))\) is unique and has center \(\mathbf{r}_{0}\) and radius \(\sigma_{0}\)._

Proof.: First, we must prove that in a hard sphere system, a particle with center \(\mathbf{r}_{0}\) and radius \(\sigma_{0}\) being locally stable implies that \(M(R(\mathbf{r}_{0}))\) is unique and has center \(\mathbf{r}_{0}\) and radius \(\sigma_{0}\). Following the logic of the proof of Theorem 9, we assume that \(M(R(\mathbf{r}_{0}))\) has center \(\mathbf{r}_{1}\) and radius \(\sigma_{1}\) with \(\mathbf{r}_{1}\neq\mathbf{r}_{0}\) and \(\sigma_{1}\neq\sigma_{0}\) and find a contradiction to show that \(\mathbf{r}_{1}=\mathbf{r}_{0}\) and \(\sigma_{1}=\sigma_{0}\). If \(\sigma_{1}<\sigma_{0}\), then this does not correspond to the maximum inscribed sphere. If \(\sigma_{1}\geq\sigma_{0}\), then \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{1})\subset\overline{B}_{\sigma_{1}}(\mathbf{r}_{1})\subset R(\mathbf{r}_{0})\) and thus by Theorem 8, \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{2})\subset R(\mathbf{r}_{0})\) for all \(\mathbf{r}_{2}\) on a straight line between \(\mathbf{r}_{0}\) and \(\mathbf{r}_{1}\). But because \(\overline{B}_{\sigma_{0}}(\mathbf{r}_{0})\) is locally stable, no translations or dilations exist which remain in \(R(\mathbf{r}_{0})\), so \(\mathbf{r}_{1}=\mathbf{r}_{0}\) and \(\sigma_{1}=\sigma_{0}\).

Second, we must prove that in a hard sphere system, for a particle centered at \(\mathbf{r}_{0}\) with radius \(\sigma_{0}\), \(M(R(\mathbf{r}_{0}))\) being unique and having center \(\mathbf{r}_{0}\) and radius \(\sigma_{0}\) implies that the particle is stable. This follows immediately from Corollary 9.1, as the particle has at least \(d+1\) non-cohemispheric points of contact with \(\partial R(\mathbf{r}_{0})\), which by Corollary 7.1, correspond to contacts with neighboring particles. Thus, by Corollary 4.1, the particle centered at \(\mathbf{r}_{0}\) with radius \(\sigma_{0}\) is stable.

## IV Algorithmic Complexity

Theorems 6 and 10 provide a natural recursive algorithm for determining the stable set of particles in a packing, and, through its complement, the set of rattlers.
The algorithm begins with a tentative statement that all particles are stable, and it loops over each particle testing for stability, taking the function \(\mathrm{isStable}(i)\) from either Theorem 6, Theorem 10, or Eq. 12 of Ref. [10], considering only the stable set of particles. The algorithm ends when no changes are made to the stable list in a full loop (a minimal code sketch of this loop is given at the end of Sec. V). The structure of the algorithm is similar to that of Ref. [10], and as expected, it produces an identical stable list. The worst-case scenario for this algorithm is a packing in which only a single particle is initially unstable, but its removal destabilizes one of its neighbors, and so on. Such a situation will require \(N\) iterations through the algorithm, each of which takes \(\mathcal{O}(N)\) time, yielding a total worst case runtime of \(\mathcal{O}(N^{2})\). We note, however, that no typical case approaches this complexity. The method of Ref. [11], meanwhile, scales as at least \(\mathcal{O}(d^{3}N^{3})\) [34].

The only difference between the methods of Ref. [10], Theorem 6, and Theorem 10 is the speed of the function \(\mathrm{isStable}(i)\). For a particle with \(n\) contacting particles (where \(n\sim\mathcal{O}(d)\)), the linear programming method scales as \(\mathcal{O}(n^{2+a})\) where \(a=\frac{1}{18}\) [41], while the convex hull scales as \(\mathcal{O}(n^{\lfloor d/2\rfloor})\) in the worst case scenario, where \(\lfloor\cdot\rfloor\) is the floor function [42]. The radical Voronoi diagram for an individual cell can be computed in \(\mathcal{O}(n^{\lceil d/2\rceil})\) where \(\lceil\cdot\rceil\) is the ceiling function. Thus while the radical Voronoi method is slower than the convex hull method in odd dimensions, it is of the same order in even dimensions. The calculation of the MIS is then either a linear programming problem [43; 44] or a minimization problem [13] whose complexity has not yet been interrogated. The worst case scenario then makes this calculation the rate determining step, and it is thus no faster than the linear programming methods of Ref. [10]. By comparison, we see that the convex hull algorithm is faster than the linear programming algorithm for at least \(d<6\).

Figure 9: The radical Voronoi diagram (black lines) is computed for a set of particles of different radii with contacts displayed as blue lines. All blue particles, labelled \(i\), have \(M(R(\mathbf{r}_{i}))=\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\) and are thus stable. The red particle, which we will call \(0\), is a rattler, and its MIS is shown as a dashed magenta line. We see clearly that \(M(R(\mathbf{r}_{0}))\neq\overline{B}_{\sigma_{0}}(\mathbf{r}_{0})\).

## V Further extensions

We have shown that the convex hull and the radical Voronoi cell can be used to quickly determine the stability of individual spheres in a packing with only minimal requirements on the interparticle potential. It is straightforward to show that the construction can be applied more generally in a variety of cases. Here, we list several:

1. In a spring network under compression, an individual node is unstable if it is on the surface of the convex hull of its connecting nodes.
2. A particle of any shape is unstable if the only forces acting on it are point forces directed towards its center of mass, and the center of mass is on the surface of the convex hull of the contact points and the center of mass.
3. In Mari-Kurchan (MK) interactions [45; 46], where the distance between particles is given by \(h^{\rm MK}_{ij}=\frac{|\mathbf{r}_{i}-\mathbf{r}_{j}+\boldsymbol{\Lambda}_{ij}|}{\sigma_{i}+\sigma_{j}}\) where \(\boldsymbol{\Lambda}_{ij}\) is a random vector with \(\boldsymbol{\Lambda}_{ij}=-\boldsymbol{\Lambda}_{ji}\), a particle \(\mathbf{r}_{0}\) with contacts \(\{\mathbf{r}_{j}\}\) is unstable if \(\mathbf{r}_{0}\in\partial\text{Conv}(\mathbf{r}_{0},\{\mathbf{r}_{j}+\boldsymbol{\Lambda}_{0j}\})\). This method was used in Ref. [34]. Note that this is true despite the MK interaction not technically being a central force potential.
4. Several recent studies have analyzed soft sphere systems during energy minimization [47; 48; 49; 31; 34], wherein it may be important to study the evolution of rattlers and stable subsystems. Here, the convex hull theorem may be used, with the additional caveat that a particle is only locally stable if the sum of all forces acting on it is zero, and if the forces acting on it span \(\mathbb{R}^{d}\).
5. Following the logic of Sec. III.2, we conjecture that Theorem 10 also holds for additively-weighted Voronoi cells and any generalization of Voronoi cells \(G\) for which the contact point of two hard spheres (\(i\) and \(j\)) is contained on the surface of the generalized Voronoi cell, i.e. \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\in\partial G(\mathbf{r}_{i})\) and \(\overline{B}_{\sigma_{i}}(\mathbf{r}_{i})\cap\overline{B}_{\sigma_{j}}(\mathbf{r}_{j})\in\partial G(\mathbf{r}_{j})\). However, these cells are generically non-convex, and so some of the tools we have used do not suffice.

These extensions show the utility of our methods, which extend beyond simple sphere packings. It is our hope that this work not only provides a simple computational tool, but helps to illuminate the interplay between geometry and mechanical rigidity.
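As a concrete companion to the recursive procedure of Sec. IV (referenced there), a minimal sketch of the pruning loop follows. The data layout is our own, and the stability oracle may be the convex hull test of Theorem 6, the MIS test of Theorem 10, or the linear program of Ref. [10]:

```python
def stable_set(centers, contacts, is_stable):
    """Recursive pruning (Sec. IV): start with every particle
    tentatively stable and re-test each against only its currently
    stable contacts until a full pass makes no change. Worst case:
    O(N) passes of O(N) tests, i.e. O(N^2) oracle calls."""
    stable = set(range(len(centers)))
    changed = True
    while changed:
        changed = False
        for i in list(stable):
            nbrs = [centers[j] for j in contacts[i] if j in stable]
            if not is_stable(centers[i], nbrs):
                stable.discard(i)
                changed = True
    return stable  # rattlers are the complement of this set
```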
2304.13500
Subspace Coding for Error Control and Interference Mitigation in Routing Solutions with Cooperative Destination Nodes
Random Linear Network Coding (RLNC) is a transmission scheme that opts for linear combinations of the transmitted packets at a subset of the intermediate nodes. This scheme is usually considered when Network Coding (NC) is desired over non-coherent networks. In order to integrate error correction in RLNC, subspace codes have been proposed. The codewords in those codes are unchanged under rank-preserving row operations, making them convenient for RLNC. In this paper, we investigate the use of those codes for the interference channel in a system model consisting of an array of SISO communication systems with cooperative destination nodes. This system model is deemed as a simplified model of a network with a routing based transmission scheme. Results have indicated that the use of subspace codes has allowed for better performance in terms of the decoding failure probability.
Amine Brahimi, Fatiha Merazka
2022-12-24T01:08:56Z
http://arxiv.org/abs/2304.13500v1
# Subspace Coding for Error Control and Interference Mitigation in Routing Solutions with Cooperative Destination Nodes

###### Abstract

Random Linear Network Coding (RLNC) is a transmission scheme that opts for linear combinations of the transmitted packets at a subset of the intermediate nodes. This scheme is usually considered when Network Coding (NC) is desired over non-coherent networks. In order to integrate error correction in RLNC, subspace codes have been proposed. The codewords in those codes are unchanged under rank-preserving row operations, making them convenient for RLNC. In this paper, we investigate the use of those codes for the interference channel in a system model consisting of an array of SISO communication systems with cooperative destination nodes. This system model is deemed as a simplified model of a network with a routing based transmission scheme. Results have indicated that the use of subspace codes has allowed for better performance in terms of the decoding failure probability.

Subspace codes, interference channel, network coding, error control, SISO, routing.

## I Introduction

Classical routing protocols are based on the store-and-forward mechanism [1], where intermediate nodes are not allowed to alter the contents of their received packets. In fact, the payload of the transmitted packets in those systems is treated as a commodity flow and therefore packets are not to be combined or altered for the whole transmission session. In the seminal paper [2], the authors introduced Network Coding (NC) as a paradigm for data transmission where packets may be combined during transmission to achieve higher throughput. The main contribution of [2] may be seen as the leap that the authors made in how data is treated. In this regard, data is no longer considered as a commodity flow but rather as information flow. Using NC for practical applications was still challenging until Linear NC (LNC) was introduced in [3]. In this scheme, packets are represented as vectors from a finite field \(\mathbb{F}_{q}\) and encoding operations are just linear transformations of the original packets using coefficients from the underlying finite field. While this scheme was practically feasible, it still required a priori knowledge of the network topology for the proper selection of the encoding coefficients. This limitation made LNC undesired in non-coherent situations where both the source and destination nodes are topology-oblivious. Random LNC (RLNC) [4, 5] was later proposed as a solution to this problem. In this setting, encoding coefficients are no longer deterministically selected but rather randomly chosen from a sufficiently large finite field. Note that the size of the finite field is important to minimize the probability of choosing an encoding coefficient that results in a linear dependency between the encoded packets [6]. While NC schemes have been proven to outperform classical routing solutions in terms of throughput [7, 8], security [9, 10], and other metrics, they still face a set of challenges, of which error propagation is the most critical. This is essentially because packets are mixed in NC, which allows errors to propagate from corrupt packets to valid ones. In RLNC, subspace codes [11, 12] have been proposed for error control due to their properties that match those of how information is transmitted in RLNC.
In those codes, codewords are seen as vector spaces from an ambient vector space over some finite field \(\mathbb{F}_{q}\), where \(q\) is a prime power. The correction capability of subspace codes is not affected by rank-preserving row operations, and they provide error correction up to a certain number of row deletions and insertions, which makes them optimal for RLNC-based networks.

The use of subspace codes is not generally considered when classical routing solutions are used instead of RLNC. In this paper, we investigate the effects of their deployment in those systems as compared to other classical error correction solutions. We prove that there exist situations where subspace codes may provide a degree of resilience against both network errors and crosstalk for classical routing systems with cooperative destination nodes.

The remainder of this paper is organized as follows. In Section II, we provide some background on subspace codes as well as the notation that is used throughout this paper. Section III will be dedicated to the system model, on which our work is based. In Section IV, we show how subspace codes can be used to provide both error control and interference mitigation for our system, and Section V will constitute a general conclusion to the work done in this paper as well as a set of perspectives for eventual future work.

## II Preliminaries

A finite field with \(q\) elements, with \(q=p_{r}^{m}\), where \(p_{r}\) is a prime number and \(m\in\mathbb{N}^{*}\), is denoted by \(\mathbb{F}_{q}\). The \(n\)-dimensional vector space over \(\mathbb{F}_{q}\) is denoted by \(\mathbb{F}_{q}^{n}\) where \(n\in\mathbb{N}^{*}\). Over this vector space, we define the projective space \(\mathcal{P}(n)\), which is the set of all subspaces of the vector space \(\mathbb{F}_{q}^{n}\). The term "projective" comes from its geometrical meaning and uses in projective geometry, where it is also defined as the projective geometry of dimension \(n-1\) over \(\mathbb{F}_{q}\). A subset of this space with the criterion that all subspaces are of equal dimension is called a Grassmannian or Grassmann variety to link it to its geometrical properties. As for notation, a Grassmannian whose elements are the \(k\)-dimensional subspaces of \(\mathbb{F}_{q}^{n}\) is denoted by \(\mathcal{G}(k,n)\). The relationship between the Grassmannians and the projective space on the vector space can be stated as follows,

\[\mathcal{G}(k,n)=\{V|V\in\mathcal{P}(n),\ dim(V)=k,\ 0\leq k\leq n\} \tag{1}\]

In other words, the set of all Grassmannians over the vector space \(\mathbb{F}_{q}^{n}\) constitutes \(\mathcal{P}(n)\). We are interested in the aforementioned structures due to their importance in the definition of subspace codes. A subspace code \(\mathcal{C}\) can be defined as a non-empty subset of the projective space \(\mathcal{P}(n)\). When \(\mathcal{C}\) satisfies \(\mathcal{C}\subseteq\mathcal{G}(k,n)\), with \(0\leq k\leq n\), we call \(\mathcal{C}\) a Grassmannian code or a constant dimension code. Those codes are the ones that have attracted most of the attention in the literature of subspace codes. An element \(V\in\mathcal{C}\) will be represented by a matrix \(M\) in the reduced row echelon form, and we write \(V=\langle M\rangle\), where \(\langle\cdot\rangle\) denotes the row space.
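Because the experiments later in this paper operate over the binary field, the basic subspace machinery — reduced row echelon form, rank, and the subspace distance \(d_{S}(U,V)=\dim(U+V)-\dim(U\cap V)\) used with these codes — can be sketched over \(\mathbb{F}_{2}\) as follows. This is our own minimal illustration, not the authors' implementation:

```python
import numpy as np

def rref_gf2(M):
    """Reduced row echelon form over F_2; returns the non-zero rows,
    so the row count is the rank of M."""
    M = np.array(M, dtype=np.uint8) % 2
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]  # eliminate above and below the pivot
        r += 1
    return M[:r]

def subspace_distance(U, V):
    """d_S(U, V) = dim(U+V) - dim(U ∩ V) = 2 dim(U+V) - dim U - dim V,
    where U and V are generator matrices of the two row spaces."""
    dU, dV = len(rref_gf2(U)), len(rref_gf2(V))
    d_sum = len(rref_gf2(np.vstack([np.atleast_2d(U), np.atleast_2d(V)])))
    return 2 * d_sum - dU - dV
```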
## III System Model

We consider an array of parallel single-input single-output (SISO) systems that are simplified into basic communication systems comprising three elements: a source node, a destination node and a channel. The source node is the communicating entity that is the origin of the information transmitted on the system. The destination node is the communicating entity to which the source information is intended, and the channel is the medium through which information is transmitted. While, in general, this system can be full-duplex, we can safely consider the two opposite channels as being identical and therefore limit our analysis to only one channel (direction: source \(\longrightarrow\) destination). The source node also has the ability to perform encoding on the transmitted data to allow for error correction in the system. Similarly, the destination nodes are endowed with decoders to allow for the reverse operation. The channels used in this system are all unit capacity channels, in which they can transmit one packet per time slot. Packets will be taken as \(n\)-dimensional vectors over a finite field \(\mathbb{F}_{q}\). Fig. 1 illustrates the system model.

We would like to allow for interference between the different channels of the system. Therefore, the set of channels form together an interference channel where all channels are affected by each other through electromagnetic coupling. In other words, the information transmitted in one channel will affect the information transmitted in the other \(m-1\) channels. Note that this interference is usually referred to as crosstalk, and therefore the terms interference and crosstalk will be used interchangeably. In this paper, interference is modelled as the superposition of channel signals. In other words, if \(M_{i}\) is the message or packet transmitted by a source \(s_{i}\) to a destination \(d_{i}\) with \(i\in\{1,2,\cdots,m\}\), the received signal \(R_{i}\) at \(d_{i}\) will then take the form:

\[R_{i}=\sum_{j=1}^{m}\alpha_{j}M_{j}+Z_{i} \tag{2}\]

where \(Z_{i}\) is the error vector that is assumed to be an element of \(\mathbb{F}_{q}^{n}\). As for \(\alpha_{j}\), if \(j=i\), \(\alpha_{j}=1\). Otherwise (\(j\neq i\)), \(\alpha_{j}\) is a discrete random variable that reflects the contribution of the corresponding signal in the output of the interference channel. A value of 0 means that the power of the interfering signal is too low and will not affect the channel output, and it is therefore safely neglected. However, a value of 1 indicates that the signal power is high and the corresponding vector will therefore add up to the main signal. Concerning its distribution, \(\forall j\in\{1,2,\cdots,m\}\quad\text{and}\quad j\neq i,\quad\alpha_{j}\) is an independent Bernoulli random variable with the two probabilities \(P(1)=p\), \(P(0)=1-p\).

The system model will later be modified into that in Fig. 2 by introducing a new node \(S\) considered as a main source that feeds information to all the other sources such that each source's fed information is a packet distinct from the other sources' fed packets. In this case, the rate of the source \(S\) will be equal to \(m\) packets per time slot. Moreover, we will be treating interference as a form of uncontrolled RLNC with the random coefficients being taken from the binary field. To allow for data recovery, we will be assuming cooperation between the destination nodes to allow for decoding operations.

Fig. 1: System model
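A direct simulation of this channel model is straightforward. The sketch below is our own (field fixed to \(\mathbb{F}_{2}\), as assumed in the experiments, with an arbitrary seed): it draws the Bernoulli coupling coefficients independently for each destination, superposes the packets, and optionally adds an error vector of prescribed Hamming weight; a weight of zero recovers the error-free model considered next.

```python
import numpy as np

rng = np.random.default_rng(0)

def interference_channel(M, p, error_weight=0):
    """Eq. (2) over F_2: R_i = sum_j alpha_j M_j + Z_i, where alpha_i = 1,
    alpha_j ~ Bernoulli(p) for j != i, and Z_i has the given Hamming
    weight. M is an m x n binary matrix whose rows are the packets."""
    m, n = M.shape
    R = np.zeros_like(M)
    for i in range(m):
        alpha = rng.random(m) < p  # independent couplings per destination
        alpha[i] = True            # the intended signal is always present
        R[i] = M[alpha].sum(axis=0) % 2
        if error_weight:
            flips = rng.choice(n, size=error_weight, replace=False)
            R[i, flips] ^= 1
    return R
```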
## IV Subspace Codes as a Solution for the Interference Channel

As mentioned earlier in this paper, the codewords of a given subspace code are subspaces from some vector space \(\mathbb{F}_{q}^{n}\). This property has made subspace codes suitable for RLNC-based networks by treating data as a vector space instead of a set of independent packets. Since the crosstalk problem that characterizes the interference channel can be seen as a superposition of information from different channels, one can make use of subspace codes as a solution to mitigate this problem while providing error correction capability for the system. Moreover, since routing systems are a set of independent unicast transmissions through a set of relays, one may simplify routing scenarios to an array of SISO systems (Fig. 1 or Fig. 2) by eliminating the intermediate relays, given that they do not affect information across the network. Therefore, a solution to the SISO system considered in this paper may be extended to networks that are based on a routing solution to convey data as long as the destination nodes are allowed to cooperate. In order to see the effects of subspace codes, we will be considering two scenarios: error-free transmission through the interference channel and transmission in the presence of errors.

### _Error-free Transmission_

In this scenario, equation (2) is simplified into

\[R_{i}=\sum_{j=1}^{m}\alpha_{j}M_{j} \tag{3}\]

by eliminating the error vector. To see the effects of subspace codes, we consider three transmission schemes:

* Routing: in this transmission, both system models (Fig. 1 and Fig. 2) are acceptable and destination nodes are not cooperative (they do not exchange any information).
* RLNC: in this transmission, we also consider that both system models (Fig. 1 and Fig. 2) are allowed and destination nodes are cooperative (they can exchange useful information to allow for RLNC decoding by assuming the interference signal as a form of uncontrolled RLNC encoding operation).
* Subspace coding: in this last scenario, only the system in Fig. 2 is acceptable. The reason behind this choice is the fact that while all the packets in this scenario are linearly independent, they are still vectors from the same codeword from a subspace code. The source \(S\) will be thought of here as the source node with a subspace encoder.

In order to test the three scenarios, we will be using a set of 8 packets that are the vectors of the codeword in Fig. 3. This codeword has been taken from the (16,256,16,8) KK subspace code. In the first two scenarios, the vectors are used as independent data packets. However, in the third scenario, they form the codeword in question. We have used the vectors of this codeword for the three scenarios to provide consistency for our experiment. Note that, in this system, \(m=8\). The packets will be sent following the three aforementioned transmission scenarios while changing the value of \(p\). We then evaluate the probability that the destination nodes will not be able to get the source message. This probability will be referred to as the decoding failure probability. While there is no decoding in the routing scenario, we will still be using this term for the sake of consistency. The results of this experiment are depicted in Fig. 4.
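The routing curve of this experiment is simple to reproduce in outline, since a routing destination fails exactly when the superposed signal differs from its intended packet. The sketch below reuses `interference_channel` from the previous section; the cooperative RLNC and subspace schemes additionally require a decoder, which we do not reproduce here:

```python
import numpy as np

def routing_failure_probability(M, p, trials=1000):
    """Monte Carlo estimate of the average decoding failure probability
    for plain routing: destination i fails whenever R_i != M_i."""
    failures = 0
    for _ in range(trials):
        R = interference_channel(M, p)
        failures += int(np.any(R != M, axis=1).sum())
    return failures / (trials * M.shape[0])
```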
Note that the term "Average decoding failure probability" is used instead of "Decoding failure probability" in this latter figure because the experiment was repeated 1000 times and the average was taken as an estimation of the expected decoding failure probability. From Fig. 4, we see that both RLNC and RLNC with subspace coding have allowed for the recovery of the intended source data packet for every destination node for all values of \(p\). This means that since in those scenarios the destination nodes can cooperate to perform either an RLNC or a subspace decoding operation, the superposition of signals does not result in packet loss. As for the routing scenario, the decoding failure probability depends on \(p\), since this parameter is the probability that interference occurs with the other signals. The results of Fig. 4 are consistent with what is expected. In fact, we see that as long as \(p>0.3\), it is more likely that destination nodes will not be able to correctly receive their intended packets in the routing scenario.

Fig. 2: Modified system model

Fig. 3: A codeword from the (16,256,16,8) KK code.

### _Transmission in the presence of errors_

The previous experiment will be repeated with the assumption that network errors occur:
This comes from the fact that subspace codes are designed to correct errors affecting subspaces; by treating the data in our system as a vector space, subspace codes provide error correction as well as interference mitigation. In future work, we will focus on the ability of other code families, such as rank-metric codes, to mitigate the effects of interference while providing error control solutions for the system.
2309.04080
On finite categories of algebraic varieties
We prove that the finiteness of a finitely generated category of irreducible algebraic varieties over a field of characteristic zero is decidable. We also obtain a Burnside finiteness criterion for such a category, with applications to algebraic dynamical systems of several maps.
Junho Peter Whang
2023-09-08T02:42:15Z
http://arxiv.org/abs/2309.04080v1
# On finite categories of algebraic varieties ###### Abstract. We prove that the finiteness of a finitely generated category of irreducible algebraic varieties over a field of characteristic zero is decidable. We also obtain a Burnside finiteness criterion for such a category, with applications to algebraic dynamical systems of several maps. ## 1. Introduction Jacob [4] showed that the finiteness of a finitely generated monoid of matrices over a field is decidable. This paper provides a nonlinear generalization of this result for finitely generated categories of irreducible algebraic varieties over fields of characteristic zero. In this paper, by an algebraic variety we mean a reduced separated scheme of finite type over a field. Let us define a _system_ in a category \(C\) to be a quiver (i.e., directed multigraph) whose vertices are objects in \(C\) and whose arrows are morphisms in \(C\) between the vertices. In other words, a system in \(C\) specifies the generators of a subcategory of \(C\). **Theorem 1.1**.: _Let \(k\) be a field of characteristic zero. There exists an algorithm to determine, given an explicit finite system of irreducible algebraic varieties over \(k\), whether or not the category it generates is finite._ We make precise the notion of an explicitly given system of varieties in Section 2.2. We prove Theorem 1.1 by induction on the complexity of the system, using two ingredients: an effective form of nonlinear Selberg's lemma due to Bass-Lubotzky [1], and the observation that dominant endomorphisms of finite order on integral schemes are automorphisms (Lemma 3.1). A similar argument yields a solution to the strong Burnside problem for categories of varieties in characteristic zero, generalizing (in characteristic zero) the work of McNaughton-Zalcstein [6] on matrix monoids. Let us say that a category is _torsion_ if every endomorphism of every object generates a cyclic monoid of finite order under composition. **Theorem 1.2**.: _Let \(k\) be a field of characteristic zero. Let \(C\) be a finitely generated subcategory of the category of irreducible algebraic varieties over \(k\). Then \(C\) is finite if and only if it is torsion._ Over general fields, one can deduce a weaker finiteness criterion for categories of varieties, establishing the analogue of the bounded Burnside problem. This relies on Zelmanov's resolution of the restricted Burnside problem [11, 12]. One source of motivation for our work is the study of finite orbits in dynamics of several maps on algebraic varieties. For instance, we prove the following. **Corollary 1.3**.: _Let \(k\) be a field. Let \(M\) be a finitely generated monoid acting on an algebraic variety \(V/k\). Let \(x\in V(k)\). Then \(M\cdot x\) is finite if and only if_ \[\sup_{N}|N\cdot x|<\infty\] _where \(N\) runs over all \(2\)-generated submonoids of \(M\)._ Given a set \(X\) and a monoid \(M\) of endomorphisms of \(X\), let us say that a point \(x\in X\) is _\(M\)-periodic_ if the \(M\)-orbit \(M\cdot x\) is finite and \(M\) permutes the elements of \(M\cdot x\). In the case where \(k\) is a field of characteristic zero and \(M\) is a finitely generated monoid acting on a variety \(V/k\), one can combine a refined form of Corollary 1.3 with [10, Theorem 1.2] to obtain the following. **Theorem 1.4**.: _Let \(k\) be a field of characteristic zero. Let \(M\) be a finitely generated monoid acting on an algebraic variety \(V/k\). Let \(x\in V(k)\). Then the following are equivalent:_ 1. \(x\) _is_ \(M\)_-periodic._ 2. 
\(x\) _is_ \(N\)_-periodic for every_ \(2\)_-generated submonoid_ \(N\leq M\)_._ _If moreover \(M\) is a group, then the above are equivalent to:_ 1. \(x\) _is_ \(\langle f\rangle\)_-periodic for every_ \(f\in M\)_._ This paper is organized as follows. Section 2 collects the necessary background, including results of Bass-Lubotzky [1] and Zelmanov [11, 12]. In Section 3, we prove Theorems 1.1 and 1.2. In Section 4, we consider dynamical corollaries of our main results, and in particular prove Corollary 1.3 and Theorem 1.4. ### Acknowledgments I thank Abhishek Oswal for helpful conversations. In particular, the proof of Theorem 1.1 was inspired by an unpublished collaborative work on finite matrix monoids. This work was supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA2201-03. ## 2. Background ### Notations Let us set up the notations and terminology that will be used throughout the paper. A quiver is a directed multigraph. Given a quiver \(Q\), we write \(Q_{0}\) for the class of its vertices and \(Q_{1}\) for the class of its arrows. Let \(s,t:Q_{1}\rightrightarrows Q_{0}\) denote the source and target maps of \(Q\). We shall say that a quiver is _small_ if \(Q_{0}\) and \(Q_{1}\) are sets. We shall view a category as a quiver along with a composition law on its arrows that satisfies the usual axioms. Let \(C\) be a category. If \(X,Y\in C_{0}\), we write \(\operatorname{Hom}_{C}(X,Y)\) for the set of morphisms in \(C\) from \(X\) to \(Y\). We write \(\operatorname{End}_{C}(X)=\operatorname{Hom}_{C}(X,X)\). Given \(X\in C_{0}\), we write \(C_{X}\) for the full subcategory of \(C\) with a single object \(X\). Thus, \((C_{X})_{1}=\operatorname{End}_{C}(X)\). In this paper, a _monoid_ is a small category whose object set is a singleton. A monoid is _cyclic_ if it is generated by a single endomorphism. A _groupoid_ is a small category whose morphisms are all invertible. **Definition 2.1**.: Let \(C\) be a small category. 1. The _order_ of \(C\) is \(|C|=|C_{1}|\). 2. We say \(C\) is _finite_ if it has finite order. 3. We say \(C\) is _torsion_ if every cyclic submonoid of \(C\) is finite. 4. We say \(C\) is _\(n\)-torsion_ if every cyclic submonoid of \(C\) has order \(\leq n\). Forgetting the composition law on a category \(C\), we obtain a quiver (i.e. directed multigraph) whose collection of vertices is \(C_{0}\), whose collection of arrows is \(C_{1}\), and whose source and target maps are \(s\) and \(t\). **Definition 2.2**.: A _system_ in a category \(C\) is a small subquiver of the quiver underlying \(C\). Given a system \(S\) in \(C\), let \(\langle S\rangle\) denote the subcategory of \(C\) generated by \(S\), i.e. the smallest subcategory \(C^{\prime}\) of \(C\) such that \(S_{0}=C^{\prime}_{0}\) and \(S_{1}\subseteq C^{\prime}_{1}\). We say that a category \(C\) is _finitely generated_ if there is a finite quiver \(S\) in \(C\) (i.e. with \(|S_{0}|\) and \(|S_{1}|\) finite) such that \(C=\langle S\rangle\). **Definition 2.3**.: Let \(S\) be a quiver. A _path_ in \(S\) from \(v\in S_{0}\) to \(w\in S_{0}\) is a sequence \(f_{1},\ldots,f_{k}\in S_{1}\) of arrows such that \(s(f_{1})=v\), \(t(f_{k})=w\), and \(t(f_{i})=s(f_{i+1})\) for \(i=1,\ldots,k-1\). Two vertices \(v,w\in S_{0}\) are _path-equivalent_ if there is a path in \(S\) from \(v\) to \(w\) and there is a path in \(S\) from \(w\) to \(v\). We denote by \(S^{\circ}\) the quiver obtained from \(S\) by deleting the arrows between vertices that are not path-equivalent. 
A system \(S\) is _path-connected_ if \(S=S^{\circ}\). A _path-component_ of \(S\) is a maximal subquiver of \(S\) that is path-connected. **Lemma 2.4**.: _A finitely generated category \(C\) is finite if and only if \(C^{\circ}\) is finite._ Proof.: If \(C\) is finite, then obviously \(C^{\circ}\) is finite. Suppose conversely that \(C^{\circ}\) is finite. Let \(N\) be the number of path-components of \(C\). Let \(S\) be a finite system of generators for \(C\). Every morphism \(f\) of \(C\) can be written in the form \[f=g_{k}h_{k}g_{k-1}h_{k-1}\cdots g_{1}h_{1}g_{0}\] for some \(k\leq N\), where \(g_{i}\in(C^{\circ})_{1}\) for each \(i=0,\ldots,k\) and \(h_{i}\in S_{1}\setminus(C^{\circ})_{1}\) for each \(i=1,\ldots,k\). It follows that \(C_{1}\) is finite, so \(C\) is finite. ### Bass-Lubotzky We recall a theorem of Bass-Lubotzky [1, Corollary (1.2)]. **Theorem 2.5**.: _Let \(k\) be an arbitrary ring. Let \(G\) be a finitely generated group of automorphisms of a scheme \(V\) of finite presentation over \(k\)._ 1. \(G\) _is residually finite._ 2. _If_ \(V\) _is flat over_ \(\mathbb{Z}\)_, then_ \(G\) _is virtually torsionfree._ We shall also need an effective form of the second part of Theorem 2.5, whose formulation we recall as follows. Let \(k\) be a finitely generated subring of \(\bar{\mathbb{Q}}\). Let \(Z\) be a scheme flat of finite type over \(k\). Given a finite set \(X\) of closed points of \(Z\), let \[A_{X}=\prod_{x\in X}\mathcal{O}_{Z,x}\quad\text{and}\quad J_{X}=\prod_{x\in X }\mathfrak{m}_{Z,x}\] where \(\mathcal{O}_{Z,x}\) denotes the local ring of \(Z\) at \(x\) and \(\mathfrak{m}_{Z,x}\) is the maximal ideal of \(\mathcal{O}_{Z,x}\), with residue field \(\kappa(x)=\mathcal{O}_{Z,x}/\mathfrak{m}_{Z,x}\). We shall say that \(X\) has _residue characteristic_ \(p\) if \(\operatorname{char}\kappa(x)=p\) for every \(x\in X\). Following [1], we shall say that \(X\) is _effective_ if there is a finite affine open covering \((U_{i})_{i\in I}\) of \(Z\) such that the natural morphism \(\mathcal{O}_{Z}(U_{i})\to\prod_{x\in X\cap U_{i}}\mathcal{O}_{Z,x}\) is injective for every \(i\in I\). The following holds. **Proposition 2.6**.: _Suppose that \(X\) is a finite effective set of closed points in \(Z\) with residue characteristic \(p\). Then the order of any torsion element in_ \[\Gamma_{X}=\ker(\operatorname{Aut}(Z/k)\to\operatorname{Aut}Z(A_{X}/J_{X}^{ 2}))\] _is a power of \(p\). In particular, if \(X\) and \(X^{\prime}\) are finite effective sets in \(Z\) having distinct residue characteristics \(p\neq p^{\prime}\), then \(\Gamma_{X}\cap\Gamma_{X^{\prime}}\) is a normal torsionfree subgroup of finite index in \(\operatorname{Aut}(Z/k)\)._ Proof.: The proof is given in [1, pp. 4-5]. See also [10, Section 2.1] for a summary. ### Burnside problem We refer to [8] for a summary of the history of the Burnside problem. Here, we recall formal statements of its variants. **Problem 2.7** (Strong Burnside's problem).: _Let \(G\) be a finitely generated group all of whose elements are torsion. Is \(G\) finite?_ **Problem 2.8** (Bounded Burnside's problem).: _Let \(G\) be a finitely generated group of finite exponent. 
Is \(G\) finite?_ **Problem 2.9** (Restricted Burnside's problem).: _Are there only finitely many finite groups with a given number of generators and a given exponent?_ While both the strong Burnside's problem and the bounded Burnside's problem have negative answers in general (in the strong case by work of Golod-Shafarevich [3], and in the bounded case by work of Adian and Novikov [7]), for linear groups they admit affirmative answers (due to Schur [9] and Burnside [2], respectively). The restricted Burnside problem was solved affirmatively by Zelmanov [11, 12]. An immediate corollary of his work is the following characterization of finite groups. **Theorem 2.10**.: _A group \(G\) is finite if and only if it is finitely generated, residually finite, and of finite exponent._ Meanwhile, analogues of Burnside's problems for other algebraic structures such as semigroups have been studied. McNaughton-Zalcstein [6] established the analogue of the strong Burnside's problem for semigroups (or monoids) of matrices over arbitrary fields. Theorem 1.2 serves to generalize this result, in characteristic zero, to categories of algebraic varieties. ## 3. Proofs of Theorems 1.1 and 1.2 ### Explicitly given systems of varieties Here, we make precise our notion of an explicitly given finite system of algebraic varieties, generally following the spirit of the paragraph after [10, Theorem 1.2]. 1. A finitely presented ring \(k\) is explicitly given if it is given as a quotient of a polynomial ring with coefficients in \(\mathbb{Z}\), and a finite set of generators for the kernel of the quotient map is specified. A ring homomorphism between explicitly given finitely presented rings is explicitly given if the images of the generators (given by the explicit finite presentation) of the domain ring are specified. In what follows, let \(k\) be an explicitly given finitely presented ring. 2. An affine scheme of finite presentation over \(k\) is explicitly given if it is the spectrum of an explicitly given \(k\)-algebra. A morphism between two explicitly given affine schemes of finite presentation over \(k\) is explicitly given if it is induced by an explicitly given \(k\)-algebra homomorphism between their coordinate rings. 3. A scheme \(V\) separated of finite presentation over \(k\) is explicitly given if it is written as an explicit finite union of explicitly given open affine schemes \(U_{i}\) with affine overlaps \(U_{i}\cap U_{j}\), such that the gluing morphisms \(U_{i}\cap U_{j}\to U_{i}\) are explicitly given. We shall call \((U_{i})\) an effective presentation of \(V\). 4. Let \(V\) and \(W\) be explicitly given schemes of finite presentation over \(k\), with effective presentations \((U_{i})_{i\in I}\) and \((T_{j})_{j\in J}\) respectively. A morphism \(f:V\to W\) over \(k\) is explicitly given if there is another effective presentation \((U^{\prime}_{i})_{i\in I^{\prime}}\) of \(V\) and an explicit function \(j:I^{\prime}\to J\) such that \(f(U^{\prime}_{i})\subseteq T_{j(i)}\) for every index \(i\), and the following hold: 1. \(f|_{U^{\prime}_{i}}:U^{\prime}_{i}\to T_{j(i)}\) is an explicitly given morphism for all \(i\in I^{\prime}\), and 2. the inclusions of \(U^{\prime}_{i}\cap U_{k}\) into \(U^{\prime}_{i}\) and \(U_{k}\) are explicitly given morphisms for all \((i,k)\in I^{\prime}\times I\). 
Finally, a finite system \(S\) of algebraic varieties over a field \(K\) is explicitly given if there is an explicitly given finitely presented domain \(k\subset K\) and a finite system \(S_{k}\) of schemes over \(k\), whose vertices are explicitly given separated schemes of finite type over \(k\) and whose arrows are explicitly given morphisms between those schemes, such that \(S\) is obtained from \(S_{k}\) by base change. ### Decidability of finiteness Here, we shall prove Theorem 1.1. Let us begin with a lemma. **Lemma 3.1**.: _Let \(X\) be an integral separated scheme. If \(f\) is a dominant endomorphism of \(X\) of finite order, then \(f\) is an automorphism._ Proof.: Let \(k(X)\) be the function field of \(X\). Since \(f\) is dominant, it induces an inclusion \(f^{*}:k(X)\to k(X)\). Since there exist \(0\leq m<n\) such that \(f^{m}=f^{n}\), in fact \(f^{*}\) must induce an automorphism of \(k(X)\), and \((f^{n-m})^{*}\) is the identity on \(k(X)\). This implies that \(f^{n-m}\) is the identity on a dense open subscheme of \(X\). Since \(X\) is separated, this implies that \(f^{n-m}=\operatorname{id}_{X}\). **Proposition 3.2**.: _Let \(k\) be a finitely presented domain of characteristic zero. Let \(S\) be a finite path-connected system of dominant morphisms between integral separated schemes of finite type over \(k\). If \(\langle S\rangle\) is finite, then \(|\langle S\rangle|\leq C(S_{0})\) where \(C(S_{0})\) is a constant that only depends on \(S_{0}\)._ Proof.: Let \(S\) be a finite system as in the statement of the proposition. By Lemma 3.1 and our assumptions on \(S\), every endomorphism of an object in \(\langle S\rangle\) is an automorphism of finite order. It follows that \(\langle S\rangle\) is a groupoid. Fix \(Z\in S_{0}\). For every \(W\in S_{0}\), the set \(\operatorname{Hom}_{\langle S\rangle}(Z,W)\) is a torsor under the group \(\langle S\rangle_{Z}\), so \(|\operatorname{Hom}_{\langle S\rangle}(V,W)|=|\langle S\rangle_{Z}|\) for all \(V,W\in S_{0}\) and hence \(|\langle S\rangle|=|S_{0}|^{2}\,|\langle S\rangle_{Z}|\). Now, \(\langle S\rangle_{Z}\) is a finite subgroup of \(\operatorname{Aut}(Z/k)\). Fix two closed points \(x\) and \(x^{\prime}\) of the integral scheme \(Z\) such that the characteristics of the residue fields \(\kappa(x)\) and \(\kappa(x^{\prime})\) are coprime. Setting \(X=\{x\}\) and \(X^{\prime}=\{x^{\prime}\}\), we see by Proposition 2.6 that the composition of group homomorphisms \[\langle S\rangle_{Z}\to\operatorname{Aut}(Z/k)\to\operatorname{Aut}Z(A_{X}/J_ {X}^{2})\times\operatorname{Aut}Z(A_{X^{\prime}}/J_{X^{\prime}}^{2})\] is injective. Since the right hand side only depends on \(Z\), we are done. **Theorem 1.1**.: _Let \(k\) be a field of characteristic zero. There exists an algorithm to determine, given an explicit finite system of irreducible algebraic varieties over \(k\), whether or not the category it generates is finite._ Proof.: Let \(S\) be a finite system of irreducible algebraic varieties over \(\bar{\mathbb{Q}}\). By spreading out, we can explicitly determine a finitely generated subring \(k\) of \(\bar{\mathbb{Q}}\) such that \(S\) descends to a finite system of integral separated schemes flat of finite type over \(k\). We shall proceed by induction on \(|S_{1}|\) and \(\max_{V\in S_{0}}\dim V\). By Lemma 2.4, we may assume that \(S\) is path-connected. If every \(f\in S_{1}\) is dominant, then by Proposition 3.2 it is decidable whether or not \(\langle S\rangle\) is finite: it suffices to generate morphisms by composing elements of \(S_{1}\) and check whether the process terminates before the bound \(C(S_{0})\) is exceeded. So assume that some \(f\in S_{1}\) is not dominant, and write \(A,B\in S_{0}\) for the source and target of \(f\) respectively. 
Let \(Z\) be the Zariski closure of the image of \(f\) in \(B\), equipped with the reduced closed subscheme structure. Note that \(Z\) is integral, separated, and flat of finite type over \(k\). Consider the finite systems \(S^{\prime}\) and \(S^{\prime\prime}\) where: * \(S^{\prime}_{0}=S_{0}\) and \(S^{\prime}_{1}=S_{1}\setminus\{f\}\), and * \(S^{\prime\prime}_{0}=\{Z\}\) and \(S^{\prime\prime}_{1}=\{(f\circ g)|_{Z}:g\in\operatorname{Hom}_{\langle S^{ \prime}\rangle}(B,A)\}\). Since \(|S^{\prime}_{1}|<|S_{1}|\), by inductive hypothesis it is decidable whether or not \(\langle S^{\prime}\rangle\) is finite, so we may assume it is. This implies that \(|S^{\prime\prime}_{1}|\) is finite, and since \(Z\) has smaller dimension than \(B\), by inductive hypothesis it is decidable whether or not \(\langle S^{\prime\prime}\rangle\) is finite. We may assume \(\langle S^{\prime\prime}\rangle\) is finite. Then for every \(V,W\in S_{0}\), if \(g\in\operatorname{Hom}_{\langle S\rangle}(V,W)\) then either \(g\in\operatorname{Hom}_{\langle S^{\prime}\rangle}(V,W)\) or \[g=h_{2}\circ e\circ f\circ h_{1}\] for some \(h_{1}\in\operatorname{Hom}_{\langle S^{\prime}\rangle}(V,A)\), \(e\in\langle S^{\prime\prime}\rangle_{1}\), and \(h_{2}\in\operatorname{Hom}_{\langle S^{\prime}\rangle}(B,W)\). It follows that \(\langle S\rangle\) is finite. This completes the induction and the proof. ### Burnside finiteness criterion We now prove the following extended version of Theorem 1.2, by adapting our proof of Theorem 1.1. **Theorem 3.3**.: _Let \(k\) be a field. Let \(C\) be a finitely generated category of irreducible algebraic varieties over \(k\). Then \(C\) is finite if and only if:_ 1. \(C\) _is_ \(n\)_-torsion for some_ \(n\geq 1\)_, or_ 2. \(C\) _is torsion and_ \(k\) _has characteristic zero._ Proof.: The "only if" direction is clear, so we shall focus on the "if" direction. Let \(S\) be a finite system generating \(C\). As in the proof of Theorem 1.1, we proceed by induction on \(|S_{1}|\) and \(\max_{V\in S_{0}}\dim V\). By Lemma 2.4, we may assume that \(S\) is path-connected. First, suppose that every \(f\in S_{1}\) is dominant. Since \(C\) is torsion, arguing as in the proof of Proposition 3.2, we see that \(C\) is a finitely generated groupoid, and hence its finiteness is equivalent to the finiteness of the finitely generated group \(C_{V}\) for some (or any) \(V\in C_{0}\). Now, we have the following. 1. If \(C\) is \(n\)-torsion for some \(n\geq 1\), then since the group \(C_{V}\) is residually finite by Theorem 2.5(1), it is finite by the resolution of the restricted Burnside problem (Theorem 2.10). 2. If \(C\) is torsion and \(k\) has characteristic zero, then since \(C_{V}\) is torsion and virtually torsionfree by Theorem 2.5(2), it is finite. This proves the theorem in the case where every \(f\in S_{1}\) is dominant. If some \(f\in S_{1}\) is not dominant, then we argue as in the proof of Theorem 1.1 to construct systems \(S^{\prime}\) and \(S^{\prime\prime}\) with smaller complexity, such that finiteness of \(C^{\prime}=\langle S^{\prime}\rangle\) and \(C^{\prime\prime}=\langle S^{\prime\prime}\rangle\) implies the finiteness of \(C\). If \(C\) satisfies condition (1) or (2) of the theorem, then clearly \(C^{\prime}\) and \(C^{\prime\prime}\) also satisfy the same condition. Thus, \(C^{\prime}\) and \(C^{\prime\prime}\) are finite by inductive hypothesis, and hence \(C\) is finite.
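The induction in the proofs above is effective, and its algorithmic skeleton can be sketched as follows. The geometric operations (the dominance test, the closed image \(Z\), restriction of a morphism to \(Z\), comparison of explicitly given morphisms, and the effective bound \(C(S_{0})\) of Proposition 3.2) are assumed to be supplied as oracles by the caller; the sketch records only the control flow of the decision procedure, not an implementation of those operations.

```python
from itertools import product

def saturate(arrows, compose, limit):
    """Close a set of arrows (src, tgt, f) under composition, returning None
    once more than `limit` morphisms appear.  Morphisms are assumed hashable
    (i.e. given with decidable equality), and compose(g, f) means g after f."""
    morphs = set(arrows)
    changed = True
    while changed:
        changed = False
        for (s1, t1, f1), (s2, t2, f2) in list(product(morphs, repeat=2)):
            if t1 == s2:
                new = (s1, t2, compose(f2, f1))
                if new not in morphs:
                    morphs.add(new)
                    changed = True
                    if len(morphs) > limit:
                        return None
    return morphs

def is_finite_category(arrows, compose, is_dominant, image_closure, restrict, bound):
    """Decision skeleton following the induction in the proof of Theorem 1.1.
    `arrows` is a list of (src, tgt, f) triples generating the category."""
    if all(is_dominant(f) for (_, _, f) in arrows):
        # Base case: every generator dominant, so by Proposition 3.2 the
        # category is finite iff saturation halts below the bound C(S_0).
        return saturate(arrows, compose, bound(arrows)) is not None
    # Inductive step: split off a non-dominant generator f : A -> B.
    k = next(i for i, (_, _, f) in enumerate(arrows) if not is_dominant(f))
    A, B, f = arrows[k]
    rest = arrows[:k] + arrows[k + 1:]                      # the system S'
    if not is_finite_category(rest, compose, is_dominant,
                              image_closure, restrict, bound):
        return False
    closure = saturate(rest, compose, float("inf"))         # finite by the check above
    Z = image_closure(f)                                    # dim Z < dim B
    loops = [(Z, Z, restrict(compose(f, g), Z))             # the system S'' on Z
             for (s, t, g) in closure if (s, t) == (B, A)]
    return is_finite_category(loops, compose, is_dominant,
                              image_closure, restrict, bound)
```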
## 4. Dynamical corollaries We record some dynamical corollaries of Theorem 3.3, by transferring properties of monoids to their orbits. The following is a refined version of Corollary 1.3. **Corollary 4.1**.: _Let \(k\) be a field. Let \(M\) be a finitely generated monoid acting on an algebraic variety \(V/k\). Let \(x\in V(k)\). The following are equivalent._ 1. \(M\cdot x\) _is finite._ 2. _There exists_ \(n\geq 1\) _with_ \(|N\cdot x|\leq n\) _for every_ \(2\)_-generated submonoid_ \(N\leq M\)_._ 3. _There exists_ \(n\geq 1\) _with_ \(|\langle f\rangle\cdot g(x)|\leq n\) _for every_ \(f,g\in M\)_._ Proof.: It is clear that (1) \(\implies\) (2) \(\implies\) (3), so it remains to show (3) \(\implies\) (1). Suppose (3) holds. Let \(Z\) be the Zariski closure of \(M\cdot x\) in \(V\), equipped with the reduced closed subscheme structure. Let us now construct a finite system \(S\) of irreducible varieties, where the elements of \(S_{0}\) are the irreducible components of \(Z\) and the elements of \(S_{1}\) are the restrictions of the elements of a finite generating set of \(M\) to the irreducible components of \(Z\). Let \(C=\langle S\rangle\). If \(f\in C_{1}\) is any endomorphism of an object \(A\) in \(C\), we claim that \(f\) has finite order \(\leq n\), i.e., \(C\) is \(n\)-torsion. Indeed, there exists some \(0\leq m<n\) such that \(A(k)\) contains a Zariski dense set \(T\) of points satisfying \(f^{n}(y)=f^{m}(y)\) for all \(y\in T\). Since \(A\) is integral and separated, it follows that \(f^{n}=f^{m}\) on \(A\). Thus \(C\) is \(n\)-torsion, and by Theorem 3.3 it follows that \(C\) is finite. But then the orbit of \(x\) must be finite. **Theorem 1.4**.: _Let \(k\) be a field of characteristic zero. Let \(M\) be a finitely generated monoid acting on an algebraic variety \(V/k\). Let \(x\in V(k)\). Then the following are equivalent:_ 1. \(x\) _is_ \(M\)_-periodic._ 2. \(x\) _is_ \(N\)_-periodic for every_ \(2\)_-generated submonoid_ \(N\leq M\)_._ _If moreover \(M\) is a group, then the above are equivalent to:_ 1. \(x\) _is_ \(\langle f\rangle\)_-periodic for every_ \(f\in M\)_._ Proof.: The implication (a) \(\Longrightarrow\) (b) is clear, so let us show (b) \(\Longrightarrow\) (a). Assume (b). By [10, Theorem 1.2], there is a constant \(n\geq 0\), depending only on \(V\) and the finitely generated ring over which \(V\), \(x\), and \(M\) are defined, such that \(|N\cdot x|\leq n\) for every \(2\)-generated submonoid \(N\leq M\) by \(N\)-periodicity of \(x\). This implies (a) by Corollary 4.1. Suppose now that \(M\) is a group. Since (a) \(\Longrightarrow\) (c) is clear, we shall show (c) \(\Longrightarrow\) (a). Let us assume (c). In light of Corollary 4.1, it suffices to show that there is a constant \(n\geq 0\) such that \(|\langle f\rangle\cdot g(x)|\leq n\) for every \(f,g\in M\), or (since \(M\) is a group) equivalently \(|\langle h^{-1}fh\rangle\cdot x|\leq n\) for every \(f,h\in M\), or equivalently \(|\langle g\rangle\cdot x|\leq n\) for every \(g\in M\). Now, since \(\langle g\rangle\cdot x\) is finite for every \(g\in M\) by hypothesis, the desired result follows again by [10, Theorem 1.2].
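As a purely illustrative aside, the orbit-finiteness statements above can be phrased operationally: \(M\cdot x\) is finite exactly when the breadth-first closure of \(\{x\}\) under a finite generating set of \(M\) terminates. The toy action below (two multiplication maps on \(\mathbb{Z}/35\mathbb{Z}\)) is a hypothetical stand-in chosen for brevity, not an algebraic variety.

```python
def orbit(x, generators, cap=10**6):
    """Breadth-first closure of {x} under a finite set of self-maps.
    Returns the orbit M.x if it is finite (and smaller than cap), else None."""
    seen, frontier = {x}, [x]
    while frontier:
        nxt = []
        for y in frontier:
            for g in generators:
                z = g(y)
                if z not in seen:
                    seen.add(z)
                    nxt.append(z)
                    if len(seen) > cap:
                        return None
        frontier = nxt
    return seen

# Toy illustration: the monoid generated by multiplication by 2 and by 3 on
# Z/35Z has a finite orbit of 1, consistent with Corollary 1.3.
f = lambda y: (2 * y) % 35
g = lambda y: (3 * y) % 35
print(sorted(orbit(1, [f, g])))
```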
2302.14687
Tunable Feshbach resonances in collisions of ultracold molecules in $^2Σ$ states with alkali-metal atoms
We consider the magnetically tunable Feshbach resonances that may exist in ultracold mixtures of molecules in $^2\Sigma$ states and alkali-metal atoms. We focus on Rb+CaF as a prototype system. There are likely to be Feshbach resonances analogous to those between pairs of alkali-metal atoms. We investigate the patterns of near-threshold states and the resonances that they cause, using coupled-channel calculations of the bound states and low-energy scattering on model interaction potentials. We explore the dependence of the properties on as-yet-unknown potential parameters. There is a high probability that resonances will exist at magnetic fields below 1000 G, and that these will be broad enough to control collisions and form triatomic molecules by magnetoassociation. We consider the effect of CaF rotation and potential anisotropy, and conclude that they may produce additional resonances but should not affect the existence of rotation-free resonances.
Robert C. Bird, Michael R. Tarbutt, Jeremy M. Hutson
2023-02-28T15:57:02Z
http://arxiv.org/abs/2302.14687v2
# Tunable Feshbach resonances in collisions of ultracold molecules in \({}^{2}\Sigma\) states with alkali-metal atoms ###### Abstract We consider the magnetically tunable Feshbach resonances that may exist in ultracold mixtures of molecules in \({}^{2}\Sigma\) states and alkali-metal atoms. We focus on Rb+CaF as a prototype system. There are likely to be Feshbach resonances analogous to those between pairs of alkali-metal atoms. We investigate the patterns of near-threshold states and the resonances that they cause, using coupled-channel calculations of the bound states and low-energy scattering on model interaction potentials. We explore the dependence of the properties on as-yet unknown potential parameters. There is a high probability that resonances will exist at magnetic fields below 1000 G, and that these will be broad enough to control collisions and form triatomic molecules by magnetoassociation. We consider the effect of CaF rotation and potential anisotropy, and conclude that they may produce additional resonances but should not affect the existence of rotation-free resonances. ## I Introduction Ultracold molecules have many applications that are now emerging, ranging from quantum simulation [1; 2] and quantum computing [3; 4; 5] to the study of novel quantum phases [6; 7] and tests of fundamental physics [8; 9; 10]. Key to most of these applications are polar molecules, which can have long-range anisotropic interactions resulting from their permanent dipoles. Many such molecules have been produced at microkelvin temperatures by association of pairs of alkali-metal atoms, followed by laser transfer to the vibrational ground state [11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Another class of molecules, exemplified by CaF and SrF, have been cooled directly by magneto-optical trapping followed by sub-Doppler laser cooling [21; 22; 23; 24; 25; 26]. Elastic and inelastic collisions are at the heart of ultracold physics. For ultracold atoms, it is often possible to control ultracold collisions by adjusting an applied magnetic field close to a zero-energy Feshbach resonance [27]. Such a resonance occurs whenever a molecular bound state can be tuned across a scattering threshold as a function of applied field. The s-wave scattering length then passes through a pole as a function of field, allowing the effective interaction strength to be tuned to any desired value. This control has been applied in many areas of ultracold physics, including condensate collapse [28], soliton creation [29], Efimov physics [30] and investigations of the BCS-BEC crossover in degenerate Fermi gases [31]. Feshbach resonances are also used for magnetoassociation, in which pairs of ultracold atoms are converted to weakly bound diatomic molecules by sweeping a magnetic field across the resonance [32; 33]. Much new physics will become accessible when molecular collisions can be controlled with tunable Feshbach resonances. Such resonances have now been observed in collisions between ultracold \({}^{40}\)K atoms and \({}^{23}\)Na\({}^{40}\)K molecules in singlet states [34; 35; 36; 18; 37] and between \({}^{23}\)Na atoms and \({}^{6}\)Li\({}^{23}\)Na molecules in triplet states [38]. 
These systems have also been investigated theoretically [34; 39; 40; 41; 42; 43]. Section VI presents conclusions and offers perspectives for future work to take advantage of the resonances. ## II Theory ### Monomer Hamiltonians and levels The Hamiltonian of an alkali-metal atom A in its ground \({}^{2}\)S state is \[\hat{h}_{\rm A}=\zeta_{\rm A}\hat{\bf i}_{\rm A}\cdot\hat{\bf s}_{\rm A}+\left(g_{s,{\rm A}}\hat{s}_{{\rm A},z}+g_{i}\hat{i}_{{\rm A},z}\right)\mu_{\rm B}B, \tag{1}\] where \(\hat{\bf s}_{\rm A}\) and \(\hat{\bf i}_{\rm A}\) are vector operators for the electron and nuclear spin, \(\hat{s}_{{\rm A},z}\) and \(\hat{i}_{{\rm A},z}\) are their components along the \(z\) axis defined by the magnetic field \(B\), \(\zeta_{\rm A}\) is the hyperfine coupling constant, and \(g_{s,{\rm A}}\) and \(g_{i}\) are the g-factors for the electron and nuclear spins [44]. 
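As a check on the level patterns described below, Eq. (1) can be diagonalized directly. The following sketch builds the hyperfine and Zeeman terms for \({}^{87}\)Rb (\(s=1/2\), \(i=3/2\)) in the uncoupled basis; the g-factor values are approximate inputs, not taken from this paper. The same code gives the \(n=0\) levels of CaF on replacing \(i\) by \(1/2\) and the hyperfine constant by \(\zeta_{\rm F}/h\approx 120\) MHz.

```python
import numpy as np

h, muB = 6.62607015e-34, 9.2740100783e-24        # Planck constant, Bohr magneton (SI)

def spin_ops(j):
    """Angular momentum matrices (jx, jy, jz) for spin j, in units of hbar."""
    mvals = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - mvals[1:] * (mvals[1:] + 1)), 1)  # raising op
    return (jp + jp.T) / 2, (jp - jp.T) / 2j, np.diag(mvals)

s, i = 0.5, 1.5                                   # 87Rb: s = 1/2, i = 3/2
sops, iops = spin_ops(s), spin_ops(i)
S = [np.kron(o, np.eye(int(2 * i + 1))) for o in sops]
I = [np.kron(np.eye(int(2 * s + 1)), o) for o in iops]

zeta = h * 6.834682e9 / (i + 0.5)                 # so A_hfs = zeta (i + 1/2)/h = 6.83 GHz
gs, gi = 2.0023, -0.000995                        # approximate g-factors for 87Rb

def levels_GHz(B):
    """Eigenvalues of Eq. (1) at magnetic field B (tesla), in GHz."""
    H = zeta * sum(a @ b for a, b in zip(S, I)) + muB * B * (gs * S[2] + gi * I[2])
    return np.sort(np.linalg.eigvalsh(H)) / h / 1e9

print(levels_GHz(0.0))    # f = 1 and f = 2 manifolds separated by 6.83 GHz
print(levels_GHz(0.05))   # 500 G: eight Zeeman sublevels, cf. Fig. 1(a)
```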
The nuclear spins vary from 1 for \({}^{6}\)Li to 9/2 for \({}^{40}\)K, and the hyperfine splittings \(A_{\rm hfs}=\zeta_{\rm A}(i_{\rm A}+\frac{1}{2})/h\) vary from 228 MHz for \({}^{6}\)Li to 9.19 GHz for \({}^{133}\)Cs. We focus here on \({}^{87}\)Rb, with \(i=3/2\) and \(A_{\rm hfs}\approx 6.83\) GHz. The resulting levels are well known, but are shown in Fig. 1(a) for convenience. At zero field the levels are labeled by total angular momentum \(f_{\rm Rb}=1\) and 2. When a field is applied, each level splits into \(2f_{\rm Rb}+1\) sublevels, color-coded according to the projection \(m_{f,\rm Rb}\). At sufficiently high field, pairs of levels with \(f_{\rm Rb}=1\) and 2 but the same value of \(m_{f,\rm Rb}\) mix sufficiently that the levels are better described by \(m_{s,\rm Rb}\) and \(m_{i,\rm Rb}\) than by \(f_{\rm Rb}\). For \({}^{87}\)Rb this transition is still incomplete at 2000 G, but it occurs at much lower fields for alkali-metal atoms with small hyperfine splittings, such as Li and Na. The CaF or SrF molecule may be treated at different levels of complexity. The stable isotopes \({}^{40}\)Ca, \({}^{88}\)Sr, \({}^{86}\)Sr and \({}^{84}\)Sr all have zero nuclear spin, while \({}^{87}\)Sr has \(i=9/2\); only the spin-zero isotopes will be considered here. The simplest useful approximation is to neglect the molecular rotation, and in this case the molecular Hamiltonian \(\hat{h}_{\rm CaF}^{n=0}\) is the same as Eq. 1, with \(i_{\rm F}=1/2\) for \({}^{19}\)F in CaF. However, when rotation is included, several extra terms are needed. The ones important here are \[\hat{h}_{\rm CaF}^{\rm rhf}=b_{0}\hat{\bf n}^{2}+\gamma\hat{\bf s}_{\rm CaF}\cdot\hat{\bf n}+t\sqrt{6}\,T^{2}(C)\cdot T^{2}(\hat{\bf i}_{\rm F},\hat{\bf s}_{\rm CaF}), \tag{2}\] where \(\hat{\bf n}\) is the vector operator for the molecular rotation. The first term represents the rotational energy, the second represents the electron spin-rotation interaction, and the third accounts for the anisotropic interaction between electron and nuclear spins: \(T^{2}(\hat{\bf i},\hat{\bf s})\) is the rank-2 spherical tensor formed from \(\hat{\bf i}\) and \(\hat{\bf s}\), and \(T^{2}(C)\) is a spherical tensor whose components are the Racah-normalized spherical harmonics \(C_{q}^{2}(\theta,\phi)\) involving the orientation of the molecular axis. Values of \(b_{0}/h\approx 10.3\) GHz, \(\gamma/h\approx 40\) MHz, \(\zeta_{\rm F}/h\approx 120\) MHz and \(t/h\approx 14\) MHz are taken from refs. [45; 46]. A more complete version of Eq. 2, including additional contributions of the order of kHz that are unimportant here, has been given in ref. [47]. The full Hamiltonian for CaF is \(\hat{h}_{\rm CaF}=\hat{h}_{\rm CaF}^{n=0}+\hat{h}_{\rm CaF}^{\rm rhf}\). The resulting level diagram is shown as a function of magnetic field in Fig. 1(b), with expanded views for \(n=0\) and 1 in Figs. 1(c) and (d). There are only very small matrix elements that are off-diagonal in \(n\), so the levels for \(n=0\) are very similar to those of an alkali-metal atom with \(i_{\rm F}=1/2\). The hyperfine splitting is small, so \(i_{\rm F}\) and \(s_{\rm CaF}\) are mostly decoupled by 50 G. At higher field, the states are well described by \(m_{s,\rm CaF}\) and \(m_{i,\rm F}\). In a rotating molecule at low field, \(i_{\rm F}\) and \(s_{\rm CaF}=1/2\) couple to give a resultant \(g=0\) or 1, and \(g\) couples to the rotational angular momentum \(n\) to produce the total molecular angular momentum \(f_{\rm CaF}\). 
For \(n=1\), there are zero-field levels with \(f_{\rm CaF}=0\), 1, 1, 2, as labeled on Fig. 1(d). The lower level with \(f_{\rm CaF}=1\) is predominantly \(g=0\) and the remaining three are predominantly \(g=1\). In a magnetic field, however, \(i_{\rm F}\), \(s_{\rm CaF}\) and \(n\) are again mostly decoupled by 50 G; at higher fields, the states are better described by \(m_{s,\rm CaF}\), \(m_{i,\rm F}\) and \(m_{n}\) than by \(g\) and \(f_{\rm CaF}\). States of different \(m_{s,\rm CaF}\) are well separated; within the group for a particular value of \(m_{s,\rm CaF}\), there are 2 subgroups with \(m_{i,\rm F}=\pm\frac{1}{2}\), with splitting about \(\zeta/2=60\) MHz, and each subgroup is further divided into states with different \(m_{n}\), with adjacent states separated by about \(\gamma/2=20\) MHz. The projection quantum numbers are not fully conserved, but these qualitative arguments help to understand the general patterns at high field. ### Calculations of bound states and scattering The Hamiltonian for an alkali-metal atom interacting with a CaF molecule is \[\hat{H}=\frac{\hbar^{2}}{2\mu}\left(-R^{-1}\frac{d^{2}}{dR^{2}}R+\frac{\hat{\bf L }^{2}}{R^{2}}\right)+\hat{h}_{\rm A}+\hat{h}_{\rm CaF}+\hat{V}_{\rm int}, \tag{3}\] where \(R\) is the intermolecular distance, \(\mu\) is the reduced mass, \(\hat{\bf L}^{2}\) is the operator for relative rotation of the pair and \(\hat{V}_{\rm int}\) is the interaction operator described below. We carry out calculations of both bound states and scattering using coupled-channel methods [48; 27; 49]. The total wavefunction is expanded as \[\Psi(R,\xi)=R^{-1}\sum_{j}\Phi_{j}(\xi)\psi_{j}(R). \tag{4}\] Here \(\{\Phi_{j}(\xi)\}\) is a set of basis functions that span all coordinates except \(R\), including the relative rotation; these coordinates are collectively designated \(\xi\). In the coupled-channel calculations described in Sec. III.1, \(\xi\) includes only electron and nuclear spins. However, in more complete treatments, it may also include basis functions for overall rotation of the collision complex and rotation and vibration of CaF.

Figure 1: Energies as a function of magnetic field for (a) \({}^{87}\)Rb atom in ground \({}^{2}\)S state; (b) Lowest two rotational levels of CaF, with expanded views of \(n=0\) and 1 in (c) and (d), respectively; (e) Scattering thresholds of \({}^{87}\)Rb+CaF, with expanded views of \((f_{\rm Rb},n)=(1,0)\), (2,0), (1,1) and (2,1) in (f), (g), (h) and (i), respectively. All level energies are shown relative to the ground state at zero field and are color-coded as shown in the legend according to \(m_{f,\rm Rb}\), \(m_{f,\rm CaF}\) or \(M_{F}=m_{f,\rm Rb}+m_{f,\rm CaF}\), as appropriate; negative values are indicated by dashed lines.

Substituting the expansion (4) into the total Schrödinger equation produces a set of coupled differential equations that are solved by propagation with respect to the internuclear distance \(R\). The coupled equations are identical for bound states and scattering, but the boundary conditions are different. Scattering calculations are performed with the molscat package [50; 51]. Such calculations produce the scattering matrix \(\mathbf{S}\), for a single value of the collision energy and magnetic field each time. 
The complex s-wave scattering length \(a(k_{0})\) is obtained from the diagonal element of \(\mathbf{S}\) in the incoming channel, \(S_{00}\), \[a(k_{0})=\frac{1}{ik_{0}}\left(\frac{1-S_{00}(k_{0})}{1+S_{00}(k_{0})}\right), \tag{5}\] where \(k_{0}\) is the incoming wavenumber, related to the collision energy \(E_{\rm coll}\) by \(E_{\rm coll}=\hbar^{2}k_{0}^{2}/(2\mu)\). The scattering length \(a(k_{0})\) becomes constant at sufficiently low \(E_{\rm coll}\), with limiting value \(a\). In the present work, s-wave scattering lengths are calculated at \(E_{\rm coll}/k_{\rm B}=10\) nK, which is low enough to neglect the dependence on \(k_{0}\). A zero-energy Feshbach resonance occurs where a bound state of the atom-molecule pair (triatomic molecule) crosses a scattering threshold as a function of applied field. At the lowest threshold, or in the absence of inelastic processes, the scattering length is real. Near a resonance, \(a(B)\) passes through a pole, and is approximately \[a(B)=a_{\rm bg}\left(1-\frac{\Delta}{B-B_{\rm res}}\right), \tag{6}\] where \(B_{\rm res}\) is the position of the resonance, \(\Delta\) is its width, and \(a_{\rm bg}\) is a slowly varying background scattering length. In the presence of inelastic processes, \(a(B)\) is complex and the pole is replaced by an oscillation [52]. molscat can converge on Feshbach resonances automatically and characterize them to obtain \(B_{\rm res}\), \(\Delta\) and \(a_{\rm bg}\) (and the additional parameters needed in the presence of inelasticity) as described in ref. [53]. Coupled-channel bound-state calculations are performed using the packages bound and field [54; 55], which converge upon bound-state energies at fixed field, or bound-state fields at fixed energy, respectively. The methods used are described in ref. [56]. In the present work, the coupled equations for both scattering and bound-state calculations are solved using the fixed-step symplectic log-derivative propagator of Manolopoulos and Gray [57] from \(R_{\rm min}=3\ a_{0}\) to \(R_{\rm mid}=15\ a_{0}\), with an interval size of \(0.001\ a_{0}\), and the variable-step Airy propagator of Alexander and Manolopoulos [58] between \(R_{\rm mid}\) and \(R_{\rm max}\), where \(R_{\rm max}=300\ a_{0}\) for bound and field and \(3000\ a_{0}\) for molscat. ### The interaction operator Rb(\({}^{2}\)S) and CaF(\({}^{2}\Sigma\)) interact to give two electronic surfaces of \({}^{1}\)A\({}^{\prime}\) and \({}^{3}\)A\({}^{\prime}\) symmetry. These are to some extent analogous to the singlet and triplet curves of alkali-metal dimers: the singlet surface is expected to be deep, and the triplet surface much shallower. The surfaces have not been characterized in any detail, either experimentally or theoretically, but both of them are expected to be strongly anisotropic at short range. We designate them \(V^{S}(R,\theta)\), with \(S=0\) for the singlet and \(S=1\) for the triplet. Here \(\theta\) is the angle between the CaF bond and the intermolecular axis in Jacobi coordinates. The interaction operator is \[\hat{V}_{\rm int}=V^{0}(R,\theta)\hat{\mathcal{P}}^{0}+V^{1}(R,\theta)\hat{ \mathcal{P}}^{1}+\hat{V}^{\rm d}, \tag{7}\] where \(\hat{\mathcal{P}}^{0}\) and \(\hat{\mathcal{P}}^{1}\) are projection operators onto the singlet and triplet spin spaces, respectively, and \(\hat{V}^{\rm d}\) is a small electron spin-spin term described below. The Feshbach resonances of interest here depend mostly on the properties of near-threshold states. 
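A minimal numerical illustration of Eqs. (5) and (6) is given below: the first function converts an S-matrix element into the complex scattering length, and the second evaluates the resonant pole formula, here with the parameters of one of the wider resonances listed in Table 1 below (\(B_{\rm res}=914\) G, \(\Delta=-1.1\) G, \(a_{\rm bg}=-46\ a_{0}\)).

```python
import numpy as np

def scattering_length(S00, k0):
    """Complex s-wave scattering length from the S-matrix element, Eq. (5)."""
    return (1 - S00) / (1j * k0 * (1 + S00))

def a_of_B(B, B_res, Delta, a_bg):
    """Resonant scattering length a(B) of Eq. (6), in the same units as a_bg."""
    return a_bg * (1 - Delta / (B - B_res))

# Pole structure across the resonance at 914 G for the a_s = -79 a0,
# a_t = -47 a0 model of Table 1:
for B in (900.0, 913.5, 913.95, 914.05, 914.5, 930.0):
    print(f"B = {B:7.2f} G   a = {a_of_B(B, 914.0, -1.1, -46.0):10.1f} a0")
```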
These are bound by amounts comparable to the hyperfine and Zeeman splittings of Rb and CaF and (to a lesser extent) the low-lying rotational states of CaF. The most important states are those with binding energies less than about 30 GHz below their respective thresholds; this is considerably less than 0.1% of the expected singlet well depth. The binding energies of these states are dependent mostly on long-range dispersion and induction forces, which are the same for the singlet and triplet surfaces. The leading term is of the form \[V^{S}(R,\theta)=\left[-C_{6}^{(0)}-C_{6}^{(2)}P_{2}(\cos\theta)\right]R^{-6}, \tag{8}\] with \(C_{6}^{(0)}\approx 3084\ E_{\rm h}a_{0}^{6}\) [59]. For \(C_{6}^{(2)}\) there is substantial cancellation between the dispersion and induction contributions; we estimate \(C_{6}^{(2)}\approx 100(20)\ E_{\rm h}a_{0}^{6}\). For Rb+CaF, the outer turning point at a binding energy of 30 GHz is near 30 \(a_{0}\). Potential terms that are the same for the singlet and triplet surfaces cannot cause couplings between orthogonal spin states. They are therefore unlikely to cause magnetically tunable Feshbach resonances. The most important interactions that mix different spin states are spin-exchange interactions, due to the difference between the singlet and triplet surfaces. Julienne _et al._ [60] have shown that, for a pair of atoms, spin-exchange interactions can cause nonadiabatic transitions between coupled channels at distances \(R_{\rm X}\) where the interaction approximately matches the asymptotic energy difference between the channels concerned. For \({}^{87}\)Rb this occurs around 22 \(a_{0}\) [60]. The strength of the interaction is modulated by overall phases due to the short-range parts of the potentials for the channels concerned, and (if the long-range potentials are identical from \(R_{\rm X}\) to \(\infty\)) is smallest when the two channels have the same scattering length. There is also a spin-spin term \(\hat{V}^{\rm d}\) in the interaction operator that results from magnetic dipole-dipole interactions between the electron spins on Rb and CaF, supplemented at short range by second-order spin-orbit terms that have the same overall dependence on spin coordinates. This term is important for heavy alkali-metal atoms such as Cs [61], and may cause additional weak resonances in Rb+CaF as discussed below, but its effect is not considered in detail in the present work. ### Thresholds Figure 1(e) shows the scattering thresholds for \({}^{87}\)Rb+CaF, which are simply sums of energies of Rb and CaF. Figures 1(f) to 1(i) show expanded views of each group. The thresholds are color-coded according to \(M_{F}=m_{f,\text{Rb}}+m_{f,\text{CaF}}\), because this quantity is conserved in collisions if anisotropic terms in \(V_{\text{int}}\) are neglected. The importance of the thresholds lies in the fact that near-threshold levels lie approximately parallel to them, within well-defined energy intervals known as bins. This concept will be used extensively in discussing the patterns of near-threshold levels and the resulting resonances in the following sections. ### Near-threshold levels Each scattering threshold \(j\) supports a series of levels of the collision complex that have binding energies \(E_{j\eta}^{\text{b}}(B)\) below the threshold concerned. Here \(\eta\) is a vibrational quantum number, defined so that the least-bound rotationless state below each threshold is labeled \(\eta=-1\) and successively deeper levels are labeled \(-2\), \(-3\), etc. 
To a first approximation, the near-threshold levels retain the character of the threshold that supports them. Because of this, each level lies approximately parallel to the threshold that supports it and may be described in a single-channel approximation. There are nevertheless interactions between levels supported by different channels \(j\), which cause \(B\)-dependent shifts and avoided crossings between levels. These interactions, and the strengths of the resulting avoided crossings, generally become larger as \(|\eta|\) increases; these will be discussed below. For a single-channel system with an asymptotic potential \(-C_{6}R^{-6}\), the least-bound s-wave state (with \(L=0\) and \(\eta=-1\)) lies within \(\sim 36\tilde{E}\) of threshold, where \(\tilde{E}=\hbar^{2}/(2\mu\bar{a}^{2})\) and \(\bar{a}\) is the mean scattering length of Gribakin and Flambaum [62], \(\bar{a}=(2\mu C_{6}/\hbar^{2})^{1/4}\times 0.4779888\ldots\). We refer to this energy interval as the top bin. The position of the bound state within this bin depends on the background scattering length \(a_{\text{bg}}\) for the channel concerned, neglecting resonances (which themselves arise from couplings between channels). Each subsequent level (\(\eta=-2\), \(-3\), etc.) lies within its own bin, with successive bins becoming wider and bin boundaries at energies roughly proportional to \(|\eta+\frac{1}{2}|^{3}\) [63; 64]. For Rb+CaF, \(\bar{a}=67.3\ a_{0}\), \(\tilde{E}/h=11.4\) MHz, and the first 5 bin boundaries are at about 410, 2900, 9100, 21000 and 40000 MHz. These values may be shifted by the influence of terms beyond \(-C_{6}R^{-6}\). In general, the levels lie near the top of their bins when \(a_{\text{bg}}\gg\bar{a}\) and towards the bottom of the bins for \(a_{\text{bg}}\ll\bar{a}\). ## III Bound states and resonances in the absence of anisotropy ### Bound states below the lowest threshold The coupling between CaF rotational levels is fairly small at long range. It is driven mostly by the anisotropic part of the long-range interaction potential, characterized by \(C_{6}^{(2)}\). The effects of the anisotropy will be considered in Section IV. In this section we will consider a simpler model, with the anisotropy neglected. This is expected to be a reasonably good approximation for collisions involving CaF (\(n=0\)), though it will neglect some additional resonances considered later. If anisotropy is neglected, the scattering is largely controlled by the isotropic dispersion coefficient \(C_{6}^{(0)}\) and by scattering lengths \(a_{\text{s}}\) and \(a_{\text{t}}\) that characterize overall phases due to the short-range parts of the singlet and triplet potentials. These scattering lengths are completely unknown for Rb+CaF, so we explore the pattern of near-threshold bound states, and the resulting Feshbach resonances, for a representative sample of values of them. Scattering lengths take values from \(-\infty\) to \(+\infty\), but some values are more likely than others [62]. The most likely value is the mean scattering length \(\bar{a}\) defined above, and for a randomly chosen potential curve that decays as \(-C_{6}R^{-6}\) at long range there is a 50% probability of a scattering length between 0 and \(2\bar{a}\). To a good approximation, different interaction potentials that produce the same \(a_{\text{s}}\) and \(a_{\text{t}}\), and have the same value of \(C_{6}\), have the same low-energy scattering properties and near-threshold bound states. 
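The characteristic scales quoted above are easy to verify numerically. The sketch below evaluates \(\bar{a}\) and \(\tilde{E}\) for Rb+CaF from \(C_{6}^{(0)}\) and the reduced mass, reproducing \(\bar{a}\approx 67\ a_{0}\), \(\tilde{E}/h\approx 11\) MHz and the \(36\tilde{E}\approx 410\) MHz top bin; the isotopic masses are standard values, not taken from this paper.

```python
# Atomic units: hbar = m_e = 1, energies in hartree, lengths in bohr.
amu = 1822.888486                  # electron masses per unified atomic mass unit
Eh_Hz = 6.579683920502e15          # hartree in Hz
mu = (86.909 * 58.961) / (86.909 + 58.961) * amu   # reduced mass of 87Rb + 40Ca19F
C6 = 3084.0                        # isotropic C6 for Rb+CaF (hartree bohr^6) [59]

abar = 0.4779888 * (2 * mu * C6) ** 0.25           # mean scattering length (bohr)
Ebar = 1.0 / (2 * mu * abar**2)                    # energy scale E-tilde (hartree)
print(abar)                        # ~67 a0, as quoted in the text
print(Ebar * Eh_Hz / 1e6)          # E-tilde/h in MHz, ~11 MHz
print(36 * Ebar * Eh_Hz / 1e6)     # top-bin width 36*E-tilde/h in MHz, ~410 MHz
```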
We use singlet and triplet potential curves based on those for Cs [61], but with the value of \(C_{6}\) replaced with \(C_{6}^{(0)}\) for Rb+CaF. These potentials are then adjusted at short range to give the desired scattering length as described in ref. [61]. As an initial sample, we pick 3 values \(a_{\text{s}}=-79\), 71 and 242 \(a_{0}\) and \(a_{\text{t}}=-47\), 86 and 297 \(a_{0}\). These are purposely not exact multiples of \(\bar{a}\), because such values can produce shape resonances at atypically low energy, and are slightly different for \(a_{\text{s}}\) and \(a_{\text{t}}\), because \(a_{\text{s}}=a_{\text{t}}\) is a special case that produces unusually weak interchannel couplings [60]. We consider all 9 combinations of these values of \(a_{\text{s}}\) and \(a_{\text{t}}\). The solid lines in Figure 2 show the near-threshold energy levels for 4 combinations of \(a_{\text{s}}\) and \(a_{\text{t}}\), obtained from coupled-channel calculations using the package bound. In this case we use a basis set of fully uncoupled functions [65], including only rotationless functions, \(n=0\) and \(L=0\). All energies are shown with respect to the (field-dependent) energy of the lowest threshold, which has approximate quantum numbers \((f_{\text{Rb}},m_{f,\text{Rb}},m_{s,\text{CaF}},m_{i,\text{F}})=(1,1,-\frac{1}{ 2},\frac{1}{2})\) at fields above 50 G. All states shown have \(M_{F}=1\), which is the same as the lowest threshold, because spin-exchange interactions cannot change \(M_{F}\).

Figure 2: Near-threshold levels of Rb+CaF with \(M_{F}=1\), neglecting anisotropy, shown relative to the energy of the lowest threshold, for four representative combinations of the singlet and triplet scattering lengths. Solid black lines show results from coupled-channel calculations. Dashed (dot-dashed) lines show uncoupled states parallel to thresholds with \(f_{\rm Rb}=1\) (2). Values of \(m_{s,\rm CaF}\) are encoded with red (blue) for \(\frac{1}{2}\) (\(-\frac{1}{2}\)), with darker (lighter) colors for \(m_{i,\rm F}=\frac{1}{2}\) (\(-\frac{1}{2}\)). \(m_{f,\rm Rb}\) is given by \(M_{F}-m_{s,\rm CaF}-m_{i,\rm F}\). Above each plot of energies is the corresponding plot of scattering length, with Feshbach resonances where states cross threshold.

Also shown are dashed and dot-dashed
There is therefore a much wider avoided crossing between the near-threshold horizontal state and the upper state of the sloping pair, which is predominantly \(m_{i,\rm F}=\frac{1}{2}\), than with the lower one, which is predominantly \(m_{i,\rm F}=-\frac{1}{2}\). A case with somewhat stronger coupling is shown in Fig. 2(b), for \(a_{\rm s}=-79\ a_{0}\) and \(a_{\rm t}=-47\ a_{0}\). Here \(a_{\rm s}\) and \(a_{\rm t}\) are negative, so the states lie much deeper in their bins than in (a). The real states still lie close to the uncoupled ones, but there is a strong avoided crossing between the states shown as red dashed and blue dot-dashed lines. States approximately parallel to the thresholds with \(f_{\rm Rb}=1\) can again be identified, with the states in the second bin now originating from around \(-2.3\) GHz at zero field. These are echoed by similar states in the top bin. However, there are two further pairs of states; these are supported by thresholds with \(f_{\rm Rb}=2\), and lie in the third bin beneath their thresholds. The pair originating near \(-1.6\) GHz have approximate quantum numbers \((2,1,-\frac{1}{2},\frac{1}{2})\) (lower, involving ground-state CaF but excited Rb) and \((2,2,-\frac{1}{2},-\frac{1}{2})\) (upper), while the pair originating near \(-1.2\) GHz have \((2,0,\frac{1}{2},\frac{1}{2})\) (lower) and \((2,1,\frac{1}{2},-\frac{1}{2})\) (upper). Once again the strong avoided crossings are those between states with the same values of \(m_{i,\rm F}\). Figs. 2(c) and (d) show further examples for cases with much stronger coupling, with \(a_{\rm s}\) and \(a_{\rm t}\) substantially different. For these cases the identification of the dashed and dotted lines is less certain, because real states are substantially shifted from the uncoupled states by interchannel couplings. Plausible assignments are shown with the same coding as in (a) and (b). Additional bound states exist with \(M_{F}\neq 1\). These are not connected to the lowest incoming threshold by spin-exchange coupling. However, there are additional small couplings due to the spin-spin interaction \(\dot{V}^{\rm d}\). This has matrix elements off-diagonal in \(f_{\rm Rb}\), \(m_{f,\rm Rb}\) and \(m_{s,\rm CaF}\) by \(\pm 1\), but can change \(m_{f,\rm Rb}+m_{s,\rm CaF}\) by up to \(\pm 2\), with \(M_{L}\) changing by up to \(\mp 2\) to conserve \(M_{\rm tot}=M_{F}+M_{L}\). Rotationally excited states with \(L=2\) and \(M_{F}\) from \(-1\) to \(3\) can therefore cause additional resonances at the lowest threshold. These are expected to be narrow, and are not included in the present calculations because there is no information available on the strength of second-order spin-orbit coupling for Rb+CaF. States with other values of \(L\) and \(M_{F}\) might in principle cause resonances, but with higher-order coupling via \(\dot{V}^{\rm d}\), so the resonances will be even narrower. ### Resonances Each bound state with \(M_{F}=1\) causes a magnetically tunable Feshbach resonance where it crosses threshold as a function of \(B\). For all the cases considered, several such resonances exist at fields below 1000 G. However, their widths vary greatly. Figure 2 includes a panel above each energy-level plot that shows the variation of scattering length with magnetic field. In addition, we have characterized the resonances to extract \(B_{\rm res}\), \(\Delta\) and \(a_{\rm bg}\) for all resonances below 1000 G for all 9 of our representative combinations of \(a_{\rm s}\) and \(a_{\rm t}\), using the method of ref. 
[53], and the results are given in Table 1. The resonance widths may be rationalized using the same arguments about interchannel couplings used to interpret the strength of avoided crossings in Section III.1. First, the resonances are generally broadest in cases where \(a_{\rm s}\) and \(a_{\rm t}\) are substantially different, providing strong spin-exchange coupling. Secondly, for any given combination of \(a_{\rm s}\) and \(a_{\rm t}\), the strongest resonances are those where the bound state causing the resonance has a substantial component with the same value of \(m_{i,\rm F}\) as the incoming channel, which for the lowest threshold is dominated by \(m_{i,\rm F}=\frac{1}{2}\) at fields above 50 G. The specific uncoupled states that cause the widest resonances are \((1,0,\frac{1}{2},\frac{1}{2})\), \((2,1,-\frac{1}{2},\frac{1}{2})\) and \((2,0,\frac{1}{2},\frac{1}{2})\), though in some cases their character is spread across more than one real state. It is noteworthy that, even when \(a_{\rm s}\approx a_{\rm t}\) and spin-exchange coupling is weak, there are resonances that are wide enough to use to control collisions or form triatomic molecules by magnetoassociation. ## IV The Role of CaF Rotation There can also be resonances due to states supported by rotationally excited thresholds. This section will consider the structure of such states and the likelihood that they produce resonances at experimentally accessible fields. The thresholds for CaF (\(n=1\)) are from 20 to 30 GHz above the lowest threshold, so states that can cause Feshbach resonances must be bound by about this amount. The outer turning point at this depth is at around \(R=30\ a_{0}\). The potential anisotropy at this distance, due to dispersion and induction, is around 1 GHz. This is substantially less than the CaF rotational spacing, so it will cause only weak mixing between different CaF rotational states at this distance. \begin{table} \begin{tabular}{r r r r r r r r r r} \hline \hline \(a_{\rm s}\) (\(a_{0}\)) & \(a_{\rm t}\) (\(a_{0}\)) & \(B_{\rm res}\) (G) & \(\Delta\) (G) & \(a_{\rm bg}\) (\(a_{0}\)) & \(f_{\rm Rb}\) & \(m_{f,\rm Rb}\) & \(m_{s,\rm CaF}\) & \(m_{i,\rm F}\) & \(\eta\) \\ \hline \(-79\) & \(-47\) & \(50\) & \(-6.8\times 10^{-4}\) & \(-47\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-1\) \\ & & \(81\) & \(-2.3\times 10^{-4}\) & \(-47\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) \\ & & \(319\) & \(-8.2\times 10^{-3}\) & \(-46\) & \(2\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ & & \(375\) & \(-4.8\times 10^{-1}\) & \(-46\) & \(2\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) \\ & & \(658\) & \(-1.7\times 10^{-2}\) & \(-46\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) \\ & & \(692\) & \(-1.0\times 10^{-2}\) & \(-46\) & \(2\) & \(2\) & \(-\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ & & \(843\) & \(-8.6\times 10^{-4}\) & \(-45\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ & & \(914\) & \(-1.1\) & \(-46\) & \(2\) & \(1\) & \(-\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) \\ \(-79\) & \(86\) & \(164\) & \(16\) & \(112\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) * \\ & & \(188\) & \(3.3\) & \(27\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ & & \(688\) & \(49\) & \(87\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) * \\ & & \(806\) & \(3.0\times 10^{-2}\) & \(51\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ \(-79\) & \(297\) & \(72\) & \(24\) & \(577\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-1\) * \\ & & \(124\) & \(1.8\) & 
\(273\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) \\ & & \(599\) & \(88\) & \(383\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) * \\ & & \(725\) & \(5.7\times 10^{-2}\) & \(117\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ & & \(934\) & \(9.5\times 10^{-2}\) & \(294\) & \(2\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ & & \(953\) & \(3.8\times 10^{-1}\) & \(292\) & \(2\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) \\ \(71\) & \(-47\) & \(99\) & \(-32\) & \(-82\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-1\) \\ & & \(134\) & \(-2.4\times 10^{-1}\) & \(-144\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) \\ & & \(343\) & \(-1.9\times 10^{-1}\) & \(-49\) & \(2\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ & & \(455\) & \(-27\) & \(-51\) & \(2\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) \\ & & \(689\) & \(-3.2\) & \(-50\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) \\ & & \(776\) & \(-3.9\times 10^{-2}\) & \(-49\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ \(71\) & \(86\) & \(312\) & \(1.1\times 10^{-1}\) & \(85\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) \\ & & \(413\) & \(9.3\times 10^{-4}\) & \(85\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ \(71\) & \(297\) & \(172\) & \(18\) & \(202\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-2\) * \\ & & \(285\) & \(9.5\times 10^{-2}\) & \(185\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-2\) \\ & & \(860\) & \(2.9\times 10^{-2}\) & \(279\) & \(2\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ & & \(952\) & \(3.2\times 10^{-1}\) & \(294\) & \(2\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) \\ \(242\) & \(-47\) & \(61\) & \(-2.8\) & \(-65\) & \(1\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-1\) \\ & & \(99\) & \(-1.8\times 10^{-1}\) & \(-67\) & \(1\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-1\) \\ & & \(331\) & \(-9.6\times 10^{-2}\) & \(-50\) & \(2\) & \(1\) & \(\frac{1}{2}\) & \(-\frac{1}{2}\) & \(-3\) \\ & & \(414\) & \(-10\) & \(-54\) & \(2\) & \(0\) & \(\frac{1}{2}\) & \(\frac{1}{2}\) & \(-3\) \\ & will cause only weak mixing between different CaF rotational states at this distance. However, it is substantially larger than the rotational constant of the triatomic complex, \(B=\hbar^{2}/(2\mu R^{2})\), which is about 60 MHz at this distance. It is also larger than the spin-rotation coupling constant, \(\gamma\approx 40\) MHz. The long-range anisotropy is thus sufficient to quantize \(n\) along the intermolecular axis, with projection \(K\), instead of along the axis of the field. This is exactly analogous to the situation for Van der Waals complexes in coupling case 2 [66]. For each CaF rotational level \((n,K)\) there will be a set of spin states, labeled at fields above 50 G by \((f_{\rm Rb},m_{f,\rm Rb},m_{s,\rm CaF},m_{i,\rm F})\). Each such set \((n,K)\) will sample the short-range singlet and triplet potentials over a different range of Jacobi angles \(\theta\), so each group will be characterized by different singlet and triplet scattering lengths \(a_{\rm s}(n,K)\) and \(a_{\rm t}(n,K)\). These will probably be unrelated to the corresponding quantities for the channels with \(n=0\), \(a_{\rm s}(0,0)\) and \(a_{\rm t}(0,0)\) (designated simply \(a_{\rm s}\) and \(a_{\rm t}\) in Sec. III.1). 
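For reference, the quantities \((B_{\rm res},\Delta,a_{\rm bg})\) listed in Table 1 define the scattering length through the conventional single-resonance form \(a(B)=a_{\rm bg}\left[1-\Delta/(B-B_{\rm res})\right]\). The following minimal sketch is our own illustration, not part of the coupled-channel calculations; it evaluates this form for the first entry of the table.

```python
# Scattering length near an isolated Feshbach resonance (illustration only):
# a(B) = a_bg * (1 - Delta / (B - B_res)), parameters from the first row of
# Table 1 (a_s = -79 a_0, a_t = -47 a_0).

def scattering_length(B, B_res, Delta, a_bg):
    """Return a(B) in units of a_0; B, B_res and Delta are in gauss."""
    return a_bg * (1.0 - Delta / (B - B_res))

B_res, Delta, a_bg = 50.0, -6.8e-4, -47.0

for B in (49.9, 49.9995, 50.0005, 50.1):  # fields bracketing the pole at B_res
    a = scattering_length(B, B_res, Delta, a_bg)
    print(f"B = {B:8.4f} G -> a = {a:12.3f} a_0")
```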
For a particular interaction potential, the sets of spin states for \(n>0\) may therefore lie at quite different depths within their bins from those for \(n=0\). The patterns of levels will nevertheless be characterized by \(a_{\rm s}(n,K)\) and \(a_{\rm t}(n,K)\) and by quantum numbers \((f_{\rm Rb},m_{f,\rm Rb},m_{s,\rm CaF},m_{i,\rm F})\), in a similar way to those for the states with \(n=0\) described above.

For Rb+CaF, the spin-exchange interaction may be characterized in terms of an anisotropic surface \(V^{-}(R,\theta)\) that is half the difference between the singlet and triplet surfaces, \[V^{-}(R,\theta)=\tfrac{1}{2}\left[V^{0}(R,\theta)-V^{1}(R,\theta)\right]. \tag{9}\] This may be expanded in Legendre polynomials, \[V^{-}(R,\theta)=\sum_{\lambda}V^{-}_{\lambda}(R)P_{\lambda}(\cos\theta). \tag{10}\] Such a potential is diagonal in \(K\), but each term in the expansion can couple \((n,K)=(0,0)\) to \((\lambda,0)\). The term \(V^{-}_{1}(R)\) can thus couple an incoming state at the lowest threshold to states with \((n,K)=(1,0)\). The spin selection rules are the same as for \(n=0\), so the strongest resonances will be those due to states dominated by \(m_{i,\rm F}=\tfrac{1}{2}\). As for \(n=0\), there are 3 such uncoupled states, with quantum numbers \((f_{\rm Rb},m_{f,\rm Rb},m_{s,\rm CaF},m_{i,\rm F})=(1,0,\tfrac{1}{2},\tfrac{1}{2})\), \((2,1,-\tfrac{1}{2},\tfrac{1}{2})\) and \((2,0,\tfrac{1}{2},\tfrac{1}{2})\). \(V^{-}(R,\theta)\) is strongly anisotropic at short range, so there will always be some intermolecular distance \(R\) where it matches the separation between the incoming and resonant thresholds, at which nonadiabatic couplings can occur by extension of the theory of ref. [60].

As seen in Sec. III.1, the states that can cause strong resonances traverse about 3 GHz of binding energy between zero field and 1000 G. Since the thresholds with \((n,f_{\rm Rb})=(1,1)\) lie about 20 GHz above \((0,1)\), the zero-field binding energy of a state must be between 20 and 23 GHz if it is to cause a Feshbach resonance below 1000 G. For an unknown potential, there is only about a 19% probability that the background scattering length for a single channel will lie in a range to give such a binding energy. The corresponding probability for \(n=2\) is about 9%, and the probabilities decrease for successively higher \(n\), because the bins are correspondingly wider at the required binding energy.

The overall conclusion of this section is that there _may_ be resonances due to states involving rotationally excited CaF, but that they will occur at fields below 1000 G for a fairly small subset of possible interaction potentials. In any case, the mixing between rotational states of CaF due to long-range anisotropy is weak enough that it will not affect the likelihood of resonances due to the ground rotational state.

## V Potential effects of chaos

The interaction potentials for Rb+CaF are very strongly anisotropic at short range, and provide strong coupling between CaF rotational and vibrational states. It is quite likely that Rb+CaF will possess short-range states that exhibit quantum chaos, in the same way as alkali-metal 3-atom [67, 39] and 4-atom systems [68, 69]. The onset of chaos has also been studied in Li+CaH and Li+CaF [70]. For Rb+CaF, the density of short-range singlet vibrational states at the energy of the lowest threshold has been estimated as 4 K\({}^{-1}\) [41], corresponding to a mean spacing of 5 GHz.
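The quoted mean spacing is a simple unit conversion from the density of states; a quick check (our own arithmetic, using \(k_{\rm B}/h\approx 20.84\) GHz/K) follows. The second call anticipates the triplet estimate discussed below, where the density is taken to be roughly an order of magnitude smaller.

```python
# Unit-conversion check of the quoted mean level spacings (illustration only).
KB_OVER_H_GHZ_PER_K = 20.8366  # k_B/h in GHz per kelvin

def mean_spacing_ghz(density_per_K):
    """Mean level spacing (GHz) for a density of states given in K^-1."""
    return KB_OVER_H_GHZ_PER_K / density_per_K

print(mean_spacing_ghz(4.0))   # singlet estimate of 4 K^-1 -> ~5.2 GHz
print(mean_spacing_ghz(0.4))   # density ten times smaller   -> ~52 GHz
```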
If these states are fully chaotic, this is likely to produce structure in the singlet scattering length on this energy scale. However, the hyperfine couplings in singlet states will be small, probably dominated by nuclear electric quadrupole couplings of no more than a few MHz, which is tiny compared to the state separations. Furthermore, Zeeman shifts are very small for singlet states. At most, the presence of chaos at short range might make the singlet scattering length different for collisions involving Rb(\(f=1\)) and Rb(\(f=2\)). This would affect the details of the level structure, but not the probabilities of observing Feshbach resonances.

The density of short-range triplet states at threshold is likely to be much smaller, perhaps by an order of magnitude. This corresponds to a mean spacing of order 50 GHz. The difference arises because the density of states for an atom-diatom system scales approximately with \(D^{3/2}\) [39], where \(D\) is the well depth, and the triplet surface of Rb+CaF is expected to be substantially shallower than the singlet surface, as for the alkali-metal dimers. The hyperfine couplings for triplet states will be comparable to those for the separated atom and molecule (6.8 GHz for Rb, 120 MHz for CaF) but these are still substantially smaller than the likely spacings between short-range triplet states. Zeeman effects are also much larger for triplet states than for singlet states, but are still only a few GHz at fields below 1000 G, so will not cause substantial mixings between short-range triplet states. It thus appears that the qualitative arguments in this paper about the patterns of energy levels and likelihood of Feshbach resonances will remain valid even if the short-range levels of Rb+CaF exhibit quantum chaos.

## VI Conclusions

We have investigated magnetically tunable Feshbach resonances that may be expected in collisions between molecules in \({}^{2}\Sigma\) states and alkali-metal atoms, focussing on the prototype system Rb+CaF. The details of the short-range interaction potential are unknown, but expected to have minor influence, except to determine singlet and triplet scattering lengths \(a_{\mathrm{s}}\) and \(a_{\mathrm{t}}\). We have carried out coupled-channel calculations of the near-threshold bound states and scattering properties for a variety of values of these scattering lengths. We find that the large majority of plausible interaction potentials produce multiple resonances at magnetic fields below 1000 G, which are likely to be experimentally accessible. In each case, at least some of these resonances are wide enough to be experimentally useful for tuning scattering lengths or for forming triatomic molecules by magnetoassociation.

The patterns of bound states may be understood in terms of underlying uncoupled states that lie parallel to atom-molecule thresholds as a function of magnetic field. There are varying degrees of coupling between these states, which depend on the values of \(a_{\mathrm{s}}\) and \(a_{\mathrm{t}}\). The coupling is weakest when \(a_{\mathrm{s}}\) and \(a_{\mathrm{t}}\) are similar. The widths of the resonances may be explained in terms of the nature of the states that cross threshold, together with effects due to the scattering lengths.

We have considered the effect of potential anisotropy, which causes coupling between CaF rotational states. This coupling is very strong at short range.
Even at long range, it is sufficient to quantize the CaF rotation along the intermolecular axis instead of along the magnetic field. It is likely that each rotational state of CaF will be characterized by different values of the singlet and triplet scattering lengths. We have found that there is a small but significant probability of additional wide resonances due to states supported by rotationally excited thresholds.

We have also considered the potential influence of chaotic behavior for short-range states of Rb+CaF. We expect that, even if present, it will have limited effects on the long-range states that are principally responsible for the resonances and will not change the qualitative conclusions.

This work indicates that atom-molecule systems such as Rb+CaF will have a rich spectrum of magnetically tunable Feshbach resonances at experimentally accessible magnetic fields. The resonances can be used to form a more detailed understanding of the atom-diatom potential energy surfaces. Much new physics will be accessible when these resonances are located. For example, a resonance can be used to tune the s-wave scattering length for interspecies collisions. In this way we can expect to find favorable conditions for sympathetic cooling, which can greatly increase the phase-space density of the molecular gas. The resonances may also be used to form polyatomic molecules by magnetoassociation. Many applications have already been identified for such molecules. They have unique advantages for probing interactions beyond the Standard Model that violate time-reversal symmetry [71, 72] and for testing theories of ultralight dark matter [73]. Their usefulness for quantum information processing has been highlighted [74, 75], and the very large number of stable, accessible internal states makes them interesting as qudits [76]. They can also be used to explore a rich diversity of many-body phenomena such as quantum magnetism [77].

## Data availability statement

The data presented in this work are available from Durham University [URL to be supplied]

## Acknowledgement

We are grateful to Matthew Frye for valuable discussions and to Ruth Le Sueur for software support. This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC) Grant Nos. EP/P01058X/1, EP/W00299X/1, EP/V011499/1 and EP/V011677/1.
2309.10793
Fano varieties with torsion in the third cohomology group
We construct first examples of Fano varieties with torsion in their third cohomology group. The examples are constructed as double covers of linear sections of rank loci of symmetric matrices, and can be seen as higher-dimensional analogues of the Artin--Mumford threefold. As an application, we answer a question of Voisin on the coniveau and strong coniveau filtrations of rationally connected varieties.
John Christian Ottem, Jørgen Vold Rennemo
2023-09-19T17:41:15Z
http://arxiv.org/abs/2309.10793v2
# Fano varieties with torsion in the third cohomology group

###### Abstract.

We construct first examples of Fano varieties with torsion in their third cohomology group. The examples are constructed as double covers of linear sections of rank loci of symmetric matrices, and can be seen as higher-dimensional analogues of the Artin-Mumford threefold. As an application, we answer a question of Voisin on the coniveau and strong coniveau filtrations of rationally connected varieties.

## 1. Introduction

If \(X\) is a nonsingular complex projective variety, the torsion subgroup of the integral cohomology group \(H^{3}(X,\mathbb{Z})\) is an important stable birational invariant. It was introduced by Artin and Mumford in [2], where they used the invariant to show that a certain unirational threefold is not rational. For rationality questions, perhaps the most interesting class of varieties is that of Fano varieties, that is, smooth varieties with ample anticanonical divisor. In dimension at most \(2\), these are all rational, with \(H^{3}(X,\mathbb{Z})=0\). In dimension \(3\), there are \(105\) deformation classes of Fano varieties [20, 21, 23], and direct inspection shows that in each class the group \(H^{3}(X,\mathbb{Z})\) is torsion free. Beauville asked on MathOverflow whether the same statement holds for Fano varieties in all dimensions [1].1 In this paper, we answer the question in the negative.

Footnote 1: An incorrect counterexample is proposed in the answer to [1]; see Section 4.4.

**Theorem 1.1**.: _For each even \(d\geq 4\), there is a \(d\)-dimensional Fano variety \(X\) of Picard rank 1 with \(H^{3}(X,\mathbb{Z})=\mathbb{Z}/2\)._

As a consequence, by [7, 15], the variety \(X\) is rationally connected but not stably rational. We do not know if it is unirational. The \(X\) in the theorem is a complete intersection in a double cover of the space of rank \(\leq 4\) quadrics in \(\mathbb{P}^{d/2+2}\). The families of maximal linear subspaces of these quadrics give Brauer-Severi varieties over \(X\), and via the isomorphism \(\operatorname{Br}(X)\cong\operatorname{Tors}H^{3}(X,\mathbb{Z})\), the associated Brauer class maps to the nonzero element in \(H^{3}(X,\mathbb{Z})\). Our examples can be regarded as higher-dimensional analogues of the Artin-Mumford threefold from [2], whose construction is closely related to that of our \(X\) (see Section 4.3). Starting in dimension \(6\), the Fano varieties we consider have a further exotic property.

**Theorem 1.2**.: _When \(d\geq 6\), the \(d\)-dimensional Fano variety \(X\) from Theorem 1.1 has the property that the coniveau and strong coniveau filtrations differ. More precisely,_ \[0=\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\subsetneqq N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z})\cong\mathbb{Z}/2. \tag{1.1}\]

The two coniveau filtrations \(\widetilde{N}^{c}H^{l}(X,\mathbb{Z})\subseteq N^{c}H^{l}(X,\mathbb{Z})\) of \(H^{l}(X,\mathbb{Z})\) were introduced in the paper [4]. The subgroups of the filtrations contain the cohomology classes in \(H^{l}(X,\mathbb{Z})\) obtained via pushforward from smooth projective varieties (resp. possibly singular projective varieties) of codimension at least \(c\). In the case \(c=1\), \(l=3\), they are described as follows. The group \(N^{1}H^{3}(X,\mathbb{Z})\) consists of classes in \(H^{3}(X,\mathbb{Z})\) supported on some divisor of \(X\).
Its subgroup \(\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\) consists of pushforwards \(f_{*}\beta\) of classes \(\beta\in H^{1}(S,\mathbb{Z})\) via proper maps \(f:S\to X\) where \(S\) is nonsingular of dimension \(\dim X-1\). An inequality between the two coniveau filtrations is particularly interesting for \(c=1\) because for each \(l\geq 0\), the quotient \[N^{1}H^{l}(X,\mathbb{Z})/\widetilde{N}^{1}H^{l}(X,\mathbb{Z}) \tag{1.2}\] is a stable birational invariant for smooth projective varieties [4, Proposition 2.4]. While the examples of [4] show that this quotient can be non-zero in general, it is known to be zero for certain classes of varieties. Voisin [30] proved that for a rationally connected threefold, any class in \(H^{3}(X,\mathbb{Z})\) modulo torsion lies in \(\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\). Tian [27, Theorem 1.23] strengthened this to show that \(H^{3}(X,\mathbb{Z})=\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\) for any rationally connected threefold. Theorem 1.2 shows that the quotient (1.2) can be non-zero for rationally connected \(X\) of higher dimension, answering a question of Voisin (see [30, Question 3.1] and [4, Section 7.2]).

The paper is organised as follows. Section 2 begins with background on the geometry of symmetric determinantal loci and their double covers. In Section 2.2, we explain how these symmetric determinantal loci and their double covers are GIT quotients of affine space by an action of an orthogonal similitude group. In Section 2.3 (more specifically Definition 2.13), we define the main examples in Theorem 1.1 as linear sections of the double covers of symmetric rank loci. In Section 3, we use the presentation of the double symmetric rank loci as GIT quotients to show that their smooth part has non-trivial torsion classes \(\alpha\in H^{3}(X,\mathbb{Z})\). Taking a linear section and applying a generalised Lefschetz hyperplane theorem then proves Theorem 1.1, restated more precisely as Theorem 4.1. In Section 4, we study some special examples appearing in our construction and compute their geometric invariants, in particular, the "minimal" example of a \(4\)-dimensional Fano variety. In the final Section 5 we prove Theorem 1.2, restated precisely as Theorem 5.3. The key point is that the mod \(2\) reduction of the generator \(\alpha\) of \(H^{3}(X,\mathbb{Z})\) satisfies \(\overline{\alpha}^{2}\neq 0\pmod{2}\), which implies that \(\alpha\) is not of strong coniveau \(1\) by a topological obstruction described in [4].

We would like to thank N. Addington, O. Benoist, J. Kollár, S. Schreieder, F. Suzuki and C. Voisin for useful discussions. The work on this paper was begun at the Oberwolfach workshop _Algebraic Geometry: Moduli Spaces, Birational Geometry and Derived Aspects_ in the summer of 2022. J.V.R. is funded by the Research Council of Norway grant no. 302277.

### Notation

We work over the complex numbers \(\mathbb{C}\). We use the notation for projective bundles where \(\mathbb{P}(\mathscr{E})\) consists of lines in \(\mathscr{E}\). By a Fano variety we mean a nonsingular projective variety with ample anticanonical bundle.

## 2. Symmetric determinantal loci and related varieties

Here we survey basic facts on symmetric determinantal loci. Some of these are well known; we in particular follow the works of Hosono-Takagi [13, Section 2] and Tyurin [28].
Let \(V=\mathbb{C}^{n}.\) We identify \(\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\) with the space of quadrics in \(\mathbb{P}(V)\) and let \(Z_{r,n}\subset\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\) denote the subset of quadrics of rank \(r\). \(Z_{r,n}\) is a quasi-projective variety; its closure \(\overline{Z}_{r,n}\) parameterizes the quadrics of rank \(\leq r\) and is defined by the vanishing of the \((r+1)\times(r+1)\)-minors of a generic \(n\times n\) symmetric matrix. These give a nested chain of subvarieties of \(\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\), \[\overline{Z}_{1,n}\subset\overline{Z}_{2,n}\subset\ldots\subset\overline{Z}_{n,n}=\mathbb{P}(\operatorname{Sym}^{2}V^{\vee}),\] where \(\overline{Z}_{1,n}\) is the 2nd Veronese embedding of \(\mathbb{P}^{n-1}\), and \(\overline{Z}_{n-1,n}\) is the degree \(n\) hypersurface defined by the determinant.

**Proposition 2.1**.: _The variety \(Z_{r,n}\) is irreducible of dimension_ \[\dim Z_{r,n}=rn-\tfrac{1}{2}r^{2}+\tfrac{1}{2}r-1. \tag{2.1}\] _The singular locus of \(\overline{Z}_{r,n}\) is \(\overline{Z}_{r-1,n}\), unless \(r=1\) or \(r=n\), in which case \(\overline{Z}_{r,n}\) is smooth._

This proposition can be checked using the incidence variety \(\widetilde{Z}_{r,n}\) parameterizing \((n-r-1)\)-planes contained in the singular loci of quadrics \[\widetilde{Z}_{r,n}=\big{\{}([L],Q)\,\big{|}\,\mathbb{P}(L)\subset\operatorname{sing}(Q)\big{\}}\subset\operatorname{Gr}(n-r,V)\times\mathbb{P}(\operatorname{Sym}^{2}V^{\vee}).\] For \([L]\in\operatorname{Gr}(n-r,V)\), the fiber of the first projection \(\pi_{1}\) can be identified with the space of quadrics in \(\mathbb{P}(V/L)\simeq\mathbb{P}^{r-1}\), so \(\pi_{1}\) is a \(\mathbb{P}^{r(r+1)/2-1}\)-bundle over \(\operatorname{Gr}(n-r,V)\). It follows that \(\widetilde{Z}_{r,n}\) is nonsingular, and its dimension is given by (2.1). Moreover, it is straightforward to check that the second projection gives a desingularization \(\pi_{2}:\widetilde{Z}_{r,n}\to\overline{Z}_{r,n}.\) For the claim about the singular locus, see [13, Section 2].

**Example 2.2**.: For \(n=5\), \(\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\simeq\mathbb{P}^{14}\), and there are 4 closed rank loci:
* \(\overline{Z}_{4,5}\) is a quintic hypersurface defined by the determinant of the generic \(5\times 5\)-symmetric matrix.
* \(\overline{Z}_{3,5}\) is a codimension 3 subvariety of degree 20.
* \(\overline{Z}_{2,5}\simeq\operatorname{Sym}^{2}\mathbb{P}^{4}\) is a codimension 6 subvariety of degree 35.
* \(\overline{Z}_{1,5}\) is the 2nd Veronese embedding of \(\mathbb{P}^{4}\) in \(\mathbb{P}^{14}\); it is a fourfold of degree 16.
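As a quick sanity check, the dimension formula (2.1) reproduces the dimensions and codimensions quoted in Example 2.2. The following sketch is our own verification; the only added ingredient is the ambient dimension \(\dim\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})=n(n+1)/2-1\).

```python
# Check of Proposition 2.1 / Example 2.2 for n = 5 (illustration only).
from fractions import Fraction

def dim_Z(r, n):
    """dim Z_{r,n} = rn - r^2/2 + r/2 - 1, formula (2.1)."""
    return Fraction(r) * n - Fraction(r * r, 2) + Fraction(r, 2) - 1

n = 5
ambient = n * (n + 1) // 2 - 1       # dim P(Sym^2 V^vee) = 14
for r in (4, 3, 2, 1):
    d = dim_Z(r, n)
    print(f"Z_{r},{n}: dim = {d}, codim = {ambient - d}")
# -> codimensions 1, 3, 6, 10: the quintic hypersurface, the codimension-3
#    locus of degree 20, Sym^2 P^4, and the Veronese fourfold.
```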
### Double covers

We will only be interested in the case when the rank \(r\) is even. In this case, we can define a double cover \[\sigma:W_{r,n}\longrightarrow\overline{Z}_{r,n}\] which is ramified exactly over the locus \(\overline{Z}_{r-1,n}\), of codimension \(n-r+1\) in \(\overline{Z}_{r,n}.\) The construction is based on the classical fact that for a quadric \(Q\) of rank \(r\) in \(n\) variables, the variety of \((n-r/2-1)\)-planes in \(Q\subset\mathbb{P}^{n-1}\) is isomorphic to the orthogonal Grassmannian \(\operatorname{OG}(r/2,r)\), which has two connected components. The formal construction of \(W_{r,n}\) from this observation starts with the incidence variety \[U_{r,n}=\left\{(L,Q)\ |\ Q\in\overline{Z}_{r,n}\ \text{and}\ \mathbb{P}(L)\subset Q\right\}\subset\operatorname{Gr}(n-r/2,V)\times\overline{Z}_{r,n}. \tag{2.2}\] Taking the Stein factorisation of the projection \(U_{r,n}\to\overline{Z}_{r,n}\) we get a new variety \(W_{r,n}\) and morphisms \[\eta:U_{r,n}\to W_{r,n}\text{ and }\sigma:W_{r,n}\to\overline{Z}_{r,n}, \tag{2.3}\] where \(\eta\) has connected fibres and \(\sigma\) is finite. The fibre of \(\eta\) at a general point of \(W_{r,n}\) is isomorphic to a connected component of \(\operatorname{OG}(r/2,r)\). The morphism \(\sigma\) is a double cover, ramified exactly along \(\overline{Z}_{r-1,n}\) (see [13, Proposition 2.3]). For the remainder of the paper, we will let \[H=\sigma^{*}\mathcal{O}_{\overline{Z}_{r,n}}(1)\] be the pullback of the polarization from \(\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\). The basic geometric properties of \(W_{r,n}\) are as follows.

**Proposition 2.3**.: \(W_{r,n}\) _has Gorenstein canonical singularities contained in \(\sigma^{-1}(\overline{Z}_{r-2,n})\). It has Picard number 1 and its anticanonical divisor is_ \[-K_{W_{r,n}}=\frac{rn}{2}H. \tag{2.4}\] _In particular, \(W_{r,n}\) is a singular Fano variety._

Proof.: See [13, Proposition 2.5].

**Example 2.4**.: In the setting of Example 2.2, \(-K_{W_{4,5}}=10H\) follows because \(\overline{Z}_{4,5}\subset\mathbb{P}^{14}\) is a quintic hypersurface and \(\sigma\) is etale away from \(\overline{Z}_{3,5}\), which has codimension \(2\) in \(\overline{Z}_{4,5}\).

### (Double) symmetric determinantal loci as GIT quotients

In this section, we explain how the varieties \(\overline{Z}_{r,n}\) and \(W_{r,n}\) can be presented as GIT quotients of affine spaces, which is a key ingredient in the cohomology computations needed in Theorems 1.1 and 1.2. Let \(r\) be even, let \(S=\mathbb{C}^{r}\), and let \(\omega_{S}\in\operatorname{Sym}^{2}S^{\vee}\) be a nondegenerate quadratic form. The _orthogonal similitude group_ \(\operatorname{GO}(S)\subset\operatorname{GL}(S)\) consists of the linear automorphisms of \(S\) which preserve \(\omega_{S}\) up to scaling.2 In other words, an invertible linear map \(\phi\colon S\to S\) lies in \(\operatorname{GO}(S)\) if there exists a \(\chi(\phi)\in\mathbb{C}^{*}\) such that for all \(v\in S\),

Footnote 2: The group \(\operatorname{GO}(S)\) could more properly be denoted \(\operatorname{GO}(S,\omega_{S})\), but since the choice of \(\omega_{S}\) does not matter, we omit it from the notation.

\[\chi(\phi)\omega_{S}(v,v)=\omega_{S}(\phi(v),\phi(v)).\]

The map \(\chi\colon\operatorname{GO}(S)\to\mathbb{C}^{*}\) defined by this relation is a group homomorphism, and we have an exact sequence \[1\longrightarrow\operatorname{O}(S)\longrightarrow\operatorname{GO}(S)\stackrel{{\chi}}{{\longrightarrow}}\mathbb{C}^{*}\longrightarrow 1.\] The group \(\operatorname{GO}(S)\) naturally acts on the orthogonal Grassmannian \(\operatorname{OG}(r/2,S)\). The variety \(\operatorname{OG}(r/2,S)\) has two connected components, and the action of \(\operatorname{GO}(S)\) on this two-element set gives an exact sequence \[1\longrightarrow\operatorname{GO}(S)^{\circ}\longrightarrow\operatorname{GO}(S)\longrightarrow\mu_{2}\longrightarrow 1\] where \(\operatorname{GO}(S)^{\circ}\) is connected.
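The similitude character can be seen concretely in coordinates. The following small numerical check is our own illustration, over \(\mathbb{R}\) with the standard dot product standing in for \(\omega_{S}\): an element \(\phi=\lambda Q\) with \(Q\) orthogonal is a similitude with \(\chi(\phi)=\lambda^{2}\).

```python
# Numerical check that phi = lambda * Q (Q orthogonal) scales the form by lambda^2.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal 4x4 matrix
lam = 3.0
phi = lam * Q

v = rng.standard_normal(4)
ratio = ((phi @ v) @ (phi @ v)) / (v @ v)   # omega(phi v, phi v) / omega(v, v)
print(ratio, lam**2)                        # both ~9.0 = chi(phi)
```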
We further have \(\operatorname{SO}(S)=\operatorname{O}(S)\cap\operatorname{GO}(S)^{\circ}\), and an exact sequence \[1\longrightarrow\operatorname{SO}(S)\longrightarrow\operatorname{GO}(S)^{\circ}\stackrel{{\chi}}{{\longrightarrow}}\mathbb{C}^{*}\longrightarrow 1.\]

Consider now the affine space \(\operatorname{Hom}(V,S)\simeq\mathbb{A}^{rn}.\) The group \(\operatorname{GO}(S)\) acts on \(\operatorname{Hom}(V,S)\) via \[\operatorname{GO}(S)\times\operatorname{Hom}(V,S)\to\operatorname{Hom}(V,S)\] \[(\phi,f)\mapsto\phi\circ f.\] We have a morphism of affine spaces \[\tau\colon\operatorname{Hom}(V,S)\to\operatorname{Sym}^{2}V^{\vee},\] defined by, for any \(f\in\operatorname{Hom}(V,S)\) and \(v,w\in V\), \[\tau(f)(v,w)=f^{*}\omega_{S}(v,w)=\omega_{S}(f(v),f(w)).\] Let \(CZ_{r,n}\subset\operatorname{Sym}^{2}V^{\vee}\) be the subset of \(\operatorname{Sym}^{2}V^{\vee}\) corresponding to quadratic forms of rank \(r\), so that \(Z_{r,n}=CZ_{r,n}/\mathbb{C}^{*}.\) The set \(\tau^{-1}(CZ_{r,n})\subset\operatorname{Hom}(V,S)\) consists of the \(f\colon V\to S\) such that \(f^{*}\omega_{S}\) has rank \(r\).

**Lemma 2.5**.: _The group \(\operatorname{GO}(S)\) acts freely on \(\tau^{-1}(CZ_{r,n}).\)_

Proof.: If \(f\in\tau^{-1}(CZ_{r,n}),\) then \(f\) is surjective, so no element of \(\operatorname{GO}(S)\) other than the identity fixes \(f.\)

**Lemma 2.6**.: _The group \(\operatorname{GO}(S)^{\circ}\) acts freely on \(\tau^{-1}(CZ_{r,n}\cup CZ_{r-1,n}).\)_

Proof.: The previous lemma shows that \(\operatorname{GO}(S)^{\circ}\) acts freely on \(\tau^{-1}(CZ_{r,n}).\) So let \(f\in\tau^{-1}(CZ_{r-1,n}),\) and let \(\phi\in\operatorname{GO}(S)^{\circ}\) be an element which fixes \(f.\) We will show that \(\phi\) is the identity. Since \(f^{*}\omega_{S}\) has rank \(r-1,\) we may find a basis \(v_{1},\dots,v_{n}\) of \(V\) such that \[f^{*}(\omega_{S})(v_{i},v_{j})=\omega_{S}(f(v_{i}),f(v_{j}))=\begin{cases}1\text{ if }1\leq i=j\leq r-1\\ 0\text{ otherwise.}\end{cases}\] The elements \(f(v_{1}),\dots,f(v_{r-1})\in S\) are orthonormal, so we can choose a vector \(e\in S\) such that \(f(v_{1}),\dots,f(v_{r-1}),e\) is an orthonormal basis for \(S.\) Since \(\phi\) fixes \(f,\) we have \(\phi(f(v_{i}))=f(v_{i})\) for \(1\leq i\leq r-1,\) and in particular \(\chi(\phi)=1.\) We thus have \[\omega_{S}(\phi(e),\phi(e))=\omega_{S}(e,e)\] and, for each \(i,\) \[\omega_{S}(\phi(f(v_{i})),\phi(e))=\omega_{S}(f(v_{i}),e).\] This implies that \(\phi(e)=\pm e,\) and then the fact that \(\phi\in\operatorname{GO}(S)^{\circ}\) forces \(\phi(e)=e.\) This means that \(\phi\) is the identity element.
**Lemma 2.7**.: _The codimension of \(\tau^{-1}(CZ_{r-2,n})\) in \(\operatorname{Hom}(V,S)\) equals \(n-r+2.\)_

Proof.: Let \(f\in\operatorname{Hom}(V,S).\) The rank of \(f^{*}\omega_{S}\) equals the rank of \(\omega_{S}|_{f(V)}.\) Thus if \(f^{*}\omega_{S}\) has rank \(r-2\) we either have that \(f(V)\) has dimension \(r-2\) or that \(f(V)\) has dimension \(r-1\) and \(\mathbb{P}(f(V))\) is tangent to the quadric \(V(\omega_{S})\subset\mathbb{P}(S).\) The set of maps \(f\) of rank \(r-2\) has codimension \(2(n-r+2),\) while the set of \(f\) with rank \(r-1\) has codimension \(n-r+1.\) The further requirement that \(\mathbb{P}(f(V))\) is tangent to the quadric gives codimension \(n-r+2.\)

The character \(\chi\) of \(\operatorname{GO}(S)\) induces a \(\operatorname{GO}(S)\)-linearisation of \(\mathcal{O}_{\operatorname{Hom}(V,S)}\) such that \(x\in H^{0}(\operatorname{Hom}(V,S),\mathcal{O}_{\operatorname{Hom}(V,S)})\) is \(\operatorname{GO}(S)\)-invariant if and only if for all \(\phi\in\operatorname{GO}(S)\) and \(f\in\operatorname{Hom}(V,S)\) we have \[x(\phi f)=\chi(\phi)x(f).\] Let \(\operatorname{Hom}(V,S)^{ss}\subset\operatorname{Hom}(V,S)\) denote the associated GIT semistable locus and let \(\operatorname{Hom}(V,S)^{us}=\operatorname{Hom}(V,S)-\operatorname{Hom}(V,S)^{ss}.\)

**Lemma 2.8**.: _We have_ \[\operatorname{Hom}(V,S)^{us}=\tau^{-1}(0),\] _and an isomorphism_ \[\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)\simeq\overline{Z}_{r,n}.\]

Proof.: Let \(R\) be the coordinate ring of \(\operatorname{Hom}(V,S).\) The GIT quotient \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)\) is given by \(\operatorname{Proj}R^{\operatorname{O}(S)},\) where the ring \(R^{\operatorname{O}(S)}\) is graded by the action of \(\operatorname{GO}(S),\) an action which factors through \(\chi\colon\operatorname{GO}(S)\to\mathbb{C}^{*}.\) Any linear function \(x\) on \(\operatorname{Sym}^{2}V^{\vee}\) defines an element \(\tau^{*}(x)\in R^{\operatorname{O}(S)},\) and the first fundamental theorem of invariant theory for orthogonal groups says that these \(\tau^{*}(x)\) generate \(R^{\operatorname{O}(S)}\) [24, p. 390]. This shows that \(\operatorname{Hom}(V,S)^{us}=\tau^{-1}(0),\) and moreover that \(\tau\) gives a closed embedding \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)\to\mathbb{P}(\operatorname{Sym}^{2}V^{\vee}).\) It is easy to see that its image is \(\overline{Z}_{r,n}.\)

Thinking of \(\chi\) as a character of \(\operatorname{GO}(S)^{\circ},\) we get a \(\operatorname{GO}(S)^{\circ}\)-linearisation of \(\mathcal{O}_{\operatorname{Hom}(V,S)}.\) The associated GIT semistable locus in \(\operatorname{Hom}(V,S)\) is the same as for the \(\operatorname{GO}(S)\)-linearisation, since \(\operatorname{GO}(S)^{\circ}\) has finite index in \(\operatorname{GO}(S).\)

**Lemma 2.9**.: _The GIT quotient \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\) is isomorphic to \(W_{r,n}\)._

Proof.: Since \(\operatorname{GO}(S)^{\circ}\) has finite index in \(\operatorname{GO}(S),\) the morphism \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\to\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)\) is finite. Since \(\operatorname{Hom}(V,S)^{ss}\) is smooth, \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\) is normal [22, p. 5].
The open subset \(\tau^{-1}(CZ_{r,n})\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\subset\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\) is isomorphic to \(\sigma^{-1}(Z_{r,n})\subset W_{r,n}\) by the following construction. Fix an \(r/2\)-dimensional isotropic linear subspace \(L\subset S.\) Recall the variety \(U_{r,n}\) from (2.2) and define a morphism \(\gamma\colon\tau^{-1}(CZ_{r,n})\to U_{r,n}\) by sending \(f\in\tau^{-1}(CZ_{r,n})\) to the pair of the quadric \[Q=\{[v]\in\mathbb{P}(V)\mid f^{*}\omega_{S}(v,v)=0\}\] and the linear subspace \(f^{-1}(L)\subset V.\) Composing with \(\eta\colon U_{r,n}\to W_{r,n}\) gives a morphism \(\eta\circ\gamma\colon\tau^{-1}(CZ_{r,n})\to W_{r,n}.\) This morphism is \(\operatorname{GO}(S)^{\circ}\)-invariant, and one checks that it gives a bijection between the \(\operatorname{GO}(S)^{\circ}\)-orbits in \(\tau^{-1}(CZ_{r,n})\) and the points of \(\sigma^{-1}(Z_{r,n}).\) Since \(\sigma^{-1}(Z_{r,n})\) is smooth by Proposition 2.3, it follows from Zariski's main theorem that the induced morphism \[\psi\colon\tau^{-1}(CZ_{r,n})\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\to\sigma^{-1}(Z_{r,n})\] is an isomorphism. The birational map \(\psi\) commutes with the two finite morphisms to \(\overline{Z}_{r,n}\). Let \(K\) be the function field of \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\), identified with the function field of \(W_{r,n}\). Since these two varieties are normal and finite over \(\overline{Z}_{r,n}\), they are both equal to the relative normalisation of \(\overline{Z}_{r,n}\) in \(\operatorname{Spec}K\), and so \(\psi\) extends to an isomorphism of varieties.

**Proposition 2.10**.: _Etale locally near a point \(p\in\sigma^{-1}(Z_{r-2,n})\), the pair \((W_{r,n},p)\) is isomorphic to_ \[(C\times\mathbb{A}^{M},(0,0))\] _where \(C\) is the affine cone over the Segre embedding of \(\mathbb{P}^{n-r+1}\times\mathbb{P}^{n-r+1}\), \(0\in C\) is the singular point, and \(M=\dim W_{r,n}-\dim C\)._

Proof.: We use the isomorphism \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\simeq W_{r,n}\). Let \(f\in\operatorname{Hom}(V,S)^{ss}\) be a point whose orbit maps to \(\sigma^{-1}(Z_{r-2,n})\) under this isomorphism. Then \(f\in\tau^{-1}(CZ_{r-2,n})\), and we can choose a basis \(v_{1},\ldots,v_{n}\) for \(V\) such that \[f^{*}\omega_{S}(v_{i},v_{j})=\omega_{S}(f(v_{i}),f(v_{j}))=\begin{cases}1\text{ if }1\leq i=j\leq r-2\\ 0\text{ otherwise}\end{cases}\] This means that the elements \(f(v_{1}),\ldots,f(v_{r-2})\) are orthonormal in \(S\), and we extend this sequence to a basis of \(S\) by adding vectors \(e_{1},e_{2}\) such that \[\omega_{S}(e_{i},e_{j})=1-\delta_{ij}\] \[\omega_{S}(e_{i},f(v_{j}))=0\text{ for all }i,j,\] so that \(e_{1},e_{2}\) is a hyperbolic pair. Since the \(f(v_{r-1}),\ldots,f(v_{n})\) are orthogonal to each other and to each \(f(v_{i})\) with \(1\leq i\leq r-2\), the space \[\langle f(v_{r-1}),\ldots,f(v_{n})\rangle\] is an isotropic subspace of \(\langle f(v_{1}),\ldots,f(v_{r-2})\rangle^{\perp}=\langle e_{1},e_{2}\rangle\). The isotropic subspaces of \(\langle e_{1},e_{2}\rangle\) are \(\langle e_{1}\rangle\) and \(\langle e_{2}\rangle\). Reordering the \(e_{i}\), we may assume that \(f(v_{r-1}),\ldots,f(v_{n})\) are all contained in \(\langle e_{1}\rangle\). After linearly transforming the \(v_{i}\), we may assume that \(f(v_{r-1})=\gamma e_{1}\) for some \(\gamma\in\mathbb{C}\) and \(f(v_{i})=0\) for \(i\geq r\).
There is a subgroup \(T\subset\operatorname{GO}(S)^{\circ}\), with \(T\cong\mathbb{C}^{*}\), consisting of elements \(\phi_{\lambda}\in\operatorname{GO}(S)^{\circ}\), with \(\lambda\in\mathbb{C}^{*}\), defined by \[\phi_{\lambda}(f(v_{i}))=f(v_{i}),\ \ \ \ \ i=1,\ldots,r-2,\] \[\phi_{\lambda}(e_{1})=\lambda e_{1}\] \[\phi_{\lambda}(e_{2})=\lambda^{-1}e_{2}.\]

There are now two cases to consider: If \(\gamma\neq 0\), the stabiliser group of \(f\) in \(\operatorname{GO}(S)^{\circ}\) is trivial. The \(\operatorname{GO}(S)^{\circ}\)-orbit of \(f\) is not closed in \(\operatorname{Hom}(V,S)^{ss}\), since \(\lim_{\lambda\to 0}(\phi_{\lambda}f)\) exists in \(\operatorname{Hom}(V,S)^{ss}\) and lies outside the orbit. If \(\gamma=0\), the stabiliser group of \(f\) is \(T\). In this case the \(\operatorname{GO}(S)^{\circ}\)-orbit of \(f\) is closed in \(\operatorname{Hom}(V,S)^{ss}\), since the orbit has minimal dimension among orbits mapping to \(\sigma^{-1}(Z_{r-2,n})\).

Let us write \(\mathbb{A}^{i}(j)\) for a \(T\)-representation of dimension \(i\) with weight \(j\). We then have an isomorphism of \(T\)-representations \[T_{\operatorname{Hom}(V,S),f}\cong\operatorname{Hom}(V,S)\cong\mathbb{A}^{n}(1)\oplus\mathbb{A}^{n}(-1)\oplus\mathbb{A}^{n(r-2)}(0).\] The Luna etale slice theorem implies that etale locally near the orbit of \(f\), the variety \(\operatorname{Hom}(V,S)^{ss}\mathbin{/\!\!/}\operatorname{GO}(S)^{\circ}\) is isomorphic to \[N_{f}\mathbin{/\!\!/}T,\] where \(N_{f}=T_{\operatorname{Hom}(V,S),f}/T_{\operatorname{GO}(S)^{\circ}f,f}\) is the normal space to the \(\operatorname{GO}(S)^{\circ}\)-orbit of \(f\). As \(T\)-representations \[T_{\operatorname{GO}(S)^{\circ}f,f}\cong\operatorname{Lie}(\operatorname{GO}(S)^{\circ})/\operatorname{Lie}(T).\] Computing \[\operatorname{Lie}(\operatorname{GO}(S)^{\circ})\cong\mathbb{A}^{r-2}(1)\oplus\mathbb{A}^{r-2}(-1)\oplus\mathbb{A}^{\binom{r-2}{2}+2}(0)\] \[\operatorname{Lie}(T)=\mathbb{A}^{1}(0)\] gives \[N_{f}\cong\mathbb{A}^{n-r+2}(1)\oplus\mathbb{A}^{n-r+2}(-1)\oplus\mathbb{A}^{M}(0)\] for some \(M\).
The quotient \(N_{f}\mathbin{/\!\!/}T\) is therefore determined by the part of non-zero weight, \(\mathbb{A}^{n-r+2}(1)\oplus\mathbb{A}^{n-r+2}(-1)\): the ring of \(T\)-invariants is generated by the products \(x_{i}y_{j}\) of a weight \(1\) coordinate and a weight \(-1\) coordinate, so \[N_{f}\mathbin{/\!\!/}T\cong C\times\mathbb{A}^{M},\] where \(C\) is the affine cone over the Segre embedding of \(\mathbb{P}^{n-r+1}\times\mathbb{P}^{n-r+1}\). This proves the proposition.

**Corollary 2.11**.: _The singular locus of \(W_{r,n}\) is exactly \(\sigma^{-1}(\overline{Z}_{r-2,n})\)._

Proof.: By Lemmas 2.5 and 2.6, the group \(\operatorname{GO}(S)^{\circ}\) acts freely on \(\tau^{-1}(CZ_{r,n}\cup CZ_{r-1,n})\), so \(W_{r,n}\) is smooth away from \(\sigma^{-1}(\overline{Z}_{r-2,n})\) by Lemma 2.9, and it is singular along \(\sigma^{-1}(\overline{Z}_{r-2,n})\) by Proposition 2.10.

### Linear sections

**Lemma 2.12**.: _Let \(X=W_{r,n}\cap H_{1}\cap\dots\cap H_{c}\) be a general intersection with \(c\) hyperplane sections. Then \(K_{X}=\left(c-\frac{rn}{2}\right)H\), so \(X\) is Fano if and only if it is nonsingular and \(c<\frac{rn}{2}\); a general \(X\) is nonsingular if and only if \(c>\dim Z_{r-2,n}\). Both conditions can be satisfied simultaneously only if \(r=2\) or \(r=4\)._

Proof.: By Corollary 2.11 and Bertini's theorem, a general \(X\) with \(c>\dim Z_{r-2,n}\) avoids the singular locus of \(W_{r,n}\), and the formula for \(K_{X}\) then follows from (2.4) and adjunction. Comparing the two inequalities using (2.1), they can hold simultaneously if and only if \(n(4-r)+(r-2)(r-3)>0\), which for even \(r\geq 6\) fails for all \(n\geq r\).

By Lemma 2.12, we obtain Fano varieties as linear sections of \(W_{r,n}\) only when \(r=2\) or \(r=4\). The case \(r=2\) gives \(W_{2,n}=\mathbb{P}^{n-1}\times\mathbb{P}^{n-1}\), and many linear sections of \(\mathbb{P}^{n-1}\times\mathbb{P}^{n-1}\) are indeed Fano, but these varieties do not have interesting cohomology groups from the point of view of this paper. We therefore focus on the case \(r=4\). In this case the existence of the double cover \(\sigma:W_{4,n}\to\overline{Z}_{4,n}\) is explained as follows. A smooth quadric surface in \(\mathbb{P}^{3}\) contains two families of lines; thus a quadric of rank \(4\) in \(n\) variables contains two families of \((n-2)\)-planes, each parameterised by a \(\mathbb{P}^{1}\). Thus \(W_{4,n}\) parameterises quadrics plus a choice of one of the two families.

The dimensions of the first few rank loci \(Z_{i,n}\) are given by \[\dim Z_{4,n}=4n-7,\ \dim Z_{3,n}=3n-4,\ \dim Z_{2,n}=2n-2,\ \dim Z_{1,n}=n-1.\] By Corollary 2.11, the double cover \(W_{4,n}\) is singular along \(\sigma^{-1}(\overline{Z}_{2,n})\), which has codimension \(2n-5\) in \(W_{4,n}\). By (2.4), the canonical divisor of \(W_{4,n}\) equals \[K_{W_{4,n}}=-2nH.\]

**Definition 2.13**.: Given \(n\geq 4\) and \(c\geq 0\), let \(X_{n,c}\) be a general complete intersection \[X_{n,c}=W_{4,n}\cap H_{1}\cap\cdots\cap H_{c}. \tag{2.6}\]

The varieties \(X\) in Theorems 1.1 and 1.2 are \(X_{n,2n-1}\) with \(n\geq 5\) and \(n\geq 6\), respectively.
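For orientation, the numerical invariants of these linear sections follow directly from \(\dim W_{4,n}=4n-7\) and \(K_{W_{4,n}}=-2nH\). The following minimal sketch is our own bookkeeping, using nothing beyond the formulas above.

```python
# Invariants of the linear sections X_{n,c} of W_{4,n} (illustration only).

def invariants(n, c):
    dim_X = (4 * n - 7) - c   # cut dim W_{4,n} = 4n - 7 by c hyperplanes
    K_coeff = c - 2 * n       # adjunction: K_X = (c - 2n) H
    return dim_X, K_coeff

# The main examples X_{n,2n-1} of Theorems 1.1 and 1.2:
for n in (5, 6, 7):
    d, k = invariants(n, 2 * n - 1)
    print(f"n={n}: dim X = {d}, K_X = {k}H")  # dim 2n-6 and K_X = -H (Fano)
```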
## 3. Cohomology computations

Let \(X_{n,c}^{\rm sm}\) be the smooth part of \(X_{n,c}\). In this section we compute the low degree cohomology of \(X_{n,c}^{\rm sm}\). In Proposition 3.1 we compute the low degree cohomology of \(\operatorname{BGO}(4)^{\circ}\), and in Proposition 3.5 we show that this agrees with the low degree cohomology of \(X_{n,c}^{\rm sm}\). We summarise the consequences for the cohomology of \(X_{n,c}^{\rm sm}\) in Corollary 3.6. In order to prove Theorem 1.1, we want a non-zero \(2\)-torsion cohomology class of degree \(3\), and for Theorem 1.2, the class should furthermore have a non-zero square modulo \(2\) (this will be explained in Proposition 5.2).

### Cohomology of \(\operatorname{BSO}(4)\)

The cohomology rings with integer coefficients of the classifying spaces \(\operatorname{BSO}(n)\) were computed by Brown [6] and Feshbach [10]. For \(n=4\), the ring is given by \[H^{*}(\operatorname{BSO}(4),\mathbb{Z})=\mathbb{Z}[\nu,e,p]/(2\nu),\] where \(e\) is the Euler class (of degree \(4\)), \(p\) is the Pontrjagin class (degree \(4\)), and \(\nu\) is a \(2\)-torsion class of degree \(3\). Thus the low-degree cohomology groups of \(\operatorname{BSO}(4)\) are given by \[\begin{array}{c|c|c|c|c|c|c}H^{0}&H^{1}&H^{2}&H^{3}&H^{4}&H^{5}&H^{6}\\ \hline\mathbb{Z}&0&0&\mathbb{Z}/2\cdot\nu&\mathbb{Z}p\oplus\mathbb{Z}e&0&\mathbb{Z}/2\cdot\nu^{2}\\ \end{array}\] The cohomology ring of \(\operatorname{BSO}(4)\) with \(\mathbb{Z}/2\)-coefficients is given by \[H^{*}(\operatorname{BSO}(4),\mathbb{Z}/2)=\mathbb{Z}/2[w_{2},w_{3},w_{4}], \tag{3.1}\] where \(w_{2},w_{3},w_{4}\in H^{*}(\operatorname{BSO}(4),\mathbb{Z}/2)\) denote the Stiefel-Whitney classes [19]. The class \(\nu\) is equal to \(\beta_{\mathbb{Z}}(w_{2})\in H^{3}(\operatorname{BSO}(4),\mathbb{Z})\), where \(\beta_{\mathbb{Z}}\) is the Bockstein homomorphism associated with \[0\longrightarrow\mathbb{Z}\stackrel{{ 2}}{{\longrightarrow}}\mathbb{Z}\longrightarrow\mathbb{Z}/2\longrightarrow 0.\] Moreover, the mod 2 reduction of \(\nu\) is given by \(w_{3}\) [12, p. 97]. Therefore, the mod 2 reduction of \(\nu^{2}\) equals
### Cohomology of hyperplane sections of \(W_{r,n}^{\text{sm}}\) Let \(S\) be a quadratic \(r\)-dimensional vector space and \(L\subseteq\mathbb{P}(\text{Sym}^{2}\,V^{\vee})\) is a codimension \(c\) linear subspace. We analyse the natural homomorphism \[H^{*}(\text{BGO}(S)^{\circ},\mathbb{Z})\to H^{*}(L\times_{\mathbb{P}( \text{Sym}^{2}\,V^{\vee})}W_{r,n}^{\text{sm}},\mathbb{Z}) \tag{3.2}\] and show that it is an isomorphism in low degrees. To define the homomorphism, begin with the pullback maps \[H^{*}(\text{BGO}(S)^{\circ},\mathbb{Z})\xrightarrow{\simeq}H^{*}_{\text{GO}( S)^{\circ}}(\text{Hom}(V,S),\mathbb{Z})\to H^{*}_{\text{GO}(S)^{\circ}}( \text{Hom}(V,S)-\tau^{-1}(CZ_{r-2,n}),\mathbb{Z}),\] with \(\tau\) and \(CZ_{r-2,n}\) as defined in Section 2.2. By Lemma 2.5 and Corollary 2.11, the variety \(W_{r,n}^{\text{sm}}\) is isomorphic to \((\text{Hom}(V,S)-\tau^{-1}(CZ_{r-2,n})/\,\text{GO}(S)^{\circ}\), where the group action is free, so we get an isomorphism \[H^{*}_{\text{GO}(S)^{\circ}}(\text{Hom}(V,S)-\tau^{-1}(CZ_{r-2,n}),\mathbb{Z} )\xrightarrow{\simeq}H^{*}(W_{r,n}^{\text{sm}},\mathbb{Z}).\] Finally, we have the pullback homomorphism \[H^{*}(W_{r,n}^{\text{sm}},\mathbb{Z})\to H^{*}(L\times_{\mathbb{P}( \text{Sym}^{2}\,V^{\vee})}W_{r,n}^{\text{sm}},\mathbb{Z}),\] and composing these maps gives (3.2). **Lemma 3.2**.: _Let \(G\) be an algebraic group on an affine space \(\mathbb{A}^{N}\). Let \(Z\subset\mathbb{A}^{N}\) be a closed, \(G\)-invariant subset of codimension \(c\), and let \(U=\mathbb{A}^{N}-Z\). Then the natural homomorphisms_ \[H^{l}_{G}(\operatorname{pt},\mathbb{Z}) \to H^{l}_{G}(U,\mathbb{Z})\] \[H^{l}_{G}(\operatorname{pt},\mathbb{Z}/2) \to H^{l}_{G}(U,\mathbb{Z}/2)\] _are isomorphisms for \(l<2c-1\), and injective for \(l=2c-1\)._ Proof.: The Leray-Serre spectral sequence for equivariant cohomology [18, p. 501] has \(E_{2}\)-page \(H^{i}_{G}(\operatorname{pt},H^{j}(U))\) and converges to \(H^{i+j}_{G}(U)\). Since \(H^{j}(U)=0\) for \(0<j\leq 2c-2\), there are no non-trivial differentials whose domain is of degree \((i,j)\) with \(i+j\leq 2c-2\). The claim of the theorem follows from this. **Lemma 3.3**.: _The homomorphisms_ \[H^{i}_{\operatorname{GO}(S)^{\circ}}(\operatorname{pt},\mathbb{Z}) \to H^{i}_{\operatorname{GO}(S)^{\circ}}(\operatorname{Hom}(V,S)- \tau^{-1}(CZ_{r-2,n}),\mathbb{Z})\] \[H^{i}_{\operatorname{GO}(S)^{\circ}}(\operatorname{pt},\mathbb{ Z}/2) \to H^{i}_{\operatorname{GO}(S)^{\circ}}(\operatorname{Hom}(V,S)- \tau^{-1}(CZ_{r-2,n}),\mathbb{Z}/2)\] _are isomorphisms for \(i<2n-2r+3\) and injective for \(i=2n-2r+3\)._ Proof.: Combine Lemma 2.7 and Lemma 3.2. **Lemma 3.4**.: _Let \(L\subseteq\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\) be a generic codimension \(c\) linear subspace. The homomorphisms_ \[H^{i}(W^{\operatorname{sm}}_{r,n},\mathbb{Z}) \to H^{i}(L\times_{\mathbb{P}(\operatorname{Sym}^{2}V)^{\vee}}W^{ \operatorname{sm}}_{r,n},\mathbb{Z})\] \[H^{i}(W^{\operatorname{sm}}_{r,n},\mathbb{Z}/2) \to H^{i}(L\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}W^{ \operatorname{sm}}_{r,n},\mathbb{Z}/2)\] _are isomorphisms for \(i\leq\dim W_{r,n}-c\) and injective for \(i=\dim W_{r,n}-c\)._ Proof.: The generalised Lefschetz theorem of Goresky-MacPherson [11, Thm p.150] states that we have isomorphisms on the level of homotopy groups for low degrees. Combining this with the Hurewicz theorem gives the statement for cohomology groups. **Proposition 3.5**.: _Let \(L\subseteq\mathbb{P}(\operatorname{Sym}V^{\vee})\) be a generic codimension \(c\) subspace. 
The homomorphisms_ \[H^{j}(\operatorname{BGO}(S)^{\circ},\mathbb{Z})\to H^{j}(L\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}W^{\operatorname{sm}}_{r,n},\mathbb{Z})\] \[H^{j}(\operatorname{BGO}(S)^{\circ},\mathbb{Z}/2)\to H^{j}(L\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}W^{\operatorname{sm}}_{r,n},\mathbb{Z}/2)\] _are isomorphisms for \(j<N\) and injective for \(j=N\), where_ \[N=\min(2n-2r+3,\dim W_{r,n}-c).\]

Proof.: Combine Lemmas 3.2, 3.3 and 3.4.

**Corollary 3.6**.: _If \(c\leq 4n-11\), then_ \[H^{0}(X^{\operatorname{sm}}_{n,c},\mathbb{Z})=\mathbb{Z},H^{1}(X^{\operatorname{sm}}_{n,c},\mathbb{Z})=0,H^{2}(X^{\operatorname{sm}}_{n,c},\mathbb{Z})=\mathbb{Z},H^{3}(X^{\operatorname{sm}}_{n,c},\mathbb{Z})=\mathbb{Z}/2.\] _If moreover \(c\leq 4n-13\), then the square of the non-zero class in \(H^{3}(X^{\operatorname{sm}}_{n,c},\mathbb{Z})\) does not vanish modulo 2._

Proof.: Since \(X_{n,c}=L\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}W_{4,n}\) for a generic codimension \(c\) subspace \(L\), Bertini's theorem implies \(X_{n,c}^{\operatorname{sm}}=L\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}W_{4,n}^{\operatorname{sm}}\). The first claim then follows from Propositions 3.1 and 3.5. For the second claim, let \(\phi\colon H^{*}(\operatorname{BGO}(4)^{\circ},\mathbb{Z})\to H^{*}(X_{n,c}^{\operatorname{sm}},\mathbb{Z})\) be the natural homomorphism. Taking \(0\neq\gamma\in H^{3}(\operatorname{BGO}(4)^{\circ},\mathbb{Z})\), we know from Proposition 3.1 that \(\gamma^{2}\neq 0\pmod{2}\). If \(c\leq 4n-13\), Proposition 3.5 implies that the map \[H^{6}(\operatorname{BGO}(4)^{\circ},\mathbb{Z}/2)\to H^{6}(X_{n,c}^{\operatorname{sm}},\mathbb{Z}/2)\] is injective, and it follows that \(\phi(\gamma)^{2}=\phi(\gamma^{2})\neq 0\pmod{2}\).

**Remark 3.7**.: When \(X_{n,c}\) is smooth and rationally connected, the torsion subgroup of \(H^{3}(X_{n,c},\mathbb{Z})\) can be identified with the Brauer group \(\operatorname{Br}(X_{n,c})\). Under this identification, the generator of \(H^{3}(X_{n,c},\mathbb{Z})\simeq\mathbb{Z}/2\) is represented by the Brauer-Severi variety given by the restriction of \(\eta:U_{4,n}\to W_{4,n}\) to \(X_{n,c}\).

## 4. The varieties \(X_{n,c}\)

We now analyse a few particularly interesting choices of \(n\) and \(c\).

### The case \(c=2n-1\)

Let \(X=X_{n,2n-1}\). Collecting our work, we have now proved Theorem 1.1, restated more precisely as follows.

**Theorem 4.1**.: _The variety \(X\) is nonsingular of dimension \(2n-6\) with \(K_{X}=-H\), and hence Fano. It has Picard number 1 and \(H^{3}(X,\mathbb{Z})=\mathbb{Z}/2\)._

Proof.: The singular locus in \(W_{4,n}\) has dimension \(2n-2\) by Proposition 2.1 and Corollary 2.11, so \(X\) is nonsingular by Bertini's theorem. The proof of Lemma 2.12 gives \(K_{X}=-H\). Finally, \(H^{3}(X,\mathbb{Z})\) is computed in Corollary 3.6.

### The Fano fourfold

Specialising further, let \(X=X_{5,9}\). The strata \(Z_{i,5}\) were described in Example 2.2. The quintic hypersurface \(\overline{Z}_{4,5}\subset\mathbb{P}^{14}\) parameterizes singular quadrics in \(5\) variables, i.e., cones over quadrics in \(\mathbb{P}^{3}\). The double cover \(W_{4,5}\to\overline{Z}_{4,5}\) is ramified along \(\overline{Z}_{3,5}\), and its singular locus lies over \(\overline{Z}_{2,5}\subset\mathbb{P}^{14}\), which is a singular \(8\)-fold.

**Proposition 4.2**.: _The fourfold \(X\) is a Fano variety with invariants_
1. _\(\operatorname{Pic}(X)=\mathbb{Z}H\), with \(H^{4}=10\)._
2. _\(-K_{X}=H\)._
3. _\(h^{1,3}(X)=9\), \(h^{2,2}(X)=67\)._
4. _\(H^{3}(X,\mathbb{Z})=\mathbb{Z}/2\)._

Proof.: (1), (2) and (4) follow from Theorem 4.1. (3) follows from Lemma 4.4 and Corollary 4.5 below.

#### 4.2.1. Homological projective duality

In the paper [25], the second named author studies derived categories of linear sections of the stack \(\operatorname{Sym}^{2}\mathbb{P}^{n-1}\) from the perspective of homological projective duality [17]. When \(n\) is odd, the paper defines a noncommutative resolution \(Y_{n}\) of \(W_{n-1,n}\), and shows that linear sections of this noncommutative resolution are related to dual linear sections of \(\operatorname{Sym}^{2}\mathbb{P}^{n-1}\) in precisely the way predicted by HP duality; this strongly suggests that \(Y_{n}\) is HP dual to \(\operatorname{Sym}^{2}\mathbb{P}^{n-1}\). Specialising to the case \(n=5\) and linear sections of the appropriate dimensions gives the following result. Let \(V=\mathbb{C}^{5}\), let \(L_{1},\ldots,L_{9}\) be general hyperplanes in \(\mathbb{P}(\operatorname{Sym}^{2}V)\), and let \(L\) be their intersection. Let \[L^{\perp}=\langle L_{1},\ldots,L_{9}\rangle\subset\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\] be the orthogonal complement. In this language, \(X=L\times_{\mathbb{P}(\operatorname{Sym}^{2}V)}W_{4,5}\). Since \(X\) avoids the singular locus of \(W_{4,5}\), the noncommutative resolution \(Y_{5}\) of \(W_{4,5}\) is equivalent to \(W_{4,5}\), and the main theorem of [25] applies. On the other side of the HP duality we find \[S=L^{\perp}\times_{\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})}\operatorname{Sym}^{2}\mathbb{P}(V^{\vee}), \tag{4.1}\] which is the intersection of \(6\) general \((1,1)\)-divisors in \(\operatorname{Sym}^{2}\mathbb{P}(V^{\vee})=\operatorname{Sym}^{2}\mathbb{P}^{4}\). The following is a slight amplification of the main result of [25].

**Proposition 4.3**.: _The category \(D(X)\) admits a semiorthogonal decomposition_ \[D(X)=\langle D(S),E_{1},E_{2},E_{3},E_{4}\rangle,\] _where the \(E_{i}\) are exceptional objects._

The amplification consists in the fact that [25] only proves that \(D(S)\) embeds as a semiorthogonal piece in \(D(X)\). The fact that the orthogonal complement is generated by \(4\) exceptional objects is not difficult to show using the techniques of the paper.

**Lemma 4.4**.: _The surface \(S\) in (4.1) is smooth of degree 35 with respect to the embedding \(S\subset\mathbb{P}(\operatorname{Sym}^{2}V^{\vee})\). It has Hodge numbers_ \[h^{1,0}(S)=0,\quad h^{2,0}(S)=9,\quad h^{1,1}(S)=65.\]

Proof.: The map \(\mathbb{P}^{4}\times\mathbb{P}^{4}\to\operatorname{Sym}^{2}(\mathbb{P}^{4})\) induces an etale double cover \(\pi:T\to S\), where \(T\) is a general complete intersection of \(6\) symmetric \((1,1)\)-divisors in \(\mathbb{P}^{4}\times\mathbb{P}^{4}\). In particular, \(T\) is simply connected by the Lefschetz theorem. Furthermore, we find that \(K_{T}=\mathcal{O}_{\mathbb{P}^{4}\times\mathbb{P}^{4}}(1,1)|_{T}\), which gives \(\dim H^{0}(T,K_{T})=19\) and \(K_{T}^{2}=70\). Since \(\pi\) is etale of degree \(2\), we have \(\chi(\mathcal{O}_{S})=1+h^{2,0}(S)=\chi(\mathcal{O}_{T})/2=(1+19)/2=10\), giving \(h^{2,0}(S)=9\). Likewise \(K_{T}=\pi^{*}K_{S}\) gives \(K_{S}^{2}=35\), and hence \(\chi_{\operatorname{top}}(S)=12\chi(\mathcal{O}_{S})-K_{S}^{2}=85\) by Noether's formula. From this we find that \(h^{1,1}(S)=65\).
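The arithmetic in this proof is easy to mis-transcribe, so a verification sketch may be helpful. It is our own check, using only Noether's formula \(\chi(\mathcal{O})=(K^{2}+\chi_{\rm top})/12\) and the fact that all three invariants halve under the etale double cover.

```python
# Arithmetic behind the proof of Lemma 4.4 (verification sketch only).

chi_O_T = 1 + 19            # chi(O_T) = 1 - h^{1,0} + h^{2,0} = 1 + 19 = 20
K2_T = 70
e_T = 12 * chi_O_T - K2_T   # Noether on T: chi_top(T) = 12*20 - 70 = 170

# Etale double cover pi: T -> S halves chi(O), K^2 and chi_top:
chi_O_S, K2_S, e_S = chi_O_T // 2, K2_T // 2, e_T // 2

h20_S = chi_O_S - 1         # h^{2,0}(S) = 9   (h^{1,0}(S) = 0)
b2_S = e_S - 2              # chi_top = 2 - 2b_1 + b_2 with b_1(S) = 0
h11_S = b2_S - 2 * h20_S    # b_2 = 2 h^{2,0} + h^{1,1}

print(chi_O_S, K2_S, e_S, h20_S, h11_S)   # 10 35 85 9 65
```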
**Corollary 4.5**. _With \(S\) and \(X\) as above, we have_ \[\sum_{i=0}^{4}h^{i,i}(X) =\sum_{i=0}^{2}h^{i,i}(S)+4,\] \[h^{1,2}(X) =h^{0,1}(S)\] _and_ \[h^{1,3}(X) =h^{0,2}(S).\] Proof. The semiorthogonal decomposition in Proposition 4.3 gives the relation of Hochschild homology groups \[HH_{*}(X)\cong HH_{*}(S)\oplus\mathbb{C}^{4}.\] Expressing Hochschild homology via Hodge numbers through \[\dim HH_{i}(-)=\sum_{p-q=i}h^{p,q}(-),\] and using the fact that \(h^{0,i}(X)=0\) for \(i>0\) since \(X\) is Fano gives the result. **Example 4.6**. The fact that \(\operatorname{Tors}H^{3}(X,\mathbb{Z})\neq 0\) can be seen as a consequence of the fact that the conic bundle \(\eta:U_{4,5}\to W_{4,5}\) does not admit a rational section. To see this, recall that \(U_{4,5}\) is a projective bundle over the Grassmannian \(G=\operatorname{Gr}(3,V)\). Explicitly, \(U_{4,5}=\mathbb{P}(E)\) where \(E\) is the rank \(9\) vector bundle appearing as the kernel of the natural map \(S^{2}(V^{\vee}\otimes\mathcal{O}_{G})\to S^{2}(\mathcal{U}^{\vee})\), and where \(\mathcal{U}\) is the universal subbundle of rank \(3\). Now, if \(D\subset\mathbb{P}(E)\) is the divisor determined by a rational section of \(\eta\), \(D\) is linearly equivalent to a divisor of the form \(aL+bG\), where \(L=\mathcal{O}_{\mathbb{P}(E)}(1)\) and \(G\) is the pullback of \(\mathcal{O}_{\operatorname{Gr}(3,V)}(1)\). We must also have \(D\cdot L^{13}=10\) (as the \(1\)-cycle \(L^{13}\) is represented by \(10\) fibers of \(\mathbb{P}(E)\to W_{4,5}\)). On the other hand, using the Chern classes of \(S^{2}(\mathcal{U}^{\vee})\), we compute that \(D\cdot L^{13}=-20b\), contradicting the condition that \(b\) is an integer. This shows that the Brauer group of \(W_{4,5}^{\operatorname{sm}}\) is non-trivial. In our case, we may identify the Brauer group with \(\operatorname{Tors}H^{3}(W_{4,5}^{\operatorname{sm}},\mathbb{Z})\) because \(H^{2}(W_{4,5}^{\operatorname{sm}},\mathbb{Z})=\mathbb{Z}\) is generated by algebraic classes [3, Proposition 4]. Finally, Lemma 3.4 shows that \(H^{3}(W_{4,5}^{\operatorname{sm}},\mathbb{Z})\to H^{3}(X,\mathbb{Z})\) is an isomorphism, so the latter group has non-trivial torsion part as well. For an alternative approach to the absence of rational sections, see Claim A.2 in the Appendix. ### The case \(c=2n-2\) Let \(X=X_{n,2n-2}\). Then \(X\) has dimension \(2n-5\), isolated singularities in \(\sigma^{-1}(\overline{Z}_{2,n})\cap X\), and \(K_{X}=-2H\). Let \(\widetilde{X}\to X\) be the blow-up at the singular points. Then the exceptional divisor \(E\) is a disjoint union of components \(E_{1},\dots,E_{s}\), all of which are isomorphic to \(\mathbb{P}^{n-3}\times\mathbb{P}^{n-3}\), by Proposition 2.10. By Corollary 3.6, we have \(H^{3}(X^{\operatorname{sm}},\mathbb{Z})=\mathbb{Z}/2\). Since \(X^{\operatorname{sm}}\simeq\widetilde{X}-E\), we get a pullback map \(H^{3}(\widetilde{X},\mathbb{Z})\to H^{3}(X^{\operatorname{sm}},\mathbb{Z})\). This map is an isomorphism by the exact sequence \[H^{1}(E,\mathbb{Z})\to H^{3}(\widetilde{X},\mathbb{Z})\to H^{3}(\widetilde{X}-E,\mathbb{Z})\to H^{2}(E,\mathbb{Z}),\] using also that \(H^{1}(E,\mathbb{Z})=0\) and \(H^{2}(E,\mathbb{Z})\) is torsion free. **Proposition 4.7**. _For each \(n\geq 4\), \(\widetilde{X}\) is a smooth projective variety of dimension \(2n-5\) with_ \[\operatorname{Tors}H^{3}(\widetilde{X},\mathbb{Z})\neq 0.\] _The variety \(\widetilde{X}\) is unirational, but not stably rational._ Proof. Only the unirationality remains to be proved.
The incidence variety \(U_{4,n}\) of (2.3) is a \(\mathbb{P}^{2n-2}\)-bundle over the Grassmannian \(\operatorname{Gr}(n-2,V)\). This means that if \(X\) is a complete intersection of \(2n-2\) divisors in \(W_{4,n}\), the preimage \(U_{X}=\eta^{-1}(X)\) is birational to \(\operatorname{Gr}(n-2,V)\). Therefore \(U_{X}\) is rational, and hence \(X\) is unirational. **Example 4.8**. When \(n=4\), \(X\) is a double cover of \(\mathbb{P}^{3}\) branched along a singular quartic surface. This is the example famously studied by Artin and Mumford in [2], and for which they prove Proposition 4.7. Here \(X\) has \(10\) ordinary double points and the blow-up \(\widetilde{X}\) contains \(10\) exceptional divisors isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). ### The case \(n=4\), \(c<6\) The Artin-Mumford examples of \(X_{4,6}\) can also naturally be generalised to \(X_{4,c}\) with \(c<6\). We will explain that, at least when \(c=4\) or \(5\), these do not have torsion in \(H^{3}\) in their smooth models (correcting a claim made in a MathOverflow answer [1]). The singular locus of \(X_{4,c}\) has codimension 3 and is a smooth Enriques surface or a smooth genus 6 curve when \(c=4\) and \(c=5\), respectively. There is a resolution \(\pi\colon\widetilde{X}\to X_{4,c}\) obtained by blowing up the singular locus, where the exceptional divisor is a \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)-bundle over the singular locus. **Proposition 4.9**. _With \(\widetilde{X}\) as above, we have that the group \(H^{3}(\widetilde{X},\mathbb{Z})\) is torsion free for \(c=5\) and 0 for \(c=4\)._ Proof. To show that \(H^{3}(\widetilde{X},\mathbb{Z})\) is torsion free, we first remark that \(H^{3}(X,\mathbb{Z})\) has no torsion by Corollary 4.11 below. Next, we consider the Leray spectral sequence associated to the blow-up \(\pi:\widetilde{X}\to X\), with \(E_{2}\)-page \(H^{p}(X,R^{q}\pi_{*}\mathbb{Z})\) converging to \(H^{p+q}(\widetilde{X},\mathbb{Z})\). Let \(S\subset X\) be the singular locus. We have \(R^{0}\pi_{*}\mathbb{Z}_{\widetilde{X}}=\mathbb{Z}_{X}\), \(R^{1}\pi_{*}\mathbb{Z}=0\), \(R^{2}\pi_{*}\mathbb{Z}_{\widetilde{X}}=F\) and \(R^{3}\pi_{*}\mathbb{Z}=0\), where \(F\) is a rank two local system. More explicitly, we have \(F=R^{2}\pi_{*}\mathbb{Z}_{E}\), and since \(E\) is a \(\mathbb{P}^{1}\times\mathbb{P}^{1}\)-bundle over \(S\), this means \(F\cong R^{0}f_{*}\mathbb{Z}_{S^{\prime}}\), where \(f\colon S^{\prime}\to S\) is the etale double cover of \(S\) corresponding to the two families of lines in each fibre of \(E\to S\). By Corollary 4.11, we have \(H^{3}(X,\mathbb{Z})=0\), and so the only non-vanishing term of the \(E_{2}\)-page in total degree 3 is \[H^{1}(X,R^{2}\pi_{*}\mathbb{Z}_{\widetilde{X}})\cong H^{1}(S,R^{0}f_{*}\mathbb{Z}_{S^{\prime}})=H^{1}(S^{\prime},\mathbb{Z}).\] Running the spectral sequence then gives \[0\to H^{3}(\widetilde{X},\mathbb{Z})\to H^{1}(S^{\prime},\mathbb{Z})\to H^{4}(X,\mathbb{Z}).\] Since \(H^{1}(S^{\prime},\mathbb{Z})\) is torsion free, the same is true for \(H^{3}(\widetilde{X},\mathbb{Z})\). When \(c=4\), the variety \(S\) is an Enriques surface, so that \(S^{\prime}\) is either a K3 surface or two copies of \(S\); in either case \(H^{1}(S^{\prime},\mathbb{Z})=0\), which gives \(H^{3}(\widetilde{X},\mathbb{Z})=0\). In the argument above, we used the following version of the Weak Lefschetz hyperplane theorem for singular varieties.
**Proposition 4.10**. _Let \(V\) be a projective variety of dimension \(n+1\) and let \(D\) be an ample divisor which is disjoint from the singular locus \(\operatorname{sing}(V)\). Then the natural maps_ \[H_{i}(D,\mathbb{Z})\longrightarrow H^{2n+2-i}(V,\mathbb{Z})\] _are isomorphisms for \(i<n\) and surjective for \(i=n\)._ Proof. Letting \(U=V-D\), the relative cohomology sequence takes the form \[\cdots\to H^{i}(V,V-D,\mathbb{Z})\to H^{i}(V,\mathbb{Z})\to H^{i}(U,\mathbb{Z})\to H^{i+1}(V,V-D,\mathbb{Z})\to\cdots\] Moreover, since \(V\) is smooth in a neighborhood of \(D\), we may identify \(H^{i}(V,V-D,\mathbb{Z})\) with \(H^{BM}_{2n+2-i}(D)=H_{2n+2-i}(D)\). Now, using that \(U\) is affine of dimension \(n+1\), the cohomology groups \(H^{i}(U,\mathbb{Z})\) vanish for all \(i>n+1\), by Artin's vanishing theorem. **Corollary 4.11**. _Let \(\sigma:X\to\mathbb{P}^{n}\) be a ramified double cover. Then for each \(i<\frac{n}{2}\):_ (i) \(H_{2i}(X,\mathbb{Z})=\mathbb{Z}\) _and_ \(H_{2i-1}(X,\mathbb{Z})=0\)_;_ (ii) \(H^{2i}(X,\mathbb{Z})=\mathbb{Z}\) _and_ \(H^{2i-1}(X,\mathbb{Z})=0\)_._ Proof. Note that \(X\) can be defined by an equation of the form \(z^{2}=f(x_{0},\ldots,x_{n})\) in the weighted projective space \(V=\mathbb{P}(1,\ldots,1,\frac{d}{2})\), where \(d\) denotes the (even) degree of \(f\). Thus \(X\) is an ample divisor, disjoint from the one singular point of \(V\). Thus the conditions of Proposition 4.10 hold, and we find that \(H_{j}(X,\mathbb{Z})=H^{2n-j}(V,\mathbb{Z})\) when \(j<n\). The cohomology of \(V\) is computed in [14, Theorem 1], which gives claim (i), and claim (ii) follows by the Universal Coefficient theorem. ## 5. Proof of Theorem 1.2 In this section we state and prove a precise version of Theorem 1.2. We first recall some general background on the coniveau filtrations on cohomology of algebraic varieties, referring to [4] for details. We restrict ourselves to the case of cohomology with integral coefficients \(H^{i}(X,\mathbb{Z})\) on a smooth projective variety \(X\) over \(\mathbb{C}\). A cohomology class \(\alpha\in H^{l}(X,\mathbb{Z})\) is said to be of _coniveau_ \(\geq c\) if it restricts to \(0\) on \(X-Z\) where \(Z\) is a closed subset of codimension at least \(c\) in \(X\). These classes give the _coniveau filtration_ \(N^{c}H^{l}(X,\mathbb{Z})\subset H^{l}(X,\mathbb{Z})\). Equivalently, viewing \(H^{l}(X,\mathbb{Z})\) as \(H_{2n-l}(X,\mathbb{Z})\) via Poincare duality, a class \(\alpha\in H_{2n-l}(X,\mathbb{Z})\) is of coniveau \(\geq c\) if and only if \(\alpha=j_{*}\beta\) for some \(\beta\in H_{2n-l}(Y,\mathbb{Z})\), where \(j:Y\to X\) is the inclusion of a closed algebraic subset of \(X\) of codimension at least \(c\). So for example, \(N^{c}H^{2c}(X,\mathbb{Z})\) consists of exactly the algebraic classes in \(H^{2c}(X,\mathbb{Z})\). A class \(\alpha\in H^{l}(X,\mathbb{Z})\) is said to be of _strong coniveau_ \(\geq c\) if \(\alpha=f_{*}\beta\) where \(f:Z\to X\) is a proper morphism, \(Z\) is a smooth complex variety of dimension at most \(n-c\), and \(\beta\in H^{l-2c}(Z,\mathbb{Z})\). Equivalently, \(\alpha\) has strong coniveau \(\geq c\) if \(\alpha=\tilde{j}_{*}\beta\) is the Gysin pushforward of a class \(\beta\in H^{l-2c}(\widetilde{Y},\mathbb{Z})\), where \(\tilde{j}:\widetilde{Y}\to X\) is the composition of a desingularization \(\widetilde{Y}\to Y\) of a closed subset \(Y\subset X\) of codimension \(\geq c\) with the inclusion \(Y\subset X\). These classes give the strong coniveau filtration \(\widetilde{N}^{c}H^{l}(X,\mathbb{Z})\subset H^{l}(X,\mathbb{Z})\). We have \(\widetilde{N}^{c}H^{l}(X,\mathbb{Z})\subset N^{c}H^{l}(X,\mathbb{Z})\) for every \(c\).
Moreover, the quotient \[N^{1}H^{l}(X,\mathbb{Z})/\widetilde{N}^{1}H^{l}(X,\mathbb{Z})\] is a birational invariant among smooth projective varieties [4]. This invariant is particularly interesting for rationally connected varieties \(X\). In this case, all cohomology classes are of coniveau \(\geq 1\): **Proposition 5.1**. _Let \(X\) be a rationally connected smooth projective complex variety. Then for any \(l>0\),_ \[N^{1}H^{l}(X,\mathbb{Z})=H^{l}(X,\mathbb{Z}).\] Proof. See [5] for the case \(l=3\), and [8] in general. In [30, Question 3.1], Voisin asked whether \(\widetilde{N}^{1}H^{l}(X,\mathbb{Z})=N^{1}H^{l}(X,\mathbb{Z})\) for \(X\) a rationally connected variety, i.e., whether all cohomology classes are of strong coniveau \(\geq 1\) (see also [4, Section 7.2]). In the same paper, she proved that any class in \(H^{3}(X,\mathbb{Z})\) modulo torsion is of strong coniveau \(\geq 1\). This was extended by Tian [27, Theorem 1.23], who proved that \(H^{3}(X,\mathbb{Z})=\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\) for any rationally connected threefold. Our Fano varieties give the first rationally connected examples where the two coniveau filtrations are different. In [4], the following topological obstruction to strong coniveau \(\geq 1\) was introduced. **Proposition 5.2**. _If \(\alpha\in H^{3}(X,\mathbb{Z})\) is a class of strong coniveau \(\geq 1\), then the mod 2 reduction \(\bar{\alpha}\in H^{3}(X,\mathbb{Z}/2)\) satisfies_ \[\bar{\alpha}^{2}=0\text{ in }H^{6}(X,\mathbb{Z}/2).\] Proof. This is a special case of [4, Proposition 3.5]. Here is the precise version of Theorem 1.2: **Theorem 5.3**. _For \(n\geq 6\), the variety \(X_{n,2n-1}\) from Definition 2.13 is a Fano variety of dimension \(2n-6\) with \(K_{X}=-H\), such that_ \[0=\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\neq N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z})\cong\mathbb{Z}/2.\] Proof. Let \(X=X_{n,2n-1}\). The computation of \(\dim X\), \(H^{3}(X,\mathbb{Z})\) and \(K_{X}\) is part of Theorem 4.1. Since \(X\) is Fano, it is rationally connected, so Proposition 5.1 gives \(N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z})\). Corollary 3.6 shows that the class \(\alpha\neq 0\in H^{3}(X,\mathbb{Z})\) is such that the mod \(2\) reduction of \(\alpha^{2}\) is non-zero. Proposition 5.2 then implies \(\alpha\not\in\widetilde{N}^{1}H^{3}(X,\mathbb{Z})\), so \(\widetilde{N}^{1}H^{3}(X,\mathbb{Z})=0\). **Remark 5.4**. We can obtain examples of other rationally connected varieties where \(\widetilde{N}^{c}H^{l}\neq N^{c}H^{l}\) for any \(c\geq 1\) and \(l\geq 2c+1\) by taking appropriate products with projective spaces (see e.g., [4, Theorem 4.3]). **Remark 5.5** (The Artin-Mumford example). In light of Theorem 1.2, it is natural to ask whether the \(2\)-torsion class \(\alpha\in H^{3}(X,\mathbb{Z})\) in the Artin-Mumford example has strong coniveau \(\geq 1\), i.e., whether the birational invariant (1.2) is zero. It turns out that this is indeed the case: inspecting Artin-Mumford's 'brutal procedure' in [2, p. 82-83] shows that the class \(\alpha\) is obtained from a cylinder map \(H^{1}(C,\mathbb{Z})\to H^{3}(X,\mathbb{Z})\) from an elliptic curve \(C\). In other words, \(\alpha\) is the pushforward of a class in \(H^{1}\) of some ruled surface \(S\) over \(C\). Note that this can also be seen as a special case of [27, Theorem 1.23].
### Open questions We conclude with two open questions regarding the two coniveau filtrations: **Question 1**. Are there rationally connected varieties \(X\) with \(\widetilde{N}^{1}H^{l}(X,\mathbb{Z})\neq N^{1}H^{l}(X,\mathbb{Z})\) for some \(l>0\) and torsion free \(H^{l}(X,\mathbb{Z})\)? **Question 2**. Are there rationally connected varieties of dimension \(4\) or \(5\) where \(\widetilde{N}^{1}H^{l}(X,\mathbb{Z})\neq N^{1}H^{l}(X,\mathbb{Z})\) for some \(l>0\)? **Remark 5.6**. Let \(X=X_{5,9}\) be the fourfold from Section 4.2. Then we do not know if the generator \(\alpha\) of \(H^{3}(X,\mathbb{Z})\) has strong coniveau \(\geq 1\). We can show, however, that \(\overline{\alpha}^{2}=0\) in \(H^{6}(X,\mathbb{Z}/2)\), so the topological obstruction of Proposition 5.2 vanishes. To see this, we use the fact that the third integral Steenrod square \(\operatorname{Sq}^{3}_{\mathbb{Z}}\colon H^{p}(X,\mathbb{Z})\to H^{p+3}(X,\mathbb{Z})\) is naturally identified with the third differential \(d_{3}\) in the Atiyah-Hirzebruch spectral sequence of topological K-theory, with \(E_{2}\)-page \[H^{p}(X,K^{q}(pt))=\begin{cases}H^{p}(X,\mathbb{Z})&q\text{ even}\\ 0&\text{otherwise}\end{cases}\] converging to \(K^{p+q}(X)\). Now \(H^{*}(X,\mathbb{Z})\) has torsion only in degrees \(3\) and \(6\), with torsion part \(\mathbb{Z}/2\) in each of these degrees. It also has torsion \(\mathbb{Z}/2\oplus\mathbb{Z}/2\) in its topological \(K\)-theory, by Proposition 4.3 (because \(S\) is a general type surface with fundamental group \(\mathbb{Z}/2\)). This implies \(d_{3}=0\), since otherwise the Atiyah-Hirzebruch spectral sequence would imply that the topological \(K\)-theory of \(X\) is torsion free. Since \(\operatorname{Sq}^{3}_{\mathbb{Z}}\) is an integral lift of the usual (mod 2) Steenrod square \(\operatorname{Sq}^{3}\), we then get \[\overline{\alpha}\cup\overline{\alpha}=\operatorname{Sq}^{3}(\overline{\alpha})=0.\] In general, it would be interesting to find other obstructions to the equality of the two coniveau filtrations than the topological obstructions of [4]. ## Appendix A Comments on the Fano fourfold by Janos Kollar In the present paper, Ottem and Rennemo construct smooth Fano fourfolds \(X\) such that \(H_{2}(X,\mathbb{Z})\cong\mathbb{Z}+\mathbb{Z}/2\). This appendix gives a shorter computation of \(H_{2}(X,\mathbb{Z})\), see Claims A.2 and A.4. We also add two new results. In Claim A.6 we exhibit two lines \(L^{\prime},L^{\prime\prime}\subset X\) such that \(L^{\prime}-L^{\prime\prime}\) generates the torsion in \(H_{2}(X,\mathbb{Z})\cong\mathbb{Z}+\mathbb{Z}/2\). Then in Paragraph A.14 we show that \(X\) is birational to a double cover of \(\mathbb{P}^{4}\) ramified along a degree 18 hypersurface \(R\), which is swept out by the 5-secant lines of a degree 15, smooth, determinantal surface \(S=\left(\operatorname{rank}N\leq 4\right)\subset\mathbb{P}^{4}\), where \(N\) is a \(6\times 5\) matrix whose entries are linear forms. Although \(S\) is smooth, it is not a general determinantal surface, since the latter have only 1-parameter families of 5-secants. The higher dimensional examples constructed in the paper can also be treated with minor changes. We refer to [26, Chapters VIII-IX] for symmetric determinantal varieties and to [16] for the classification of lines on them. **A.1** (Basic set-up). We recall the construction of the Fano fourfolds in the paper. This is the case when \(r=4\) and \(n=5\) (see Section 4.2). Let \(Z_{i}\subset\mathbb{P}^{14}\) be the space of rank \(\leq i\) quadrics in \(\mathbb{P}^{4}\).
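For orientation, note the standard dimension count for these symmetric determinantal loci: the rank \(\leq i\) locus has codimension \(\binom{5-i+1}{2}\) in \(\mathbb{P}^{14}=\mathbb{P}(\operatorname{Sym}^{2}\mathbb{C}^{5})\), so
\[\dim Z_{4}=13,\qquad\dim Z_{3}=11,\qquad\dim Z_{2}=8,\qquad\dim Z_{1}=4.\]
In particular \(Z_{4}\) is a hypersurface (the determinantal quintic), \(Z_{2}\) is the singular \(8\)-fold appearing in Section 4.2, and \(Z_{1}\) is the Veronese image of \(\mathbb{P}^{4}\).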
Our main interest is \(Z=Z_{4}\subset\mathbb{P}^{14}\), the space of rank \(\leq 4\) quadrics in \(\mathbb{P}^{4}\). It is a quintic hypersurface. The universal deformation of a rank 3 quadric is given by \[x_{0}^{2}-x_{1}^{2}-x_{2}^{2}+z_{1}x_{3}^{2}+z_{2}x_{3}x_{4}+z_{3}x_{4}^{2}=0. \tag{A.1.1}\] This has rank \(\leq 4\) iff \(z_{2}^{2}-4z_{1}z_{3}=0\). So \(Z\) is singular along \(Z_{3}\), with transversal singularity type \(A_{1}\). As in (2.2), define \(U\) to be the space of pairs \((L^{2}\subset Q)\) where \(L^{2}\subset\mathbb{P}^{4}\) is a 2-plane and \(Q\subset\mathbb{P}^{4}\) a quadric. We have projections \[G\xleftarrow{p_{1}}U\xrightarrow{p_{2}}Z,\] and \(\operatorname{Pic}(U)\cong\mathbb{Z}^{2}\) is generated by \(p_{1}^{*}\mathcal{O}_{G}(1)\) and \(p_{2}^{*}\mathcal{O}_{Z}(1)\). Let \(U_{2}\subset U\) be the subset where the quadric has rank \(\leq 2\). These quadrics split into 2 hyperplanes, one of which must contain \(L^{2}\). So \(U_{2}\) is a \(\mathbb{P}^{1}\times\mathbb{P}^{4}\)-bundle over \(\operatorname{Grass}(2,4)\). Set \(U^{\circ}:=U-U_{2}\). This is the preimage of the open set \(Z^{\circ}:=Z_{4}-Z_{2}\). Since \(U_{2}\subset U\) has codimension 3, \(H^{i}(U^{\circ},\mathbb{Z})=H^{i}(U,\mathbb{Z})\) for \(i\leq 4=2(3-1)\). In particular, \[H^{0}(U^{\circ},\mathbb{Z})=\mathbb{Z},\ H^{1}(U^{\circ},\mathbb{Z})=0,\ H^{2}(U^{\circ},\mathbb{Z})=\mathbb{Z}^{2},\ H^{3}(U^{\circ},\mathbb{Z})=0.\] As in Section 2.1, we define \(W\) using the Stein factorization of \(p_{2}:U\to Z\) \[U\xrightarrow{\eta}W\xrightarrow{\sigma}Z.\] In the coordinates (A.1.1), \(Z\cong(z_{2}^{2}-4z_{1}z_{3}=0)\times\mathbb{A}^{11}\). Set \(w_{1}:=\sqrt{z_{1}}\) and \(w_{3}:=\sqrt{z_{3}}\). Since \(w_{1}w_{3}=z_{2}/2\), adjoining both \(w_{1},w_{3}\) is a degree \(2\) covering and \[z_{1}x_{3}^{2}+z_{2}x_{3}x_{4}+z_{3}x_{4}^{2}=(w_{1}x_{3}+w_{3}x_{4})^{2}.\] Thus, locally analytically over the points of \(Z_{3}-Z_{2}\), we have \(W\cong\mathbb{A}_{w_{1},w_{3}}^{2}\times\mathbb{A}^{11}\), and the family of quadrics becomes \[(x_{0}^{2}-x_{1}^{2}-x_{2}^{2}+(w_{1}x_{3}+w_{3}x_{4})^{2}=0)\times\mathbb{A}^{11}.\] Over this chart, \(U\) is the family of \(2\)-planes in the same ruling as \[\big(x_{0}-x_{1}=x_{2}-(w_{1}x_{3}+w_{3}x_{4})=0\big).\] Each of these has a unique intersection point with \((x_{3}=x_{4}=0)\). Thus \(U\) is locally analytically isomorphic to the trivial family \[W\times(x_{0}^{2}-x_{1}^{2}-x_{2}^{2}=0)\subset W\times\mathbb{P}^{2}.\] Therefore, restricting to \(U^{\circ}\) we get \[U^{\circ}\xrightarrow{\eta^{\circ}}W^{\circ}\xrightarrow{\sigma^{\circ}}Z^{\circ},\] and \(\eta^{\circ}:U^{\circ}\to W^{\circ}\) is a smooth morphism with conics as fibers. **Claim A.2**. \(H^{3}(W^{\circ},\mathbb{Z})\cong\mathbb{Z}/2\) and \(\eta\) has no rational sections. Proof. Let \(C\subset U^{\circ}\) be a fiber of \(\eta^{\circ}\) over a rank \(3\) quadric. We can think of \(C\) as (cones over) lines on a quadric cone, or after further degeneration, as (cones over) \(2\) pencils of lines on \(2\) planes. So \(C\) is \(2\)-times the class of (cones over) a pencil of lines in a plane. Thus the image of \(H^{2}\big(U^{\circ},\mathbb{Z}\big)\to H^{2}(C,\mathbb{Z})\cong\mathbb{Z}\) is twice the generator. (Note that this splitting of \(C\) into \(2\) components happens in the fibers over \(Z_{2}\), thus outside \(Z^{\circ}\).)
In the Leray spectral sequence for \(\eta^{\circ}\), the only interesting map is on the \(E_{3}\) page: \[\mathbb{Z}\cong H^{0}\big(W^{\circ},R^{2}\eta_{*}^{\circ}\mathbb{Z}\big)\to H^{3}\big(W^{\circ},\mathbb{Z}\big).\] As we noted, the kernel is \(2\mathbb{Z}\) and \(H^{3}\big(U^{\circ},\mathbb{Z}\big)=0\). Thus \(H^{3}\big(W^{\circ},\mathbb{Z}\big)\cong\mathbb{Z}/2\) and \(H^{2}\big(W^{\circ},\mathbb{Z}\big)\cong\mathbb{Z}\). **A.3** (Construction of \(X\)). As in Section 4.2, let \(X_{Z}\subset Z\) be the complete intersection of \(9\) general hyperplanes and \(X:=X_{W}\subset W\) its preimage. Then \(X\subset W^{\circ}\) is a Fano fourfold with \(K_{X}\sim-\sigma^{*}H\). **Claim A.4**. \(H_{2}(X,\mathbb{Z})\cong\mathbb{Z}+\mathbb{Z}/2\)_._ Proof. The isomorphism \(H_{2}(X,\mathbb{Z})\cong H_{2}(W^{\circ},\mathbb{Z})\) follows from the Lefschetz hyperplane theorem of [11] (see Section 4.2). Note that by [29], \(H_{2}(X,\mathbb{Z})\) is generated by algebraic curves. Next we write down a difference of \(2\) smooth, degree \(1\) rational curves that generates the \(\mathbb{Z}/2\)-summand of \(H_{2}(X,\mathbb{Z})\). **Claim A.5**. Let \(L\subset Z-Z_{3}\subset\mathbb{P}^{14}\) be a line. Its preimage in \(W\) is a pair of lines \(L^{\prime}\cup L^{\prime\prime}\) such that 1. \(L^{\prime}\) and \(L^{\prime\prime}\) are numerically equivalent, 2. \(U\times_{W}L^{\prime}\) is the ruled surface \(\mathbb{F}_{1}\cong B_{p}\mathbb{P}^{2}\), 3. \(U\times_{W}L^{\prime\prime}\) is the ruled surface \(\mathbb{F}_{0}\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), and 4. \(L^{\prime}-L^{\prime\prime}\) is a generator of the \(\mathbb{Z}/2\)-summand of \(H_{2}(W^{\circ},\mathbb{Z})\cong\mathbb{Z}+\mathbb{Z}/2\). By Paragraph A.12, \(X_{Z}\) contains a \(2\)-parameter family of lines, and Claim A.5 applies to them. Thus we obtain the following. **Claim A.6**. Let \(L\subset X_{Z}-Z_{3}\) be a line. Its preimage in \(X\) is a pair of lines \(L^{\prime}\cup L^{\prime\prime}\), and \(L^{\prime}-L^{\prime\prime}\) is a generator of the \(\mathbb{Z}/2\)-summand of \(H_{2}(X,\mathbb{Z})\cong\mathbb{Z}+\mathbb{Z}/2\). **A.7** (Beginning of the proof of Claim A.5). By Claim A.4, \(H_{2}(X,\mathbb{Z})/(\text{torsion})\cong\mathbb{Z}\), with degree given by intersection with \(-K_{X}\sim\sigma^{*}H\); both \(L^{\prime}\) and \(L^{\prime\prime}\) have degree \(1\), so they are numerically equivalent. By Paragraph A.11, in suitable coordinates we can write \(L\) as a family of quadrics \[Q(\lambda{:}\mu):=\big(x_{0}(\lambda x_{2}-\mu x_{3})=x_{1}(\mu x_{4}-\lambda x_{3})\big).\] All of these contain the \(2\)-plane \((x_{0}=x_{1}=0)\), defining a section \(s_{0}:L\to U\). The preimage of \(L\) in \(W\) is a disjoint union of \(2\) lines \(L^{\prime}\cup L^{\prime\prime}\). We choose \(L^{\prime}\) to be the image of \(\eta\circ s_{0}:L\to U\to W\). For any nonzero linear form \(\ell=a\lambda+b\mu\), a section \(C^{\prime}(\ell)\) of \(\eta:U\to W\) over \(L^{\prime}\) is given by \[\ell x_{0}=\mu x_{4}-\lambda x_{3}\quad\text{and}\quad\lambda x_{2}-\mu x_{3}=\ell x_{1}.\] For \(\ell_{1}\neq\ell_{2}\) the \(2\) sections \(C^{\prime}(\ell_{1}),C^{\prime}(\ell_{2})\) meet at the point where \(\ell_{1}=\ell_{2}\). Thus \(U\times_{W}L^{\prime}\) is the ruled surface \(\mathbb{F}_{1}\). In the other family of \(2\)-planes, we have sections \(C^{\prime\prime}(c)\) given by \[cx_{0}=x_{1}\quad\text{and}\quad\lambda x_{2}-\mu x_{3}=c(\mu x_{4}-\lambda x_{3}).\] These are disjoint for \(c_{1}\neq c_{2}\).
Thus \(U\times_{W}L^{\prime\prime}\) is the trivial \(\mathbb{P}^{1}\)-bundle. This proves Claim A.5.2-3. Claim A.5.4 is a formal consequence of (A.5.1-3). To see this, we need to discuss how to detect \(2\)-torsion in \(H_{2}\) using \(\mathbb{P}^{1}\)-bundles. (Similarly, \(n\)-torsion can be detected using \(\mathbb{P}^{n-1}\)-bundles.) **A.8** (Comments on \(\mathbb{P}^{1}\)-bundles). Let \(X\) be a normal, proper variety and \(\pi:Y\to X\) a \(\mathbb{P}^{1}\)-bundle (etale locally trivial). For a smooth curve \(C\to X\), let \(C_{Y}\subset Y_{C}:=C\times_{X}Y\) be a lifting of \(C\). Set \[\sigma_{Y}(C):=(C_{Y}\cdot K_{Y_{C}/C})\mod 2. \tag{A.8.1}\] This is well defined as a function on \(A_{1}(X)\), the group of \(1\)-cycles modulo algebraic equivalence. If \(X\) is smooth and \(Y\to X\) has a rational section \(S\subset Y\), then \(K_{Y/X}\sim-2S+\pi^{*}D\) for some Cartier divisor \(D\) on \(X\). In this case \[\sigma_{Y}(C)\equiv(C\cdot D)\mod 2.\] Conversely, assume that we are over \(\mathbb{C}\), \(H_{2}(X,\mathbb{Z})\) is generated by algebraic curves, and \(\sigma_{Y}(C)\equiv(C\cdot D)\mod 2\) for every \(C\). Then \(K_{Y/X}-\pi^{*}D\) is divisible by \(2\), giving a rational section. In any case, we get the following. **Claim A.9**. If there is a numerically trivial \(1\)-cycle \(C\) such that \(\sigma_{Y}(C)\equiv 1\mod 2\), then \([C]\) is a nontrivial torsion element in \(H_{2}(X,\mathbb{Z})\), and \(Y\to X\) has no rational sections. **A.10** (End of the proof of Claim A.5). Since \(U\times_{W}L^{\prime}\cong\mathbb{F}_{1}\), (A.8.1) shows that \(\sigma_{U}(L^{\prime})\equiv 1\mod 2\). Similarly, \(U\times_{W}L^{\prime\prime}\cong\mathbb{F}_{0}\) implies that \(\sigma_{U}(L^{\prime\prime})\equiv 0\mod 2\). Thus \(C:=L^{\prime}-L^{\prime\prime}\) is numerically trivial and \(\sigma_{U}(C)\equiv 1\mod 2\). We can now apply Claim A.9. In both cases we could have used the isomorphism \[\omega_{U/W}\cong p_{1}^{*}\mathcal{O}_{G}(-1)\otimes p_{2}^{*}\mathcal{O}_{Z}(1)\] to compute \(\sigma_{U}(L^{\prime})\) and \(\sigma_{U}(L^{\prime\prime})\). **A.11** (Lines on \(Z\)). By [16], the lines on \(Z-Z_{2}\) form 3 families of dimension 20 each. These are the following. (A.11.1) \(\langle Q_{1},Q_{2}\rangle\) where the \(Q_{i}\) contain a common 2-plane. The general such line \(L\) is disjoint from \(Z_{3}\), and its preimage in \(W\) is a pair of disjoint lines \(L^{\prime}\cup L^{\prime\prime}\). After coordinate change, these can be written as \[x_{0}(\lambda x_{2}-\mu x_{3})=x_{1}(\mu x_{4}-\lambda x_{3}).\] (A.11.2) \(\langle Q_{1},Q_{2}\rangle\) where the \(Q_{i}\) have a common singular point. After coordinate change, these can be written as \[\lambda q_{1}(x_{1},\dots,x_{4})+\mu q_{2}(x_{1},\dots,x_{4})=0,\] where the \(q_{i}\) are quadratic forms. The general such line \(L\) intersects \(Z_{3}\) at 4 points, and its preimage in \(W\) is a smooth, elliptic curve of degree 2. (A.11.3) \(\langle Q_{1},Q_{2}\rangle\) where the \(Q_{i}\) have rank 2 and \(\operatorname{sing}Q_{i}\) is tangent to \(Q_{3-i}\). The general such line \(L\) intersects \(Z_{3}\) at 2 points, and its preimage in \(W\) is a smooth, rational curve of degree 2. After coordinate change, these can be written as \[\lambda(x_{0}^{2}+x_{1}x_{2})+\mu(x_{2}x_{3}+x_{4}^{2})=0.\] **A.12** (Lines on \(X_{Z}\)). The space of lines in \(Z-Z_{3}\) has dimension 20, and with each hyperplane section the dimension drops by 2. So the lines on \(X_{Z}\) form 3 families of dimension 2 each.
Only (A.11.1) contains lines that are disjoint from \(\operatorname{sing}X_{Z}\). Since there are no lines on \(Z_{3}-Z_{2}\), the only lines common to any two of these families are the finitely many double tangents of \(\operatorname{sing}X_{Z}\). **A.13** (Another representation of \(X_{Z}\)). Let \(P_{5}\) be a general 5-dimensional linear system of quadrics on \(P_{4}:=\mathbb{P}^{4}\). For \(i=4,5\), we have the projections \(\pi_{i}:P_{4}\times P_{5}\to P_{i}\). For brevity let us write \((a,b):=a\pi_{4}^{*}H_{4}+b\pi_{5}^{*}H_{5}\), where \(H_{i}\) is the hyperplane class on \(P_{i}\). Set \[Y:=\big\{(p,Q):p\in P_{4},Q\in P_{5},p\in\operatorname{sing}Q\big\}\subset P_{4}\times P_{5}.\] The condition \(p\in\operatorname{sing}Q\) is equivalent to the partial derivatives of the equation of \(Q\) vanishing at \(p\). Thus \(Y\subset P_{4}\times P_{5}\) is the complete intersection of 5 divisors of bidegree \((1,1)\). Write these as \[\sum_{i=0}^{4}\sum_{j=0}^{5}\alpha_{ij}^{\ell}y_{i}x_{j}\quad\text{for}\quad\ell=1,\dots,5. \tag{A.13.1}\] Over \(P_{5}\), (A.13.1) is equivalent to a \(5\times 5\) symmetric matrix whose entries are the linear forms \(m_{i}^{\ell}=\sum_{j=0}^{5}\alpha_{ij}^{\ell}x_{j}\). The condition \(\det(m_{i}^{\ell})=0\) defines \(X_{Z}\) as in Paragraph A.3. Over \(P_{4}\), (A.13.1) is equivalent to a \(6\times 5\) matrix whose entries are the linear forms \(n_{j}^{\ell}=\sum_{i=0}^{4}\alpha_{ij}^{\ell}y_{i}\). Note that \(\pi_{4}:Y\to P_{4}\) is birational. Its inverse is the blow-up of a surface \[S\subset P_{4},\quad\text{defined by}\quad\operatorname{rank}(n_{j}^{\ell})\leq 4\] (this is not the surface \(S\) of Section 4.2). Let \(E_{4}\subset Y\) denote the exceptional divisor. \(Y\) defines a rational map \(P_{4}\dashrightarrow X_{Z}\subset P_{5}\), which is given by the \(5\times 5\) subdeterminants of \((n_{j}^{\ell})\). Thus \(E_{4}\sim(5,-1)|_{Y}\). The inverse rational map \(X_{Z}\dashrightarrow P_{4}\) is a bit harder to see. It is given by a linear system of divisors as follows. Let \(H\in|H_{4}|\) be a hyperplane and set \[D_{H}:=\big\{Q\in X_{Z}:H\cap\operatorname{sing}Q\neq\emptyset\big\}\subset X_{Z}.\] Note that the condition \(H\cap\operatorname{sing}Q\neq\emptyset\) is equivalent to \(Q|_{H}\) being singular. (Here we need that \(Q\) itself is singular.) This gives us the equation \(\det(Q|_{H})=0\) for \(D_{H}\). It has degree \(4\). We claim that the intersection of \((\det(Q|_{H})=0)\) with \(X_{Z}\) has multiplicity \(2\). To see this, choose coordinates such that \(H=(x_{0}=0)\) and \(Q=(x_{0}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=0)\). For its deformations we can make linear coordinate changes to the \(x_{2},x_{3},x_{4}\), but \(x_{0}\) can only be multiplied by a constant. Thus we get a miniversal deformation family \[\big(x_{0}^{2}+t_{1}x_{0}x_{1}+t_{2}x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=0\big)\subset P_{4}\times\mathbb{A}_{t_{1},t_{2}}^{2}.\] For a given \(t_{1},t_{2}\), the quadric has rank \(4\) iff \(t_{1}^{2}-4t_{2}=0\), and the singular point is on \((x_{0}=0)\) iff \(t_{2}=0\). Their intersection is the length \(2\) scheme \((t_{1}^{2}=0)\). Thus the \(D_{H}\subset X_{Z}\) have degree \(10=\frac{1}{2}(4\cdot 5)\) and \(2D_{H}\sim 4H_{5}|_{X_{Z}}\). In particular, the divisor class \(D_{H}-2H_{5}|_{X_{Z}}\) is \(2\)-torsion in the class group \(\operatorname{Cl}(X_{Z})\). The corresponding double cover is our \(X\), constructed in Paragraph A.3. Let \(E_{5}\subset Y\) denote the exceptional divisor of \(\pi_{5}:Y\to X_{Z}\).
The previous computations suggest that it should be linearly equivalent to \((-1,2)\). However, \(X_{Z}\) has multiplicity \(2\) along the base locus of \(|D_{H}|\), so the correct bidegree is \((-2,4)\). On \(P_{4}\), the \(3\) families of lines (A.11.1-3) correspond, respectively, to: conics that are \(9\)-secants of \(S\); fibers of \(E_{4}\to S\); and lines that are \(4\)-secants of \(S\). **A.14** (\(X\) as a double \(\mathbb{P}^{4}\)). By the previous description, \(X\) is birational to a double cover of \(\mathbb{P}^{4}\) ramified along the hypersurface \(R:=\pi_{4}(E_{5})\subset P_{4}\). The degree of \(R\) is given by \((1,0)^{3}[E_{5}](1,1)^{5}\), which works out to be \(18\). The degree of the surface \(S\) is \((1,0)^{2}[E_{4}](0,1)(1,1)^{5}=15\). Note that \(S\) is a \(6\times 5\) determinantal surface. However, it is not general since we have a symmetry condition on the \(P_{5}\) side, so results about general determinantal surfaces do not apply to \(S\). The multiplicity of \(R\) along \(S\) is \(4\). This follows from the computation \[(1,0)^{2}[E_{4}][E_{5}](1,1)^{5}=60=4\cdot\deg S.\] Thus \(R\) is in the \(4\)th symbolic power of the homogeneous ideal of \(S\), but not in its \(4\)th power. For general determinantal surfaces these are equal by [9]. Another interesting property of \(S\) is that the fibers of \(E_{5}\to\operatorname{sing}X_{Z}\) give \(5\)-secants of \(S\). Thus \(S\) has a \(2\)-parameter family of \(5\)-secants. Note that by the correspondence with (A.11.3) above, the family of \(4\)-secants has dimension \(2\) as well. Most surfaces in \(\mathbb{P}^{4}\), including general \(6\times 5\) determinantal surfaces, have only 1-parameter families of 5-secants. ### Acknowledgments I thank J. Ottem, C. Raicu and B. Totaro for many useful comments. Partial financial support was provided by the NSF under grant number DMS-1901855.
2307.16896
Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training
Harnessing the power of pre-training on large-scale datasets like ImageNet forms a fundamental building block for the progress of representation learning-driven solutions in computer vision. Medical images are inherently different from natural images as they are acquired in the form of many modalities (CT, MR, PET, ultrasound, etc.) and contain granular information like tissues, lesions, organs, etc. These characteristics of medical images require special attention towards learning features representative of local context. In this work, we focus on designing an effective pre-training framework for 3D radiology images. First, we propose a new masking strategy called local masking, where the masking is performed across channel embeddings instead of tokens to improve the learning of local feature representations. We combine this with classical low-level perturbations like adding noise and downsampling to further enable low-level representation learning. To this end, we introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations. Additionally, we also devise a cross-modal contrastive loss (CMCL) to accommodate the pre-training of multiple modalities in a single framework. We curate a large-scale dataset to enable pre-training of 3D medical radiology images (MRI and CT). The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance. Notably, our proposed method tops the public test leaderboard of the BTCV multi-organ segmentation challenge.
Jeya Maria Jose Valanarasu, Yucheng Tang, Dong Yang, Ziyue Xu, Can Zhao, Wenqi Li, Vishal M. Patel, Bennett Landman, Daguang Xu, Yufan He, Vishwesh Nath
2023-07-31T17:59:42Z
http://arxiv.org/abs/2307.16896v1
# Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training ###### Abstract Harnessing the power of pre-training on large-scale datasets like ImageNet forms a fundamental building block for the progress of representation learning-driven solutions in computer vision. Medical images are inherently different from natural images as they are acquired in the form of many modalities (CT, MR, PET, ultrasound, etc.) and contain granular information like tissues, lesions, organs, etc. These characteristics of medical images require special attention towards learning features representative of local context. In this work, we focus on designing an effective pre-training framework for 3D radiology images. First, we propose a new masking strategy called local masking, where the masking is performed across channel embeddings instead of tokens to improve the learning of local feature representations. We combine this with classical low-level perturbations like adding noise and downsampling to further enable low-level representation learning. To this end, we introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations. Additionally, we also devise a cross-modal contrastive loss (CMCL) to accommodate the pre-training of multiple modalities in a single framework. We curate a large-scale dataset to enable pre-training of 3D medical radiology images (MRI and CT). The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance. Notably, our proposed method tops the public test leaderboard of the BTCV multi-organ segmentation challenge. 3D Medical Image Analysis, Autoencoders, Image Segmentation, Pre-training. ## 1 Introduction Inception of transformers [1, 2] has led to a significant shift from convolution neural network (ConvNet) based methods [3, 4, 5] to transformer-based methods [2, 6, 7, 8, 9] for many computer vision applications. However, the fact that pre-training plays an irreplaceable role in model development has not changed in the past decade [10]. Model weight initialization is an important step in training deep neural networks [11], as good starting weights are necessary for efficient training towards a particular task. In the ConvNet era, pre-training on the ImageNet dataset [12] played a significant role in developing models for various downstream tasks. Most of the network architectures offer ImageNet weights as the starting weights for further training. For Vision Transformers (ViT) [2], pre-training has proved to be even more fruitful due to their higher model capacity. It was observed that pre-training ViT on large-scale datasets like JFT300M [13] could provide superior performance to pre-training on ImageNet. Curation and collection of such large-scale datasets is thus pivotal to the advancement of various computer vision sub-fields. Pre-training for natural computer vision tasks is usually not constrained by the availability of data, as natural images are abundant and carry fewer restrictions on their acquisition. Unfortunately, the same does not translate to medical images, as they are scarce (acquisition cost is high) and also difficult to obtain (specialized hardware is required). There is also complexity involved in releasing them publicly due to heavy privacy regulations [14].

Figure 1: Disruptive Autoencoders: Here, we disrupt the 3D medical image with a combination of low-level perturbations - noise and downsampling, followed by tokenization and local masking. These disrupted tokens are then passed through a transformer encoder and convolutional decoder to learn to reconstruct the original image. Our method also includes cross modal contrastive learning to bring in modality-awareness to the pre-training framework. This can act as an effective pre-training strategy to extract meaningful low-level representations for 3D medical image analysis.

While there have been recent efforts in creating a large-scale dataset for medical imaging [15], there still does not publicly exist a high-resolution dataset containing multiple modalities. It should be noted that while natural images are in general standardized RGB, medical images acquired by different modalities through different manufacturers' hardware with different acquisition parameters can differ significantly from each other. The characteristics of medical images are dependent on several factors, including the modalities used for the specific diagnostic task, and the site where the image is acquired. Hence it is extremely challenging to build a generic pre-training framework for medical images that is capable of factoring in multiple possible variations simultaneously. Recently, masked image modelling methods [16] have gained significant traction as an efficient self-supervised pre-training framework. They are used to develop robust pre-trained models that can generalize well to downstream tasks. Masked Autoencoders (MAEs) learn a feature representation by trying to reconstruct the original image while masking out randomly selected tokens in the input space. MAEs are designed specifically for transformers, as masked tokens help reduce the computation. However, improvements are needed in the medical imaging domain: while trying to adopt vanilla MAEs for medical images, we observed that although MAEs do lead to a performance boost for further finetuning, the reconstructions were poor and most of the anatomical structures were missing after reconstruction. For example, in Fig. 2 (left), it can be seen that MAEs cannot reconstruct the bones and other fine structures. This has also been observed in some recent works [17], [18]. Unlike natural images, most of the vital information in medical images lies in the fine details (e.g., small lesions, finer boundaries of organs, tiny structures of bones that need to be delineated, etc.). To solve such tasks, which require understanding fine details in an image, the focus is usually on extracting meaningful low-level features. In the context of features at different levels, computer vision tasks can be broadly classified into low-, mid-, and high-level tasks depending on the type of information needed to solve the task. High-level tasks like object recognition [19] and scene understanding [20] usually need a semantic understanding of the image. Low-level tasks like edge detection and image matching, on the other hand, require a finer understanding of the image. The type of features extracted by a deep network to solve low-level tasks is usually different from the features extracted to solve high-level tasks [21], [22]. Since MAEs can be considered a form of inpainting [23], which is regarded as a high-level task, most of the features extracted are mid-to-high-level features, resulting in the coarse reconstructions we observed.
This makes MAEs sub-optimal for pre-training medical images, as features representative of crucial low-level information go missing. In this work, we focus on designing an effective pipeline for pre-training on 3D medical volumes. First, we design a new pre-training strategy that is better than MAEs at extracting low-level details. MAEs mask tokens at different places at a global level, making it difficult for the network to predict reconstructions while preserving local details. To this end, we introduce local masking, where we do not mask along the token dimension but along the channel embedding dimension. Unlike MAEs, a certain amount of masking is done to all tokens, as only the channel embeddings are perturbed (visualized in Figs. 1 and 3). In this way, we mask out a certain portion of information for all tokens throughout the input, helping the network reconstruct sharp details and learn better local context. We also explore using various low-level vision tasks like denoising and super-resolution for pre-training. We observe that these tasks help extract better low-level features and result in sharper reconstructions. In summary, we introduce Disruptive Autoencoders (DAE), where we first create a combination of these perturbations (local masking, downsampling, and adding noise) to disrupt the tokens (visualized in Fig. 1). Then, an autoencoder is trained to reconstruct the original medical volume from these disrupted tokens. DAEs result in sharper reconstructions and better performance on downstream tasks. We also devise a contrastive loss for our framework in such a way that it can discriminate between the features extracted from different modalities. This cross modal contrastive loss (CMCL) pulls together features of the same modality and pushes apart features of different modalities. In this way, the pre-training strategy extracts diverse features from different modalities while maintaining an awareness of each modality. To evaluate our proposed method DAE, we curate a pre-training dataset of radiology images. The dataset includes 5 modalities: Computed Tomography (CT) and 4 modalities of Magnetic Resonance Imaging (MRI) (T1, T2, FLAIR, T1ce). We use this collection of 10,050 individual 3D volumes for pre-training. All selected datasets are public, so that the pre-trained weights can be used as a better initialization, as compared to random initialization, for future MR and CT radiology tasks. We conduct experiments on multiple segmentation baselines to validate the effectiveness of our proposed pre-training method. In summary, the following are the major contributions of this work: * We propose Local Masking, a new masking strategy which helps learn useful features to reconstruct local details like small anatomical and functional structures. * We introduce **Disruptive Autoencoders**, which aim to reconstruct the original volume from tokens disrupted by a combination of low-level perturbations such as local masking, downsampling, and adding noise. * Our framework proposes using a single set of pre-trained weights for various downstream tasks, even across different modalities. * We curate a public pre-training dataset for CT and MRI radiology images with over 10,000 3D volumes and conduct extensive experiments on multiple segmentation datasets, showing state-of-the-art performance. The pipeline achieves the best performance on a public multi-organ segmentation challenge leaderboard.
## 2 Related Works **Self-supervised Pre-training**: Self-supervised pre-training approaches can be broadly categorized into two types: 1) generative and 2) discriminative. Generative methods focus on mapping the input to a latent space to learn a representation and then decoding it back to a new space. Masked Autoencoders (MAE) [16] propose a way to mask out some tokens and make the network reconstruct the original image back, thus helping the model learn useful representative features. It uses an asymmetric encoder-decoder design by having a small transformer decoder to reduce the computation burden. Beit [24] proposed a masked image modeling task to pretrain vision transformers while using two views of the input: image patches as well as discrete visual tokens. SimMIM [25] simultaneously performs representation encoding and pretext prediction, due to which the decoder design can be changed to be as simple as a single dense layer. Masked feature prediction [26] proposes a technique where, instead of the original image, hand-crafted features like Histograms of Oriented Gradients (HOG) are predicted to learn the representation. Latent contextual regressors and alignment constraints have been proposed to map the predicted representations of masked patches to the targets. Masked pre-training has been applied not only to images but also to point clouds [27], videos [28], and multi-spectral images [29]. Discriminative pre-training methods try to design a discriminative loss to differentiate between the features extracted for different inputs. Typically, ground truth in the form of annotations or labels is not used for pre-training; instead, pretext tasks like solving jigsaw puzzles [30] or predicting rotation [31] are used to extract meaningful information. It is also worth noting that CLIP [10] uses a contrastive loss for multi-modal data to learn robust visual representations from image and text pairs. Contrastive methods have been shown to be useful in many other multi-modal contexts [32]. **Pre-training for Medical Images:** Model Genesis [33] proposes a unified self-supervised learning framework using the recurrent anatomy in medical images. Azizi et al. [34] perform stage-wise pre-training on natural images followed by task-specific medical images. A Multi-Instance Contrastive Learning based approach is proposed to perform self-supervised pre-training. Several other methods [35] also follow similar contrastive strategies for specific medical imaging tasks. In [36], a self-supervised framework using a combination of contrastive coding, rotation prediction, and inpainting was proposed for pretraining on CT volumes. MAE-based pre-training methods have been quickly adopted for self-supervised pre-training on medical images [18, 37]. These works show that masked image modelling methods provide better performance than previous contrastive methods. Unlike these works, we propose a new pre-training setup which efficiently pre-trains on multiple modalities contrastively while also learning all the low-level anatomical details using an autoencoder, resulting in better representative power. ## 3 Proposed Method In this section, we first explain our motivation by exploring why MAEs are sub-optimal for pre-training on 3D medical images. Then, we describe the proposed pre-training strategy - DAE, followed by training details and loss functions. ### What do MAEs lack for medical images? MAEs have shown impressive results for vision-based image pre-training.
To this end, we first adapted MAEs to operate on 3D volumes for pre-training medical volumes. However, we observed that the reconstruction quality was low, as the reconstructions lose the finer anatomical details. Such observations are also seen in other works that try to use vanilla MAEs for medical image pre-training, as depicted in Fig. 3 of [17] and Fig. 2 of [18]. Although using these pre-trained weights does improve downstream tasks, we argue that there is significant potential for further improvement in learning better representations as compared to MAEs for medical images. Coarse reconstructions might be sufficient for a high-level semantic understanding, which is useful for classifying natural images. For tasks like segmentation of medical images, we postulate that coarse features result in poor reconstructions and are not sufficient to enable efficient fine-tuning. MAEs fall short at learning features that reflect a deeper understanding of the medical image, as the tokens are masked globally and no special attention is given to learning the local details. ### Disruptive Autoencoders Medical images (especially radiology images) are different from natural images, as the information contained in them, like anatomical details and lesions, consists mostly of fine-grained details rather than coarse structures. For this purpose, the network architecture needs guidance to learn features which are representative of low-level details, so as to obtain more useful representations of the medical image. To tackle these challenges, we propose DAE, a pre-training strategy that focuses on learning strong low-level representative features of the medical image to reconstruct local context.

Figure 2: Comparison of reconstruction quality. It can be observed that masked image modelling produces a coarse reconstruction for radiology without local context, while the proposed disruptions in this work obtain sharper reconstructions recovering meaningful fine details.

Here, we first take a cubic patch from a 3D medical volume, perturb and tokenize it to get disrupted tokens in 3D. The disrupted tokens are then passed through a transformer encoder to learn a feature representation. The latent features are passed through a decoder, which learns to reconstruct the original image back. The tokens are disrupted using a combination of different low-level perturbations: i) local masking, ii) adding noise, and iii) downsampling. Local masking is a novel masking strategy proposed in this work. Denoising and super-resolution (recovering downsampled images) are classic low-level tasks; we are among the first to explore them as pre-training tasks for medical imaging, motivated by these tasks' ability to affect low-level, finer features of the images. In the next sections, we discuss these in greater detail. #### 3.1.1 Local Masking Transformers were designed to function in a sequence-to-sequence manner where the image is first tokenized. Tokenization can either be done with just linear layers [2] or convolutional kernels [6]. The 3D input images are of dimension \((H,W,Ch)\) where \(H\), \(W\), and \(Ch\) denote the height, width, and number of channels in the image respectively. After tokenization, the tokens are of the dimensions \((N,C)\) where \(N\) represents the number of tokens and \(C\) denotes the embedding dimension. Masked image modelling methods like MAE and SimMIM follow a token masking approach where some tokens \(X\) out of \(N\) are set to zero and the network tries to reconstruct the original image back.
The percentage of tokens masked here is a hyper-parameter. The token masking approach used in MAEs can be considered a global masking approach, as the entire token chosen to be masked is set to zero. To be more specific, the entire \(C\) dimension of each of the \(X\) tokens chosen to be masked is set to zero. The entire \(C\) dimension, when set to zero, disrupts the image globally, thereby directing the network to learn global context with the objective of reconstructing the original image. This globally disruptive process of zeroing out \(C\) does not always help in obtaining a good reconstruction for medical images, as most of the information in medical images is not global but in the finer local details. Not being able to learn features representative of local details like anatomy also affects the fine-tuning performance of MAEs. To this end, instead of masking \(X\) tokens out of \(N\), we propose to mask \(X\) channels out of the \(C\)-dimensional channel embedding. This ensures that there is some information preserved for each token, so that local details do not get completely destroyed. The perturbation is done locally, as we set certain embeddings of each token to zero. We call this approach local masking, as the masking is done to local details. Local masking has been visualized in Fig. 3. The masking ratio \(r\) here is a hyper-parameter which defines the percentage of the \(C\) embeddings being masked. #### 3.1.2 Other Low-level Perturbations While local masking helps us extract features representative of local context, we further employ other tasks like denoising and super-resolution for pre-training, which help learn more low-level information. Noise and low resolution are commonly found issues in realistic clinical medical acquisition pipelines, and hence having them in the pre-training pipeline is meaningful [38]. **Adding Noise:** Denoising is the task of restoring an original image from its noisy version. To obtain a good denoised image, a model must be able to restore all local details of the image, like edges and corners. To enable denoising as a pre-training task, we first add noise to the original input and try to restore the original image from the noisy input using the network. Given that additive Gaussian noise is the most common, we define the perturbed input \(\hat{x}\) as follows: \[\hat{x}=x+N(\mu,\sigma), \tag{1}\] where \(x\) is the original input and \(N\) is the normal distribution. For each sample, we randomly sample from a normal distribution to get the noise. This way, there is no specific pattern for the network to easily restore the input. The mean and standard deviation are hyper-parameters which can be used to control the injected noise level. **Downsampling:** Image super-resolution refers to the task of enhancing the resolution of an image from low-resolution (LR) to high-resolution (HR). Low-resolution images are usually blurred, with sampling artifacts, making it difficult to infer details from the image. Super-resolution is highly useful in medical imaging, as capturing high-resolution images for certain modalities like MR can be tricky due to long scanning times (as the resolution of the scan acquisition is increased, time and cost go up exponentially). To use super-resolution as a pre-training strategy, we first downsample the input image \(x\) to get the LR image \(\hat{x}\). We formulate this downsampling process as follows: \[\hat{x}=D(x,\epsilon), \tag{2}\] where \(D\) is the downsampling function and \(\epsilon\) is the downsampling ratio.
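To make the disruptions concrete, the following is a minimal PyTorch sketch of the three perturbations just described. The convolutional stand-in tokenizer, the tensor shapes, and the choice of sampling the masked channel positions independently for each token are illustrative assumptions rather than the exact implementation details of this work:

```python
import torch
import torch.nn.functional as F

def add_noise(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    # Eq. (1): additive Gaussian noise, freshly sampled for every volume.
    return x + sigma * torch.randn_like(x)

def downsample(x: torch.Tensor, eps: int = 4) -> torch.Tensor:
    # Eq. (2): downsample by the ratio eps, then pre-upsample back to the
    # original grid (the pre-upsampling super-resolution setup).
    lr = F.interpolate(x, scale_factor=1.0 / eps, mode="trilinear")
    return F.interpolate(lr, size=x.shape[2:], mode="trilinear")

def local_mask(tokens: torch.Tensor, r: float = 0.6) -> torch.Tensor:
    # tokens: (B, N, C). MAE-style global masking zeroes whole tokens along N;
    # local masking instead zeroes a fraction r of the C channel embeddings,
    # so every spatial location keeps part of its information.
    keep = (torch.rand_like(tokens) > r).float()
    return tokens * keep

# Putting the pieces together on a 96^3 crop (shapes are assumptions):
x = torch.randn(2, 1, 96, 96, 96)                              # (B, 1, H, W, D)
patch_embed = torch.nn.Conv3d(1, 48, kernel_size=2, stride=2)  # stand-in tokenizer
feats = patch_embed(downsample(add_noise(x)))                  # disrupt, then tokenize
tokens = local_mask(feats.flatten(2).transpose(1, 2), r=0.6)   # disrupted (B, N, C) tokens
```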
We use linear interpolation for downsampling. We follow a pre-upsampling super-resolution setup where we upsample the LR image to the same spatial dimension before converting it to HR. In DAEs, we use a combination of all the above perturbations. We first add noise to the image and downsample it, combining the two perturbations. Then, the resultant 3D image is tokenized with the local masking strategy. We name these tokens disrupted tokens and pass them to the transformer encoder. These features are then passed through a decoder for reconstruction and for cross-modal contrastive learning, as explained in Fig. 1.

Figure 3: Local Masking: Each token has channel embeddings; in MAE-based methods, the selected tokens are masked completely. In local masking, we only mask certain channel embeddings of each token instead of completely masking the token.

### Network Details and Training We use Swin-UNETR as our backbone architecture for all the experiments. Swin-UNETR follows a hierarchical vision transformer backbone that computes self-attention in an efficient shifted-window partitioning scheme and has been proven suitable for multiple downstream medical imaging tasks [36]. It has a U-shaped network design in which the extracted feature representations of the encoder are used in the decoder via skip connections at each resolution. The encoder first creates a sequence of 3D tokens, followed by transformer blocks consisting of window and shifted-window self-attention mechanisms. This helps Swin-UNETR learn long-range dependencies in the image, resulting in effective segmentation. For pre-training, we use a combination of an \(\mathcal{L}_{1}\) reconstruction loss and the cross modal contrastive learning loss \(\mathcal{L}_{CMCL}\). First, we explain how to obtain \(\mathcal{L}_{CMCL}\). Inspired by [10], we train the network to predict which of the \(B\times B\) possible pairings across a batch of size \(B\) are the true ones. The goal is to maximize the cosine similarity of the embeddings of the true pairs in the batch while minimizing the cosine similarity of the embeddings of the incorrect pairings. We optimize a symmetric cross entropy loss over these similarity scores: \[z_{sim}=z_{i}*z_{i}^{T}*\exp(t), \tag{3}\] where \(z_{i}\) is the mini-batch feature matrix, \(z_{sim}\) is the similarity matrix, and \(i\) is the mini-batch index. \(t\) here is the temperature parameter and is set to 0.07. Note that the total number of data points is \(N\) and each mini-batch has \(B\) data points, which means the mini-batch index \(i\) goes from 1 to \(N/B\). Now, we apply a simple binary cross entropy loss on the similarity matrix \(z_{sim}\) to perform contrastive learning. We define the CMCL loss as follows: \[\mathcal{L}_{CMCL}=\alpha*CE(z_{sim},z_{label}), \tag{4}\] where \(CE\) represents the binary cross entropy loss, \(\alpha\) represents the scale, and \(z\) denotes the latent feature vector. The \(CE\) loss is applied across each axis of the \(z_{sim}\) matrix. \(z_{label}\) is the label matrix created from the positive and negative pairs as per the meta-data. The total pre-training loss can be defined as: \[\mathcal{L}_{pretrain}=\mathcal{L}_{1}+\mathcal{L}_{CMCL}. \tag{5}\] The \(\alpha\) in CMCL is set to 0.05 to match the range of both losses. We pre-train Swin-UNETR on a curated set of medical volumes without any labels, then use those weights as the starting weights for our downstream fine-tuning experiments. For finetuning experiments, we use a Dice loss to train the model.
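As a concrete reference, here is a minimal sketch of how Eqs. (3)-(5) could be realized in PyTorch. The unit-normalization of the features, the logits form of the binary cross entropy, and the construction of the label matrix from integer modality codes are our assumptions about details the text leaves open:

```python
import math
import torch
import torch.nn.functional as F

def cmcl_loss(z: torch.Tensor, modality: torch.Tensor,
              t: float = 0.07, alpha: float = 0.05) -> torch.Tensor:
    # z: (B, d) latent features of one mini-batch;
    # modality: (B,) integer codes, e.g. CT=0, T1=1, T2=2, T1ce=3, FLAIR=4.
    z = F.normalize(z, dim=-1)          # assumption: cosine similarities
    z_sim = z @ z.t() * math.exp(t)     # Eq. (3)
    # Label matrix: positive pairs share a modality, negative pairs do not.
    z_label = (modality[:, None] == modality[None, :]).float()
    # Eq. (4): binary cross entropy applied across each axis of z_sim,
    # in the numerically stable with-logits form.
    loss = 0.5 * (F.binary_cross_entropy_with_logits(z_sim, z_label)
                  + F.binary_cross_entropy_with_logits(z_sim.t(), z_label.t()))
    return alpha * loss

def pretrain_loss(recon, target, z, modality):
    # Eq. (5): L1 reconstruction plus the cross modal contrastive term.
    return F.l1_loss(recon, target) + cmcl_loss(z, modality)

# Example: a batch of 4 latent vectors from two modalities.
z = torch.randn(4, 768, requires_grad=True)
modality = torch.tensor([0, 0, 1, 1])
print(cmcl_loss(z, modality))
```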
## IV Experiments

### Datasets

**Pre-training Dataset:** We combine various public radiology CT and MRI datasets, namely BraTS21 [39], LUNA16 [40], TCIA Covid19 [41], HNSCC [42], TCIA Colon [43], and LiDC [44], to construct our pre-training dataset. The corresponding numbers of 3D volumes for brain, chest, abdomen and head/neck are \(1,310\times 4\) (4 modalities), 2,018, 1,520 and 1,223, respectively. The number of brain MRI volumes for each of the modalities T1, T2, T1ce and FLAIR is 1,310.

**Finetuning Datasets:** i) **BTCV:** The Beyond the Cranial Vault (BTCV) abdomen challenge dataset [45] consists of abdominal CT scans of 30 subjects. The annotations contain 13 organs, annotated by interpreters under the supervision of radiologists at Vanderbilt University Medical Center. Each CT scan is acquired with contrast enhancement in the portal venous phase and consists of 80 to 225 slices with \(512\times 512\) pixels and slice thickness ranging from 1 to 6 \(mm\). The multi-organ segmentation problem is formulated as a 13-class segmentation task. ii) **FeTA:** The Fetal Tissue Annotation dataset (FeTA) [46] is a publicly available database of 120 manually segmented pathological and neurotypical fetal MRI T2-weighted brain volumes. These volumes span a range of gestational ages (20 to 33 weeks) and are segmented into 7 different tissue categories (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, deep grey matter, brain-stem/spinal cord). The training images were acquired at two different sites with 1.5T and 3T MR scanners. The data were provided with histogram-based matching and zero-padded to \(256\times 256\times 256\) voxels. Data from both sites were also sampled at 0.5 \(mm\) isotropic spacing as per the challenge design. The dataset was split into 5 folds with an 80/20 split for training and validation.

### Implementation Details

Our deep learning models were implemented in PyTorch [47] and MONAI. For the pre-training experiments, we used a batch size of 2 per GPU. The volumes were randomly cropped into \(96\times 96\times 96\) cubes during pre-training. We used an initial learning rate of \(4e^{-4}\), momentum of 0.9 and decay of \(1e^{-5}\) for 20K iterations. We trained the model using the AdamW [48] optimizer with a warm-up cosine scheduler of 500 iterations. We use the hyper-parameters \(r=60\%\), \(\sigma=0.1\), \(\epsilon=4\) when training the DAE. For the BTCV fine-tuning experiments, a five-fold cross-validation strategy is used to train the models. The models were trained for 600 epochs with a learning rate of \(4e^{-4}\), and the batch size was set to 4 per GPU. Multiple augmentations such as Gaussian noise, contrast scaling, zoom and random flipping across the axes were utilized. We select the best model in each fold and ensemble their outputs for the final segmentation predictions. For FeTA, the intensities were normalized to a scale of 0 to 1. The learning rate was set to \(4e^{-4}\) and the batch size was set to 4 per GPU. All models were trained for 600 epochs, which was determined by convergence on the full dataset. Augmentations like random flipping on all 3 axes, Gaussian noise, etc. were utilized during the training process. The final layer of the network is also changed from the pre-training configuration to accommodate the fine-tuning task at hand. For FeTA, the number of output channels was set to 8 (including background) as per the dataset. All pre-training and fine-tuning models are trained on NVIDIA DGX-1 V100 servers with 8 and 4 GPUs, respectively.
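A minimal sketch of the pre-training optimization setup described above (AdamW with a 500-iteration warm-up followed by cosine decay over 20K iterations); the scheduler implementation and the placeholder model are assumptions, since the exact MONAI/PyTorch utilities used are not specified:

```python
import math
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(96, 96)  # placeholder standing in for Swin-UNETR

def warmup_cosine(step, warmup=500, total=20000):
    # Linear warm-up for the first `warmup` iterations, then cosine decay.
    if step < warmup:
        return step / max(1, warmup)
    progress = (step - warmup) / max(1, total - warmup)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

optimizer = AdamW(model.parameters(), lr=4e-4, betas=(0.9, 0.999),
                  weight_decay=1e-5)
scheduler = LambdaLR(optimizer, lr_lambda=warmup_cosine)
# The training loop calls optimizer.step() and then scheduler.step() once
# per iteration, so `step` counts iterations rather than epochs.
```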
### Comparison with Previous Works

We compare our proposed method with previous self-supervised methods like contrastive coding [36, 49], rotation prediction [36, 50], and masked image modelling methods [16, 25]. We use Swin-UNETR as the network backbone for all these experiments. We note that MAE and SimMIM are very similar to each other, the only difference being that MAEs discard masked tokens while SimMIM includes them. Hence, we simply utilize a masked image modelling configuration with Swin-UNETR as the backbone and call this configuration MAE in all upcoming discussions. For BTCV, we directly validated our predictions on the public leaderboard so that we can compare our method with all previous backbone methods. For the leaderboard submissions, we submit to the free competition (no specific registration process required). We train all our models with 80 subjects (20% as validation set) and evaluate on the 20-image test set with a spacing resolution of \(1\times 1\times 1\) mm. Of the 80 images, 30 scans are from the public challenge data and 50 extra CT scans annotated by radiologists are used to boost the training performance. We perform 4 rounds of five-fold cross-validation experiments and ensemble the models to obtain the final prediction. The ensemble process is effective at excluding outliers. In addition, test-time augmentation, boundary smoothing and connected component analysis are used for post-processing the labels. Note that this pipeline for the BTCV leaderboard submission is similar to previous approaches like [36] for a fair comparison. These results are tabulated in Table I. We note that our proposed method performs the best and outperforms all the previous baselines. Specifically, we note that we outperform Swin-UNETR [36], which also uses SSL pre-training consisting of 3 different SSL pretext tasks. In particular, we obtain a significant improvement in terms of Hausdorff Distance (HD) and Mean Surface Distance (MSD) compared to previous methods. We also conduct a paired t-test between our BTCV test Dice scores and Swin-UNETR's results. We obtain a two-tailed p-value of 0.0318, which shows that our improvement is statistically significant (p \(\leq\) 0.05). We also note that we obtain better results than even recent methods like UniMiSS [51], which works on the same dataset. We also present our results on five-fold cross-validation of BTCV according to [36] and compare our method with previous SSL methods in Table III. It can be observed that our method performs better than all the previous SSL baselines compared, including MAEs. Note that even the combination of the 3 pretext SSL tasks proposed in Swin-UNETR [52] achieves a 0.8472 Dice score, which is less than our obtained result. For FeTA, we perform five-fold cross-validation experiments and compare our method with training from scratch and MAE. These results are tabulated in Table II. We also show additional experiments with 50\(\%\) and 20\(\%\) of the training data in fine-tuning to show the usefulness of our method. It can be observed that our method performs even better in the low-data regime, which shows the benefits of pre-training. We also visualize some sample qualitative results on the BTCV dataset in Fig. 4. It can be observed that DAEs result in better segmentation predictions, specifically performing better at segmenting small anatomy when compared to MAEs and other baselines. In our ablations of the individual perturbations, no single disruption is consistently the best out of the three; the combination of the disruptions obtains better performance.
We note that this trend depends strongly on the downstream task, but with a combination of these perturbations, the pre-trained weights always perform better than random initialization.

**Impact of CMCL:** To understand the impact of CMCL, we conduct an experiment on DAE with and without the CMCL loss. This experiment is conducted on a single fold of the BTCV dataset. It can be observed in Table V that CMCL provides a benefit in terms of fine-tuning performance.

**Empirical Analysis to prove low-level features matter:** Since a major premise of this work is to improve the pre-training pipeline by extracting better low-level features, we conduct a simple experiment to show that low-level features are the most important for fine-tuning. We use CKA [53] as the feature similarity metric; CKA represents the correlation between any two feature vectors in the latent space. For this experiment, we take MAE and DAE pre-trained weights and fine-tune them on a single fold of BTCV. We then feed the test images forward through both the pre-trained model and the fine-tuned model. The CKA calculated between the features extracted from the pre-trained model and the fine-tuned model across different stages is reported in Table VI. This value gives us an estimate of how much the features changed from the initial pre-trained weights to the final fine-tuned model. It can be observed that the deeper layers have the lowest CKA, meaning most of the high-level features have changed after fine-tuning. On the other hand, the early layers have a higher CKA, meaning more low-level feature representations were retained from the pre-trained weights in the fine-tuned model. As high-level features undergo a relatively heavier change than low-level features during fine-tuning, it is important to focus on learning stronger low-level features during pre-training.

**Limitations:** There is scope for improvement in picking the best combination of \(\sigma,\epsilon,r\) for the pre-trained weights. A grid search over these parameters would be time-consuming but would lead to better pre-trained weights. Moreover, it can be noted that it takes substantial time and compute to conduct pre-training experiments with large-scale datasets of dense medical volumes. Additionally, if the size of the pre-training dataset is increased, better weights would likely be obtained.

## VI Conclusion

In this work, we proposed a new pre-training framework for 3D medical images called Disruptive Autoencoders, where tokens are disrupted using a combination of perturbations: local masking, additive noise, and downsampling. In particular, local masking is a new masking strategy where the masking is performed across channel embeddings instead of tokens to improve the learning of local feature representations. DAE as a pre-training framework performs better than other pre-training strategies across multiple 3D segmentation datasets.
2309.10787
AV-SUPERB: A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models
Audio-visual representation learning aims to develop systems with human-like perception by utilizing correlation between auditory and visual information. However, current models often focus on a limited set of tasks, and generalization abilities of learned representations are unclear. To this end, we propose the AV-SUPERB benchmark that enables general-purpose evaluation of unimodal audio/visual and bimodal fusion representations on 7 datasets covering 5 audio-visual tasks in speech and audio processing. We evaluate 5 recent self-supervised models and show that none of these models generalize to all tasks, emphasizing the need for future study on improving universal model performance. In addition, we show that representations may be improved with intermediate-task fine-tuning and audio event classification with AudioSet serves as a strong intermediate task. We release our benchmark with evaluation code and a model submission platform to encourage further research in audio-visual learning.
Yuan Tseng, Layne Berry, Yi-Ting Chen, I-Hsiang Chiu, Hsuan-Hao Lin, Max Liu, Puyuan Peng, Yi-Jen Shih, Hung-Yu Wang, Haibin Wu, Po-Yao Huang, Chun-Mao Lai, Shang-Wen Li, David Harwath, Yu Tsao, Shinji Watanabe, Abdelrahman Mohamed, Chi-Luen Feng, Hung-yi Lee
2023-09-19T17:35:16Z
http://arxiv.org/abs/2309.10787v2
# AV-SUPERB: A Multi-Task Evaluation Benchmark for Audio-Visual Representation Models

###### Abstract

Audio-visual representation learning aims to develop systems with human-like perception by utilizing correlation between auditory and visual information. However, current models often focus on a limited set of tasks, and generalization abilities of learned representations are unclear. To this end, we propose the AV-SUPERB benchmark that enables general-purpose evaluation of unimodal audio/visual and bimodal fusion representations on 7 datasets covering 5 audio-visual tasks in speech and audio processing. We evaluate 5 recent self-supervised models and show that none of these models generalize to all tasks, emphasizing the need for future study on improving universal model performance. In addition, we show that representations may be improved with intermediate-task fine-tuning and audio event classification with AudioSet serves as a strong intermediate task. We release our benchmark with evaluation code1 and a model submission platform2 to encourage further research in audio-visual learning.

Footnote 1: [https://github.com/roger-tseng/av-superb](https://github.com/roger-tseng/av-superb)

Footnote 2: [https://av.superbenchmark.org](https://av.superbenchmark.org)

Yuan Tseng\({}^{1}\), Layne Berry\({}^{2*}\), Yi-Ting Chen\({}^{3*}\), I-Hsiang Chiu\({}^{1*}\), Hsuan-Hao Lin\({}^{1*}\), Max Liu\({}^{1*}\), Puyuan Peng\({}^{2*}\), Yi-Jen Shih\({}^{1*}\), Hung-Yu Wang\({}^{1*}\), Haibin Wu\({}^{1*}\), Po-Yao Huang\({}^{4}\), Chun-Mao Lai\({}^{1}\), Shang-Wen Li\({}^{4}\), David Harwath\({}^{2}\), Yu Tsao\({}^{3}\), Shinji Watanabe\({}^{5}\), Abdelrahman Mohamed\({}^{6}\), Chi-Luen Feng\({}^{1}\), Hung-yi Lee\({}^{1}\) \({}^{1}\) National Taiwan University, Taiwan \({}^{2}\) University of Texas at Austin, USA \({}^{3}\) Academia Sinica, Taiwan \({}^{4}\) Meta AI \({}^{5}\) Carnegie Mellon University, USA \({}^{6}\) Rembrand [email protected]

**Index Terms:** Audio-Visual Learning, Representation Learning, Evaluation, Self-Supervised Learning

## 1 Introduction

Emulating the seamless integration of multiple tasks in human cognition, such as spoken language comprehension, sound event detection, and visual object recognition, has been a long-standing goal of computational research. Prior research demonstrates that the pretrain-then-finetune paradigm is an effective and scalable method of building multitasking algorithmic systems for speech [1, 2], audio [3, 4], and vision [5, 6]. In the pretraining stage, models can often learn meaningful representations from unlabelled data alone through optimization of contrastive, masked prediction, or other self-supervised loss functions. These pretrained representations can then be applied to diverse tasks just by fine-tuning minimal additional parameters. In order to better measure progress in representation learning, previous works have established multitask benchmarks in speech [7, 8], audio [9], and vision [10, 11]. However, these benchmarks predominantly evaluate performance in isolation within single modalities. This approach overlooks the inherent multimodal nature of human perception, which synergistically integrates auditory and visual cues [12, 13]. While audio-visual representation learning has made significant progress [14, 15, 16, 17, 18], the assessment of these models tends to be task-specific, leaving the broader generalization capabilities across various audio-visual challenges less understood.
This complicates comparative analysis of different models and training strategies, impeding the development of more robust and versatile audio-visual representation learning approaches. To address this issue, we propose AV-SUPERB, a standardized benchmark for comprehensively evaluating representations across seven distinct datasets involving five speech and audio processing tasks. AV-SUPERB comprises three tracks to assess audio, video, and audio-visual fusion representations. We envision that these distinct tracks will allow researchers in speech, audio, and video representation learning alike to compare learning strategies across models and modalities, enabling broader analysis of their effectiveness. Our contributions are four-fold: (1) **Diverse-domain evaluation**: We propose the first audio-visual learning benchmark that encompasses multiple datasets and tasks, covering both speech and audio domains. (2) **Easy and reproducible benchmarking**: We release evaluation code and a dedicated model submission platform that ensures reproducible evaluation on dynamic YouTube datasets and reduces computational entry barriers. (3) **Intermediate-task fine-tuning**: Our work emphasizes the potential benefits of full fine-tuning on intermediate tasks for improving performance on out-of-domain downstream tasks. (4) **Layer-wise analysis**: We show that different layers contribute variably to task performance, suggesting that simply using representations of the final layer is suboptimal, motivating the weighted-sum evaluation approach.

## 2 Related Work

Recognizing how the close relation between audition and vision facilitates multimodal human perception, many audio-visual datasets have been gathered for action recognition [19, 15, 20, 21], speech recognition [22, 23, 24], speaker recognition [25, 26], and a variety of other tasks to study audio-visual learning. However, most models are trained and evaluated on different datasets with different experiment settings, which increases comparison difficulty and obfuscates the broad applicability of proposed methods. Hence, in the AV-SUPERB benchmark, we select a diverse set of datasets from multiple tasks to comprehensively compare works in audio-visual representation learning. Previous multitask benchmarks in speech [7, 8], audio [9], and video representation learning [27, 10, 11] allow for fairer comparison of different models and promote research towards general approaches that are applicable to a variety of real-world tasks. SUPERB [7] and SUPERB-SG [8] evaluate speech representation models on a wide range of downstream tasks covering content, speaker, and other different aspects of speech. Additionally, the HEAR benchmark [9] evaluates audio representations on diverse domains beyond speech, such as music and environmental sounds. For video representations, the SEVERE-benchmark [10] evaluates video self-supervised learning models on a diverse set of datasets to measure model sensitivity to different properties of downstream tasks. Feichtenhofer et al. [27] extend 4 image self-supervised learning methods to video representations and compare their efficacy on several downstream datasets, while Kumar et al. [11] focus on the effects of different factors in self-supervised video pretraining. However, these works focus on individual domains, and cannot make use of the relationship between paired audio/visual inputs.
Previous multi-task multimodal benchmarks either focus on egocentric videos [28], visual-and-language domains [29, 30], or general multimodal learning [31]. In contrast, AV-SUPERB specializes in audio-visual tasks from speech and audio processing, allowing for a more holistic assessment of representation models of audio and video alike.

## 3 Benchmark Details

As shown in Figure 1, audio-visual models typically consist of two separate unimodal encoders followed by multimodal fusion layers. Based on this design, we set up three evaluation tracks in AV-SUPERB to benchmark representations from the two encoders and the fusion layers, referred to as audio-only, video-only, and audio-visual fusion features. This also allows for easy comparison with previous unimodal representation models. Instead of striving for the best possible performance for each task, the goal of our benchmark is to provide insight into the generalization capabilities of pretrained representations; therefore, we freeze the parameters of the task-invariant pretrained representation model (hereafter referred to as the upstream model), and only finetune the parameters of the task-specific model (hereafter referred to as the downstream model), following previous work [7]. Downstream models are designed to be simple and lightweight in order to purely evaluate representation abilities. Following the spirit of representation evaluation, we also limit hyperparameter tuning for downstream tasks. However, we recognize that different representations may have vastly different loss landscapes; hence we search for the best-performing learning rate from \(10^{-1}\) to \(10^{-5}\) in log-scale.

### Downstream Task Selection

To keep computational costs reasonable, we mainly focus on utterance-level classification tasks in speech and audio processing, with the addition of ASR. For audio processing, we select two audio classification tasks that highlight the relevance of different modalities, audio event classification (AEC) and action recognition (AR). Since audio events are often directly caused by actions, these tasks are complementary, and utilizing both audio and visual information can lead to better representations. This enables the possibility of learning better representations from multimodal input compared to unimodal baselines. For speech processing, we select three audio-visual speech processing tasks where visual information is known to be beneficial [32, 33, 34], automatic speech recognition (ASR), automatic speaker verification (ASV), and emotion recognition (ER), in order to assess model capabilities on three fundamental aspects of speech: content, speaker, and paralinguistic information. In designing the architecture for the downstream models, we generally follow the setup used for utterance-level tasks in the SUPERB benchmark. Specifically, the downstream model consists of a two-layer fully-connected network. This network takes the mean of features extracted from the frozen upstream model as input, and outputs class probabilities. However, as we also include the frame-level ASR task, we employ a two-layer BiLSTM model that takes the whole representation sequence as input and outputs characters.

### Pretrained Upstream Models

To showcase the utility of our benchmark, we opt for the base version of four audio-visual upstream models: AV-HuBERT [35], RePLAI [36], Lee et al.'s model [37] (referred to as AVBERT throughout this paper), and MAViL [38].
These models were specifically chosen because they each excel at different tasks, underscoring the current gap in multi-tasking capabilities within existing audio-visual models. They vary substantially in terms of architecture, training objectives, and preprocessing techniques. We also conduct experiments on the base HuBERT [2] model, a unimodal speech representation model with a similar design to AV-HuBERT, to make a fairer comparison between audio and audio-visual features. Additionally, we incorporate two baselines that use handcrafted features as input for downstream models. Specifically, we employ log mel filterbank (FBANK) features for audio and histogram of oriented gradients (HoG) features for video, respectively.

## 4 Experimental Results and Discussion

Previous work has shown that simply using representations extracted at the last layer of a frozen self-supervised model often results in sub-optimal performance [39, 7]. Hence, we take a learnable weighted-sum of representations extracted over different Transformer layers as the final representation for each downstream task. For the audio-only and video-only tracks, only unimodal input and the relevant layers are used for extracting representations. For the audio-visual fusion track, both of the unimodal encoders plus the fusion layers are used. As the size of representations extracted from fusion Transformer layers differs from those of unimodal layers, we take the weighted-sum over the fusion Transformer layers only.

Figure 1: We consider three evaluation scenarios: extracting features using inputs from one or both modalities. Following [7], the weighted-sum of features from Transformer layers (if applicable) is used as input for fine-tuning a small downstream model for each individual task. Details of selected tasks are given in Section 3.1.

### Downstream Datasets and Training Details

Evaluation results for the three tracks are given in Table 1. For AEC, we evaluate on AudioSet [40] and VGGSound [41], and for AR, we select Kinetics-Sounds [15] and UCF101 [20]. Notably, in VGGSound and Kinetics-Sounds, audio and visual information are more correlated. This is reflected in our results, as audio-visual fusion improves representations more for these two datasets compared to AudioSet and UCF101. We report testing-set mean average precision for multi-label classification on AudioSet, and accuracy for the remaining three datasets. For speech processing, we choose LRS3-TED for ASR, VoxCeleb2 for ASV, and IEMOCAP for ER. For ASR, we optimize the CTC loss for character-level ASR, and report character error rate. For ASV, we first train for speaker identification on a subset of the dev split, then calculate cosine similarity to do verification on the test split and report equal error rate. For ER, we follow conventional evaluation, remove unbalanced classes to perform four-way classification (neutral, happy, sad, angry) and report accuracy. Additional dataset and training details are given on our submission platform2.
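To make the downstream probing setup concrete, here is a minimal sketch of an utterance-level probe with a learnable weighted-sum over frozen upstream layers; the hidden width, softmax weighting, and pooling details are illustrative assumptions rather than the released evaluation code:

```python
import torch
import torch.nn as nn

class UtteranceProbe(nn.Module):
    # Learnable weighted-sum over frozen upstream layers, mean-pooled over
    # time, followed by a two-layer classifier (Section 3).
    def __init__(self, n_layers, feat_dim, n_classes, hidden=256):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))
        self.head = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))

    def forward(self, feats):
        # feats: (n_layers, batch, time, feat_dim) hidden states of the
        # frozen upstream model.
        w = torch.softmax(self.layer_weights, dim=0)
        fused = (w[:, None, None, None] * feats).sum(dim=0)
        return self.head(fused.mean(dim=1))
```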
### Overall Results

We find that existing models generally obtain large gains over handcrafted features, yet none of the five models tested were able to outperform all others in every task. To gauge universal performance across tasks, we provide an overall score calculated as the mean of either task-specific accuracies or the complement of error rates. For the three speech processing tasks (ASR, ASV, ER), AV-HuBERT performs the best on ASR and ASV, and HuBERT achieves superior performance on ER. Notably, the unimodal HuBERT scores competitively on ASR and ASV as well, despite not being trained to utilize any visual grounding information. For the four audio processing datasets, MAViL and AVBERT consistently outperform all other models in all three tracks. We hypothesize that this is largely due to the diversity and large size of the AudioSet data used for pretraining. Despite the domain mismatch, AVBERT also performs competitively for the ASV and ER speech tasks, especially in the audio-visual fusion track. However, MAViL and AVBERT cannot perform ASR well, as simply using handcrafted FBANK features achieves lower error rates. Comparing their scores in the audio-only and fusion tracks, we see that fusion layers are unable to effectively utilize the additional lip reading information, as performance is reduced when video is provided.

### When does Visual Grounding Improve Audio Representation Learning?

Compared to unimodal audio representation models, audio-visual models may take advantage of information learned from visual grounding to improve audio representations even when only audio input is available at inference. Of the five evaluated models, HuBERT and AV-HuBERT use similar architectures, and optimize the same masked cluster prediction objective using k-means clusters of MFCC features as the initial targets. While HuBERT is only trained on unimodal speech data, AV-HuBERT is further trained to predict multimodal cluster targets obtained from both audio and visual modalities. Hence we compare audio-only track results for HuBERT and AV-HuBERT, and find that visual grounding from multimodal cluster prediction can obtain small gains for VoxCeleb2, VGGSound and UCF101.

Table 1: Main results. Best results for each track are highlighted in bold. We observe that MAViL excels at audio processing tasks, while HuBERT and AV-HuBERT are better for speech processing tasks.

| Representation Type | Params. | Overall Score | AS-20K (mAP ↑) | VGGSound (Acc. ↑) | Kinetics-Sounds (Acc. ↑) | UCF101 (Acc. ↑) | LRS3-TED (CER ↓) | VoxCeleb2 (EER ↓) | IEMOCAP (Acc. ↑) |
|---|---|---|---|---|---|---|---|---|---|
| _Audio-only_ | | | | | | | | | |
| FBANK | 0 | 36.69 | 2.8 | 5.12 | 24.73 | 19.91 | 20.07 | 27.16 | 51.52 |
| HuBERT | 95M | 53.66 | 14.3 | 30.21 | 51.46 | 36.06 | **2.96** | 15.58 | **62.14** |
| AV-HuBERT* | 90M | 53.20 | 12.6 | 31.14 | 49.02 | 38.58 | 3.01 | **14.45** | 58.54 |
| RepLAI | 5M | 39.70 | 12.3 | 27.01 | 45.90 | 33.85 | 66.09 | 32.58 | 57.53 |
| AVBERT | 10M | 44.81 | 20.5 | 37.67 | 55.28 | 43.26 | 80.23 | 23.74 | 60.94 |
| MAViL | 86M | 54.11 | **21.6** | **39.91** | **57.28** | **45.68** | 24.43 | 20.71 | 59.46 |
| _Video-only_ | | | | | | | | | |
| HoG | 0 | 25.39 | 1.5 | 3.81 | 18.70 | 25.67 | 71.46 | 36.32 | 35.83 |
| AV-HuBERT* | 103M | 33.48 | 2.4 | 5.90 | 24.73 | 37.55 | **50.91** | **11.90** | 26.59 |
| RepLAI | 15M | 36.40 | 5.5 | 13.5 | 46.68 | 56.69 | 71.33 | 36.95 | 40.72 |
| AVBERT | 37M | 47.69 | 11.5 | 28.73 | 62.67 | 77.42 | 72.29 | 20.00 | **45.8** |
| MAViL | 87M | 49.70 | **18.0** | **32.08** | **74.01** | **79.37** | 74.03 | 24.58 | 43.03 |
| _Audio-visual fusion_ | | | | | | | | | |
| AV-HuBERT | 103M | 53.42 | 13.3 | 32.69 | 52.23 | 41.46 | **2.75** | **9.46** | 46.45 |
| AVBERT | 43M | 54.85 | 22.9 | 44.54 | 71.31 | 71.76 | 70.12 | 18.31 | **61.87** |
| MAViL | 187M | 62.36 | **26.7** | **47.22** | **79.51** | **77.98** | 30.18 | 19.67 | 54.94 |

\* In order to fairly compare HuBERT & AV-HuBERT, we set features of the opposing modality to 0 and extract features from the 12-layer fusion Transformer for audio-only and video-only tracks.

### Layer-wise Contribution Analysis

After fine-tuning the learnable weighted-sum over all upstream model layers on a downstream task, we may compare layer utilization by examining the weights of each layer in the weighted-sum [42]. Since the magnitude of representations from each layer may differ, we normalize the weight of each layer by multiplying it with the L2-norm of the representation values on the training set. For MAViL, we find that the layers that are commonly more dominant are the last three layers in the audio encoder, and the last two layers in the video encoder and fusion layers. Despite this, we observe an exception for emotion recognition on IEMOCAP, where the most dominant layer is the \(0^{\text{th}}\) layer instead. For AV-HuBERT, the final layer often contributes little. In the audio-only setup, we see that the layer with the most contribution is the penultimate layer for most speech and audio tasks besides ASR. For ASR, the last two layers are highly dominant on all three tracks. For non-ASR tasks, we note that when additional visual inputs are given, prior layers increase in contribution only when audio-visual fusion outperforms audio-only performance for AV-HuBERT (VGGSound, Kinetics-Sounds, UCF101, VoxCeleb2), suggesting that prior layers in AV-HuBERT are more related to _visual_ information, while the last few layers contain more _audio_ information. Overall, the variation in layer usage across tasks, models, and modalities strongly motivates the use of the learnable weighted-sum technique for evaluation, instead of suboptimally evaluating the final layer alone.

## 5 How Does Intermediate-Task Fine-tuning Affect Performance?
Studies in natural language processing show that pretrained language models can be improved by initial fine-tuning on an intermediate task, followed by further fine-tuning on the target task [43, 44]. In the previous sections, we focused on assessing models pretrained in a self-supervised manner. However, model creators often release model variants that are fine-tuned further for specific downstream tasks. For example, MAViL adds 3 Transformer fusion layers after the audio and video encoders, and the whole model is fine-tuned on (audio & video, class) pairs for audio event classification. We hypothesize that these supervised model variants may provide improved representations for speech/audio tasks after intermediate-task training. To support our hypothesis, we additionally evaluate fully fine-tuned variants of AV-HuBERT and MAViL on our benchmark, to determine when intermediate-task fine-tuning is beneficial. The variant of AV-HuBERT uses the same architecture, and is fine-tuned on 433 hours of (video, text) pairs from LRS3-TED to perform visual speech recognition, whereas the MAViL variant is fine-tuned on the entirety of AudioSet-2M. Experiment results are shown in Table 2. For AV-HuBERT, we see that visual speech recognition on LRS3-TED is not a suitable intermediate task in general. Video-only representations obtain small gains in generalizability, at the cost of greatly reduced audio-only and fusion performance. We posit that intermediate-task fine-tuning with (video, text) pairs shifts the AV-HuBERT Transformer layers to favor video input alone, reducing usability for audio-only and audio-visual inputs. In contrast, for audio-visual fusion with MAViL, we see that intermediate-task training on AudioSet-2M not only brings substantial improvements on all AEC and AR datasets, but also improves ASV while maintaining ASR performance. This suggests that fine-tuning on AudioSet-2M may be sufficiently diverse to improve the speaker separability of representations without much loss of content information.

## 6 Conclusions

We introduce AV-SUPERB, the first benchmark for assessing general-purpose capabilities of audio-visual representations. AV-SUPERB includes a suite of 7 speech and audio processing datasets covering 5 audio-visual tasks. The benchmark is split into three tracks: two unimodal audio-only or video-only representation tracks, as well as a bimodal audio-visual fusion track, which enables easy comparison between unimodal and bimodal learning. Despite advances made in recent years, our experiments show that none of the models tested generalize to all tasks, leading us to conclude that further study is required to develop universal audio-visual models. As discussed in Section 3.1, although our benchmark aims to comprehensively evaluate audio-visual models, only a limited set of tasks and datasets are included in its current form. For future work, we wish to incorporate more tasks relevant to additional facets of audio-visual processing, such as cross-modal retrieval and sound/video generation, as well as improving the diversity and comprehensiveness of data sources.
Table 2: Intermediate-task fine-tuning does not generally improve performance across all tasks. Results after intermediate-task fine-tuning (left) and absolute improvements compared to the original self-supervised model (right) are shown, with AV-HuBERT fine-tuned on LRS3-TED (video, text) pairs and MAViL fine-tuned on AudioSet-2M; columns cover AS-20K (mAP ↑), VGGSound (Acc. ↑), Kinetics-Sounds (Acc. ↑), UCF101 (Acc. ↑), LRS3-TED (CER ↓), VoxCeleb2 (EER ↓) and IEMOCAP (Acc. ↑). Fine-tuning data for each model is color-coded to the corresponding downstream dataset.
2309.12190
Regret and Conservatism of Distributionally Robust Constrained Stochastic Model Predictive Control
We analyse the conservatism and regret of distributionally robust (DR) stochastic model predictive control (SMPC) when using moment-based ambiguity sets for modeling unknown uncertainties. To quantify the conservatism, we compare the deterministic constraint tightening while taking a DR approach against the optimal tightening when the exact distributions of the stochastic uncertainties are known. Furthermore, we quantify the regret by comparing the performance when the distributions of the stochastic uncertainties are known and unknown. Analysing the accumulated sub-optimality of SMPC due to the lack of knowledge about the true distributions of the uncertainties marks the novel contribution of this work.
Maik Pfefferkorn, Venkatraman Renganathan, Rolf Findeisen
2023-09-21T15:58:15Z
http://arxiv.org/abs/2309.12190v4
# Regret and Conservatism of Distributionally Robust Constrained Stochastic Model Predictive Control

###### Abstract

We analyse the conservatism and regret of distributionally robust (DR) stochastic model predictive control (SMPC) when using moment-based ambiguity sets for modeling unknown uncertainties. To quantify the conservatism, we compare the deterministic constraint tightening while taking a DR approach against the optimal tightening when the exact distributions of the stochastic uncertainties are known. Furthermore, we quantify the regret by comparing the performance when the distributions of the stochastic uncertainties are known and unknown. Analysing the accumulated sub-optimality of SMPC due to the lack of knowledge about the true distributions of the uncertainties marks the novel contribution of this work.

## 1 Introduction

Recently, there has been a surge of interest in analyzing control algorithms that operate subject to unknown _quantities of interest_ (QIs) through the lens of regret analysis. Lack of knowledge about a QI induces regret for the controller, i.e., a performance loss compared to when the QI is known. Notably, many robust control algorithms incorporate caution mechanisms into their decision-making scheme in order to account for the lack of knowledge about the QIs. The cautiousness in the decision making can be quantified as the conservatism of the control algorithm, which in turn incurs regret. For instance, the regret of \(H_{\infty}\) control compared against an optimal controller that knows the disturbance sequences exactly beforehand was examined in Karapetyan et al. (2022). Further works investigated regret in different variants of the finite- and infinite-horizon linear-quadratic regulator problem, see e.g. Chen et al. (2023); Li et al. (2019) and references therein. In this research, we are interested in comparing controllers that are robust with respect to a moment-based ambiguity set of distributions against a fully informed counterpart that knows the true uncertainty distribution (i.e., the QI). Specifically, we study the conservatism and regret of the DR SMPC formulation for unknown uncertainty distributions compared against its fully informed counterpart. There have been recent works on DR MPC Fochesato and Lygeros (2022); McAllister and Rawlings (2023) and, more specifically, on investigating its regret Yan et al. (2023). However, they all consider Wasserstein-based formulations. In contrast, we consider a moment-based ambiguity set formulation. Our main contributions are: 1) we define constraint conservatism and distributional regret of DR SMPC with uncertainties modeled using moment-based ambiguity sets, and 2) we develop a framework for quantifying conservatism and for analyzing the resulting regret behavior. We derive analytic expressions to identify and analyze the effects that lead to regret. The letter is organised as follows: the DR SMPC problem is introduced in Section 2; regret and conservatism associated with DR SMPC are presented in Section 3, while some of their features are demonstrated using numerical simulation in Section 4 before concluding in Section 5.

**Notations:** The set of real numbers, integers and natural numbers are denoted by \(\mathbb{R},\mathbb{Z},\mathbb{N}\) respectively, and the subset of real numbers greater than a given constant \(a\in\mathbb{R}\) is denoted by \(\mathbb{R}_{>a}\). The subset of natural numbers between two constants \(a,b\in\mathbb{N}\) with \(a<b\) is denoted by \([a:b]\).
For \(N\in\mathbb{N}\), we denote by \(\llbracket N\rrbracket:=\{1,\ldots,N\}\). Given two sets \(A\subset\mathbb{R}^{n},B\subset\mathbb{R}^{n}\), their Pontryagin difference is denoted by \(A\ominus B:=\{a\in A\mid a+b\in A,\forall b\in B\}\subset\mathbb{R}^{n}\). For a matrix \(A\in\mathbb{R}^{n\times n}\), we denote its transpose and its trace by \(A^{\top}\) and \(\mathbf{Tr}(A)\) respectively. Given \(x\in\mathbb{R}^{n}\) and \(A\in\mathbb{R}^{n\times n}\), the notation \(\left\|x\right\|_{A}^{2}\) denotes \(x^{\top}Ax\).

## 2 DR SMPC Problem Formulation

We consider a DR SMPC problem with joint chance constraints, similar to Paulson et al. (2020), using moment-based ambiguity sets to model the stochastic system uncertainties.

### System Dynamics & Constraints

We consider a stochastic linear time-invariant system \[x_{k+1}=Ax_{k}+Bu_{k}+w_{k},\quad\forall k\in\mathbb{N}, \tag{1}\] where \(x_{k}\in\mathbb{R}^{n}\) and \(u_{k}\in\mathbb{R}^{m}\) are the system state and input at time step \(k\), respectively. The matrices \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{n\times m}\) denote the dynamics matrix and the input matrix, respectively. For ease of exposition, we will assume that \(x_{0}\in\mathbb{R}^{n}\) is known. The process noise \(w_{k}\in\mathbb{R}^{n}\) is a zero-mean random vector that is independent and identically distributed over time. The distribution of \(w_{k}\), namely \(\mathbb{P}^{w}\), is unknown. However, it is known to belong to a moment-based ambiguity set of distributions given by \[\mathcal{P}^{w}=\left\{\mathbb{P}^{w}\mid\mathbb{E}[w_{k}]=0,\mathbb{E}[w_{k}w_{k}^{\top}]=\Sigma_{w}\right\}. \tag{2}\] We denote by \(N\in\mathbb{N}\) the prediction horizon of the predictive control problem to be defined shortly. The states, inputs and disturbances over the prediction horizon are given by \[\mathbf{x}_{k}:=\left[x_{0|k}^{\top}\quad x_{1|k}^{\top}\quad\cdots\quad x_{N|k}^{\top}\right]^{\top}\in\mathbb{R}^{(N+1)n}, \tag{3}\] \[\mathbf{u}_{k}:=\left[u_{0|k}^{\top}\quad u_{1|k}^{\top}\quad\cdots\quad u_{N-1|k}^{\top}\right]^{\top}\in\mathbb{R}^{Nm}, \tag{4}\] \[\mathbf{w}_{k}:=\left[w_{0|k}^{\top}\quad w_{1|k}^{\top}\quad\cdots\quad w_{N-1|k}^{\top}\right]^{\top}\in\mathbb{R}^{Nn}. \tag{5}\] Here, the subscript \(i\mid k\) indicates \(i\) time steps ahead of \(k\), and \(x_{0|k}=x_{k}\). Starting at \(x_{k}\), we compactly write the evolution of (1) over the prediction horizon as \[\mathbf{x}_{k}=\mathbf{A}x_{0|k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{D}\mathbf{w}_{k}, \tag{6}\] with matrices \(\mathbf{A},\mathbf{B}\), and \(\mathbf{D}\) of appropriate dimensions. Note that \(\mathbf{x}_{k}\) and \(\mathbf{w}_{k}\) are random vectors, as the realizations of future disturbances \(w_{i|k},i\in[0:N-1]\), are a priori unknown. The mean and covariance of (6) evolve as \[\bar{\mathbf{x}}_{k}:=\mathbb{E}[\mathbf{x}_{k}]=\mathbf{A}x_{0|k}+\mathbf{B}\mathbf{u}_{k}, \tag{7}\] \[\mathbf{\Sigma}_{\mathbf{x}}:=\mathbb{E}[(\mathbf{x}_{k}-\bar{\mathbf{x}}_{k})(\mathbf{x}_{k}-\bar{\mathbf{x}}_{k})^{\top}]=\mathbf{D}\mathbf{\Sigma}_{\mathbf{w}}\mathbf{D}^{\top}, \tag{8}\] where \(\mathbf{\Sigma}_{\mathbf{w}}\) is a block-diagonal matrix with \(N\) blocks, each equal to \(\Sigma_{w}\). Note that the state and disturbance sequences \((x_{i})_{i\in[0:k]}\) and \((w_{i})_{i\in[0:k-1]}\) have been realized at time step \(k\) and are hence deterministic.
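As an illustration, the stacked matrices in (6) can be assembled as follows; a minimal numpy sketch, not taken from the authors' code:

```python
import numpy as np

def prediction_matrices(A, B, N):
    # Builds A_bar, B_bar, D_bar of Eq. (6):
    #   x_k = A_bar x_{0|k} + B_bar u_k + D_bar w_k,
    # with block row i corresponding to the i-step-ahead predicted state.
    n, m = A.shape[0], B.shape[1]
    A_bar = np.vstack([np.linalg.matrix_power(A, i) for i in range(N + 1)])
    B_bar = np.zeros(((N + 1) * n, N * m))
    D_bar = np.zeros(((N + 1) * n, N * n))
    for i in range(1, N + 1):
        for j in range(i):
            Apow = np.linalg.matrix_power(A, i - 1 - j)
            B_bar[i * n:(i + 1) * n, j * m:(j + 1) * m] = Apow @ B
            D_bar[i * n:(i + 1) * n, j * n:(j + 1) * n] = Apow
    return A_bar, B_bar, D_bar
```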
We denote those realizations by \((\hat{x}_{i})_{i\in[0:k]}\) and \((\hat{w}_{i})_{i\in[0:k-1]}\), respectively, where \(\hat{x}_{0}=x_{0}\) is the deterministic initial condition. Since the support of \(\mathbb{P}^{w}\) can be unbounded, we cannot guarantee the satisfaction of hard state constraints for all \(k\). Hence, we enforce a DR joint risk constraint on the states as \[\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}\left[\mathbf{x}_{k}\notin\mathcal{X}\right]\leq\Delta, \tag{9}\] where \(\mathcal{P}_{k}^{\mathbf{x}}\) denotes the moment-based ambiguity set for the compact state \(\mathbf{x}_{k}\) and is given by \[\mathcal{P}_{k}^{\mathbf{x}}=\left\{\mathbb{P}_{k}^{\mathbf{x}}\mid\mathbb{E}[\mathbf{x}_{k}]=\bar{\mathbf{x}}_{k},\mathbb{E}[(\mathbf{x}_{k}-\bar{\mathbf{x}}_{k})(\mathbf{x}_{k}-\bar{\mathbf{x}}_{k})^{\top}]=\mathbf{\Sigma}_{\mathbf{x}}\right\}.\] The set \(\mathcal{X}\) is assumed to be a convex polytope defined by \[\mathcal{X}:=\bigcap_{i=1}^{n_{x}}\left\{\mathbf{x}\mid f_{i}^{\top}\mathbf{x}\leq g_{i}\right\}=\left\{\mathbf{x}\mid\mathbf{F}\mathbf{x}\leq\mathbf{g}\right\}, \tag{10}\] where \(f_{i}\in\mathbb{R}^{(N+1)n},g_{i}\in\mathbb{R},\mathbf{F}\in\mathbb{R}^{n_{x}\times(N+1)n}\) and \(\mathbf{g}\in\mathbb{R}^{n_{x}}\). Further, \(\Delta\in(0,0.5]\) represents a user-prescribed total risk budget for the worst-case probability of constraint violation over the entire prediction horizon. A similar probabilistic treatment of control constraints can also be formulated. For ease of exposition, we consider the hard input constraint formulation \(u_{k}\in\mathcal{U},\forall k\), where \(\mathcal{U}\) is a convex polytope \[\mathcal{U}:=\bigcap_{j=1}^{n_{u}}\left\{\mathbf{u}\mid c_{j}^{\top}\mathbf{u}\leq d_{j}\right\}=\left\{\mathbf{u}\mid\mathbf{C}\mathbf{u}\leq\mathbf{d}\right\} \tag{11}\] with \(c_{j}\in\mathbb{R}^{Nm},d_{j}\in\mathbb{R},\mathbf{C}\in\mathbb{R}^{n_{u}\times Nm}\) and \(\mathbf{d}\in\mathbb{R}^{n_{u}}\). We employ SMPC as a solution approach to minimizing the cost \[J_{\infty}(\mathbf{x}_{\infty},\mathbf{u}_{\infty})=\sum_{k=0}^{\infty}\lVert x_{k}\rVert_{Q}^{2}+\lVert u_{k}\rVert_{R}^{2}, \tag{12}\] where \(\mathbf{x}_{\infty}=(x_{i})_{i\in\mathbb{N}}\), \(\mathbf{u}_{\infty}=(u_{i})_{i\in\mathbb{N}}\), \(Q\in\mathbb{R}^{n\times n},Q\succeq 0\) and \(R\in\mathbb{R}^{m\times m},R\succ 0\); minimizing (12) directly is an intractable problem due to the infinite horizon as well as the stochastic dynamics. The objective of the SMPC is therefore to minimize the cost \[J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{0|k}):=\mathbb{E}\left[\lVert\mathbf{x}_{k}\rVert_{\mathbf{Q}}^{2}+\lVert\mathbf{u}_{k}\rVert_{\mathbf{R}}^{2}\right] \tag{13}\] defined over a finite horizon \(N\), where the expectation \(\mathbb{E}[\cdot]\) accounts for the stochasticity of the states. Note that \(\mathbf{Q}\) is a block-diagonal matrix with \(N+1\) blocks, of which the first \(N\) are equal to \(Q\succeq 0\) and the last block is given by \(Q_{t}\succeq 0\). For an appropriately designed \(Q_{t}\), (13) approximates the expected remaining infinite-horizon cost starting at \(x_{k}\) at time point \(k\). Similarly, \(\mathbf{R}\) is a block-diagonal matrix with \(N\) blocks, each equal to \(R\succ 0\).
Then, \(J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{0|k})\) can be rewritten as \[J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{0|k})=\left\lVert\bar{\mathbf{x}}_{k}\right\rVert_{\mathbf{Q}}^{2}+\lVert\mathbf{u}_{k}\rVert_{\mathbf{R}}^{2}+\mathbf{Tr}(\mathbf{Q}\mathbf{\Sigma}_{\mathbf{x}}). \tag{14}\] We now formally state the DR SMPC optimization problem along with the state and input constraints.

### The DR SMPC Problem

**Problem 1**.: _Given an initial state \(x_{0|k}=x_{k}\in\mathbb{R}^{n}\), risk budget \(\Delta\), process noise \(w_{i|k}\sim\mathbb{P}^{w}\in\mathcal{P}^{w},\forall i\in[0:N-1]\), and the penalty matrices \(Q\succeq 0,R\succ 0\), we seek to find an input sequence \(\mathbf{u}_{k}\) that optimizes the following optimal control problem (OCP):_ \[\min_{\mathbf{u}_{k}} J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{0|k}) \tag{15a}\] \[\mathrm{s.\,t.} \mathbf{x}_{k}=\mathbf{A}x_{0|k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{D}\mathbf{w}_{k},\quad x_{0|k}=x_{k},\] (15b) \[\mathbf{u}_{k}\in\mathcal{U},\quad w_{i|k}\sim\mathbb{P}^{w}\in\mathcal{P}^{w},\] (15c) \[\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}\left[\mathbf{x}_{k}\notin\mathcal{X}\right]\leq\Delta. \tag{15d}\]

Problem 1 is solved online in a receding-horizon fashion at each time instance \(k\). The first element \(u_{0|k}^{\star}\) of the optimal input sequence \(\mathbf{u}_{k}^{\star}\) is applied to the system given by (1) until the next sampling instant \(k+1\).1

Footnote 1: We refer the reader to Rawlings et al. (2019) for a comprehensive introduction to and overview of model predictive control.

Note that the OCP (15) is in general computationally intractable, as (15d) is an infinite-dimensional DR joint risk constraint. Using Boole's inequality, (15d) can be decomposed into individual risk constraints across each time step along the prediction horizon. Suppose that \(\sum_{i=1}^{n_{x}}\delta_{i}\leq\Delta\) and \[\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}\left[f_{i}^{\top}\mathbf{x}_{k}>g_{i}\right]\leq\delta_{i},\quad\forall i\in\llbracket n_{x}\rrbracket. \tag{16}\] Then, (16) implies (15d). That is, \[\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}[\mathbf{x}_{k}\notin\mathcal{X}]=\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}\left[\mathbf{x}_{k}\in\bigcup_{i=1}^{n_{x}}\left\{\mathbf{x}_{k}\,\middle|\,f_{i}^{\top}\mathbf{x}_{k}>g_{i}\right\}\right]\leq\sum_{i=1}^{n_{x}}\sup_{\mathbb{P}_{k}^{\mathbf{x}}\in\mathcal{P}_{k}^{\mathbf{x}}}\mathbb{P}_{k}^{\mathbf{x}}\left[f_{i}^{\top}\mathbf{x}_{k}>g_{i}\right]\leq\sum_{i=1}^{n_{x}}\delta_{i}\leq\Delta.\] Note that the decomposition using Boole's inequality is known to be conservative; see Hunter (1976) for less conservative alternatives. Though (16) is an infinite-dimensional risk constraint, it can be equivalently formulated as a second-order cone constraint on \(\bar{\mathbf{x}}_{k}\) through deterministic constraint tightening (see Lemma 1, stated without proof).

**Lemma 1**.: _(From Ono and Williams (2008) and Calafiore and El Ghaoui (2006)) Let \(\mathbf{x}\in\mathbb{R}^{n}\) be a random vector with known mean \(\bar{\mathbf{x}}\) and covariance \(\mathbf{\Sigma}_{\mathbf{x}}\).
Then, \(\forall\delta_{i}\in(0,0.5],i\in\llbracket n_{x}\rrbracket\), we have_ \[\mathbb{P}^{\mathbf{x}}\left[f_{i}^{\top}\mathbf{x}>g_{i}\right]\leq\delta_{i}\Leftrightarrow f_{i}^{\top}\bar{\mathbf{x}}\leq g_{i}-\psi_{i}\left\|\mathbf{\Sigma}_{\mathbf{x}}^{\frac{1}{2}}f_{i}\right\|_{2},\] _where the tightening constant \(\psi_{i},\forall i\in\llbracket n_{x}\rrbracket\), is given by_ \[\psi_{i}:=\begin{cases}\Phi_{\mathbb{P}^{\mathbf{x}}}^{-1}(1-\delta_{i}),&\text{when }\mathbb{P}^{\mathbf{x}}\text{ is known},\\ \sqrt{\frac{1-\delta_{i}}{\delta_{i}}},&\text{when }\mathbb{P}^{\mathbf{x}}\text{ is unknown},\end{cases} \tag{17}\] _and \(\Phi_{\mathbb{P}^{\mathbf{x}}}\) denotes the cumulative distribution function of the known distribution \(\mathbb{P}^{\mathbf{x}}\) when normalized._

The surrogate of Problem 1, called the surrogate SMPC, can then be written as \[\min_{\mathbf{u}_{k}} J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{0|k}) \tag{18a}\] \[\mathrm{s.\,t.} \mathbf{x}_{k}=\mathbf{A}x_{0|k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{D}\mathbf{w}_{k},\quad x_{0|k}=x_{k},\] (18b) \[\mathbf{u}_{k}\in\mathcal{U},\quad w_{k}\sim\mathbb{P}^{w}\in\mathcal{P}^{w},\] (18c) \[f_{i}^{\top}\bar{\mathbf{x}}_{k}\leq g_{i}-\psi_{i}\left\|\mathbf{\Sigma}_{\mathbf{x}}^{\frac{1}{2}}f_{i}\right\|_{2},\quad i\in\llbracket n_{x}\rrbracket. \tag{18d}\] Note that (18) has finite-dimensional constraints, unlike its original counterpart (15). Let the control input sequences that minimize (18) with exact and DR constraint tightening according to (17) be denoted by \(\mathbf{u}_{k}^{\star}\) and \(\mathbf{u}_{k}^{\dagger}\), respectively. Their corresponding optimal value functions are given by \[J_{\mathrm{SMPC}}^{\diamond}(x_{k}):=J_{\mathrm{SMPC}}(\mathbf{u}_{k}^{\diamond},x_{k}),\quad\text{where }\diamond\in\{\dagger,\star\}. \tag{19}\]

## 3 Conservatism & Regret Analyses

In this section, we define the concepts of conservatism and regret for the previously introduced DR SMPC algorithm.

### Conservatism of DR SMPC

We would like to study the difference in constraint tightening between the cases when the true distributions of the stochastic uncertainties are known and when they are unknown.

**Definition 1**.: _The constraint conservatism, denoted by \(\mathfrak{C}\in\mathbb{R}\), associated with the DR SMPC is defined as the difference in volume of the deterministically tightened state constraint set with and without the knowledge of \(\mathbb{P}^{w}\) and \(\mathbb{P}^{\mathbf{x}}_{k}\), respectively.
That is,_ \[\mathfrak{C}:=\int_{\mathcal{X}_{\mathrm{Diff}}}dx,\quad\text{where}\quad\mathcal{X}_{\mathrm{Diff}}:=\mathcal{X}_{\mathrm{True}}\ominus\mathcal{X}_{\mathrm{DR}}, \tag{20}\] \[\mathcal{X}_{\mathrm{True}}=\left\{\mathbf{x}\mid\mathbf{F}\mathbf{x}\leq\mathbf{g}_{\mathrm{True}}\right\}, \tag{21}\] \[\mathcal{X}_{\mathrm{DR}}=\left\{\mathbf{x}\mid\mathbf{F}\mathbf{x}\leq\mathbf{g}_{\mathrm{DR}}\right\},\quad\text{with},\ \forall i\in\llbracket n_{x}\rrbracket, \tag{22}\] \[\left[\mathbf{g}_{\mathrm{True}}\right]_{i}:=g_{i}-\Phi_{\mathbb{P}_{k}^{\mathbf{x}}}^{-1}(1-\delta_{i})\left\|\mathbf{\Sigma}_{\mathbf{x}}^{\frac{1}{2}}f_{i}\right\|_{2}, \tag{23}\] \[\left[\mathbf{g}_{\mathrm{DR}}\right]_{i}:=g_{i}-\sqrt{\frac{1-\delta_{i}}{\delta_{i}}}\left\|\mathbf{\Sigma}_{\mathbf{x}}^{\frac{1}{2}}f_{i}\right\|_{2}. \tag{24}\]

Let \(\mathbf{z}:=\left[\mathbf{x}^{\top}\quad\mathbf{x}^{\top}\right]^{\top}\in\mathbb{R}^{2n(N+1)}\) and \[\mathbf{h}:=\begin{bmatrix}\mathbf{g}_{\mathrm{True}}\\ \mathbf{g}_{\mathrm{DR}}\end{bmatrix}\in\mathbb{R}^{2n_{x}},\quad\mathbf{H}:=\begin{bmatrix}\mathbf{F}&\mathbf{0}_{n_{x}\times n(N+1)}\\ \mathbf{0}_{n_{x}\times n(N+1)}&\mathbf{F}\end{bmatrix}. \tag{25}\] Then, using Theorem 3.3 in Gabidullina (2019), we can express \(\mathcal{X}_{\mathrm{Diff}}\) as \[\mathcal{X}_{\mathrm{Diff}}:=\left\{\mathbf{z}\mid\mathbf{H}\mathbf{z}\leq\mathbf{h}\right\}. \tag{26}\] Although analyzing (26) is in general hard, its volume (i.e., the conservatism) can be computed numerically, see e.g. Chevallier et al. (2022).

**Remark 1**: Note that (20) uses the Pontryagin difference of the two deterministically tightened convex polyhedra \(\mathcal{X}_{\mathrm{True}}\) and \(\mathcal{X}_{\mathrm{DR}}\). The constraint tightening in both (23) and (24) depends upon the risk2 \(\delta_{i}\) corresponding to the \(i^{\text{th}}\) constraint. For \(\delta_{i},i\in\llbracket n_{x}\rrbracket\), close to \(0\), the DR constraint tightening will be significantly stricter than the exact tightening.

Footnote 2: Both \(\mathcal{X}_{\mathrm{Diff}}\) and \(\mathfrak{C}\) will exhibit interesting behavior under a non-uniform risk allocation as in Ono and Williams (2008). Analysing this aspect is left for future work.
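To illustrate how the volume in (20) might be computed numerically, here is a simple Monte Carlo sketch over a bounding box. This is an illustrative stand-in for the dedicated polytope-volume methods cited above, not the method used in the paper:

```python
import numpy as np

def mc_volume(H, h, lo, hi, n_samples=100_000, seed=0):
    # Monte Carlo estimate of the volume of {z | Hz <= h} inside the box
    # [lo, hi]; lo/hi are per-dimension bound arrays enclosing the set.
    rng = np.random.default_rng(seed)
    d = H.shape[1]
    z = rng.uniform(lo, hi, size=(n_samples, d))
    inside = np.all(z @ H.T <= h, axis=1)
    box_volume = np.prod(np.asarray(hi) - np.asarray(lo))
    return box_volume * inside.mean()
```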
### Regret of DR SMPC

We start by noting that for the DR SMPC formulation (18), regret is introduced by conservatism: since the deterministic constraint tightening differs between the cases when the true distributions \(\mathbb{P}^{w}\) and \(\mathbb{P}^{\mathbf{x}}\) are known and unknown, respectively, the resulting open-loop costs (19) and the closed-loop costs up to a time point \(k\) will differ. This difference of the optimal costs is essentially referred to as regret.

**Assumption 1**.: _Both the systems under the DR and fully informed SMPC respectively encounter the same disturbance realization \(w_{k}\) at all time steps \(k\in\mathbb{N}\)._

**Definition 2**.: _Given Assumption 1, the closed-loop regret accumulated up to time point \(k\in\mathbb{N}\) is defined as the difference in the closed-loop costs with and without knowledge of \(\mathbb{P}^{w}\) and \(\mathbb{P}^{\mathbf{x}}_{k}\), respectively. That is,_ \[\mathfrak{R}^{\mathrm{cl}}_{k}=\sum_{i=0}^{k}\left[\left(\|\hat{x}_{i}^{\dagger}\|_{Q}^{2}+\|u_{i}^{\dagger}\|_{R}^{2}\right)-\left(\|\hat{x}_{i}^{\star}\|_{Q}^{2}+\|u_{i}^{\star}\|_{R}^{2}\right)\right], \tag{27}\] _where \(\cdot^{\star}\) and \(\cdot^{\dagger}\) indicate the fully informed and DR quantities respectively, with \(x_{0}^{\dagger}=x_{0}^{\star}=x_{0}\) and \(\mathfrak{R}^{\mathrm{cl}}_{0}=0\)._

Besides \(\mathfrak{R}^{\mathrm{cl}}_{k}\), we can also exploit the information obtained from solving the surrogate SMPC (18) to define the expected remaining infinite-horizon regret from time point \(k\) on in open loop, given that (14) is designed to approximate (13).

**Definition 3**.: _The expected remaining infinite-horizon regret in open loop from time \(k\in\mathbb{N}\) on is defined as the difference in the optimal value functions of DR SMPC (18) with and without knowledge of \(\mathbb{P}^{w}\) and \(\mathbb{P}^{\mathbf{x}}_{k}\), respectively. That is,_ \[\mathfrak{R}^{\mathrm{ol}}_{k}:=J^{\dagger}_{\mathrm{SMPC}}(x^{\dagger}_{k})-J^{\star}_{\mathrm{SMPC}}(x^{\star}_{k}). \tag{28}\]

We now define the total regret of DR SMPC at time \(k\in\mathbb{N}\).

**Definition 4**.: _The total regret at time point \(k\in\mathbb{N}\) is_ \[\mathfrak{R}^{\mathrm{total}}_{k}=\mathfrak{R}^{\mathrm{cl}}_{k}+\mathfrak{R}^{\mathrm{ol}}_{k}. \tag{29}\]

To analyze \(\mathfrak{R}^{\mathrm{cl}}_{k},\mathfrak{R}^{\mathrm{ol}}_{k}\) and \(\mathfrak{R}^{\mathrm{total}}_{k}\), we derive a closed-form expression for \(\mathbf{u}^{\star}_{k}\) and \(\mathbf{u}^{\dagger}_{k}\), whose first elements also constitute the closed-loop inputs \(u^{\star}_{k}\) and \(u^{\dagger}_{k}\). We reformulate OCP (18) as a quadratic program (QP) of the form \[\min_{\mathbf{u}_{k}} \frac{1}{2}\left\|\mathbf{u}_{k}\right\|_{H}^{2}+h_{k}^{\top}\mathbf{u}_{k}+r_{k} \tag{30a}\] \[\mathrm{s.t.} \mathbf{M}\mathbf{u}_{k}-\mathbf{b}_{k}\leq\mathbf{0}. \tag{30b}\] We start by reformulating the cost function (14) of OCP (18). Substituting (7) and (8) into (14), we obtain \[J_{\mathrm{SMPC}}(\mathbf{u}_{k},x_{k})=\frac{1}{2}\left\|\mathbf{u}_{k}\right\|_{H}^{2}+h_{k}^{\top}\mathbf{u}_{k}+r_{k}, \tag{31}\] where \(H=2(\mathbf{B}^{\top}\mathbf{Q}\mathbf{B}+\mathbf{R})\succ 0\), \(h_{k}^{\top}=2x_{0|k}^{\top}\mathbf{A}^{\top}\mathbf{Q}\mathbf{B}\) and \(r_{k}=\mathbf{Tr}\left(\mathbf{Q}\mathbf{D}\mathbf{\Sigma}_{\mathbf{w}}\mathbf{D}^{\top}\right)+\left\|x_{0|k}\right\|_{\mathbf{A}^{\top}\mathbf{Q}\mathbf{A}}^{2}\). Next, we substitute (7) and (8) into (18d) to reformulate the tightened state constraints in terms of only the input as \[\underbrace{f_{i}^{\top}\mathbf{B}}_{=:\bar{f}_{i}^{\top}}\mathbf{u}_{k}\leq\underbrace{g_{i}-f_{i}^{\top}\mathbf{A}x_{0|k}}_{=:\bar{g}_{k,i}}-\psi_{i}\underbrace{\left\|(\mathbf{D}\mathbf{\Sigma}_{\mathbf{w}}\mathbf{D}^{\top})^{\frac{1}{2}}f_{i}\right\|_{2}}_{=:v_{i}}. \tag{32}\] Then, writing (32) for all \(i\in\llbracket n_{x}\rrbracket\) equivalently in vectorized form as \(\bar{\mathbf{F}}\mathbf{u}_{k}\leq\bar{\mathbf{g}}_{k}-\mathrm{diag}(\boldsymbol{\psi})\mathbf{v}\) enables us to define (30b) via \[\mathbf{M}=\begin{bmatrix}\mathbf{C}\\ \bar{\mathbf{F}}\end{bmatrix},\quad\mathbf{b}_{k}=\begin{bmatrix}\mathbf{d}\\ \bar{\mathbf{g}}_{k}-\mathrm{diag}(\boldsymbol{\psi})\mathbf{v}\end{bmatrix}. \tag{33}\]
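For intuition on how the tightening term \(\mathrm{diag}(\boldsymbol{\psi})\mathbf{v}\) in (33) differs between the two cases of (17), here is a small sketch, assuming for illustration that the known distribution is Gaussian:

```python
import numpy as np
from scipy.stats import norm

def psi(delta, informed):
    # Eq. (17): exact Gaussian quantile when the distribution is known,
    # moment-based DR bound when only mean and covariance are known.
    return norm.ppf(1.0 - delta) if informed else np.sqrt((1.0 - delta) / delta)

delta_i = 0.05
print(psi(delta_i, informed=True))   # ~1.64: fully informed tightening
print(psi(delta_i, informed=False))  # ~4.36: DR tightening, markedly stricter
```

The gap between the two constants grows rapidly as the risk \(\delta_{i}\) shrinks, which is the source of the constraint conservatism defined in Definition 1.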
**Definition 5**.: _An inequality constraint is said to be active if \(\mathbf{M}_{i:}\mathbf{u}_{k}-\mathbf{b}_{k,i}=0\) and inactive if \(\mathbf{M}_{i:}\mathbf{u}_{k}-\mathbf{b}_{k,i}<0\), where \(\mathbf{M}_{i:}\) denotes the \(i^{\text{th}}\) row of \(\mathbf{M}\) and \(\mathbf{b}_{k,i}\) the \(i^{\text{th}}\) entry of \(\mathbf{b}_{k}\). The active set \(\mathcal{A}_{k}\subseteq\llbracket n_{x}+n_{u}\rrbracket\) is the index set of active inequality constraints._ **Assumption 2**.: _The active set \(\mathcal{A}_{k}^{\diamond}\) of QP (30) is known \(\forall k\)._ To solve the QP, we assume the following regularity condition on the constraints. **Assumption 3**.: _QP (30) satisfies the linear independence constraint qualification (LICQ) criterion, i.e., the gradients of the active inequality constraints are linearly independent._ **Proposition 1**.: _Under Assumptions 2 and 3, the unique and global solution to QP (30) at time step \(k\) is given by_ \[\mathbf{u}^{\diamond}_{k}=\left(V^{\diamond}_{k}\widetilde{\mathbf{M}}^{\diamond}_{k}H^{-1}-H^{-1}\right)h_{k}+V^{\diamond}_{k}\widetilde{\mathbf{b}}^{\diamond}_{k}, \tag{34}\] _where \(V^{\diamond}_{k}=H^{-1}\widetilde{\mathbf{M}}^{\diamond^{\top}}_{k}\left(\widetilde{\mathbf{M}}^{\diamond}_{k}H^{-1}\widetilde{\mathbf{M}}^{\diamond^{\top}}_{k}\right)^{-1}\), \(\widetilde{\mathbf{M}}^{\diamond}_{k}=[\mathbf{M}_{i:}]_{i\in\mathcal{A}_{k}^{\diamond}}\), and \(\widetilde{\mathbf{b}}^{\diamond}_{k}=[\mathbf{b}_{k,i}]_{i\in\mathcal{A}_{k}^{\diamond}}\)._ Proof.: The result directly follows from applying the method of Lagrange multipliers Boyd and Vandenberghe (2004); Ghojogh et al. (2021) to QP (30), exploiting sufficiency of the active set Arnstrom (2023). We first formulate the Lagrangian as \[\mathcal{L}(\mathbf{u}_{k},\boldsymbol{\mu}_{k})=\frac{1}{2}\mathbf{u}_{k}^{\top}H\mathbf{u}_{k}+h_{k}^{\top}\mathbf{u}_{k}+r_{k}+\boldsymbol{\mu}_{k}^{\top}(\mathbf{M}\mathbf{u}_{k}-\mathbf{b}_{k}), \tag{35}\] where \(\boldsymbol{\mu}_{k}\) is the vector of Lagrange multipliers. Applying the Karush-Kuhn-Tucker (KKT) conditions yields \[H\mathbf{u}_{k}^{\diamond}+h_{k}+\mathbf{M}^{\top}\boldsymbol{\mu}_{k}^{\diamond}=\mathbf{0} \tag{36a}\] \[\mathbf{M}\mathbf{u}_{k}^{\diamond}-\mathbf{b}_{k}\leq\mathbf{0} \tag{36b}\] \[\boldsymbol{\mu}_{k}^{\diamond}\geq\mathbf{0} \tag{36c}\] \[\mu_{k,i}^{\diamond}(\mathbf{M}_{i:}\mathbf{u}_{k}^{\diamond}-\mathbf{b}_{k,i})=0,\quad\forall i\in\llbracket n_{x}+n_{u}\rrbracket. \tag{36d}\] According to the complementary slackness condition (36d), we have \(\mu_{k,i}^{\diamond}=0,\forall i\in\llbracket n_{x}+n_{u}\rrbracket\setminus\mathcal{A}_{k}^{\diamond}\). This makes it possible to remove the inactive inequality constraints from the problem formulation (cf. sufficiency of the active set, Arnstrom (2023)).
Hence, the KKT system (36) is reduced to a linear system of equations given by \[\begin{bmatrix}H&\widetilde{\mathbf{M}}_{k}^{\diamond\top}\\ \widetilde{\mathbf{M}}_{k}^{\diamond}&\mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{u}_{k}^{\diamond}\\ \widetilde{\boldsymbol{\mu}}_{k}^{\diamond}\end{bmatrix}=\begin{bmatrix}-h_{k}\\ \widetilde{\mathbf{b}}_{k}^{\diamond}\end{bmatrix}, \tag{37}\] where \(\widetilde{\boldsymbol{\mu}}_{k}^{\diamond}=[\mu_{k,i}^{\diamond}]_{i\in\mathcal{A}_{k}^{\diamond}},\widetilde{\mathbf{M}}_{k}^{\diamond}=[\mathbf{M}_{i:}]_{i\in\mathcal{A}_{k}^{\diamond}}\), \(\widetilde{\mathbf{b}}_{k}^{\diamond}=[\mathbf{b}_{k,i}]_{i\in\mathcal{A}_{k}^{\diamond}}\) and the dual feasibility condition (36c) is trivially fulfilled if \(\mathcal{A}_{k}^{\diamond}\) is the correct active set. Exploiting invertibility of the coefficient matrix of (37) yields (34). Expression (34) is guaranteed to be the optimal solution to QP (30) as the KKT conditions are necessary and sufficient under Assumption 3. The solution is unique and global as \(H\succ 0\). The optimal inputs \(\mathbf{u}_{k}^{\star}\) and \(\mathbf{u}_{k}^{\dagger}\) are obtained using (34) when the exact and the DR constraint tightening are used, respectively. **Remark 2:** The active set \(\mathcal{A}_{k}^{\diamond}\) can be constructed in an iterative manner through additions and removals of constraints to a _working set_ \(\mathcal{A}_{k}\) until primal feasibility (36b) and dual feasibility (36c) are satisfied by solution (34). This idea is exploited in so-called _active-set methods_ for solving QPs Arnstrom (2023). Hence, Assumption 2 is not restrictive and is only used to avoid computations that are out of the scope of this work. As for Assumption 3, the LICQ criterion is commonly employed in practice as it is a rather weak condition. In the considered controller set-up, LICQ can be established during control design and is naturally given in many cases. However, other constraint qualifications might be used, see e.g. Bergmann and Herzog (2019). **Remark 3:** Given the optimal input sequences \(\mathbf{u}_{k}^{\star}\) and \(\mathbf{u}_{k}^{\dagger}\) from Proposition 1, it can be seen that \(\mathfrak{R}_{k}^{\mathrm{cl}},\mathfrak{R}_{k}^{\mathrm{ol}}\) and \(\mathfrak{R}_{k}^{\mathrm{total}}\) are induced through three main effects: * The active sets \(\mathcal{A}_{k}\) differ between the DR and the fully informed controller. The dependencies of \(\mathfrak{R}_{k}^{\mathrm{cl}},\mathfrak{R}_{k}^{\mathrm{ol}}\) and \(\mathfrak{R}_{k}^{\mathrm{total}}\) on the active sets are highly nonlinear. * The initial states \(x_{0|k}^{\star}\) and \(x_{0|k}^{\dagger}\) differ as the feasible set of states of the DR controller is smaller than that of the fully informed controller. * The tightening factors \(\psi_{i},\forall i\in\mathcal{A}_{k}\), differ between the DR and the fully informed controller. While \(\mathfrak{R}_{k}^{\mathrm{cl}},\mathfrak{R}_{k}^{\mathrm{ol}}\) and \(\mathfrak{R}_{k}^{\mathrm{total}}\) can be easily computed using the previously stated results, we now analyze their behavior. First, we start by analyzing \(\mathfrak{R}_{k}^{\mathrm{ol}}\), restricting ourselves to the special time instances defined below. **Definition 6**.: _The set \(\mathcal{I}\) contains the time steps at which the active sets \(\mathcal{A}_{k}^{\star}\) and \(\mathcal{A}_{k}^{\dagger}\) are the same. That is,_ \[\mathcal{I}:=\left\{k\in\llbracket N\rrbracket\,\middle|\,\mathcal{A}_{k}^{\star}=\mathcal{A}_{k}^{\dagger}\right\}.
\tag{38}\] Then, \(\forall k\in\mathcal{I}\), \(\widetilde{\mathbf{M}}_{k}^{\star}=\widetilde{\mathbf{M}}_{k}^{\dagger}=\widetilde{\mathbf{M}}_{k}\), \(\mathbf{V}_{k}^{\star}=\mathbf{V}_{k}^{\dagger}=\mathbf{V}_{k}\), and the difference between \(\widetilde{\mathbf{b}}_{k}^{\star}\) and \(\widetilde{\mathbf{b}}_{k}^{\dagger}\) is only due to the tightening factors \(\boldsymbol{\tilde{\psi}}^{\star}\) and \(\boldsymbol{\tilde{\psi}}^{\dagger}\) corresponding to \(\mathcal{A}_{k}^{\star}\) and \(\mathcal{A}_{k}^{\dagger}\). **Theorem 1**.: _Given \(\mathcal{I}\), let \(\mathbf{V}_{k}=\begin{bmatrix}\mathbf{V}_{1,k}&\mathbf{V}_{2,k}\end{bmatrix},\forall k\in\mathcal{I}\), and let Assumptions 2 and 3 be satisfied. Then, \(\mathfrak{R}_{k}^{\mathrm{ol}}\) at time steps \(k\in\mathcal{I}\) is given by_ \[\begin{split}\mathfrak{R}_{k}^{\mathrm{ol}}=&-(x_{0|k}^{\star}\!-\!x_{0|k}^{\dagger})^{\top}\Lambda_{1,k}(x_{0|k}^{\star}\!+\!x_{0|k}^{\dagger})\\ &-\widetilde{\mathbf{v}}_{k}^{\top}\mathrm{diag}(\boldsymbol{\tilde{\psi}}_{k}^{\star}\!-\!\boldsymbol{\tilde{\psi}}_{k}^{\dagger})\Lambda_{2,k}\mathrm{diag}(\boldsymbol{\tilde{\psi}}_{k}^{\star}\!+\!\boldsymbol{\tilde{\psi}}_{k}^{\dagger})\widetilde{\mathbf{v}}_{k}\\ &+(x_{0|k}^{\star}\!-\!x_{0|k}^{\dagger})^{\top}\Lambda_{3,k}\mathrm{diag}(\boldsymbol{\tilde{\psi}}_{k}^{\star}\!+\!\boldsymbol{\tilde{\psi}}_{k}^{\dagger})\widetilde{\mathbf{v}}_{k}\\ &+(x_{0|k}^{\star}\!+\!x_{0|k}^{\dagger})^{\top}\Lambda_{3,k}\mathrm{diag}(\boldsymbol{\tilde{\psi}}_{k}^{\star}\!-\!\boldsymbol{\tilde{\psi}}_{k}^{\dagger})\widetilde{\mathbf{v}}_{k}\\ &-(x_{0|k}^{\star}\!-\!x_{0|k}^{\dagger})^{\top}\Lambda_{4,k}+\widetilde{\mathbf{v}}_{k}^{\top}\mathrm{diag}(\boldsymbol{\tilde{\psi}}_{k}^{\star}\!-\!\boldsymbol{\tilde{\psi}}_{k}^{\dagger})\Lambda_{5,k},\end{split} \tag{39}\] _where_ \[\Lambda_{1,k}=\frac{1}{2}\alpha_{k}^{\top}H\alpha_{k}+\frac{1}{2}(\tilde{h}^{\top}\alpha_{k}+\alpha_{k}^{\top}\tilde{h})+\mathbf{A}^{\top}\mathbf{Q}\mathbf{A},\] \[\Lambda_{2,k}=\frac{1}{2}\mathbf{V}_{2,k}^{\top}H\mathbf{V}_{2,k},\quad\Lambda_{3,k}=\frac{1}{2}(\tilde{h}^{\top}\mathbf{V}_{2,k}+\alpha_{k}^{\top}H\mathbf{V}_{2,k}),\] \[\Lambda_{4,k}=\tilde{h}^{\top}\gamma_{k}+\alpha_{k}^{\top}H\gamma_{k},\quad\Lambda_{5,k}=\mathbf{V}_{2,k}^{\top}H\gamma_{k}\] _with \(\alpha_{k}=(\mathbf{V}_{k}\widetilde{\mathbf{M}}_{k}H^{-1}-H^{-1})\tilde{h}-\mathbf{V}_{2,k}\widetilde{\mathbf{F}}_{k}\mathbf{A}\), \(\tilde{h}=2\mathbf{B}^{\top}\mathbf{Q}\mathbf{A}\) and \(\gamma_{k}=\mathbf{V}_{1,k}\widetilde{\mathbf{d}}_{k}+\mathbf{V}_{2,k}\widetilde{\mathbf{g}}_{k}\)._ Proof.: Expression (39) is obtained from (28) using the cost function representation (31), substituting the optimal input sequences \(\mathbf{u}_{k}^{\star}\) and \(\mathbf{u}_{k}^{\dagger}\) from (34) and the definitions of \(\widetilde{\mathbf{M}}_{k}\) and \(\widetilde{\mathbf{b}}_{k}\), and performing a series of algebraic operations (omitted here for brevity of presentation). Note that (39) is quadratic in the initial conditions \(x_{0|k}^{\star}\), \(x_{0|k}^{\dagger}\) and the tightening factors \(\boldsymbol{\tilde{\psi}}_{k}^{\star}\), \(\boldsymbol{\tilde{\psi}}_{k}^{\dagger}\) but not directly in their respective differences.
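Proposition 1 and the reduced KKT system (37) translate directly into a few lines of linear algebra. The following is a minimal sketch (our own illustration, not the authors' released code, and all function and variable names are ours) of how \(\mathbf{u}_{k}^{\diamond}\) from (34) can be computed once the active set is known, as stipulated by Assumption 2.

```python
# Minimal sketch (our notation; assumes the active set is known, cf. Assumption 2):
# computes u_k from (34) by solving the reduced KKT system (37) of QP (30).
import numpy as np

def qp_active_set_solution(H, h, M, b, active):
    """Minimizer of 0.5 u^T H u + h^T u s.t. M u <= b, given the correct active set."""
    if len(active) == 0:
        return -np.linalg.solve(H, h)            # unconstrained LQR-type solution
    Mt = M[active, :]                            # \tilde{M}_k: rows of active constraints
    bt = b[active]                               # \tilde{b}_k
    Hinv_h = np.linalg.solve(H, h)
    Hinv_Mt = np.linalg.solve(H, Mt.T)
    S = Mt @ Hinv_Mt                             # \tilde{M} H^{-1} \tilde{M}^T, invertible by LICQ
    mu = -np.linalg.solve(S, bt + Mt @ Hinv_h)   # multipliers of the active constraints
    return -(Hinv_h + Hinv_Mt @ mu)              # u_k^{\diamond}, equivalent to (34)
```

Checking dual feasibility, \(\boldsymbol{\mu}\geq\mathbf{0}\), of the returned multipliers confirms whether the supplied active set is the correct one; iterating on this check is precisely the active-set method mentioned in Remark 2.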
**Corollary 1**.: _If no (state and input) constraints are active, then (39) simplifies to_ \[\mathfrak{R}_{k}^{\mathrm{ol}}=-(x_{0|k}^{\star}\!-\!x_{0|k}^{\dagger})^{\top}\Lambda_{1}(x_{0|k}^{\star}\!+\!x_{0|k}^{\dagger}),\qquad\Lambda_{1}=\mathbf{A}^{\top}\mathbf{Q}\mathbf{A}-\frac{1}{2}\tilde{h}^{\top}H^{-1}\tilde{h}, \tag{40}\] _and the optimal input sequences are simply given by the linear feedback laws \(\mathbf{u}_{k}^{\star}=-H^{-1}\tilde{h}x_{0|k}^{\star}\) and \(\mathbf{u}_{k}^{\dagger}=-H^{-1}\tilde{h}x_{0|k}^{\dagger}\) from the linear quadratic regulator optimization problem._ We continue analyzing the general behavior of \(\mathfrak{R}_{k}^{\mathrm{ol}}\) when no constraints are active, given that the fully informed and the DR SMPC are input-to-state stabilizing in probability in the sense of the definitions presented in the appendix. **Assumption 4**.: _System (1) is exponentially input-to-state stable in probability (eISSp) under DR SMPC for both the fully informed and the DR cases. That is, \(\exists\beta^{\star},\beta^{\dagger}\in\mathcal{KL}\), given by \(\beta^{\star}(s,t)=M^{\star}\nu^{\star^{t}}s,\beta^{\dagger}(s,t)=M^{\dagger}\nu^{\dagger^{t}}s\) with \(M^{\star},M^{\dagger}>0\) and \(\nu^{\star},\nu^{\dagger}\in(0,1)\), such that \(\forall i\leq K^{\circ},K^{\circ}\in\mathbb{N}\), \[\mathbb{P}_{x}\left[\|x_{k+i}^{\circ}\|\leq\beta^{\circ}(\|x_{k}^{\circ}\|,k+i)+\varrho(\|w_{k+i}\|_{L^{p}})\right]\!\geq\!1\!-\varepsilon\] holds for \(w_{k}\in L_{p},w_{k}\sim\mathbb{P}^{w}\) for some \(p>0\) and \(\varrho\in\mathcal{K}\), given \(\varepsilon\in(0,1)\). The optimal value functions \(J_{\mathrm{SMPC}}^{\star}(x_{k}),J_{\mathrm{SMPC}}^{\dagger}(x_{k})\) are eISSp Lyapunov functions._ Note that there is an interplay between the probability \(\varepsilon\) and the horizon lengths \(K^{\star},K^{\dagger}\). For fixed \(\varepsilon\), the (maximum) horizon lengths \(K^{\star},K^{\dagger}\) for which eISSp can be established in the respective case are determined by the problem formulation, and vice versa. Furthermore, lower values of \(\varepsilon\) imply lower values of \(K^{\star},K^{\dagger}\), in general. We consider the common horizon length \(\bar{K}=\min\{K^{\star},K^{\dagger}\}\) in the following for a valid analysis in both the DR and the fully informed case. **Assumption 5**.: _There exists a set \(\Phi:=\{x\in\mathcal{X}\mid\|x\|\leq r\}\), where \(r\in\mathbb{R}_{>0}\) is chosen such that \(\Phi\) is the largest set for which no input or (tightened) state constraints are active for all \(x\in\Phi\), for both the fully informed and the DR controller._ The next lemma proves the recurrent nature (see the appendix for the definition) of the above set \(\Phi\). **Lemma 2**.: _Under Assumptions 1-5, the set \(\Phi\) is recurrent for system (1) under both the fully informed and the DR controller. Furthermore, if \(x_{k}\in\Phi\), then system (1) satisfies_ \[\mathbb{P}[x_{k+i}\in\Phi,\forall i\leq\bar{K}]\geq 1-\varepsilon. \tag{41}\] Proof.: Suppose that \(V\) is an eISSp Lyapunov function for system (1). Then, any sublevel set \(\mathcal{V}_{\gamma}:=\{x\in\mathcal{X}\mid V(x)\leq\gamma\}\) is recurrent, see Culbertson et al. (2023). As the optimal value functions (19) are eISSp Lyapunov functions for the respective closed-loop systems by Assumption 4 and \(r>0\) by Assumption 5, there exist \(\gamma^{\star},\gamma^{\dagger}\) such that \(\mathcal{V}_{\gamma^{\star}}\subset\Phi,\mathcal{V}_{\gamma^{\dagger}}\subset\Phi\).
Hence, \(\Phi\) is recurrent under both the fully informed and the DR controller. Expression (41) follows directly as \(\beta(\|x_{k}\|,k\!+\!i)\!+\!\varrho(\|w_{k+i}\|_{L_{p}})\leq\beta(\|x_{k}\|,k)\!+\!\varrho(\|w_{k}\|_{L_{p}})=\|x_{k}\|\leq r\) for \(x_{k}\!\in\!\Phi\). Lemma 2 states that the system visits \(\Phi\) in finite time under both controllers. Furthermore, once the system has entered \(\Phi\), it will remain in \(\Phi\) for a finite horizon with high probability. That is, \(\Phi\) takes the role of a probabilistic invariant set. We now study the system's behavior while in \(\Phi\). **Lemma 3**.: _Under Assumptions 1-5, there exists \(\tilde{\beta}\in\mathcal{KL}\) such that if \(x_{k}^{\star},x_{k}^{\dagger}\in\Phi\), it holds \(\forall i\leq\bar{K}\) that_ \[\mathbb{P}[\|x_{k+i}^{\star}-x_{k+i}^{\dagger}\|\leq\tilde{\beta}(\|x_{k}^{\star}\|+\|x_{k}^{\dagger}\|,k+i)]\geq 1-\varepsilon. \tag{42}\] Proof.: If \(x_{k}^{\star},x_{k}^{\dagger}\in\Phi\), both systems remain in \(\Phi\) for horizon \(\bar{K}\) with probability at least \(1-\varepsilon\) according to (41). While \(x_{k}^{\star},x_{k}^{\dagger}\in\Phi\), the unconstrained error dynamics between the fully informed and the DR closed-loop system is given by \(e_{k+i}:=x_{k+i}^{\star}-x_{k+i}^{\dagger}=\bar{x}_{k+i}^{\star}-\bar{x}_{k+i}^{\dagger}\) as both systems encounter the same disturbance \(w_{k+i}\) according to Assumption 1. Hence, exploiting (59), we find that \(\forall i\leq\bar{K}\) \[\mathbb{P}[e_{k+i}\!=\!\bar{x}_{k+i}^{\star}\!-\!\bar{x}_{k+i}^{\dagger}]\!\geq\!1\!-\varepsilon \tag{43}\] \[\Leftrightarrow\ \mathbb{P}[\|e_{k+i}\|\!\leq\!\|x_{k+i}^{\star}\|\!+\!\|x_{k+i}^{\dagger}\|]\!\geq\!1\!-\!\varepsilon \tag{44}\] \[\Leftrightarrow\ \mathbb{P}[\|e_{k+i}\|\!\leq\!\beta^{\star}(\|x_{k}^{\star}\|,k+i)\!+\!\beta^{\dagger}(\|x_{k}^{\dagger}\|,k+i)]\!\geq\!1\!-\!\varepsilon \tag{45}\] with \(\beta^{\star},\beta^{\dagger}\) from Assumption 4. We can now construct \(\tilde{\beta}(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|,k+i)=(M^{\star}\!+\!M^{\dagger})\max\{\nu^{\star},\nu^{\dagger}\}^{k+i}(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|)\!\geq\!\beta^{\star}(\|x_{k}^{\star}\|,k\!+\!i)+\beta^{\dagger}(\|x_{k}^{\dagger}\|,k\!+\!i)\), where clearly \(\tilde{\beta}\in\mathcal{KL}\), which concludes the proof. By Lemma 3, the states of the system under the DR and the fully informed controller, respectively, converge to each other while in \(\Phi\). We state the following theorem about \(\mathfrak{R}^{\mathrm{ol}}_{k}\). **Theorem 2**.: _Let Assumptions 1-5 hold. If \(x_{k}^{\star},x_{k}^{\dagger}\in\Phi\), then \(\exists\sigma\in\mathcal{KL}\) such that_ \[\mathbb{P}[\|\mathfrak{R}^{\mathrm{ol}}_{k+i}\|\!\leq\!\sigma(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|,k\!+\!i),\forall i\!\leq\!\bar{K}]\!\geq\!1\!-\!\varepsilon. \tag{46}\] Proof.: If \(x_{k}^{\star},x_{k}^{\dagger}\in\Phi\), both systems remain in \(\Phi\) for horizon \(\bar{K}\) with probability at least \(1-\varepsilon\) according to (41).
As no constraints are active for \(x\in\Phi\), we find that \(\forall i\leq\bar{K}\) \[\mathbb{P}[\mathfrak{R}^{\mathrm{ol}}_{k+i}\text{ is given by (40)}]\!\geq\!1\!-\!\varepsilon\] \[\Leftrightarrow\ \mathbb{P}[\|\mathfrak{R}^{\mathrm{ol}}_{k+i}\|\!\leq\!\|x_{k+i}^{\star}\!-\!x_{k+i}^{\dagger}\|\,\|\Lambda_{1}\|\,\|x_{k+i}^{\star}\!+\!x_{k+i}^{\dagger}\|]\!\geq\!1\!-\!\varepsilon\] \[\Leftrightarrow\ \mathbb{P}[\|\mathfrak{R}^{\mathrm{ol}}_{k+i}\|\!\leq\!\tilde{\beta}(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|,k\!+\!i)\,2\|\Lambda_{1}\|r]\!\geq\!1\!-\varepsilon,\] where \(\|x_{k+i}^{\star}\!+\!x_{k+i}^{\dagger}\|\leq\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|\leq 2r\) as \(x_{k+i}^{\star},x_{k+i}^{\dagger}\in\Phi\). Clearly, \(2\|\Lambda_{1}\|r>0\) and hence, \(\sigma(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|,k+i)=2\|\Lambda_{1}\|r\tilde{\beta}(\|x_{k}^{\star}\|\!+\!\|x_{k}^{\dagger}\|,k\!+\!i)\) is a class \(\mathcal{KL}\) function. Theorem 2 states that \(\mathfrak{R}^{\mathrm{ol}}_{k}\) decreases with high probability over a finite number of subsequent time steps from \(k\) on. That is, \(\mathfrak{R}^{\mathrm{ol}}_{k}\) will converge to zero as long as no disturbance realization causes the system states \(x_{k}^{\star}\) and \(x_{k}^{\dagger}\) to leave \(\Phi\). We now analyze \(\mathfrak{R}^{\mathrm{cl}}_{k}\). To this end, we express the closed-loop input \(u_{k}^{\diamond}=u_{0|k}^{\diamond}\) (the first element of (34)) and the state realization \(\hat{x}_{k}^{\diamond}\) explicitly in terms of the initial state \(x_{0}\) and the disturbance realizations \((\hat{w}_{i})_{i\in[0:k-1]}\) as \[\hat{x}_{k}^{\diamond}=\left(A^{k}+\sum_{i=0}^{k-1}A^{i}B\Psi_{k-1-i}^{\diamond}\right)x_{0}+\sum_{i=0}^{k-1}A^{i}B\gamma_{k-1-i}^{\diamond}+\sum_{i=0}^{k-1}A^{i}\hat{w}_{k-1-i}, \tag{47}\] \[u_{k}^{\diamond}=\underbrace{P_{k}^{\diamond}\left(A^{k}+\sum_{i=0}^{k-1}A^{i}B\Psi_{k-1-i}^{\diamond}\right)}_{=:\Psi_{k}^{\diamond}}x_{0}+\gamma_{k}^{\diamond}. \tag{48}\] Therein, the closed-loop input is expressed as \(u_{k}^{\diamond}=P_{k}^{\diamond}\hat{x}_{k}^{\diamond}+q_{k}^{\diamond}\) with \(P_{k}^{\diamond}\) and \(q_{k}^{\diamond}\) implicitly defined via (34) and \[\gamma_{k}^{\diamond}=P_{k}^{\diamond}\left(\sum_{i=0}^{k-1}A^{i}B\gamma_{k-1-i}^{\diamond}+\sum_{i=0}^{k-1}A^{i}\hat{w}_{k-1-i}\right)+q_{k}^{\diamond}. \tag{49}\] Using (47) and (48) in (27) enables further analysis. **Theorem 3**.: _Let Assumptions 1-5 hold and let \(\hat{x}_{k-\kappa}^{\star},\hat{x}_{k-\kappa}^{\dagger}\in\Phi\) for some \(\kappa\in\mathbb{N}\). Further, let the disturbance realization \((\hat{w}_{k-\kappa+i})_{i\in[0:\kappa-1]}\) be such that \(\forall i\in[\![\kappa]\!]:\hat{x}_{k-\kappa+i}^{\star},\hat{x}_{k-\kappa+i}^{\dagger}\in\Phi\). Then, \(\exists\lambda\in\mathcal{KL}\) such that \(\forall i\in[\![\kappa]\!]\) it holds that_ \[\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i}\|\leq\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i-1}\|+\lambda(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i). \tag{50}\] Proof.: According to (27), the closed-loop regret over the finite horizon \([k-\kappa,k]\) is given by \[\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i}=\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i-1}+\|\hat{x}_{k-\kappa+i}^{\dagger}\|_{Q}^{2}-\|\hat{x}_{k-\kappa+i}^{\star}\|_{Q}^{2}+\|u_{k-\kappa+i}^{\dagger}\|_{R}^{2}-\|u_{k-\kappa+i}^{\star}\|_{R}^{2}.
\tag{51}\] Applying the triangle inequality then yields \[\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i}\|\leq\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i-1}\|+\|\hat{x}_{k-\kappa+i}^{\dagger}-\hat{x}_{k-\kappa+i}^{\star}\|_{Q}^{2}+\|u_{k-\kappa+i}^{\dagger}-u_{k-\kappa+i}^{\star}\|_{R}^{2}. \tag{52}\] Lemma 3 implies that \(\forall i\in[\![\kappa]\!]\) \[\|x_{k-\kappa+i}^{\star}-x_{k-\kappa+i}^{\dagger}\|\leq\tilde{\beta}(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i). \tag{53}\] Exploiting (53) and Corollary 1, we find that \[\|u_{k-\kappa+i}^{\dagger}-u_{k-\kappa+i}^{\star}\|_{R}^{2}\leq\|\tilde{h}^{\top}H^{-1}RH^{-1}\tilde{h}\|\cdot\tilde{\beta}^{2}(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i), \tag{54}\] \[\|\hat{x}_{k-\kappa+i}^{\dagger}-\hat{x}_{k-\kappa+i}^{\star}\|_{Q}^{2}\leq\|Q\|\cdot\tilde{\beta}^{2}(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i). \tag{55}\] Substituting (54) and (55) in (52), we obtain \[\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i}\|\leq\|\mathfrak{R}^{\mathrm{cl}}_{k-\kappa+i-1}\|+\lambda(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i) \tag{56}\] with \(\lambda(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i)=(\|\tilde{h}^{\top}H^{-1}RH^{-1}\tilde{h}\|+\|Q\|)\tilde{\beta}^{2}(\|\hat{x}_{k-\kappa}^{\star}\|+\|\hat{x}_{k-\kappa}^{\dagger}\|,k-\kappa+i)\). We see, by the construction of \(\tilde{\beta}\) in Lemma 3, that \(\lambda\in\mathcal{KL}\). By Theorem 3, loosely speaking, \(\mathfrak{R}^{\mathrm{cl}}_{k}\) converges to a constant value while the system states are in \(\Phi\) for both the DR and the fully informed case. As a consequence of Theorems 2 and 3, \(\mathfrak{R}^{\mathrm{total}}_{k}\) can converge to constant values for finite time periods. Furthermore, since \(\mathfrak{R}^{\mathrm{cl}}_{k},\mathfrak{R}^{\mathrm{ol}}_{k}\) and \(\mathfrak{R}^{\mathrm{total}}_{k}\) are mainly induced by active state constraints, these results indicate the potential of dynamically allocating the individual risk budgets \(\delta_{i}\) to the constraints by leveraging the insights obtained from regret analysis. ## 4 Numerical Simulation We consider a discretized double integrator with discretization time step \(dt=0.05\) given by \[A=\begin{bmatrix}1&dt\\ 0&1\end{bmatrix},B=\begin{bmatrix}0\\ dt\end{bmatrix},C=I_{2},D=0_{2\times 1}. \tag{57}\] The model was simulated for a total of \(T=200\) time steps with a total risk budget of \(\Delta=0.1\). The prediction horizon was set to \(N=5\) time steps. The process noise was sampled from a multivariate Laplacian distribution with zero mean and covariance \(\mathbf{\Sigma}^{\mathbf{w}}=0.01^{2}I_{2}\). The state and control penalty matrices are \(Q=\text{diag}(1,0.1)\) and \(R=0.1\), respectively. The control and the state constraints are given by \(u\in[-20,2]\) and \(x\in[-4,11]\times[-4,1.5]\) (see Footnote 3). Footnote 3: The code is available under: [https://github.com/mpefferk/Regret-and-Conservatism-of-SMPC](https://github.com/mpefferk/Regret-and-Conservatism-of-SMPC). The results of simulating the system given by (57) using both the fully informed and the DR SMPC controller are shown in Figures 1 and 2. Specifically, the states are plotted in Figure 1 and \(\mathfrak{R}^{\text{cl}}_{k},\mathfrak{R}^{\text{ol}}_{k}\) and \(\mathfrak{R}^{\text{total}}_{k}\) associated with the DR SMPC controller are plotted in Figure 2.
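For concreteness, the closed-loop experiment behind Figures 1 and 2 can be reproduced along the following lines. This is a minimal sketch, not the released implementation (see Footnote 3): `solve_smpc` is a placeholder for any routine that solves QP (30) for the given initial state, with the tightening factors \(\psi_{i}\) chosen per (23) for the fully informed and per (24) for the DR controller; all remaining names are ours.

```python
# Minimal simulation skeleton for the double integrator (57); solve_smpc is a
# hypothetical placeholder for a QP solver of (30), NOT a library function.
import numpy as np

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1]); R = 0.1
T = 200                                    # simulation length
rng = np.random.default_rng(0)
# zero-mean Laplacian noise with covariance 0.01^2 I_2 (Laplace scale = std/sqrt(2))
w = rng.laplace(0.0, 0.01 / np.sqrt(2.0), size=(T, 2))

x_star = x_dag = np.array([10.0, 0.0])     # common initial state (assumed), x0* = x0^dagger
regret_cl = [0.0]                          # closed-loop regret per Definition 2
for k in range(T):
    u_star = solve_smpc(x_star, dr=False)  # exact tightening psi_i = Phi^{-1}(1 - delta_i)
    u_dag = solve_smpc(x_dag, dr=True)     # DR tightening psi_i = sqrt((1 - delta_i)/delta_i)
    stage = (x_dag @ Q @ x_dag + R * u_dag**2) - (x_star @ Q @ x_star + R * u_star**2)
    regret_cl.append(regret_cl[-1] + stage)            # recursion (51)
    x_star = A @ x_star + B[:, 0] * u_star + w[k]      # identical disturbance realization
    x_dag = A @ x_dag + B[:, 0] * u_dag + w[k]         # for both systems (Assumption 1)
```

The open-loop regret (28) is obtained analogously by additionally recording the optimal values of the two QPs at every time step.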
In the beginning, \(\mathfrak{R}^{\text{cl}}_{k}\) is negative as it is a short-sighted measure: initially, the increased cautiousness of the DR SMPC leads to lower closed-loop costs when compared to the fully informed SMPC. At the same time, \(\mathfrak{R}^{\text{ol}}_{k}\) increases rapidly: the DR SMPC is anticipated to perform worse in the future. While at later times \(\mathfrak{R}^{\text{cl}}_{k}\) is positive, \(\mathfrak{R}^{\text{ol}}_{k}\) becomes negative. The latter is because the system under the cautious DR controller is at that time closer to the reference than its fully informed counterpart and hence, its remaining cost-to-go is lower. The total regret \(\mathfrak{R}^{\text{total}}_{k}\) is positive throughout the entire run. At \(t=4.1\,s\), all constraints become inactive for both the DR and the fully informed controller, indicating that both systems have entered the (here implicitly defined) set \(\Phi\). From that time point on until the end of the simulation, we observe the convergence behavior of \(\mathfrak{R}^{\text{cl}}_{k},\mathfrak{R}^{\text{ol}}_{k}\) and \(\mathfrak{R}^{\text{total}}_{k}\) proven in Theorems 2 and 3. Figure 1: Simulation results for the DR and the fully informed case. ## 5 Conclusion A linear DR SMPC formulation exploiting moment-based ambiguity sets has been analyzed with regard to conservatism and regret. We have analyzed the different aspects of regret, which together create the overall picture of its behavior. In particular, regret was shown to converge to constant values over finite time periods and to increase only in between two such periods. These findings were corroborated by simulations. Future research will be dedicated towards applying the established framework to extended SMPC formulations that may include online estimation or learning of unknown quantities. Furthermore, the effect of non-uniform risk allocation on conservatism and regret is to be further investigated. ## Acknowledgment This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No 834142 (Scalable Control) and from the German Research Foundation (Research Training Group 2297). V. Renganathan is a member of the ELLIIT Strategic Research Area at Lund University.
2310.00391
On Yang-Mills fields from anti-de Sitter spaces
Motivated by some recent progress involving a non-compact gauge group, we obtain classical gauge fields using non-compact foliations of anti-de Sitter space in 4 dimensions (required dimensionality for conformal invariance of Yang-Mills theory) and transfer these to Minkowski spacetime using a series of conformal maps. This construction also builds upon some previous works involving SU(2) gauge group in that we now use its non-compact counterpart SU(1, 1) here. We note down gauge fields in both Abelian as well as non-Abelian settings and find them to be divergent at some hyperboloid, which is a hypersurface of co-dimension 1 inside the conformal boundary of AdS4. In spite of this hurdle we find a physically relevant field configuration in the Abelian case, reproducing a known result.
Kaushlendra Kumar
2023-09-30T14:28:53Z
http://arxiv.org/abs/2310.00391v1
# On Yang-Mills fields from anti-de Sitter spaces ###### Abstract Motivated by some recent progress involving a non-compact gauge group, we obtain classical gauge fields using non-compact foliations of anti-de Sitter space in 4 dimensions (required dimensionality for conformal invariance of Yang-Mills theory) and transfer these to Minkowski spacetime using a series of conformal maps. This construction also builds upon some previous works involving \(\mathrm{SU}(2)\) gauge group in that we now use its non-compact counterpart \(\mathrm{SU}(1,1)\) here. We note down gauge fields in both Abelian as well as non-Abelian settings and find them to be divergent at some hyperboloid, which is a hypersurface of co-dimension 1 inside the conformal boundary of \(\mathrm{AdS}_{4}\). In spite of this hurdle we find a physically relevant field configuration in the Abelian case, reproducing a known result. ## 1 Introduction Vacuum Yang-Mills theory in 4 dimensions is conformally invariant--a fact that can be used to map its solutions to any conformally related spacetime1 after obtaining them on a more suitable one equipped with symmetric foliations, and therefore admitting a natural gauge group. This exercise was carried out for \(\mathrm{dS}_{4}\), whose foliation submanifold \(S^{3}\) arises as the group manifold of \(\mathrm{SU}(2)\); the resulting \(\mathrm{SU}(2)\)-equivariant connection ansatz reproduced [4] well-known Yang-Mills solutions obtained in 1977 by Lüscher [8]. This was achieved by following the set of conformal correspondences \(\mathcal{I}\times S^{3}\stackrel{{\mbox{\scriptsize conformal}}}{{\longleftarrow}}\mathrm{dS}_{4}\stackrel{{\mbox{\scriptsize conformal}}}{{\longleftarrow}}\mathds{R}^{1,3}\), where the cylinder domain \(\mathcal{I}=(-\frac{\pi}{2},\frac{\pi}{2})\) needs to be doubled to facilitate a gluing of two \(\mathrm{dS}_{4}\) copies so as to cover the entire Minkowski spacetime. Furthermore, solutions for its Abelian counterpart \(U(1)\subset\mathrm{SU}(2)\) (in a non-symmetric setting) produce a family of electromagnetic knotted fields [4, 5]. These EM knots have rather intriguing physical features, many of which have been explored recently [6, 7]. Another motivation for such an exploration of classical gauge fields is that only a few of them are known in the mathematical physics literature (see e.g. [1, 2, 3] for reviews). To this end, novel results were obtained on de Sitter and anti-de Sitter spaces in [10, 11] for \(\mathrm{SU}(2)\), as well as some higher-dimensional generalizations (in the de Sitter case) for \(\mathrm{SO}(4)\) in [12]. Footnote 1: It is a known fact that all three variants of FLRW spacetimes viz. Minkowski \(\mathds{R}^{1,3}\), de Sitter \(\mathrm{dS}_{4}\) and anti-de Sitter \(\mathrm{AdS}_{4}\) are covered this way. There has been some recent progress [13] towards obtaining gauge fields via \(H^{3}\) and \(\mathrm{dS}_{3}\) foliations of Minkowski space regions with the non-compact gauge group \(\mathrm{SO}(1,3)\), as opposed to the compact ones discussed above. Inspired by this success, we considered \(\mathrm{AdS}_{3}\) foliations of \(\mathrm{AdS}_{4}\), since the former arises as the group manifold of \(\mathrm{SU}(1,1)\).
Classical gauge fields were obtained here for an \(\mathrm{SU}(1,1)\)-symmetric gauge configuration [14] (of both Abelian and non-Abelian type) by employing the following strategy: \(\mathcal{I}\times\mathrm{AdS}_{3}\stackrel{{\mbox{\scriptsize conformal}}}{{\longleftarrow}}\mathrm{AdS}_{4}\stackrel{{\mbox{\scriptsize conformal}}}{{\longleftarrow}}\mathcal{I}\times S^{3}_{+}\) followed by the previous \({\cal I}\times S^{3}\xrightarrow{\rm conformal}\mathds{R}^{1,3}\) after gluing two AdS\({}_{4}\) copies to recover the full \(S^{3}\). However, unlike the previous compact case, this gluing is not smooth and leads to a singular conformal boundary; this feature propagates to the corresponding gauge fields, but in a milder fashion. We present such field configurations in a compact form and compute their stress-energy tensor. We also make use of several plots to explain various features of these conformal maps and the resulting field configurations (such as orientation-preserving gluing). ## 2 Geometrical toolkit for anti-de Sitter space \({\rm AdS}_{4}\) We start with the following isometric embedding of AdS\({}_{4}\)--endowed with coordinates \((x^{1},x^{2},x^{3},x^{4},x^{5})\) and global radius \(R\)--inside \(\mathds{R}^{2,3}\) via \[-(x^{1})^{2}-(x^{2})^{2}+(x^{3})^{2}+(x^{4})^{2}+(x^{5})^{2}\ =\ -R^{2}. \tag{1}\] ### \({\rm AdS}_{4}\)-foliations of the form \({\cal I}\times{\cal M}_{3}\) First, we have the \({\cal M}_{3}={\rm AdS}_{3}\) foliation with a spacelike parameter \(\psi\in{\cal I}=(-\pi/2,\pi/2)\). This \({\rm AdS}_{3}\hookrightarrow\mathds{R}^{2,2}\) can be described using embedding coordinates \(\alpha^{i}(\rho,\tau,\phi)\) for \(i=1,\ldots,4\), with spatial parameters \(\rho\in\mathds{R}_{+}\), \(\phi\in[0,2\pi]\) and temporal parameter \(\tau\in[0,2\pi]\), as \[\begin{aligned} \alpha^{1}&=\cosh\rho\,\cos\tau\,,\qquad\alpha^{2}=\cosh\rho\,\sin\tau\,,\\ \alpha^{3}&=\sinh\rho\,\cos\phi\,,\qquad\alpha^{4}=\sinh\rho\,\sin\phi\,,\end{aligned}\qquad\Longrightarrow\quad-(\alpha^{1})^{2}-(\alpha^{2})^{2}+(\alpha^{3})^{2}+(\alpha^{4})^{2}=-1. \tag{2}\] The following global AdS\({}_{4}\) embedding coordinates, \[x^{i}\ =\ R\sec\psi\,\alpha^{i},\qquad x^{5}\ =\ R\tan\psi\, \tag{3}\] then yield its induced metric (arising from the flat \(\mathds{R}^{2,3}\) metric) in terms of the AdS\({}_{3}\) metric \({\rm d}\Omega^{2}_{1,2}\) as \[{\rm d}s^{2}\ =\ \frac{R^{2}}{\cos^{2}\!\psi}\big{(}{\rm d}\psi^{2}-\cosh^{2}\!\rho\,{\rm d}\tau^{2}+{\rm d}\rho^{2}+\sinh^{2}\!\rho\,{\rm d}\phi^{2}\big{)}\ =\ \frac{R^{2}}{\cos^{2}\!\psi}\big{(}{\rm d}\psi^{2}+{\rm d}\Omega^{2}_{1,2}\big{)}. \tag{4}\] Next, we have the \({\cal M}_{3}=S^{3}_{+}\) (upper hemisphere) foliation, embedded inside \(\mathds{R}^{2,3}\) using \[x^{1}=R\sec\chi\cos\tau,\ x^{2}=R\sec\chi\sin\tau,\ x^{3}=R\tan\chi\,\beta^{1},\ x^{4}=R\tan\chi\,\beta^{2},\ x^{5}=R\tan\chi\,\beta^{3}\, \tag{5}\] where \(\chi\in[0,\pi/2)\) for the half-sphere and the canonical \(S^{2}\) coordinates \(\beta^{i}(\theta,\phi)\) with \(\theta\in[0,\pi]\) are \[\beta^{1}\ =\ \sin\theta\,\cos\phi\,\qquad\beta^{2}\ =\ \sin\theta\,\sin\phi\,\qquad\beta^{3}\ =\ \cos\theta. \tag{6}\] These are related to the coordinates \((\rho,\psi)\) of (3) as follows, \[\tanh\rho\ =\ \sin\theta\,\sin\chi\qquad\mbox{and}\qquad\tan\psi\ =\ -\cos\theta\,\tan\chi.
\tag{7}\] In this case, the induced metric exhibits an \(S^{3}_{+}\)-cylinder structure with round metric \({\rm d}\Omega^{2}_{3+}\); the latter can be expressed using the \(S^{2}\) round metric \({\rm d}\Omega^{2}_{2}\) as \[{\rm d}s^{2}\ =\ \frac{R^{2}}{\cos^{2}\!\chi}\big{(}-{\rm d}\tau^{2}+{\rm d}\chi^{2}+\sin^{2}\!\chi\,{\rm d}\Omega^{2}_{2}\big{)}\ =\ \frac{R^{2}}{\cos^{2}\!\chi}\big{(}-{\rm d}\tau^{2}+{\rm d}\Omega^{2}_{3+}\big{)}. \tag{8}\] The temporal parameter \(\tau\) can be extended to the full \(\mathds{R}\) by going to the universal cover \(\widetilde{\rm AdS}_{4}\). ### Gluing of two \(\mathrm{AdS}_{4}\) copies We now proceed to glue two copies of \(\mathrm{AdS}_{4}\) to recover the full round 3-sphere, i.e. \(S^{3}=S^{3}_{+}\cup S^{2}\cup S^{3}_{-}\), in order to apply the below-mentioned conformal map. To this end, we note down the following map that glues the northern copy \(S^{3}_{+}\) with the southern one \(S^{3}_{-}\) along the boundary \(S^{2}\) at \(\chi{=}\frac{\pi}{2}\): \[\begin{array}{c}\tanh\rho\ =\ \varepsilon\,\sin\theta\,\sin\chi\quad\text{and}\quad\tan\psi\ =\ -\varepsilon\,\cos\theta\,\tan\chi\;,\quad\text{with}\\ \varepsilon|_{S^{3}_{+}}=+1:\ \rho\in\mathds{R}_{+}\Leftrightarrow\chi\in[0,\frac{\pi}{2})\quad\text{and}\quad\varepsilon|_{S^{3}_{-}}=-1:\ \rho\in\mathds{R}_{-}\Leftrightarrow\chi\in(\frac{\pi}{2},\pi]\;.\end{array} \tag{9}\] This map preserves the orientation along the gluing boundary \(\partial S^{3}_{\pm}{=}S^{2}\). To see this, we note down some key points in table 1 for both coordinate systems and then plot these in spherical coordinates \((\chi,\theta)\) individually in figures 1 and 2, as well as in a combined fashion in figure 3. Notice in the figures that the open boundary of the AdS space (for fixed \(\tau\)) is depicted with dashed lines, while some points (like the north pole) are identified, as is clear from table 1. Finally, the gluing happens by identifying the points \((P_{2},\ldots,P_{5})\) with \((P^{\prime}_{2},\ldots,P^{\prime}_{5})\) pairwise, so that same-colored segments coincide and the orientation remains preserved. Another point to note here is that this gluing is not smooth, but contains a singularity arising from the conformal boundary \(\chi=\pi/2\), as is clear from the above metric (8). ### Conformal mapping to Minkowski space \(\mathds{R}^{1,3}\) So far, we have seen an effective conformal map between the \(\mathrm{AdS}_{3}\)-cylinder (4) and the \(S^{3}\)-cylinder (8,9): \((\rho,\psi)\to(\chi,\theta)\), while keeping \(\tau\) and \(\phi\) fixed. We can now map the \(S^{3}\)-cylinder (post gluing) to Minkowski space \(\mathds{R}^{1,3}\) equipped with polar coordinates \((t,r,\theta,\phi)\): \((\tau,\chi)\to(t,r)\) via \[\sin\tau\ =\ \gamma\frac{t}{R}\quad\mbox{and}\quad\sin\chi\ =\ \gamma\frac{r}{R}\quad\mbox{with}\quad\gamma\ =\ \frac{2R^{2}}{\sqrt{4R^{2}t^{2}+(r^{2}-t^{2}+R^{2})^{2}}}\, \tag{10}\] where the \(S^{2}\) coordinates \((\theta,\phi)\) are identified. A key feature of this map is that the full Minkowski space gets embedded inside half of the doubled AdS domain, with null boundary given by \(\chi\)=\(|\tau|\); this is due to the following inequality: \[\gamma\ =\ \cos\tau-\cos\chi\ >0\qquad\implies\qquad\chi>|\tau|. \tag{11}\] This feature is clearly exemplified through the \((\tau,\chi)\) Penrose diagram in figure 5, demonstrating the flat spacetime embedding inside half of the (glued) AdS\({}_{4}\) space with future and past null boundaries.
The \((t,r)\) plot in figure 5 further illustrates the gluing boundary inside Minkowski spacetime. It should be noted here that every point inside the shaded regions of these plots hides a 2-sphere and that such regions coming from different AdS\({}_{4}\) copies are color-coded (yellow and orange shades). The conformal metric (8) in these Minkowski coordinates takes the following form: \[{\rm d}s^{2}\ =\ \frac{4\,R^{4}}{(r^{2}{-}t^{2}{-}R^{2})^{2}}\left(-{\rm d}t^{2}+{\rm d}r^{2}+r^{2}{\rm d}\Omega_{2}^{2}\right)\,, \tag{12}\] which shows that the singularity at the gluing boundary is a one-sheeted hyperboloid \(H_{R}^{1,2}\): \[\left\{\chi{=}\frac{\pi}{2}\right\}\qquad\Longleftrightarrow\qquad\left\{r^{2}{-}t^{2}{=}R^{2}\right\}\ =:\ H_{R}^{1,2}. \tag{13}\] ## 3 \(\mathrm{SU}(1,1)\)-symmetric gauge theory on AdS\({}_{4}\) Let us review some algebraic results required to construct the relevant connection one-form. To this end, we start with the group manifold of \(\mathrm{SU}(1,1)\), which is AdS\({}_{3}\), as is easily seen from the map \[g:\;\mathrm{AdS}_{3}\,\to\,\mathrm{SU}(1,1)\qquad\mathrm{via}\qquad(\alpha^{1},\alpha^{2},\alpha^{3},\alpha^{4})\,\mapsto\,\begin{pmatrix}\alpha^{1}{-}\mathrm{i}\alpha^{2}&\alpha^{3}{-}\mathrm{i}\alpha^{4}\\ \alpha^{3}{+}\mathrm{i}\alpha^{4}&\alpha^{1}{+}\mathrm{i}\alpha^{2}\end{pmatrix}. \tag{14}\] We use this \(g\) to obtain the left-invariant one-forms \(e^{\alpha},\ \alpha=0,1,2\), via the Maurer-Cartan method: \[\Omega_{L}(g)\ =\ g^{-1}\mathrm{d}g\ =\ e^{\alpha}\,I_{\alpha}\, \tag{15}\] where \(I_{\alpha}\) are \(\mathfrak{sl}(2,\mathds{R})\) generators satisfying \[[I_{\alpha},I_{\beta}]\ =\ 2\,f^{\gamma}_{\ \alpha\beta}\,I_{\gamma}\qquad\mathrm{and}\qquad\mathrm{tr}(I_{\alpha}\,I_{\beta})\ =\ 2\,\eta_{\alpha\beta}\, \tag{16}\] with \(f^{2}_{\ 01}=f^{1}_{\ 20}=-f^{0}_{\ 12}=1\) and \((\eta_{\alpha\beta})=\mathrm{diag}(-1,1,1)\). The resulting one-forms look like \[\begin{split} e^{0}&\ =\ \cosh^{2}\!\rho\;\mathrm{d}\tau+\sinh^{2}\!\rho\;\mathrm{d}\phi\,\\ e^{1}&\ =\ \cos\left(\tau{-}\phi\right)\,\mathrm{d}\rho+\sinh\rho\;\cosh\rho\;\sin\left(\tau{-}\phi\right)\,\mathrm{d}(\tau{+}\phi)\,\\ e^{2}&\ =\ -\sin\left(\tau{-}\phi\right)\,\mathrm{d}\rho+\sinh\rho\;\cosh\rho\;\cos\left(\tau{-}\phi\right)\,\mathrm{d}(\tau{+}\phi)\.\end{split} \tag{17}\] These obey the Cartan structure equation and provide an orthonormal frame on the AdS\({}_{3}\)-cylinder (4): \[\mathrm{d}e^{\alpha}+f^{\alpha}_{\ \beta\gamma}\;e^{\beta}\wedge e^{\gamma}=0\qquad\mathrm{and}\qquad\mathrm{d}s^{2}_{\mathrm{cyl}}\ =\ \mathrm{d}\psi^{2}+\eta_{\alpha\beta}\,e^{\alpha}e^{\beta}. \tag{18}\] Now a generic _gauge field_ \(\mathcal{A}\) in this frame can be made \(\mathrm{SU}(1,1)\)-symmetric by \[\begin{split}\mathcal{A}&\ =\ \mathcal{A}_{\psi}\,e^{\psi}+\mathcal{A}_{\alpha}\,e^{\alpha}\qquad\xrightarrow{\mathcal{A}_{\psi}=0}\qquad\mathcal{A}\ =\ X_{\alpha}(\psi)\;e^{\alpha}\\ &\implies\mathcal{F}\ =\ \mathrm{d}\mathcal{A}+\mathcal{A}\wedge\mathcal{A}\ =\ X^{{}^{\prime}}_{\alpha}\;e^{\psi}\wedge e^{\alpha}+\tfrac{1}{2}\big{(}-2f^{\alpha}_{\ \beta\gamma}X_{\alpha}+[X_{\beta},X_{\gamma}]\big{)}\;e^{\beta}\wedge e^{\gamma}\,\end{split} \tag{19}\] where \(X^{\prime}_{\alpha}\) in the field strength expression corresponds to \(\mathrm{d}X_{\alpha}/\mathrm{d}\psi\).
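The one-forms (17) can be cross-checked symbolically. Below is a small sketch using the concrete representation \(I_{0}=-\mathrm{i}\sigma_{3}\), \(I_{1}=\sigma_{1}\), \(I_{2}=\sigma_{2}\), which is our own choice (the text fixes only the relations (16), so individual signs may depend on the representation); this choice satisfies (16) with the structure constants and metric given above.

```python
# Symbolic sketch: extract e^alpha from Omega_L = g^{-1} dg via the trace pairing
# e^beta = tr(g^{-1} d_mu g I_beta) / (2 eta_{beta beta}), cf. (15)-(16).
import sympy as sp

rho, tau, phi = sp.symbols('rho tau phi', real=True)
a1, a2 = sp.cosh(rho)*sp.cos(tau), sp.cosh(rho)*sp.sin(tau)
a3, a4 = sp.sinh(rho)*sp.cos(phi), sp.sinh(rho)*sp.sin(phi)
g = sp.Matrix([[a1 - sp.I*a2, a3 - sp.I*a4],
               [a3 + sp.I*a4, a1 + sp.I*a2]])          # the map (14)
I = [sp.Matrix([[-sp.I, 0], [0, sp.I]]),               # I_0 = -i sigma_3 (our choice)
     sp.Matrix([[0, 1], [1, 0]]),                      # I_1 = sigma_1
     sp.Matrix([[0, -sp.I], [sp.I, 0]])]               # I_2 = sigma_2
eta = [-1, 1, 1]
ginv = g.inv()
for beta in range(3):
    comps = [sp.simplify((ginv * g.diff(x) * I[beta]).trace() / (2*eta[beta]))
             for x in (tau, rho, phi)]
    print(f"e^{beta} components of (d tau, d rho, d phi):", comps)
```

For \(\beta=0\) this returns \((\cosh^{2}\rho,\,0,\,\sinh^{2}\rho)\), i.e. \(e^{0}=\cosh^{2}\!\rho\,\mathrm{d}\tau+\sinh^{2}\!\rho\,\mathrm{d}\phi\), matching the first line of (17); the remaining components reproduce \(e^{1}\) and \(e^{2}\) in the same way.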
Next, we impose the Gauss-law constraint \([X_{\alpha},X^{{}^{\prime}}_{\alpha}]=0\) arising from the eom for the matrices \(X_{\alpha}(\psi)\). In the Abelian case the commutator terms drop out and the eom linearizes to \[X_{\alpha}^{\prime\prime}\ =\ -4\,X_{\alpha}\, \tag{22}\] which is solved by harmonic functions of \(2\psi\) as employed below. In the non-Abelian case the constraint is solved by the \(\mathrm{SU}(1,1)\)-equivariant ansatz \[X_{\alpha}(\psi)\ =\ \tfrac{1}{2}\big{(}1+\Phi(\psi)\big{)}\,I_{\alpha}\, \tag{23}\] which reduces the Yang-Mills equations on the \(\mathrm{AdS}_{3}\)-cylinder to a Newtonian equation for the scalar function \(\Phi(\psi)\). ## 4 AdS\({}_{4}\) gauge fields on Minkowski space We have already seen a series of conformal maps \((\tau,\rho,\psi)\to(\tau,\chi,\theta)\to(t,r,\theta)\) defined via (9) and (10) above. We can use these with \(R\)=1, along with the abbreviations \(x\cdot{\rm d}x:=x_{\mu}{\rm d}x^{\mu}\), \(\varepsilon^{1}=\varepsilon^{2}:=\varepsilon\) and \(\varepsilon^{0}:=1\), to write the AdS\({}_{3}\)-cylinder one-forms \(e^{\alpha}\) (17) and \(e^{\psi}:={\rm d}\psi\) in Minkowski coordinates as \[\begin{array}{l}e^{\alpha}\ =\ \frac{\varepsilon^{\alpha}}{\lambda^{2}+4z^{2}}\left(2(\lambda{+}2)\,{\rm d}x^{\alpha}-4x^{\alpha}\,x\cdot{\rm d}x-4\,f^{\alpha}_{\ \beta\gamma}\,x^{\beta}{\rm d}x^{\gamma}\right)\,,\\ e^{\psi}\ =\ \frac{\varepsilon}{\lambda^{2}+4z^{2}}\left(-2\lambda\,{\rm d}z+4z\,x\cdot{\rm d}x\right)\,,\ \ \ \ \ \ \ {\rm where}\ \ \ \lambda\ :=\ r^{2}-t^{2}-1\.\end{array} \tag{24}\] This allows one to read off the various vierbein components, viz. \(e^{\alpha}_{\mu}\) and \(e^{\psi}_{\mu}\), to be used below. ### Nonabelian fields For the nonabelian case we use the ansatz (23) for the gauge field \({\cal A}\) (19). The corresponding field strength \(F\) of \({\cal A}\equiv A=\frac{1}{2}\Big{(}1+\Phi\big{(}\psi(x)\big{)}\Big{)}\,I_{\alpha}\ e^{\alpha}_{\ \mu}\ {\rm d}x^{\mu}\) then computes to \[F\ =\ \frac{1}{2}\Big{(}\Phi^{\prime}\big{(}\psi(x)\big{)}\,I_{\alpha}\,e^{\psi}_{\ \mu}e^{\alpha}_{\ \nu}\ -\ \frac{1}{2}\big{(}1{-}\Phi\big{(}\psi(x)\big{)}^{2}\big{)}\,I_{\alpha}\,f^{\alpha}_{\ \beta\gamma}\,e^{\beta}_{\ \mu}e^{\gamma}_{\ \nu}\Big{)}\,{\rm d}x^{\mu}\wedge{\rm d}x^{\nu}. \tag{25}\] One can easily extract the color-electromagnetic fields from this \(F\) as follows: \(E_{a}\):=\(F_{a0}\) and \(B_{a}\):=\(\frac{1}{2}\epsilon_{abc}F_{bc}\).
We can express the color EM fields thus obtained rather succinctly in terms of a Riemann-Silberstein vector \({\bf S}:={\bf E}+{\rm i}{\bf B}\) as \[S_{x}=-\frac{2({\rm i}\varepsilon\Phi^{\prime}+\Phi^{2}-1)}{(\lambda-2{\rm i}z)(\lambda+2{\rm i}z)^{2}}\left\{2\big{[}ty{+}{\rm i}x(z{+}{\rm i})\big{]}I_{0}+2\varepsilon\big{[}xy{+}{\rm i}t(z{+}{\rm i})\big{]}I_{1}+\varepsilon\big{[}t^{2}{-}x^{2}{+}y^{2}{+}(z{+}{\rm i})^{2}\big{]}I_{2}\right\}\,, \tag{26}\] \[S_{y}=\frac{2({\rm i}\varepsilon\Phi^{\prime}+\Phi^{2}-1)}{(\lambda-2{\rm i}z)(\lambda+2{\rm i}z)^{2}}\left\{2\big{[}tx{-}{\rm i}y(z{+}{\rm i})\big{]}I_{0}+\varepsilon\big{[}t^{2}{+}x^{2}{-}y^{2}{+}(z{+}{\rm i})^{2}\big{]}I_{1}+2\varepsilon\big{[}xy{-}{\rm i}t(z{+}{\rm i})\big{]}I_{2}\right\}\,, \tag{27}\] \[S_{z}=\frac{2({\rm i}\varepsilon\Phi^{\prime}+\Phi^{2}-1)}{(\lambda-2{\rm i}z)(\lambda+2{\rm i}z)^{2}}\left\{{\rm i}\big{[}t^{2}{+}x^{2}{+}y^{2}{-}(z{+}{\rm i})^{2}\big{]}I_{0}+2\varepsilon\big{[}{\rm i}tx{-}y(z{+}{\rm i})\big{]}I_{1}+2\varepsilon\big{[}{\rm i}ty{+}x(z{+}{\rm i})\big{]}I_{2}\right\}\,. \tag{28}\] Interestingly, any explicit solution \(\Phi\) does not couple to the color components of these fields \({\bf S}\); this fact is reflected below in that the corresponding physical quantities (arising from the stress-energy tensor) depend only on a conserved parameter--rather than on the explicit form--of such solutions. We can now compute the corresponding stress-energy tensor \(T_{\mu\nu}\) given by \[T_{\mu\nu}\ =\ -\frac{1}{2g^{2}}\left(\delta_{\mu}^{\ \rho}\delta_{\nu}^{\ \lambda}\eta^{\sigma\tau}-\frac{1}{4}\eta_{\mu\nu}\eta^{\rho\lambda}\eta^{\sigma\tau}\right){\rm tr}\big{(}F_{\rho\sigma}F_{\lambda\tau}\big{)}\, \tag{29}\] and express its components, using the mechanical energy \(\epsilon:=-\frac{1}{4}\big{(}(\Phi^{\prime})^{2}+(1{-}\Phi^{2})^{2}\big{)}\), compactly as follows: \[\big{(}T_{\mu\nu}\big{)}\ =\ \frac{8}{g^{2}}\,\frac{\epsilon}{(\lambda^{2}{+}4z^{2})^{3}}\left(\begin{matrix}{\rm t}_{\alpha\beta}&{\rm t}_{\alpha 3}\\ {\rm t}_{3\alpha}&{\rm t}_{33}\end{matrix}\right)\ \ \ {\rm with}\ \ \ \left\{\begin{array}{l}{\rm t}_{\alpha\beta}\ =\ -\eta_{\alpha\beta}(\lambda^{2}{+}4z^{2})+16x_{\alpha}x_{\beta}z^{2}\,,\\ {\rm t}_{3\alpha}\ =\ {\rm t}_{\alpha 3}\ =\ -8x_{\alpha}z\,(\lambda{-}3z^{2})\,\\ {\rm t}_{33}\ =\ 3\lambda^{2}-4z^{2}(1+4\lambda-4z^{2})\.\end{array}\right. \tag{30}\] We find that these fields \({\bf E},{\bf B}\) and their stress-energy tensor \(T_{\mu\nu}\) are not singular at the full hyperboloid \(H^{1,2}\equiv\{\lambda{=}0\}\), but only on the hypersurface given by the intersection \[\{\lambda{=}0\}\cap\{z{=}0\}\ \Leftrightarrow\ x^{2}+y^{2}-t^{2}\ =\ 1. \tag{31}\] We have plotted the energy density \(\rho={\rm t}_{00}\) at \(t{=}0\), highlighting the role of the \(xy\)-plane, in figure 7. ### Electromagnetic fields We consider the following harmonic functions as solutions to the Abelian eom (22), chosen such that they have the same structural form across the gluing surface (mediated by \(\varepsilon\)): \[\widetilde{A}^{(0)}\ =\ -\tfrac{1}{2}\,\cos 2\big{(}\psi(x){+}\varepsilon\psi_{0}\big{)}\,e^{0}_{\ \mu}\,\,{\rm d}x^{\mu}\,\] \[\widetilde{A}^{(1)}\ =\ \tfrac{1}{2}\,\sin 2\big{(}\psi(x){+}\varepsilon\psi_{0}\big{)}\,e^{1}_{\ \mu}\,\,{\rm d}x^{\mu}\, \tag{32}\] \[\widetilde{A}^{(2)}\ =\ \tfrac{1}{2}\,\sin 2\big{(}\psi(x){+}\varepsilon\psi_{0}\big{)}\,e^{2}_{\ \mu}\,\,{\rm d}x^{\mu}\.\] As before, we can go ahead and compute the corresponding field strengths \(\widetilde{F}\), e.g.
this one \[\widetilde{F}^{(0)}\ =\ \big{\{}\sin 2\big{(}\psi(x){+}\varepsilon\psi_{0}\big{)}\,e^{\psi}_{\ \mu}e^{0}_{\ \nu}-\cos 2\big{(}\psi(x){+}\varepsilon\psi_{0}\big{)}\,e^{1}_{\ \mu}e^{2}_{\ \nu}\big{\}}\,\,{\rm d}x^{\mu}\wedge{\rm d}x^{\nu}\, \tag{33}\] and then extract the electric \(\widetilde{\bf E}\) and magnetic \(\widetilde{\bf B}\) fields. These can again be cast into a nice compact form using the Riemann-Silberstein vector \(\widetilde{\bf S}:=\widetilde{\bf E}+{\rm i}\widetilde{\bf B}\) as follows, \[\widetilde{\bf S}^{(0)}\ =\ \frac{4{\rm e}^{{\rm i}\psi_{0}}}{(\lambda{+}2{\rm i}z)^{3}}\left(\begin{matrix}-2\left(t\,y+{\rm i}x\,(z{+}{\rm i})\right)\\ 2\left(t\,x-{\rm i}y\,(z{+}{\rm i})\right)\\ {\rm i}\left(t^{2}+x^{2}+y^{2}-(z{+}{\rm i})^{2}\right)\end{matrix}\right)\, \tag{34}\] \[\widetilde{\bf S}^{(1)}\ =\ -\frac{4{\rm e}^{{\rm i}\psi_{0}}}{(\lambda{+}2{\rm i}z)^{3}}\left(\begin{matrix}2{\rm i}\left(x\,y+{\rm i}t\,(z{+}{\rm i})\right)\\ -{\rm i}\left(t^{2}+x^{2}-y^{2}+(z{+}{\rm i})^{2}\right)\\ 2\left(t\,x+{\rm i}y\,(z{+}{\rm i})\right)\end{matrix}\right)\, \tag{35}\] \[\widetilde{\bf S}^{(2)}\ =\ \frac{4{\rm e}^{{\rm i}\psi_{0}}}{(\lambda{+}2{\rm i}z)^{3}}\left(\begin{matrix}-{\rm i}\left(t^{2}-x^{2}+y^{2}+(z{+}{\rm i})^{2}\right)\\ 2{\rm i}\left(x\,y-{\rm i}t\,(z{+}{\rm i})\right)\\ -2\left(t\,y-{\rm i}x\,(z{+}{\rm i})\right)\end{matrix}\right). \tag{36}\] Like before, these fields are also singular on the hypersurface (31). We demonstrate this in figure 8 by plotting typical field lines for \(\widetilde{\bf E}^{(0)}\) and \(\widetilde{\bf B}^{(0)}\) (34); these accumulate and intersect at the singular boundary (denoted by a black circle). We omit here the expressions for the stress-energy components (due to their bulky form) and refer the reader to [14], where such expressions corresponding to the fields in (34) have been noted. Incidentally, the above Riemann-Silberstein vectors are reminiscent of the Hopf-Rañada (HR) electromagnetic knots [4] and suggest their interpretation as a non-compact cousin of the HR knot. Upon further exploration we find that a special case of the field configuration (34), \[\psi_{0}=z=0\quad\implies\quad\frac{4}{(x^{2}{+}y^{2}{-}t^{2}{-}1)^{3}}\left(\begin{array}{c}2(x-t\,y)\\ 2(y+t\,x)\\ t^{2}{+}x^{2}{+}y^{2}{+}1\end{array}\right)\, \tag{37}\] reproduces the magnetic fields4 of a recently constructed magnetic vortex [15, eq. (5.15)] under the following identification of the AdS\({}_{3}\) coordinates \((x^{0},x^{1},x^{2})\) used in [15], Footnote 4: In our convention, these gauge fields are interpreted as two electric fields and one magnetic field. \[x^{0}\ =\ -t\,\quad x^{1}\ =\ -y\quad\mbox{and}\quad x^{2}\ =\ x. \tag{38}\] ## 5 Summary * First, we employed the AdS\({}_{3}\)-slicing of AdS\({}_{4}\) and the group manifold structure of \(\mathrm{SU}(1,1)\) to find Yang-Mills solutions on \({\cal I}\times{\rm AdS}_{3}\). * Next, we used a series of conformal maps to transfer these solutions to Minkowski space, since Yang-Mills theory is conformally invariant in 4 dimensions. * These gauge fields are singular on a 2-dimensional hyperboloid \(x^{2}{+}y^{2}{-}t^{2}=1\), but this singularity is milder than the one we started with, namely \(r^{2}{-}t^{2}{-}1=0\). * Due to this singularity the total energy diverges for both kinds of gauge fields (see [14] for details), thereby limiting their physical usefulness. * Nevertheless, our Abelian solutions were found to match the magnetic field of a known vortex magnetic mode on \(\mathrm{SU}(1,1)\) [15].
The status of the other two Abelian solutions in this regard remains to be explored.
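As a quick numerical illustration of the singular locus (31), one can evaluate the special-case configuration (37) while approaching the circle \(x^{2}+y^{2}-t^{2}=1\). This is our own check (not part of [14] or [15]); the field magnitude grows like the inverse cube of the distance to the circle.

```python
# Numerical sketch: the magnitude of the field (37) blows up as the
# hyperboloid x^2 + y^2 - t^2 = 1 (the z = 0 slice of (31)) is approached.
import numpy as np

def field_37(t, x, y):
    denom = (x**2 + y**2 - t**2 - 1.0)**3
    vec = np.array([2.0*(x - t*y), 2.0*(y + t*x), t**2 + x**2 + y**2 + 1.0])
    return 4.0 * vec / denom

t = 0.5
for eps in (1e-1, 1e-2, 1e-3):
    x = np.sqrt(1.0 + t**2) + eps    # point just outside the singular circle (y = 0)
    print(f"distance {eps:.0e}: |field| = {np.linalg.norm(field_37(t, x, 0.0)):.3e}")
```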
2309.14287
Domain wall dynamics driven by a transient laser-induced magnetisation
One of the fundamental effects of the laser-matter interaction is the appearance of an induced transient magnetisation. While the underlying phenomena differ in their microscopic origin and cover a diverse array of materials, here we address a fundamental question about the possibility to drive domain-wall dynamics on the femtosecond timescale of the exchange interactions solely by longitudinal changes of the magnetic moments. We verify the viability of this hypothesis in the case of a generic ferromagnetic system described in the framework of the high-temperature micromagnetic model based on the Landau-Lifshitz-Bloch equation. The effect is investigated in a 1D model at constant temperature as well as in a full micromagnetic framework considering realistic laser-induced heating. Our results demonstrate that domain-wall deformation in a femtosecond timeframe leads to the displacement of the wall on a larger timescale up to nanoseconds accompanied by a release of excess energy in the form of spin waves. The domain wall deformation leads to the appearance of a magnetisation gradient across the wall which promotes the motion towards the region consisting of spins with decreased magnetisation length. The total displacement is enhanced at larger temperatures and smaller damping due to an increase of the longitudinal relaxation time which ensures the longer presence of the induced magnetisation gradient. We also demonstrate an enhanced domain wall motion in the presence of the Dzyaloshinskii-Moriya interaction attributed to augmented magnonic torques. Our results are important towards the understanding of ultrafast magnetism phenomena on the sub-picosecond timescale.
Paul-Iulian Gavriloaea, Elías Saugar, Rubén Otxoa, Oksana Chubykalo-Fesenko
2023-09-25T16:51:41Z
http://arxiv.org/abs/2309.14287v1
# Domain wall dynamics driven by a transient laser-induced magnetisation ###### Abstract One of the fundamental effects of the laser-matter interaction is the appearance of an induced _transient magnetisation_. While the underlying phenomena differ in their microscopic origin and cover a diverse array of materials, here we address a fundamental question about the possibility to drive domain-wall dynamics on the femtosecond timescale of the exchange interactions solely by longitudinal changes of the magnetic moments. We verify the viability of this hypothesis in the case of a generic ferromagnetic system described in the framework of the high-temperature micromagnetic model based on the Landau-Lifshitz-Bloch equation. The effect is investigated in a 1D model at constant temperature as well as in a full micromagnetic framework considering realistic laser-induced heating. Our results demonstrate that domain-wall deformation in a femtosecond timeframe leads to the displacement of the wall on a larger timescale up to nanoseconds accompanied by a release of excess energy in the form of spin waves. The domain wall deformation leads to the appearance of a magnetisation gradient across the wall which promotes the motion towards the region consisting of spins with decreased magnetisation length. The total displacement is enhanced at larger temperatures and smaller damping due to an increase of the longitudinal relaxation time which ensures the longer presence of the induced magnetisation gradient. We also demonstrate an enhanced domain wall motion in the presence of the Dzyaloshinskii-Moriya interaction attributed to augmented magnonic torques. Our results are important towards the understanding of ultrafast magnetism phenomena on the sub-picosecond timescale. ## I Introduction The light-matter interaction holds the key to the fastest and least dissipative magnetisation dynamics observed so far. A cornerstone experimental result, the sub-picosecond demagnetisation of Ni thin films obtained by Beaurepaire _et al._ in 1996 [1], sparked the ever-growing interest for laser induced, ultra-fast magnetisation dynamics, a field of investigation already in its third decade of existence. The optical manipulation of the magnetic order parameter has been explored in a wide variety of materials including metals [1; 2; 3; 4; 5; 6; 7; 8], dielectrics [9; 10; 11; 12; 13] and semiconductors [14; 15; 16; 17; 18]. However, a complete understanding of strongly non-equilibrium magnetism is yet to be achieved, leading often to divergent opinions on the microscopic origin of the observed phenomena. While the exhaustive analysis of light-induced effects sets itself as a laborious task, the overall complexity may be reduced by isolating and classifying several mechanisms based on their similar physical nature. One such distinction was discussed in the work of Kirilyuk _et al._[19], where an ultra-fast laser pulse excitation is seen to give rise firstly to a class of thermally induced effects, in which the energy carried by the photon system is transferred to the electron and phonon baths, leading to an ultrarapid heating of the sample. Irrespective of the laser pulse polarisation, the heating alone can trigger demagnetisation [1; 9; 14], precession [2; 4; 15] or complete switching of the magnetic order parameter [3; 6; 20]. Secondly, they identify the broader class of non-thermal effects which are intrinsically dependent on the polarisation of the laser pulse. 
In the grand picture, non-thermal effects can induce changes in the magnetocrystalline anisotropy of garnet systems by modifying the charge distribution of magnetic ions [12; 13], and they allow the dynamic control of the magnetisation in ferromagnetic semiconductors for excitations above the band gap level [16; 17]. Furthermore, they can lead to switching via an inverse optomagnetic route in metals and dielectrics [8; 10] or give rise to spin polarisation on the basis of light-induced phonon excitation in antiferromagnetic insulators [21; 22; 23]. Although in practical situations the heating of the laser-excited sample is always present to some extent, we shall further employ the _non-thermal_ syntagm to describe only those light-induced effects which, from a thermodynamic point of view, ideally do not modify the temperature of the system. The _transient magnetisation_ terminology has been previously used in the literature to define the after-effect of a given laser-matter interaction, which can include the individual or combined effect of thermal or non-thermal phenomena. This umbrella term is generally expressed mathematically as a time-dependent, vector quantity \(\delta\mathbf{m}(t)\) which is used to describe laser-induced changes of the (reduced) magnetisation vector \(\mathbf{m}\) that may or may not outlive the timescale of the excitation. In the work of John _et al._[8], the origin of the transient \(\delta\mathbf{m}(t)\) magnetisation is assumed to be the non-thermal mechanism of the inverse Faraday effect (IFE), a phenomenon widely investigated in many classes of materials as for example metals [24; 25; 26; 27; 28; 29], plasmonic systems [30; 31] as well as non-traditional materials such as molecular magnets [32] and magnetic ionic liquids [33]. In a recent _ab-initio_ study it was shown that the absorption of circularly polarised light can further induce a cumulative, helicity-dependent magnetisation component in ferromagnets [34]. Unlike the IFE, the latter mechanism is of a dissipative nature and scales linearly both with the laser pulse intensity and with time; it has also been argued that it becomes dominant in the little-explored ultraviolet regime [35]. In a different fashion, THz-driven phonon excitation has been shown capable of inducing an ultrafast, first-order phase transition in the magnetic insulator DyFeO\({}_{3}\)[22]. The use of a sub-picosecond, mid-infrared electric field pulse in resonance with an optical phonon mode leads to the appearance of a macroscopic transient magnetisation in the system which accompanies the internal coherent spin reorientation from an antiferromagnetic to a weakly ferromagnetic state. Similarly, THz excitation of optical phonons in the antiferromagnetic difluoride CoF\({}_{2}\), followed by subsequent lattice dynamics in conjunction with transverse or longitudinal piezomagnetism, gives rise to a transient net magnetisation [21; 23]. In addition, the study of magneto-optical phenomena might benefit from the advancements in the quantum optics field as the ultra-strong light-matter coupling regime is being explored [36]. For example, the adjacent topic of cavity magnonics offers the promise of enhanced magneto-optical fields with tailored chirality at desired wavelengths [37; 38; 39], which might augment the degree of control as well as the amplitude of an optically induced transient magnetisation component.
Typical numerical models of laser-induced magnetisation dynamics describe non-thermal phenomena more often via transverse and precession dynamics or relaxation mechanisms [10; 11; 16; 40; 41], with any optically induced longitudinal magnetisation changes being usually neglected due to their smaller amplitudes or much faster equilibration times. Modifications of the magnetisation vector length are ultimately introduced as a result of the thermal effect of the laser pulse excitation in stochastic atomistic modelling [20; 42; 43] or by employing the Landau-Lifshitz-Bloch (LLB) equation [8; 17; 44]. Furthermore, the IFE is often taken into account as a local field of very large amplitude, of the order of 10 T and even above [43; 44; 45; 46; 47]. In previous micromagnetic works concerning the investigation of laser-induced domain-wall (DW) dynamics, the light-matter interaction primarily leads to the appearance of temperature gradients or spin-wave (SW) excitations which ultimately drive DWs via entropic and magnonic torques [48; 49; 50]. Generally in such situations, thermally induced transverse and precessional relaxation processes dominate the dynamics on the \(ps-ns\) timescale, where spatial non-uniformities in the anisotropy or exchange stiffness parameters become relevant. Concerning _ab-initio_ models, it has been argued that the IFE produces both a magnetisation torque [29] as well as a modification of the spin and orbital magnetic moments [27; 28]. In an attempt to reflect the results of the _ab-initio_ theory, in relation to the electrically induced Rashba-Edelstein effect in Mn\({}_{2}\)Au, recent atomistic spin-resolved models explicitly include the presence of the orbital magnetic moment and its interaction with the spin moment [51]. In this work, we investigate the possibility to convert a transient, non-thermal magnetisation contribution into a transverse DW motion in a ferromagnetic system. As pointed out by Zhang _et al._ [52], a femtosecond laser pulse excitation can help reduce the energy cost of current-induced DW dynamics in perpendicularly magnetised wires. In their work, it is shown that the presence of a helicity-dependent optical effect can reduce by 50% the threshold current density of a spin-orbit torque driving mechanism in ultra-thin Co/Ni/Co films, motivating from a technological point of view the potential importance of our study. The pure non-thermal route of driving magnetisation dynamics via longitudinal magnetisation changes thus remains largely or completely unexplored. Owing to the diversity of light-induced phenomena which can lead to the appearance of a transient magnetisation contribution, we approach the problem of DW dynamics without reference to a particular effect. The key idea of our study relies on the assumption that a non-thermal \(\delta\mathbf{m}(t)\) contribution manifests as a longitudinal distortion in the magnetic texture. The fundamental question which we try to answer here is the following: could pure longitudinal relaxation processes pass angular momentum to the transverse dynamics and lead to translational DW motion? In order to model this mechanism we make use of the micromagnetic LLB equation which naturally allows for the description of longitudinal relaxation processes. In the section immediately following this introduction, we describe the theoretical background of our micromagnetic approach. The third section of the article is divided into two parts. Firstly, in subsection "A" we approach the problem of DW dynamics in a 1D model.
It is assumed that the induced transient magnetisation acts instantaneously on the magnetic texture, all while disregarding any heating effects which might arise during a laser pulse excitation. The aim of this model is to present in a clear picture the mechanism of converting a longitudinal deformation of the magnetic texture into a DW motion based on the appearance of a non-thermal magnetisation gradient. A more realistic, full micromagnetic analysis of DW dynamics is employed in the second part "B" of the same results section. Here, we investigate the dynamics of an out-of-plane (OOP) DW in a thin magnetic stripe, also taking into account the presence of heating and the magnetostatic effects. The DW dynamics are investigated as a function of the laser fluence and amplitude of the induced magnetisation. Finally, we show how in the presence of the interfacial Dzyaloshinskii-Moriya interaction (DMI), it is possible to enhance the DW velocity and the maximum displacement achieved in a single-pulse excitation. The final section is reserved for conclusions and comments on the outlook of the developed model.

## II Numerical model

We consider a generic micromagnetic model describing a ferromagnetic sample discretised in a lattice of \(N\) identical cubic elements with lateral spacing \(\Delta\), governed by the LLB equation [53; 54; 55; 56]. Importantly, this formalism is valid at high temperatures and does not conserve the magnetisation length, thus allowing the modelling of longitudinal dynamics. The LLB equation reads: \[\frac{d\mathbf{m}_{i}}{dt}=-\gamma\mathbf{m}_{i}\times\mathbf{H}^{i}_{eff}+\gamma\alpha_{||}\frac{(\mathbf{m}_{i}\cdot\mathbf{H}^{i}_{eff})\mathbf{m}_{i}}{m_{i}^{2}}-\gamma\alpha_{\perp}\frac{\mathbf{m}_{i}\times(\mathbf{m}_{i}\times\mathbf{H}^{i}_{eff})}{m_{i}^{2}}. \tag{1}\] The reduced magnetisation vector \(\mathbf{m}_{i}\) is defined as \(\mathbf{m}_{i}=\mathbf{M}_{i}(T)/M_{s}(0)\), with \(M_{s}(0)\) being the saturation magnetisation at \(0\ K\) while \(\mathbf{M}_{i}(T)\) is the temperature-dependent magnetisation vector. The constant \(\gamma\) denotes the electron gyromagnetic ratio, while \(\alpha_{||}\) and \(\alpha_{\perp}\) are the dimensionless longitudinal and transverse damping parameters defined as: \[\alpha_{||}=\lambda\frac{2T}{3T_{c}},\quad\alpha_{\perp}=\lambda\ \begin{cases}(1-\frac{T}{3T_{c}})&T\lesssim T_{c},\\ \frac{2T}{3T_{c}}&T\gtrsim T_{c}.\end{cases} \tag{2}\] The proportionality factor \(\lambda\) is a measure of the intrinsic spin-flip scattering events which defines the coupling strength between the spin degrees of freedom and the thermal bath. The damping parameters \(\alpha_{||}\) and \(\alpha_{\perp}\) are directly proportional to the \(T/T_{c}\) ratio, where \(T\) is the bath temperature and \(T_{c}\) is the Curie point of the ferromagnetic sample. The dynamics of each macrospin \(i\) are governed by the total effective field \(\mathbf{H}^{i}_{eff}\), defined in our case as: \[\mathbf{H}^{i}_{eff}=\mathbf{H}^{i}_{ex}+\mathbf{H}^{i}_{ani}+\mathbf{H}^{i}_{dem}+\begin{cases}\frac{1}{2\tilde{\chi}_{||}}\left(1-\frac{m_{i}^{2}}{m_{e}^{2}}\right)\mathbf{m}_{i}&T\lesssim T_{c},\\ -\frac{1}{\tilde{\chi}_{||}}\left(1+\frac{3T_{c}}{5(T-T_{c})}m_{i}^{2}\right)\mathbf{m}_{i}&T\gtrsim T_{c}.\end{cases} \tag{3}\] The temperature-dependent \(m_{e}=M_{e}(T)/M_{s}(0)\) value is the macrospin vector length at thermal equilibrium, obtained here within the mean-field approximation (MFA) by solving the self-consistent Curie-Weiss equation. Assuming a classical spin system (\(S\to\infty\)) this evaluates to \(m_{e}=L(\beta J_{0}m_{e})\), where \(L\) is the Langevin function, \(J_{0}=3k_{B}T_{C}\) expresses the strength of the Heisenberg exchange coupling and \(\beta=1/k_{B}T\) is a measure of the thermal field.
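As a minimal illustration (a Python sketch of our own, not the authors' code; the only inputs are quantities defined above), the Curie-Weiss equation can be solved by fixed-point iteration:

```python
import numpy as np

def equilibrium_magnetisation(T, Tc, tol=1e-12, max_iter=100_000):
    """Solve m_e = L(beta * J0 * m_e) for a classical spin system.

    With J0 = 3 kB Tc and beta = 1/(kB T), the argument reduces to
    x = 3 (Tc / T) m_e, and L(x) = coth(x) - 1/x is the Langevin function.
    """
    if T >= Tc:
        return 0.0              # only the trivial solution above Tc
    m = 1.0                     # start from saturation
    for _ in range(max_iter):
        x = 3.0 * (Tc / T) * m
        m_new = 1.0 / np.tanh(x) - 1.0 / x
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# e.g. m_e at T = 0.91 Tc for the Tc of Table 1 (~0.38)
print(equilibrium_magnetisation(0.91 * 1480.0, 1480.0))
```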
Assuming a classical spin system (\(S\to\infty\)) this evaluates to: \(m_{e}=L(\beta J_{0}m_{e})\), where \(L\) is the Langevin function, \(J_{0}=3k_{B}T_{C}\) expresses the strength of the Heisenberg exchange coupling and \(\beta=1/k_{B}T\) is a measure of the thermal field. The \(\tilde{X}_{||}\) term denotes the reduced longitudinal susceptibility which in the MFA evaluates to: \[\tilde{X}_{||}(T)=\begin{cases}\frac{\mu_{i}BL^{\prime}}{1-\beta J_{0}L^{ \prime}}&T\lesssim T_{c},\\ \frac{\mu_{e}T_{c}}{J_{0}(T-T_{c})}&T\gtrsim T_{c},\end{cases} \tag{4}\] where \(L^{\prime}\) represents the derivative of the Langevin function with respect to the argument \(x=\beta J_{0}m_{e}\) and the \(\mu_{s}\) constant denotes the atomic magnetic moment. The exchange field \(\mathbf{H}^{i}_{ex}\) field term accounts for the micromagnetic exchange interaction between macrospins, being fundamentally defined and numerically approximated in the following manner [56]: \[\mathbf{H}^{i}_{ex}=\frac{2A(T)}{M_{s}(0)m_{e}^{2}}\nabla^{2}\mathbf{m}_{i}\simeq \frac{2A(T)}{M_{s}(0)m_{e}^{2}\Delta^{2}}\sum_{j=1}^{n_{i}}{(\mathbf{m}_{j}-\mathbf{m }_{i})}, \tag{5}\] with the summation taking into account all the \(n_{i}\) neighbours of each individual macrospin vector \(\mathbf{m}_{i}\) and the \(A(T)\) parameter indicating the temperature-dependent exchange stiffness. Given a Cartesian \(Oxyz\) frame of reference, we consider an uni-axial magnetocrystalline anisotropy with the easy-axis (EA) taken along the \(Oz\) direction. The anisotropy field \(\mathbf{H}^{i}_{ani}\) is thus defined as: \[\mathbf{H}^{i}_{ani}=-\frac{(m_{x}^{i}\mathbf{e}_{x}+m_{y}^{i}\mathbf{e}_{y})}{\tilde{X}_{ \perp}} \tag{6}\] For practical reasons, the reduced transverse susceptibility \(\tilde{X}_{\perp}\) is linked to a temperature dependent, anisotropy constant \(K(T)\) via the relationship \(\tilde{X}_{\perp}=[M_{s}(0)m_{e}]^{2}/2K(T)=M_{s}(T)^{2}/2K(T)\). The demagnetising field \(\mathbf{H}^{i}_{dem}\) is expressed as a discrete convolution sum of the demagnetising tensor \(\mathbf{N}(\mathbf{r}_{i}-\mathbf{r}_{j})\) and the reduced magnetisation vector \(\mathbf{m}_{j}\), taking into account the contribution of all macro-cells in the system: \[\mathbf{H}^{i}_{dem}=-\mu_{0}M_{s}(0)\sum_{j}N(\mathbf{r}_{i}-\mathbf{r}_{j})\cdot\mathbf{m}_{ j}. \tag{7}\] Since the demagnetising tensor depends only on the relative positions \(\mathbf{r}_{i,j}\) of the cells, its calculation is done only once at the start of each simulation employing the method of Newell _et al._[57] The overall calculation of the \(\mathbf{H}^{i}_{dem}\) field is carried out using the traditional approach based on the use of the Fast Fourier Transform (FFT) algorithm. This field contribution is only present for full micromagnetic calculations in a thin film geometry discussed in subsection "III.B". The last term in Eq. (3) constitutes the so-called longitudinal field \(\mathbf{H}^{i}_{lon}\), a measure of the competition between the atomic spin ordering and the disorder induced by the thermal bath, which ultimately dictates the length of the macroscopic vector \(\mathbf{m}\). The material parameters we use to model the ferromagnetic sample correspond to a generic Co system and can be consulted in Table 1. Their temperature dependence is obtained making use of the equilibrium magnetisation \(m_{e}\) discussed earlier in the context of the MFA. 
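To make the discretisation of Eq. (5) concrete, a short Python sketch for a 1D chain with free ends (our own illustration, not the authors' implementation) reads:

```python
import numpy as np

def exchange_field_1d(m, A, Ms0, me, dx):
    """Neighbour-sum approximation of the exchange field, Eq. (5):
    H_ex^i = 2 A(T) / (Ms(0) m_e^2 dx^2) * sum_j (m_j - m_i).

    m has shape (N, 3); edge cells have a single neighbour (free boundaries).
    """
    lap = np.zeros_like(m)
    lap[1:-1] = (m[2:] - m[1:-1]) + (m[:-2] - m[1:-1])
    lap[0] = m[1] - m[0]
    lap[-1] = m[-2] - m[-1]
    return 2.0 * A / (Ms0 * me**2 * dx**2) * lap
```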
## III Results

In this section we apply the micromagnetic model described previously to the problem of DW dynamics in a ferromagnetic sample, assuming the presence of a transient, non-thermal magnetisation contribution \(\delta\mathbf{m}(t)\) which longitudinally deforms the magnetic texture. Throughout this work, the \(\delta\mathbf{m}(t)\) term will only be assumed to act along the \(Oz\) direction. The time dependence of the aforementioned magnetisation term is disregarded initially in the case of the 1D model discussed in subsection "A", but later introduced in the case of the thin film geometry modelled in subsection "B". For these reasons, in the following pages we will be using either the \(\delta m_{z}\) or \(\delta m_{z}(t)\) notation when referring to the induced magnetisation component.

### 1D model

Let us consider first a spin chain of \(500\ nm\) in length which, at a constant temperature close to the Curie point, \(T=0.91T_{c}\), contains a \(180^{\circ}\) Néel wall in the \(Oxz\) plane - see Fig. 1(a). To verify that we obtain a correct static Néel DW configuration within our model, we compare the numerically extracted wall width against the temperature-dependent expression given by [61]: \[\delta(T)=\sqrt{\frac{2\tilde{\chi}_{\perp}A(T)}{M_{s}m_{e}^{2}}}=\sqrt{\frac{A(T)}{K(T)m_{e}}}, \tag{8}\] Numerically, we obtain the DW width by fitting the \(m_{z}\) magnetisation profile along the chain using the following equation: \[m_{z}=-m_{e}\tanh[(x-x_{0})/\delta_{N}(T)], \tag{9}\] where \(x\) denotes the individual position of the spins and \(x_{0}\) is the center of the wall. In our case, \(\delta_{N}(T)\) evaluates to \(12.62\ nm\) at \(T=0.91T_{c}\), in good agreement with the analytically obtained value of \(\delta(T)=12.64\ nm\). Although no magnetostatic interaction is taken into account in this spin chain study, we took two extra precautionary measures to ensure the reliability of our results: a) the discretisation size is chosen in such a way as to be smaller than the characteristic exchange length given by \(l_{ex}=\sqrt{\frac{2A(T)}{\mu_{0}M_{s}(T)^{2}}}=2.85\ nm\) [62]. The \(l_{ex}\) parameter is independent of \(T\) due to the scaling law of the temperature-dependent exchange stiffness taken as \(A(T)\sim m_{e}^{2}\). b) Although in the bulk of our results we used \(\Delta=2.5\ nm\), the calculations have also been carried out for \(\Delta=1\ nm\), producing no relevant qualitative nor quantitative differences in the results.

Figure 1: (a) A \(180^{\circ}\) Néel DW configuration equilibrated in a \(500\ nm\) chain system at \(T=0.91T_{c}\). By fitting the \(m_{z}\) magnetisation profile we extract a wall width of \(\delta_{N}(T)=12.62\ nm\) which compares well with the analytical value \(\delta(T)=12.64\ nm\) given by Eq. (8). (b) The effect of the longitudinal magnetisation contribution introduced via Eq. (10) on the modulus of each macro-spin vector in the chain. The black straight line denotes the situation before the deformation is introduced, wherein all spins are characterised by the same modulus \(|\mathbf{m}|\). Depending on their orientation with respect to the EA, the spins will either elongate or contract their length - graphically exemplified for two edge spins - leading to a magnetisation gradient \(\nabla|\mathbf{m}|\) across the DW, as can be seen in the highlighted region, obtaining the initial configuration at \(t=0\ ps\). Subsequently, \(\nabla|\mathbf{m}|\) will rapidly vanish on a \(fs\) time-scale as suggested by the snapshots at \(t=0.01\ ps\) and \(t=0.1\ ps\).

\begin{table} \begin{tabular}{c c} Parameter & Value \\ \hline \(M_{s}(0)\) & \(1400\ kA/m\) [59] \\ \(A(0)\) & \(10\ pJ/m\) \\ \(K(0)\) & \(0.45\ MJ/m^{3}\) [59] \\ \(T_{c}\) & \(1480\ K\) [60] \\ \end{tabular} \end{table} Table 1: Generic Co material parameters. The exchange stiffness parameter has been chosen smaller than the typical values found in the literature [59; 60] in order to decrease the \(0\ K\) DW width.

Starting from the equilibrium DW configuration displayed in Fig. 1(a), we introduce a magnetisation contribution which alters the macrospins' vector component along the EA direction in the following manner: \[m_{z}^{*i}=m_{z}^{i}+\delta m_{z}|\cos\theta|, \tag{10}\] where \(m_{z}^{i}\) and \(m_{z}^{*i}\) are the \(z\) magnetisation components of each macro-spin before and after the system is deformed; \(\delta m_{z}\) is the amplitude of the longitudinal magnetisation change, modulated by the angle \(\theta\), in turn defined by the relative orientation of the macro-spin vector with respect to the EA. Eq. (10) reflects the fact that our assumed non-thermal phenomenon, as in the case of the IFE, depends on the angle between the laser polarisation and the magnetisation direction [29]. Thus, we consider that for a laser polarised perpendicularly to the local spin direction there is no modification of the \(m_{z}\) component, i.e. \(|\delta m_{z}|=0\). The LLB model allows one to go beyond the rigid macrospin approximation of standard micromagnetism (\(|\mathbf{m}|=1\)). The longitudinal magnetisation change introduced in Eq. (10) will distinctly modify the length of each spin vector, which in thermodynamic equilibrium conditions would be uniform across the chain and readily available by solving the self-consistent Curie-Weiss equation discussed previously in Section II.
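The deformation of Eq. (10) and the width extraction of Eq. (9) are simple to reproduce numerically. The sketch below is our own illustration (the \(m_{e}\) value is an assumed, representative number near \(0.91T_{c}\), not a quoted result):

```python
import numpy as np
from scipy.optimize import curve_fit

N, dx = 200, 2.5e-9                    # 500 nm chain, 2.5 nm cells
x = np.arange(N) * dx
me, delta = 0.38, 12.6e-9              # illustrative m_e and wall width
x0 = x.mean()
mz = -me * np.tanh((x - x0) / delta)   # Eq. (9)
mx = me / np.cosh((x - x0) / delta)    # in-plane component of the Néel wall

# longitudinal deformation of Eq. (10): |cos(theta)| = |m_z| / |m|
dm_z = 0.08
cos_t = np.abs(mz) / np.sqrt(mx**2 + mz**2)
mz_star = mz + dm_z * cos_t            # spins elongate or contract along Oz
print(f"max change of m_z: {np.max(np.abs(mz_star - mz)):.3f}")

# recover the wall width by fitting the tanh profile
popt, _ = curve_fit(lambda x, c, d: -me * np.tanh((x - c) / d), x, mz, p0=(x0, 1e-8))
print(f"fitted width: {popt[1] * 1e9:.2f} nm")
```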
In Fig. 1(b) we evaluate the change in the local magnetisation modulus \(|\mathbf{m}|\) across our spin chain due to the magnetisation contribution introduced along \(Oz\). As can be seen in the initial configuration displayed at \(t=0\ ps\), depending on their relative orientation with respect to the EA and the strength of the deformation, the spin vectors will either elongate or contract their magnetisation length, leading to a \(\nabla|\mathbf{m}|\) gradient across the DW. Note that a similar magnetisation gradient might occur during an ultrafast laser pulse excitation of a magnetic system leading to a non-uniform temperature profile in the sample. This can arise either due to the spatial profile of the laser pulse itself or due to a differential absorption of circularly polarised light for the spin-up and spin-down expectation values on account of the magnetic circular dichroism effect [64; 43; 63]. Ultimately, all these effects translate into the appearance of "hot"/"cold" regions, where locally the magnetic parameters \(M_{s}\), \(K\), \(A\) are uneven through their temperature dependence. A pure optical route to induce a magnetisation gradient might be based on the IFE [27; 28; 29]. In this case, the temperature across the sample may remain uniform during excitation, as presented in our model. However, since the strength of the magneto-optical response in the presence of the IFE depends on the relative orientation between the light propagation axis and the local magnetic vector, magnetisation gradients are expected to arise in non-collinear spin systems, as for example in a DW configuration. Thus, although we study this effect at constant temperature, we shall use in the next paragraphs the "hot"/"cold" terminology to describe the regions of "small"/"large" magnetisation. Moreover, in similar fashion to the spin-Seebeck driven motion [48], we report the displacement of our Néel DW towards the "hot" region of the system, that is, the region where the macro-spin vectors have reduced their magnetisation length. In Fig. 2(a), one can track the DW motion in time for three different transient magnetisation contributions. The position of the wall is tracked by identifying its center of mass from the \(m_{x}\) magnetisation profile using the following relationship [65]: \[DW_{pos}=\sum_{i}m_{x}^{i}x^{i}/\sum_{i}m_{x}^{i}, \tag{11}\] where \(x^{i}\) represents a macrospin's position along the chain. In all situations investigated, the dynamic response can be separated into three distinct regions: I) Firstly, a rapid displacement is obtained on the \(fs\) timescale. Governed by longitudinal relaxation effects, the induced change in magnetisation is converted into a transverse motion of the DW in the direction of smaller magnetisation length. During this process, the longitudinal field will revert the spin vectors' length back to the initial value before the transient magnetisation is introduced, thus neutralising the observed \(\nabla|\mathbf{m}|\) gradient. In Fig. 1(b) one can see how \(|\mathbf{m}|\) varies across the chain at different moments in time for the smallest deformation considered. Interestingly, already at the \(0.1\ ps\) mark the magnetisation gradient nearly vanishes and the spin vectors regain their original lengths. II) No more longitudinal relaxation processes take place in the absence of \(\nabla|\mathbf{m}|\). Furthermore, since the dynamics timescale is still too short for any relevant transverse or precessional torques to act, the DW practically preserves its acquired position for several \(ps\). III) Beyond 10 ps, the remaining excess energy induced by the presence of the transient magnetisation is invested in much slower transverse and precessional relaxation processes which promote an oscillating behaviour of the DW on the \(ns\) timescale until its final equilibrium position is reached. As mentioned above, the direction of motion is towards the so-called "hot" region, as it also takes place in DW motion driven by temperature gradients. Previously, Schlickeiser _et al._ [49] have explained the motion of DWs in the presence of temperature gradients based on the existence of the so-called entropic torque, which induces dynamics in a direction given by the \(-\nabla A(T)\) gradient. Interestingly, in their analytical treatment of the DW dynamics developed in the framework of the LLB equation, they also include a driving mechanism due to the magnetisation gradient \(-\nabla|\mathbf{m}|\) term - see equations 8 and 9 found in the Supplementary Material of [49] - but conclude its effect is small in comparison to the former \(-\nabla A\) contribution. In their situation, the dynamics were investigated on a longer time-scale where any longitudinal relaxation processes have already taken place.
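The centre-of-mass tracking of Eq. (11), used for all the displacement curves discussed here, amounts to a one-line weighted average (a sketch of ours, assuming `mx` and `x` are NumPy arrays over the chain):

```python
import numpy as np

def dw_position(mx, x):
    """Eq. (11): DW_pos = sum_i m_x^i x^i / sum_i m_x^i.
    The m_x component peaks at the wall centre for a Néel profile,
    so it acts as a positive weight localised on the wall."""
    return np.sum(mx * x) / np.sum(mx)
```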
In order to understand the driving mechanism, we evaluate the time evolution of the exchange energy, using the following numerical approximation: \[E_{ex}=A(T)V\sum_{i}\sum_{j}\left(\frac{\mathbf{n}_{j}-\mathbf{n}_{i}}{\Delta^{2}}\right)^{2}, \tag{12}\] where the counters \(i\), \(j\) loop over all the macro-spins in the system and their individual neighbours respectively, and \(V=\Delta^{3}\) is the volume of the cubic macro-cell. In standard micromagnetics the macrospin's vector length is always conserved (\(|\mathbf{m}|=1\)). To recover a similar definition, we normalise the \(\mathbf{m}\) vector to the equilibrium magnetisation value \(m_{e}\) and define \(E_{ex}\) making use of the variable \(\mathbf{n}=\mathbf{m}/m_{e}=\mathbf{M}(T)/M_{s}(T)\), where \(|\mathbf{n}|\neq 1\). In Fig. 2(b) we represent the time variation of the micromagnetic exchange energy \(E_{ex}\) stored in the chain in all three situations. Comparing the initial and final states, one can appreciate qualitatively the amount of exchange energy introduced in the system due to the elongation/contraction of the spins. Thus, the DW distortion produces a large torque on neighbouring spins through their elongation/contraction. Beyond the \(10\ ps\) time threshold, the oscillating behaviour described earlier is also clearly observed in the exchange energy. In this timescale, however, \(|\mathbf{n}|=1\).

Figure 2: (a) DW displacement in time for different non-thermal, transient magnetisation contributions along \(Oz\). The black solid line indicates the initial position of the wall. The DW position is tracked based on the center of mass method: \(DW_{pos}=\sum_{i}m_{x}^{i}x^{i}/\sum_{i}m_{x}^{i}\). (b) The time variation of the micromagnetic exchange energy stored in the chain for all three different scenarios. (c) Instantaneous DW velocity extracted as the time derivative of the data presented in subfigure (a). The inset is adjusted to fit the points beyond the \(10\ ps\) time mark. (d) Time dependence of the Néel wall magnetisation profile along \(Ox\) during the dynamics induced by a transient magnetisation of amplitude \(\delta m_{z}=0.16\). The solid vertical line defines the initial DW position while the dotted line marks an intermediate state at which the wall finds itself approximately \(3\ nm\) away from the starting point according to Eq. (11). The x axes in subplots (a), (b) and (c) are logarithmic.
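For reference, the bond sum of Eq. (12) can be evaluated on the chain as follows (a sketch of ours, mirroring the formula exactly as printed; the double sum visits each nearest-neighbour bond twice):

```python
import numpy as np

def exchange_energy_1d(m, me, A, dx):
    """Exchange energy of Eq. (12) for a chain, using n = m / m_e."""
    n = m / me                          # shape (N, 3); |n| = 1 only at equilibrium
    V = dx**3                           # volume of the cubic macro-cell
    bonds = np.sum(((n[1:] - n[:-1]) / dx**2) ** 2)
    return A * V * 2.0 * bonds          # factor 2: the i, j double sum counts each bond twice
```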
A further point concerns the timescale and the huge instantaneous velocity acquired by the DW during its initial displacement - see Fig. 2(c). Indeed, at this timescale the change of \(\mathbf{m}(t)\) is governed by the longitudinal field \(\mathbf{H}_{lon}\) - a consequence of the relaxation process governed by the internal Heisenberg interactions between an ensemble of atomic spins. On average, the disorder present at the atomic level will be translated at the macroscopic scale as a change in the instantaneous magnetisation \(\mathbf{m}(t)\), a representation of the competition between internal exchange fields and thermal fluctuations. The Heisenberg exchange coupling is the strongest known interaction in any magnetic system, giving rise to ultra-fast dynamics on the \(fs\) timescale. For our chosen system this internal exchange field \(J_{0}/\mu_{s}\) evaluates to approximately \(1.68\times 10^{4}\ T\) for a simple cubic cell with the lattice constant \(a=0.25\ nm\). As evidenced in subplot (b) of Fig. 1, the induced magnetisation gradient vanishes around the \(0.1\ ps\) time threshold. Up to this point, dominant longitudinal relaxation processes will take place under large \(\mathbf{H}_{lon}\) fields, giving rise to the enormous instantaneous velocities seen in Fig. 2(c). These large velocity values arise due to the ultrafast timescale imposed by the longitudinal field. Any displacement of just a few \(nm\) on the \(fs\) timescale will lead to DW velocities in the \(km/s\) domain. The analysis of the magnetisation profile (\(m_{x}\) component) in Fig. 2(d) shows that the displacement of the DW center of mass results from the accommodation of the DW profile to the equilibrium one on the timescale of longitudinal relaxation. The snapshot at \(t=0.1\ ps\) presents the largest shape deviation from the initial configuration, corresponding to the maximum DW displacement of approximately \(5\ nm\) recorded in Fig. 2(a). As discussed earlier, the excess energy introduced in the system will be invested on the \(ps-ns\) timescale in transverse and precession-like torques which will finally lead to smaller velocities, since the dynamics are then governed by the much smaller anisotropy \(\mathbf{H}_{ani}\) and micromagnetic exchange \(\mathbf{H}_{ex}\) fields with \(|\mathbf{n}|=1\). An oscillating behaviour of the DW position is observed in the subsequent time snapshots taken at \(t=100\ ps\) and \(t=1000\ ps\). At this latter time threshold, a clear shift in the \(m_{x}\) curve peak with respect to the initial state can be easily identified as direct evidence of the wall motion we have claimed. The DW will display onwards a back-and-forth motion until final equilibration is reached. The oscillations mentioned so far become more obvious turning our attention to the \(m_{y}\) magnetisation profile discussed in the Supplemental Material in Fig. S1 [66]. Lastly, evidence of the DW motion can also be extracted from the \(m_{z}\) magnetisation profile discussed in the same figure in the Supplemental Material [66]. To explore the possibility of augmented DW dynamics, we investigate the displacement of the Néel wall as a function of the applied temperature \(T\) and the amplitude \(\delta m_{z}\) of the induced transient magnetisation for two different values of the microscopic damping \(\lambda\). For any given set of parameters, we run the LLB dynamics for several \(ns\), extracting the final DW displacement after complete equilibration is reached. The result of this parameter sweep can be seen in Fig. 3. For any transient magnetisation amplitude \(\delta m_{z}\), and irrespective of \(\lambda\), the final displacement will be increased as we get closer to the Curie point. Interestingly, a smaller microscopic damping of \(\lambda=0.01\) affects the end result to a very small extent, leading to a slightly larger final displacement at \(T=0.98T_{c}\) and \(\delta m_{z}=0.16\). The enhancement in displacement achieved when the temperature is increased can be explained by referring solely to the longitudinal relaxation processes.
The LLB equation allows one to define a longitudinal relaxation time as [67; 56]: \[\tau_{||}=\frac{\tilde{\chi}_{||}}{\gamma\alpha_{||}}=\frac{3\tilde{\chi}_{||}}{2\gamma}\frac{T_{c}}{\lambda T} \tag{13}\] As a general effect characteristic of second-order phase transitions [68; 55], the longitudinal relaxation time experiences a critical slowing down when approaching the Curie temperature \(T_{c}\). The main factor responsible for the overall increase of the \(\tau_{||}\) relaxation time is the divergence of the parallel susceptibility \(\tilde{\chi}_{||}\) near the phase transition point, which dominates over the \(1/\lambda T\) dependence resulting from the longitudinal damping parameter \(\alpha_{||}\). In our case we can turn the "slow" longitudinal dynamics at elevated temperatures to our advantage: the key driving mechanism in this study is the magnetisation gradient \(\nabla|\mathbf{m}|\) we induce across the DW through a longitudinal deformation of the system. As seen in Figs. 1(b) and 2(a), as long as \(\nabla|\mathbf{m}|\) is preserved the wall will continue its displacement until temporarily reaching a halt when \(\nabla|\mathbf{m}|=0\). Thus, increasing the lifetime of \(\nabla|\mathbf{m}|\) will lead to larger DW displacements overall. The behaviour of \(\tau_{||}\) also explains the slight increase of the DW displacement for a smaller damping value. The relaxation, or lifetime, of the induced magnetisation is governed here by the thermal longitudinal dynamics and consequently slows down near the Curie point. On the contrary, this effect could be parameterised by a stand-alone relaxation parameter, which could result in a magnetisation gradient \(\nabla|\mathbf{m}|\) that outlives the thermal longitudinal dynamics of the studied optical effect, thus leading to significant DW displacements even away from \(T_{c}\).

Figure 3: Final DW displacement achieved after complete equilibration as a function of the temperature \(T\) and the amplitude \(\delta m_{z}\) of the induced magnetisation for two different microscopic damping values (a) \(\lambda=0.1\) and (b) \(\lambda=0.01\).
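The critical slowing down of Eq. (13) is easy to verify numerically. The sketch below is our own; the atomic moment \(\mu_{s}\approx 1.7\,\mu_{B}\) is an assumed, Co-like value that is not quoted in the text:

```python
import numpy as np

MU_B, KB, GAMMA = 9.274e-24, 1.381e-23, 1.761e11   # J/T, J/K, rad s^-1 T^-1

def tau_parallel(T, Tc, lam, mu_s=1.7 * MU_B):
    """tau_|| = chi_|| / (gamma alpha_||), combining Eqs. (4) and (13) for T < Tc."""
    J0, beta = 3.0 * KB * Tc, 1.0 / (KB * T)
    m = 1.0
    for _ in range(20_000):                         # Curie-Weiss fixed point
        m = 1.0 / np.tanh(beta * J0 * m) - 1.0 / (beta * J0 * m)
    x = beta * J0 * m
    L_prime = 1.0 / x**2 - 1.0 / np.sinh(x) ** 2
    chi_par = mu_s * beta * L_prime / (1.0 - beta * J0 * L_prime)
    return chi_par / (GAMMA * lam * 2.0 * T / (3.0 * Tc))

for f in (0.84, 0.91, 0.98):                        # tau_|| grows towards Tc
    print(f, tau_parallel(f * 1480.0, 1480.0, lam=0.1))
```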
To summarise, in this subsection we demonstrated the possibility to convert a non-thermal, transient magnetisation contribution, followed by longitudinal relaxation processes, into a subsequent transverse DW motion in a ferromagnetic system. The fundamental mechanism at the origin of the effect is the appearance of a magnetisation gradient \(\nabla|\mathbf{m}|\) along the DW which allows for a displacement towards the "hot" region, that is, the area of small magnetisation. The total distance covered by the DW is dependent on the lifetime of the induced gradient as well as the amplitude of the magnetisation contribution. Increasing the temperature of the bath \(T\), one can delay the relaxation of the macro-spin vectors towards their equilibrium lengths, allowing for a prolonged displacement of the wall.

### Full micromagnetic model: stripe system

Moving away from the simple chain system discussed in the previous subsection, we further investigate the possibility of DW dynamics in a more complex situation by taking into account also the heating produced by the laser. We consider a stripe geometry, introducing also the magnetostatic contribution to the total effective field. Given the current perpendicular magnetic recording paradigm, an OOP anisotropy is preferred. For the material parameters presented in Table 1, the competition between the OOP anisotropy and the magnetostatic interactions produces a spin-reorientation transition below room temperature. To circumvent this, we augment the magnetocrystalline anisotropy of the system by increasing the \(K(0)\) constant to \(2.25~{}MJ/m^{3}\). We consider a \(1500~{}nm\times 50~{}nm\times 1~{}nm\) stripe system discretised in cubic cells of lateral size \(\Delta=1~{}nm\). While in the chain model the exchange length parameter \(l_{ex}=\sqrt{\frac{2A(T)}{\mu_{0}M_{s}(T)^{2}}}\) was temperature independent due to the assumed MFA \(A(T)\sim m_{e}^{2}\) scaling law, here we use a more realistic scaling (\(A(T)\sim m_{e}^{1.8}\)) [60] which leads to an \(l_{ex}\) that increases as a function of temperature, starting from the \(0~{}K\) value of \(2.85~{}nm\), ensuring that the discretisation size \(\Delta\) is properly chosen. At room temperature (\(T=300~{}K\)), the DW configuration is of a Bloch type, as seen in Fig. 4 at \(t=0\). Starting from this initial configuration, we consider a spatially uniform (\(\nabla T=0\)) laser-pulse heating of the thin film sample on the \(fs\) time-scale. This rapid heating of the system is described by the two-temperature model (TTM) [69; 70; 71] which couples the electron and phonon baths via: \[C_{e}\frac{dT_{e}}{dt}=-G_{ep}(T_{e}-T_{ph})+P(t)-C_{e}\frac{T_{e}-T_{room}}{\tau_{th}}, \tag{14}\] \[C_{ph}\frac{dT_{ph}}{dt}=G_{ep}(T_{e}-T_{ph}), \tag{15}\] where \(C_{e}\) and \(C_{ph}\) are the electron and phonon volumetric heat capacities and \(G_{ep}\) is a measure of the coupling between the two baths. Working within the free-electron approximation, the specific heat \(C_{e}\) is taken as \(C_{e}=\gamma_{e}T_{e}\), where \(\gamma_{e}\) is a material-dependent proportionality factor. Assuming \(T_{ph}\) to be larger than the Debye temperature, \(C_{ph}\) will be considered constant. A simple Newton cooling law is employed to model the heat transfer with the external medium (\(T_{room}\)) at a rate given by \(\tau_{th}\). The time-dependent laser pulse power is given by: \[P(t)=\frac{A_{ab}F}{1.0645t_{p}2\sqrt{\ln 2}\delta_{opt}}\exp\left[-\left(\frac{t-t_{0}}{t_{p}}\right)^{2}\right], \tag{16}\] where \(A_{ab}\) is an absorption coefficient, \(F\) is the laser fluence at full width half maximum (FWHM), \(\delta_{opt}\) is the optical penetration depth, \(t_{p}\) is the pulse duration and \(t_{0}\) denotes the moment of time when the laser pulse power reaches its peak amplitude. The numerical constants arise due to the FWHM relationship between the laser fluence \(F\) and intensity \(I\): \(F=(\tau/2)\sqrt{\pi/\ln 2}I\approx 1.0645\tau I\), where \(\tau=2t_{p}\sqrt{\ln 2}\). Here, \(t_{p}\) is set at \(200~{}fs\). The TTM parameters we employ are listed in Table 2. The \(\gamma_{e}\), \(C_{e}\) and \(G_{ep}\) values correspond to a generic Co sample and have been extracted from [72; 73], while the \(\tau_{th}\) parameter has been set to a standard value of \(50~{}ps\). The absorption coefficient is set to a small value of \(25\%\), similar to the work reported in [8], and \(\delta_{opt}\) is set equal to the thin film thickness of \(1~{}nm\).
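A minimal explicit-Euler integration of Eqs. (14)-(16) with the parameters of Table 2 (our own sketch, not the authors' code; step size and total integration time are illustrative choices) looks as follows:

```python
import numpy as np

gamma_e, C_ph = 5.53e3, 2.07e6          # J m^-3 K^-2, J m^-3 K^-1 (Table 2)
G_ep, tau_th = 4.05e18, 50e-12          # J s^-1 m^-3 K^-1, s
A_ab, F, d_opt = 0.25, 30.0, 1e-9       # absorption, 3 mJ/cm^2 = 30 J/m^2, m
t_p, t0, T_room = 200e-15, 600e-15, 300.0

def P(t):
    """Laser power density of Eq. (16)."""
    pref = A_ab * F / (1.0645 * t_p * 2.0 * np.sqrt(np.log(2.0)) * d_opt)
    return pref * np.exp(-((t - t0) / t_p) ** 2)

Te = Tph = T_room
dt, steps = 1e-16, 200_000              # 20 ps of dynamics
for i in range(steps):
    t = i * dt
    Ce = gamma_e * Te                   # free-electron specific heat, Ce = gamma_e Te
    Te += dt * (-G_ep * (Te - Tph) + P(t) - Ce * (Te - T_room) / tau_th) / Ce
    Tph += dt * G_ep * (Te - Tph) / C_ph
print(f"Te = {Te:.0f} K, Tph = {Tph:.0f} K after 20 ps")
```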
In a similar fashion to John _et al._ [8], we further assume the laser-matter interaction gives rise to a transient magnetisation in the system along the \(\mathbf{k}\) vector direction, parallel in this case to the EA (\(Oz\)) of the sample. We model the laser-induced magnetisation in the following manner: \[m_{z}^{i*}=m_{z}^{i}+\delta m_{z}^{0}\exp\left[-\left(\frac{t-t_{0}}{t_{p}}\right)^{2}\right]=m_{z}^{i}+\delta m_{z}(t) \tag{17}\] where \(m_{z}^{i}\) is the equilibrium \(z\) magnetisation component of each macro-spin in the system and \(\delta m_{z}(t)\) is the time-dependent, laser-induced, non-thermal contribution of amplitude \(\delta m_{z}^{0}\), assumed uniform across the sample. Unlike the chain model we discussed previously, we have dropped for simplicity the \(|\cos\theta|\) angle dependence - see Eq. (10) - without changing the qualitative behaviour of the investigated DW dynamics. The temporal profile of the transient, laser-induced magnetisation \(\delta m_{z}(t)\) is assumed to precisely follow the time dependence of the laser pulse power given in Eq. (16).

\begin{table} \begin{tabular}{c c} Parameter & Value \\ \hline \(\gamma_{e}\) & \(5.53\times 10^{3}~{}Jm^{-3}K^{-2}\) \\ \(C_{ph}\) & \(2.07\times 10^{6}~{}Jm^{-3}K^{-1}\) \\ \(G_{ep}\) & \(4.05\times 10^{18}~{}Js^{-1}m^{-3}K^{-1}\) \\ \(\tau_{th}\) & \(50~{}ps\) \\ \end{tabular} \end{table} Table 2: TTM parameters for a generic Co sample as extracted from [72; 73].

Fig. 4 presents an example of our results, considering for the purpose of visualisation a large transient magnetisation component of \(\delta m_{z}^{0}=0.2\). The optically induced dynamics are calculated using a pulse of fluence \(F=3.0~{}mJ/cm^{2}\) and duration \(t_{p}=200~{}fs\). The laser power is set to reach its peak value at \(t_{0}=3t_{p}=600~{}fs\). In subplots (d), (e) and (f) of Fig. 4, we display the state of the system at \(t=600~{}fs\), the exact moment of time when the laser pulse power reaches its peak amplitude. Due to the presence of the transient magnetisation \(\delta m_{z}(t)\) induced on the time-scale of the ultra-fast heating, a \(\nabla|\mathbf{m}|\) gradient is observed in subplot (f). As a consequence of the longitudinal dynamics, a positive \(\delta m_{z}(t)\) contribution elongates the vector length of the down spins ("cold" region on the left side of the stripe) while it contracts the length of the up spins ("hot" region on the right side of the stripe).

Figure 4: DW dynamics under the effect of a spatially uniform, ultra-fast laser pulse heating (\(F=3~{}mJ/cm^{2}\), \(t_{p}=200~{}fs\), \(t_{0}=3t_{p}=600~{}fs\)) and a transient magnetisation component of amplitude \(\delta m_{z}^{0}=0.2\) induced along the \(Oz\) direction - see equations (16) and (17). From left to right we display the \(m_{x}\), \(m_{z}\) and \(|\mathbf{m}|\) magnetisation components along the thin film. For better visualisation we crop the system approximately between \(x=640~{}nm\) and \(x=860~{}nm\). The starting position of the wall is found around \(x_{DW}^{0}=750~{}nm\), the middle region of the stripe. Subplots (a), (b) and (c) display the initial configuration of the sample at \(t=0~{}ps\). Subplots (d), (e) and (f) showcase the magnetisation at the exact time when the laser power reaches its peak amplitude (\(t=t_{0}=3t_{p}=600~{}fs\)). In subplots (g), (h) and (i) we display the magnetic configuration at \(t=100~{}ps\). The short-lived transient magnetisation induced on the \(fs\) time-scale leads to the generation of SWs which propagate on the \(ps-ns\) time-scale - see subplot (g). As seen in (i), the spin vectors recover their original magnetisation length and the \(\nabla|\mathbf{m}|\) gradient disappears. Finally, subplots (j), (k) and (l) showcase the state of the system at \(8000~{}ps\), shortly before final equilibrium is reached. At this point, the DW has displaced approximately \(35~{}nm\) away from its initial position as indicated in (k). The white dashed lines in (b), (e), (h) and (k) display the initial DW position. In subplot (b) we indicate six different points (crossed marks) at which the SW signal is analysed.

Figure 5: Center-of-mass DW dynamics extracted along the bottom edge of the thin film system (\(y=1~{}nm\)) in Fig. 4 under the effect of a spatially uniform, ultra-fast laser pulse heating (\(F=3~{}mJ/cm^{2}\), \(t_{p}=200~{}fs\), \(t_{0}=3t_{p}=600~{}fs\)) and a transient magnetisation component of amplitude \(\delta m_{z}^{0}=0.2\) induced along the \(Oz\) direction. The inset figure displays the electron and phonon temperature dynamics as well as the \(\delta m_{z}\) variation in time.

Compared to the chain case, both non-thermal and thermal longitudinal relaxation are now present. Moreover, the thermal longitudinal effects act on a larger timescale (up to \(ns\)) due to the elevated electron temperatures, compared to the non-thermal ones, which are limited to a \(fs\)-\(ps\) time-scale as seen in the inset of Fig. 5. Due to the dynamical behaviour of the transient magnetisation, which adapts to the intensity of the laser pulse on a timescale larger than the longitudinal relaxation time, the DW displacement on the \(fs\) timescale is almost null and can be attributed more to a small spatial re-arrangement. A clearly visible DW motion can be observed within several \(ps\) from the start of the pulse - see Fig. 5. The corresponding velocities are in the range of \(1-10\ km/s\) - large, yet smaller than the ones observed in the chain case. Interestingly, similar to the 1D case, after this initial displacement the DW is not moving around the \(10\ ps\) time stamp, since by then all non-thermal longitudinal effects have stopped and the transverse motion requires a larger time frame to develop. More surprising is that a much larger and oscillating DW displacement is visible at an even larger timescale, see \(t=100\ ps\), when the electron temperature and magnetisation magnitude have already almost reached their equilibrium values. Importantly, at the \(t=100\ ps\) time mark we observe the presence of SWs propagating along the thin film sample, as seen in subplot (g) of Fig. 4. The emission of these SWs is due to the adjustment of the DW profile to its equilibrium shape following the initial energy input introduced by the transient magnetisation contribution, with the excess energy emitted in the form of SWs. Finally, we record the state of the system at \(t=8\ ns\), shortly before final equilibrium is reached. The magnetisation profile slowly returns to its original shape, as can be seen comparing the first and last three sets of plots in Fig. 4. At this point, the DW has displaced approximately \(35\ nm\) away from its initial position, as indicated in subfigure (k). Thus, in similar fashion to the chain model presented earlier, a transient magnetisation induced in the system followed by longitudinal relaxation of the macro-spin vectors on the \(fs\) timescale leads to a DW displacement on the \(ps-ns\) timescale towards the "hot" region of the sample - in this case denoting the area where the spin vectors have decreased in length, keeping in mind that no temperature gradients arise in our simulation. The induced transient magnetisation gives rise to the emission and propagation of SWs (analogous to the oscillations discussed in Fig. 2) which drive the DW long after the \(\nabla|\mathbf{m}|\) gradient has vanished.
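In practice, the transient term of Eq. (17) is just a Gaussian envelope added to the \(z\) component of every macro-cell at each integration step; a minimal sketch (ours):

```python
import numpy as np

def delta_mz(t, dm0=0.2, t_p=200e-15, t0=600e-15):
    """Transient contribution of Eq. (17), sharing the envelope of Eq. (16)."""
    return dm0 * np.exp(-((t - t0) / t_p) ** 2)

# inside the LLB time loop, for a (ny, nx) array mz of z components:
# mz_star = mz + delta_mz(t)
```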
DW dynamics have been widely investigated, both theoretically and experimentally, also in the presence of the interfacial DMI [74; 75]. While this antisymmetric exchange contribution has been shown to be detrimental to the field-driven DW dynamics of in-plane magnetised ferromagnet/heavy-metal nano-wires [76], in perpendicularly magnetised systems the DMI is seen to be capable of reducing the Walker breakdown field, thus enhancing the DW velocities [77; 78]. On the other hand, magnon-driven DW motion in conjunction with DMI-induced linear momentum transfer and in the presence of an easy-plane anisotropy is shown to be more efficient in driving DWs compared to the angular momentum transfer mechanism [79]. Thus, depending on the geometry of the system and the driving force, an interfacial DMI can have different effects on the dynamics of DWs. In what follows, we consider the very same experiment graphically described in Fig. 4, only now we include the presence of an interfacial DMI contribution, which can arise for example due to the coupling to a heavy-metal layer such as Pt. Thus, we add an additional term in the total effective field acting on each macro-spin in the following manner [80; 81]: \[\mathbf{H}_{DMI}=-\,\frac{2D(T)}{M_{s}(0)m_{e}}\left[\nabla m_{z}-(\nabla\cdot\mathbf{m})\hat{\mathbf{z}}\right] \tag{18}\] The temperature scaling of the DMI constant is set to \(D(T)=D(0)m_{e}^{2}\) (MFA), where \(D(0)=2\ mJ/m^{2}\) is taken at \(0\ K\).
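A possible finite-difference evaluation of Eq. (18) on the 2D film is sketched below (our own illustration; the edge padding only approximates the free boundary, since the exact DMI boundary condition is more involved):

```python
import numpy as np

def dmi_field(m, D, Ms0, me, dx):
    """Interfacial DMI field of Eq. (18) on a grid of shape (ny, nx, 3)."""
    pad = np.pad(m, ((1, 1), (1, 1), (0, 0)), mode="edge")
    d_dx = (pad[1:-1, 2:] - pad[1:-1, :-2]) / (2.0 * dx)    # x derivatives
    d_dy = (pad[2:, 1:-1] - pad[:-2, 1:-1]) / (2.0 * dx)    # y derivatives
    bracket = np.zeros_like(m)
    bracket[..., 0] = d_dx[..., 2]                          # (grad m_z)_x
    bracket[..., 1] = d_dy[..., 2]                          # (grad m_z)_y
    bracket[..., 2] = -(d_dx[..., 0] + d_dy[..., 1])        # -(div m) z-hat
    return -2.0 * D / (Ms0 * me) * bracket
```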
In Fig. 6, we display the time evolution of our system under the action of the same laser-pulse excitation and transient magnetisation \(\delta m_{z}(t)\) as in the previous case. Due to the presence of the DMI interaction, the \(300\ K\) equilibrium DW profile used as the initial state at \(t=0\ ps\) has now transitioned to a Néel configuration, as seen in subfigures (a), (b) and (c). In (d), (e) and (f) we once more display the state of the system exactly at the moment of time when the laser power reaches its peak amplitude (\(t_{0}=3t_{p}=600\ fs\)). As seen in (f), as a consequence of the induced \(\delta m_{z}(t)\) contribution, a magnetisation gradient is formed across the wall. In comparison to the no-DMI case, the subsequent dynamics on the \(ps-ns\) timescale result in an asymmetric DW profile, as displayed in subplots (g), (h) and (i) at \(t=100\ ps\). The DW dynamics are accompanied also in this case by SW emission and propagation along the thin film sample, as can be seen for example in the \(m_{x}\) component in (g). Finally, in (j), (k) and (l) we display the state of the system just before final equilibrium is reached and the full recovery of the initial DW magnetisation profile is obtained. Interestingly, the observed dynamics are enhanced in the presence of the interfacial DMI: equilibrium is reached around the \(1000\ ps\) threshold, while the final displacement we obtain increases to approximately \(50\ nm\).

Figure 6: The numerical experiment of Fig. 4 is repeated in the presence of an interfacial DMI. From left to right we display the \(m_{x}\), \(m_{z}\) and \(|\mathbf{m}|\) magnetisation components along the stripe, where for better visualisation we have cropped the edges of the system. Subfigures (d), (e) and (f) correspond to the moment of time when the laser pulse power reaches its peak amplitude (\(t_{0}=3t_{p}=600\ fs\)). Figures (g), (h) and (i) display the magnetisation configuration at \(t=100\ ps\). In figures (j), (k) and (l) we showcase the state of the system shortly before final equilibrium is reached. In the presence of the DMI, the DW dynamics are enhanced: in just \(1000\ ps\) the total displacement achieved is equal to approximately \(50\ nm\). The white dashed lines in (b), (e), (h) and (k) indicate the initial DW position. In subplot (b) we indicate six different points (crossed marks) at which the SW spectrum is analysed.

Fig. 7 presents the final displacement as a function of the maximum induced magnetisation for various laser pulse fluences. Irrespective of the presence/absence of DMI, the total distance covered by the DW increases linearly as a function of \(\delta m_{z}^{0}\), a similar behaviour to the chain model results obtained at temperatures \(T=0.84\ T_{c}\) and \(T=0.91\ T_{c}\), as seen in Fig. 3. However, in our stripe simulations we do not obtain a parabolic-like variation of the displacement with respect to the \(\delta m_{z}^{0}\) variable, as seen in the chain case at \(T=0.98\ T_{c}\). More interestingly, in the presence of the interfacial DMI contribution, the total distance is larger compared to the no-DMI situation for any (\(F\), \(\delta m_{z}^{0}\)) pair of parameters, an effect which graphically is more visible beyond the \(\delta m_{z}^{0}=0.04\) limit.

Figure 7: Final DW displacement achieved under the action of an ultra-fast, uniform laser pulse heating and a transient magnetisation induced along \(Oz\). The displacement is graphically represented against the laser pulse fluence \(F\) and the amplitude \(\delta m_{z}^{0}\) of the induced magnetisation. The parameter sweep is considered both in the absence (a) as well as in the presence (b) of an interfacial DMI contribution - see Eq. (18). In both situations, the microscopic damping parameter is set to \(\lambda=0.1\).

Unlike the chain model discussed earlier, where an instantaneous magnetisation gradient leads to a rapid DW displacement under dominant longitudinal relaxation processes on the \(fs-ps\) time-scale, in the case of the more realistic stripe modelling we observed a significant SW emission. To expand our understanding of the observed DW dynamics and to understand the role of the interfacial DMI contribution, we analysed the SW spectrum at pairs of points in the "cold" and "hot" regions, taken at an equal distance with respect to the initial DW position, along the bottom edge of the stripe (\(y=1\ nm\)), in the middle region (\(y=25\ nm\)) as well as the top edge (\(y=50\ nm\)) - see subplot (b) in both Fig. 4 and 6. At each of these points, we extract the time-dependent \(m_{x}(t)\) magnetisation component along \(Ox\), perform the FFT and analyse the Power Spectral Density (PSD).
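The spectral analysis itself reduces to an FFT of the local, mean-subtracted \(m_{x}(t)\) trace; a sketch of one possible PSD estimate (ours, not necessarily the exact normalisation used for Fig. 8):

```python
import numpy as np

def psd(mx_t, dt):
    """One-sided power spectral density of a probed m_x(t) signal."""
    sig = mx_t - np.mean(mx_t)               # remove the static background
    spec = np.fft.rfft(sig)
    freq = np.fft.rfftfreq(len(sig), d=dt)   # frequencies in Hz
    return freq, np.abs(spec) ** 2 / len(sig)

# usage: f, p = psd(trace, dt); f[np.argmax(p[1:]) + 1] gives the dominant mode
```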
In Fig. 8, we present the extracted PSD both in the absence and presence of the interfacial DMI contribution. As expected, the PSD is different at the nanostripe edges and in the middle, due to the influence of magnetostatic interactions. Considering first the no-DMI case, as seen in subplots (a), (b) and (c), we observe a dominant PSD peak extracted from the \(m_{x}(t)\) signal corresponding to the "hot" macro-spins situated along the middle region of the stripe, in comparison to their "cold" counterparts as well as any signal analysed at the edges of the thin film. If we correlate the PSD to the energy carried by the left- (\(\mathbf{j}_{m}^{l}\)) and right-propagating (\(\mathbf{j}_{m}^{r}\)) magnon spin currents, the motion of the wall on the \(ps-ns\) time scale, where any \(\nabla|\mathbf{m}|\) gradient has already vanished, may be explained by a net magnon current \(\mathbf{j}_{m}=\mathbf{j}_{m}^{r}+\mathbf{j}_{m}^{l}\) which propagates from the "hot" to the "cold" area along a dominant channel corresponding to the middle region of the stripe. For the conservation of the net angular momentum, the DW needs to displace opposite to the direction of \(\mathbf{j}_{m}\) [82; 83; 84; 85; 48], that is, from the "cold" to the "hot" region. We further observe the amplitude of the PSD signal decreasing as a function of \(\Delta x\), a signature of the SW damping as the propagation approaches the edges of the system. Interestingly, the frequency distribution of the generated magnons increases in width and shifts to larger \(\nu\) values as we probe the SW signal further away from the initial DW position \(x_{DW}^{0}\), an effect whose origins remain to be investigated. Lastly, the SWs analysed at the bottom and top edges of the thin film sample display a closely matching power spectrum, evidenced for all considered \(\Delta x\) values.

Figure 8: Spectral analysis of the FFT power extracted from the time-dependent \(m_{x}(t)\) signal at six independent macro-cell sites at the bottom edge (\(y=1\ nm\)), middle region (\(y=25\ nm\)) and top edge (\(y=50\ nm\)) of the stripe system, situated at a variable distance \(\Delta x\) away from the initial DW position \(x_{DW}^{0}\), as graphically represented in subplot (b) of both Fig. 4 and 6. The analysis is employed both in the absence - top row - and presence - bottom row - of the interfacial DMI exchange contribution (\(D(0\ K)=2.0\ mJ/m^{2}\)). We consider three \(\Delta x\) values: \(\pm 70\ nm\), \(\pm 100\ nm\) and \(\pm 200\ nm\) (subplots seen from left to right), such that the DW is always confined between the (\(x_{DW}^{0},x_{DW}^{0}+\Delta x\)) boundaries during its displacement. In each of the subplots, the data graphically represented in red (blue) corresponds to the hot (cold) region. We remind the reader that the "hot/cold" terminology refers to regions of small/large magnetisation, in our work being generated in the absence of any temperature gradients. The bottom edge signal is represented by solid lines, the middle region signal by dashed lines and finally the top edge corresponding data by squared points.

With the introduction of the interfacial DMI, the DW dynamics are enhanced, as seen both in the time-dependent analysis graphically employed in Figs. 4 and 6 as well as in the final displacement investigation of Fig. 7. As evidenced by our spectral analysis of the SW signal, the addition of the antisymmetric exchange leads to several effects: firstly, while the net magnonic spin current preserves its direction of motion from the "hot" to the "cold" area, the dominant propagation channel can no longer be attributed to the middle region of the stripe but to the top and bottom edges, as evidenced in subplots (d), (e) and (f) of Fig. 8. This edge channeling effect in the presence of the DMI has been previously reported in [86]. Secondly, the magnitude of the PSD is much larger than for the non-DMI case, suggesting augmented magnon currents which could account for the enhanced DW dynamics we observe.
Thirdly, the frequency distribution of the generated magnon population increases in the presence of the DMI; unlike the previous case, we no longer observe an evident frequency shift and widening of the calculated PSD as we modify the \(\Delta x\) variable. Finally, the analysed SWs display a less pronounced damping in space, as evidenced by the small change in the peak amplitude of the PSD as we move away from the initial DW position \(x_{DW}^{0}\). To avoid repetition, we reserve the conclusions dedicated to this subsection for the next and final part of the article.

## IV Conclusions

The main goal of our work has been the demonstration of a new DW driving mechanism and not an exhaustive analysis of its efficiency or a direct comparison to other displacement methods. A fully quantitative analysis has also been avoided in light of the generality of the effect, as not one but multiple magneto-optical phenomena have been shown capable of inducing a transient \(\delta\mathbf{m}(t)\) magnetisation contribution, as discussed in the introductory part [8; 21; 22; 34; 35; 23]. The rigorous investigation of the acquired DW velocities as a function of parameters such as the laser pulse width or fluence is left for a future work with exact reference to a given magneto-optical phenomenon at its origin. Nonetheless, making use of the inherent longitudinal dynamics described by the LLB equation, we demonstrate the possibility to convert a transient, non-thermal magnetisation contribution, followed by a subsequent longitudinal relaxation of the macro-spins, into a transverse DW motion. First of all, we approached the problem in a simpler chain macro-spin model where neither heating effects nor the magnetostatic interaction have been taken into account. Nonetheless, we showed that by virtue of a longitudinal deformation of the magnetic texture induced by a transient magnetisation contribution it is possible to displace a ferromagnetic DW. The mechanism is based on the appearance of a non-thermal magnetisation gradient \(\nabla|\mathbf{m}|\) which enables the motion of the DW towards the "hot" region corresponding to an area of small magnetisation. It has been shown that the displacement primarily takes place on a very fast \(fs\) timescale where longitudinal relaxation processes are dominant. In our analysis, we have seen that the distance covered by the DW is proportional to the lifetime of the induced gradient as well as the amplitude of the transient magnetisation. The role of the precession and transverse relaxation mechanisms in the acquired displacement is reduced to some DW oscillations on the \(ns\) timescale in this chain model, as the DW loses its main drive on the longer \(ps\)-\(ns\) timescale where \(\nabla|\mathbf{m}|\) has already vanished. Secondly, we verified the viability of the suggested driving mechanism in a more realistic micromagnetic study in which we nucleate an OOP DW configuration in a stripe geometry. In this case, we expand the complexity of the model by taking into account the presence of the magnetostatic interactions and by considering a system which no longer finds itself in thermal equilibrium but is uniformly heated across its area (\(\nabla T=0\)) using an ultra-fast laser pulse excitation described by the TTM. Furthermore, we considered an induced transient magnetisation contribution that follows the temporal profile of the pump pulse, moving away from the instantaneous-like behaviour assumed in the chain model.
While we once again identified the appearance of an optically induced magnetisation gradient which favours the DW displacement towards the "hot" region, the overall driving mechanism displays several different features. The presence of the transient magnetisation contribution, in conjunction with the uniform laser heating effect, gives rise to the emission and propagation of SWs which drive the DW long after the \(\nabla|\mathbf{m}|\) gradient has vanished. Interestingly, the observed motion can be enhanced in the presence of an interfacial DMI contribution, irrespective of the laser pulse fluence or the strength of the induced magnetisation contribution. A PSD analysis revealed the existence of a net magnon spin current, propagating from the "hot" to the "cold" region, opposite to the direction of the wall displacement. The addition of the antisymmetric exchange interaction gives rise to a SW edge channeling effect [86] while it overall increases the density of the net magnon spin current, thus accounting for the enhanced DW dynamics. A laser-induced transient magnetisation equivalent to our \(\delta m_{z}^{0}\) parameter was calculated, for example, in the case of L1\({}_{0}\) FePt in the work of John _et al._ [8] on account of the previously established quantum theory of the IFE [27]. The \(\delta m_{z}^{0}\) contribution at a photon energy of \(1.55~{}eV\) and laser intensity of \(68~{}GW/cm^{2}\) was found to be \(-7.1\%M_{s}(0~{}K)\) or \(-3.45\%M_{s}(0~{}K)\) for a left (\(\sigma-\)) and right (\(\sigma+\)) circularly polarised laser pulse, respectively. These values, in light of our simulations, would produce a visible DW displacement even for one laser pulse. In a Co sample, Berritta _et al._ [28] calculated in the same approach the IFE-induced magnetisation for an identical photon energy but a smaller laser intensity of \(10\ GW/cm^{2}\), obtaining contributions of \(-4.8\times 10^{-3}\ \mu_{B}\)/at. vol. and \(-13\times 10^{-3}\ \mu_{B}\)/at. vol. for a \(\sigma+\) and \(\sigma-\) polarisation, respectively. Assuming a cubic atomic volume of lateral size \(a=0.25\ nm\) [87] and considering the saturation magnetisation of our Co sample, \(M_{s}(0\ K)=1400\ kA/m\), the latter _ab initio_ result evaluates to approximately \(-0.2\%M_{s}(0\ K)\) and \(-0.55\%M_{s}(0\ K)\). In our model this small transient magnetisation leads to insignificant displacements. Since the IFE scales linearly with the laser pulse intensity \(I\), it is expected that a larger intensity of \(68\ GW/cm^{2}\), as used in the aforementioned FePt work, would enhance the effect in a Co sample. While we increase the amplitude of the induced transient magnetisation up to \(20\%\) of the zero-Kelvin \(M_{s}\), our work suggests that a contribution in the range of \(4\%-8\%\) in the presence of the interfacial DMI is enough to obtain a DW displacement between \(10\ nm\) and \(18\ nm\). To further increase the strength of the effect, one could envision the use of multiple laser pulse excitations or the investigation of materials with large Verdet constants, which display a better suitability for the magneto-optical coupling based on the IFE [88; 89] - reminding the reader once more that a transient magnetisation can be attributed to a broad range of light-induced phenomena and not solely to the IFE. Additionally, it remains to be seen how taking into account an absorption-based contribution to the induced magnetisation, as discussed by Scheid _et al._ [34; 35], can further enhance the dynamics.
Finally, noncollinear spiral systems have been reported to contribute significantly to the magnitude of optically induced effects [90]. ###### Acknowledgements. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie ITN COMRAD (grant agreement No 861300). The authors acknowledge financial support by the grant PID2019-108075RB-C31 funded by Ministry of Science and Innovation of Spain MCIN/AEI/ 10.13039/501100011033 and the grant 023AEP034 funded by Spanish National Research Council (CSIC).
2309.06683
Federated PAC-Bayesian Learning on Non-IID data
Existing research has either adapted the Probably Approximately Correct (PAC) Bayesian framework for federated learning (FL) or used information-theoretic PAC-Bayesian bounds while introducing their theorems, but few consider the non-IID challenges in FL. Our work presents the first non-vacuous federated PAC-Bayesian bound tailored for non-IID local data. This bound assumes unique prior knowledge for each client and variable aggregation weights. We also introduce an objective function and an innovative Gibbs-based algorithm for the optimization of the derived bound. The results are validated on real-world datasets.
Zihao Zhao, Yang Liu, Wenbo Ding, Xiao-Ping Zhang
2023-09-13T02:44:01Z
http://arxiv.org/abs/2309.06683v1
# Federated PAC-Bayesian Learning on Non-IID Data ###### Abstract Existing research has either adapted the Probably Approximately Correct (PAC) Bayesian framework for federated learning (FL) or used information-theoretic PAC-Bayesian bounds while introducing their theorems, but few consider the non-IID challenges in FL. Our work presents the first non-vacuous federated PAC-Bayesian bound tailored for non-IID local data. This bound assumes unique prior knowledge for each client and variable aggregation weights. We also introduce an objective function and an innovative Gibbs-based algorithm for the optimization of the derived bound. The results are validated on real-world datasets. Zihao Zhao\({}^{\star}\)+ Yang Liu\({}^{\dagger\ddagger}\)+ Wenbo Ding\({}^{\star\ddagger}\)+ Xiao-Ping Zhang\({}^{\star}\)+ \({}^{\star}\)Tsinghua-Berkeley Shenzhen Institute, \({}^{\dagger}\)Institute for AI Industry Research, \({}^{\ddagger}\)Shanghai AI Lab Footnote †: This work was supported by the National Key R&D Program of China under Grant No. 2022ZD0160504, by the Tsinghua Shenzhen International Graduate School-Shenzhen Pengrui Young Faculty Program of the Shenzhen Pengrui Foundation (No. SZPR2023005), and by the Tsinghua-Toyota Joint Research Institute Inter-disciplinary Program and the Tsinghua University (AIR)-AsiaInfo Technologies (China) Inc. Joint Research Center under grant No. 20203910074. Federated learning, PAC-Bayesian framework, generalization error ## 1 Introduction To address privacy concerns in distributed learning, _federated learning_ (FL) has emerged as a viable solution, enabling multiple local clients to collaboratively train a model while retaining their private data and without sharing it [1, 2]. However, in real-world scenarios, data across different devices is not identically and independently distributed (non-IID), which poses challenges in model training and convergence [3]. Significant efforts have been made to improve performance and analyze convergence in non-IID FL [4], but few have provided theoretical guarantees by establishing generalization bounds. Most existing FL generalization analyses rely on the Probably Approximately Correct (PAC) Bayesian theory, first formulated by McAllester [5, 6]. Building on McAllester's bound, these analyses typically compute local bounds or apply existing PAC-Bayesian bounds directly, overlooking the non-IID nature of FL. This approach is flawed: the PAC-Bayesian framework assumes that data points are IID, so ignoring the non-IID structure and directly employing PAC-Bayesian theory potentially results in inaccurate or overly loose bounds. Consequently, techniques developed for the PAC-Bayesian framework are not directly applicable to non-IID FL. Therefore, this work aims to advance the theoretical underpinnings of non-IID FL. **Related works.** The PAC-Bayesian framework has been extensively researched in recent years [7, 8, 9], yielding tighter and non-vacuous bounds. However, there has been limited exploration in the context of FL. Some studies have proposed information theoretic-based PAC-Bayesian bounds using rate-distortion theory to prove generalization bounds [10, 11], providing an information-theoretic perspective on enhancing generalization capacity. Others have followed McAllester's approach, attempting to directly apply the FL paradigm to the bound. 
For example, the authors in [12, 13] applied McAllester's bound in a multi-step FL scenario; Omni-Fedge [14] used the PAC-Bayesian learning framework to construct a weighted-sum objective function with a penalty, considering only a local client bound instead of the entire system, which precludes obtaining global information; and FedPAC [15] employed PAC learning to balance utility, privacy, and efficiency in FL. However, these approaches do not account for the non-IIDness of FL. **Our contributions. First**, we derive a federated PAC-Bayesian learning bound for non-IID local data, providing a unified perspective on federated learning paradigms. To the best of our knowledge, this is the first non-vacuous bound for a model-averaging FL framework. Specifically, due to the non-IID nature of clients, we assume that each client has unique prior knowledge rather than a common one. **Additionally**, the aggregation weights for non-IID clients vary instead of being uniform. Based on the derived bound, we define an objective function that can be computed by each local client rather than on the server and propose a Gibbs-based algorithm dubbed _FedPB_ for its optimization. This algorithm not only preserves the privacy of each client but also enhances efficiency. **Finally**, we validate our proposed bounds and algorithm on two real-world datasets, demonstrating the effectiveness of our bounds and algorithm. ## 2 Problem Setting In this section, we introduce the federated PAC-Bayesian learning setting. The whole system comprises \(K\) clients, each equipped with its own dataset \(S_{k}=\{(x_{k,i},y_{k,i})\}_{i=1}^{n}\subseteq(\mathcal{X}\times\mathcal{Y})^{n}\) consisting of \(n\) IID data points. Here \(\mathcal{X}\) denotes the input space and \(\mathcal{Y}\) denotes the output space. Each dataset \(S_{k}\) is presumed to be drawn from an unknown data-generating distribution \(D_{k}^{\bigotimes n}\). Moreover, let \(\ell\) be a given non-negative loss function and \(h_{k}\in\mathcal{H}\) a stochastic estimator on client \(k\), where \(\mathcal{H}\) is the hypothesis class. In the PAC-Bayesian framework, each client holds a tailored prior distribution \(P_{k}\). The objective of each client is to furnish a posterior distribution \(Q_{k}\in\mathcal{M}\), where \(\mathcal{M}\) denotes the set of distributions over \(\mathcal{H}\). We then define the _population risk_: \[L(Q_{1},\ldots,Q_{K})\triangleq\frac{1}{K}\sum_{k=1}^{K}\underset{h_{k}\sim Q_{k}}{\mathbb{E}}\,\underset{(x_{k},y_{k})\sim D_{k}}{\mathbb{E}}[\ell(h_{k}(x_{k}),y_{k})], \tag{1}\] and the _empirical risk_: \[\hat{L}\left(Q_{1},\ldots,Q_{K}\right)\triangleq\frac{1}{nK}\sum_{k=1}^{K}\underset{h_{k}\sim Q_{k}}{\mathbb{E}}\sum_{i=1}^{n}\ell\left(h_{k}\left(x_{k,i}\right),y_{k,i}\right), \tag{2}\] by averaging over the posterior distribution of each client. In federated learning, each client uploads its posterior distribution to a central server, and the server then aggregates the transmitted models in a weighted manner: \[\bar{P}=\prod_{k=1}^{K}P_{k}^{p(k)},\quad\bar{Q}=\prod_{k=1}^{K}Q_{k}^{p(k)},\] where \(\bar{P}\) and \(\bar{Q}\) are the global prior and posterior, respectively, and the averaging weights \(p=(p(1),\ldots,p(K))\) form a probability distribution on \(\{1,\ldots,K\}\). For the sake of generality, we can assume that \(p(k)\in(0,1)\) and \(\sum_{k=1}^{K}p(k)=1\). 
For intuition on this aggregation, we can see that minimizing the weighted objective function is equivalent to maximizing the logarithm of the corresponding aggregated posterior: \(\min_{h}L(h)=\min_{h}\sum_{k=1}^{K}p(k)L_{k}(h)=\max_{h}\ln\prod_{k=1}^{K}p\left(h\mid\mathcal{D}_{k}\right)^{p(k)}.\) In addition, we denote the Kullback-Leibler (KL) divergence as \(D_{KL}(Q\|P)\triangleq\underset{Q}{\mathbb{E}}\left[\log\frac{dQ}{dP}\right]\) if \(Q\ll P\) and \(D_{KL}(Q\|P)=+\infty\) otherwise. ## 3 Main Theorem In this section, we present our novel bounds for the non-IID FL scenario. **Theorem 1** (Federated PAC-Bayesian learning bound).: _For any \(\delta\in(0,1)\), assume the loss function \(\ell(\cdot,\cdot)\) is bounded in \([0,C]\). Then the following inequality holds uniformly for all posterior distributions,_ \[\underset{S_{1},\ldots,S_{K}}{\mathbb{P}}\bigg\{\forall Q_{1},\ldots,Q_{K}:\ L(Q_{1},\ldots,Q_{K})\leq\hat{L}(Q_{1},\ldots,Q_{K})+\frac{\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})+\log\frac{1}{\delta}}{\lambda}+\frac{\lambda C^{2}}{8Kn}\bigg\}>1-\delta \tag{3}\] Proof.: Define the local generalization error \(\operatorname{gen}(D_{k},h_{k})=\underset{(x_{k},y_{k})\sim D_{k}}{\mathbb{E}}[\ell(h_{k}(x_{k}),y_{k})]-\frac{1}{n}\sum_{i=1}^{n}\ell\left(h_{k}\left(x_{k,i}\right),y_{k,i}\right)\); then the global generalization error is \(\overline{\operatorname{gen}}(D,h)=L(Q_{1},\ldots,Q_{K})-\hat{L}(Q_{1},\ldots,Q_{K})=\frac{1}{K}\sum_{k=1}^{K}\underset{h_{k}\sim Q_{k}}{\mathbb{E}}\operatorname{gen}(D_{k},h_{k})\). For any \(\lambda>0\), applying Hoeffding's lemma to \(\mathbb{E}[\ell_{i}]-\ell_{i}\), we have, for each client \(k\), \[\underset{S_{k}}{\mathbb{E}}\,\underset{P_{k}}{\mathbb{E}}\left[\mathrm{e}^{\frac{\lambda}{K}\operatorname{gen}(D_{k},h_{k})}\right]\leq\mathrm{e}^{\frac{\lambda^{2}C^{2}}{8K^{2}n}}.\] Since each \(S_{k}\) may come from a different \(D_{k}\), i.e., the data are non-IID, we cannot directly plug this result into the PAC-Bayesian bound. Note that for each client \(k\in[K]\), \(P_{k}\) is independent of \(S_{1},\ldots,S_{K}\); hence \[\underset{S_{1},\ldots,S_{K}}{\mathbb{E}}\,\underset{P_{1},\ldots,P_{K}}{\mathbb{E}}\left[\mathrm{e}^{\frac{\lambda}{K}\sum_{k=1}^{K}\operatorname{gen}(D_{k},h_{k})}\right]\leq\mathrm{e}^{\frac{\lambda^{2}C^{2}}{8Kn}}.\] And we apply Donsker and Varadhan's variational formula [16] for \(P_{1},\ldots,P_{K}\) to get: \[\underset{S_{1},\ldots,S_{K}}{\mathbb{E}}\left[\mathrm{e}^{\sup_{Q_{1},\ldots,Q_{K}}\left\{\lambda\underset{Q_{1}}{\mathbb{E}}\cdots\underset{Q_{K}}{\mathbb{E}}\left[\frac{1}{K}\sum_{k=1}^{K}\operatorname{gen}(D_{k},h_{k})\right]-D_{KL}\left(\prod_{k=1}^{K}Q_{k}^{p(k)}\,\Big\|\,\prod_{k=1}^{K}P_{k}^{p(k)}\right)\right\}}\right]\leq\mathrm{e}^{\frac{\lambda^{2}C^{2}}{8Kn}}. 
\tag{4}\] Recall the definition of the global generalization error: \[\overline{\operatorname{gen}}(D,h)=\frac{1}{K}\sum_{k=1}^{K}\underset{h_{k}\sim Q_{k}}{\mathbb{E}}\operatorname{gen}(D_{k},h_{k}),\] and note that \(D_{KL}\left(\prod_{k=1}^{K}Q_{k}^{p(k)}\|\prod_{k=1}^{K}P_{k}^{p(k)}\right)=\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k}).\) Applying the Chernoff bound: \[\underset{S_{1},\ldots,S_{K}}{\mathbb{P}}\bigg[\sup_{Q_{1},\ldots,Q_{K}}\lambda\overline{\operatorname{gen}}(D,h)-\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})-\frac{\lambda^{2}C^{2}}{8Kn}>s\bigg]\leq\underset{S_{1},\ldots,S_{K}}{\mathbb{E}}\bigg[e^{\sup_{Q_{1},\ldots,Q_{K}}\lambda\overline{\operatorname{gen}}(D,h)-\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})-\frac{\lambda^{2}C^{2}}{8Kn}}\bigg]e^{-s}\leq\mathrm{e}^{-s}.\] Let \(\delta=e^{-s}\), that is, \(s=-\log\delta\). Plugging this into the above result, we have that \[\underset{S_{1},\ldots,S_{K}}{\mathbb{P}}\bigg\{\exists Q_{1},\ldots,Q_{K}:\ \overline{\operatorname{gen}}(D,h)>\frac{1}{\lambda}\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})+\frac{\lambda C^{2}}{8Kn}+\frac{1}{\lambda}\log\frac{1}{\delta}\bigg\}\leq\delta.\] Therefore, we prove the statement by taking the complement of this event. The RHS of Theorem 1 comprises two components: the **empirical term** and the **complexity term**. Note that our bound eschews the typical smoothness and convexity assumptions on the loss often made by other FL frameworks. Moreover, Theorem 1 carries the intuition that the bound becomes tighter as the number of clients increases, which is further corroborated by the evaluation in Section 5.3. **Corollary 1** (The choice of \(\lambda\)).: _Suppose \(\lambda\in\Xi\triangleq\{1,\ldots,\xi\}\) and \(|\cdot|\) denotes the cardinality of a set. For any \(\delta\in(0,1)\) and a properly chosen \(\lambda\), with probability at least \(1-\delta\),_ \[L(Q_{1},\ldots,Q_{K})\leq\hat{L}(Q_{1},\ldots,Q_{K})+C\sqrt{\frac{\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})+\log\frac{|\Xi|}{\delta}}{2Kn}}. \tag{5}\] Proof.: Write \(\mathcal{S}=(S_{1},\ldots,S_{K})\) and \(\mathcal{Q}=(Q_{1},\ldots,Q_{K})\) for brevity. Since the result (4) holds for any fixed \(\lambda\), we can sum (4) over all \(\lambda\in\Xi\): \[\sum_{\lambda\in\Xi}\underset{\mathcal{S}}{\mathbb{E}}\left[\mathrm{e}^{\sup_{\mathcal{Q}}\left\{\lambda\overline{\operatorname{gen}}(D,h)-\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})\right\}-\frac{\lambda^{2}C^{2}}{8Kn}}\right]\leq|\Xi|.\] A union bound over \(\lambda\in\Xi\) then replaces \(\log\frac{1}{\delta}\) in Theorem 1 by \(\log\frac{|\Xi|}{\delta}\), and choosing the grid value of \(\lambda\) closest to \(\sqrt{8Kn\big(\sum_{k=1}^{K}p(k)D_{KL}(Q_{k}\|P_{k})+\log\frac{|\Xi|}{\delta}\big)}/C\), which balances the two \(\lambda\)-dependent terms, yields (5). Calculating the gradients \(\nabla_{\mu_{Q_{k},i}}\) and \(\nabla_{\sigma_{Q_{k},i}}\) directly can be intricate, but the re-parameterization trick is capable of tackling this issue. 
Concretely, we translate sampling \(h\sim Q_{k}\) into sampling \(\varepsilon\sim\mathcal{N}(0,I_{d})\) and then computing the deterministic function \(h=\mu+\sigma\odot\varepsilon\), where \(\odot\) signifies an element-wise multiplication. As a result, we have \(\nabla_{\mu_{Q_{k},i}}\underset{h\sim Q_{k}}{\mathbb{E}}\,\ell(h)=\nabla_{\mu_{Q_{k},i}}\underset{\varepsilon\sim q(\varepsilon)}{\mathbb{E}}\,\ell(\mu+\sigma\odot\varepsilon)\), indicating its computability in an end-to-end framework with back-propagation. ## 5 Evaluation In this section, we demonstrate our algorithm and our theoretical arguments under the non-IID FL setting. Specifically, the aggregation weight \(p(k)\) is defined as the sample ratio of client \(k\) relative to the entire data size across all clients. For the global aggregation, the global mean and variance are calculated by \(\bar{\mu}=\sum_{k=1}^{K}p(k)\mu_{k}\sigma_{k}^{-2}/\sum_{k=1}^{K}p(k)\sigma_{k}^{-2}\) and \(\bar{\sigma}^{2}=1/\sum_{k=1}^{K}p(k)\sigma_{k}^{-2}\), respectively. Furthermore, we utilize two real-world datasets: MedMNIST (medical image analysis) [19] and CIFAR-10 [20]. For each dataset, we adopt three distinct data-generating approaches for local clients: 1) **Balanced**: each client holds an equal number of samples; 2) **Unbalanced**: varying sample counts per client (e.g., sample fractions \([0.05,0.05,0.05,0.05,0.1,0.1,0.1,0.1,0.2,0.2]\) for 10 clients); 3) **Dirichlet**: differing sample counts per client following a Dirichlet distribution [21]. The entire FL system encompasses \(K=10\) clients, initializing their posterior models uniformly from a global posterior model. Additionally, we deploy two versions of Bayesian neural networks: one with 2 convolutional layers for MedMNIST and another with 3 layers for CIFAR-10. The cross-entropy loss serves as our loss function, and it is optimized by the Adam optimizer [22] with a learning rate of 1e-3. ### Bound evaluation To validate our bounds, we set the confidence level to \(1-\delta=95\%\). Our evaluation underscores a correlation between the generalization error and the complexity, emphasizing the tightness of our bound. Fig. 1 illustrates an initial increase in the generalization error and a concurrent decrease in complexity during the early stages of the training process, attributed to empirical loss optimization. Subsequently, as neural network training advances, the KL-divergence stabilizes. Throughout this progression, we observe that the generalization error is consistently bounded by the complexity value. ### Data-dependent prior Here, we perform an ablation study of the data-dependent (trainable) prior compared with a data-independent (fixed, chosen before training) prior, and report the mean \(\pm\) standard deviation (std) accuracy of the global model in Table 1, evaluated over multiple experimental seeds. The results demonstrate the superior efficacy of the data-dependent strategy on both datasets across all three scenarios. This superiority arises from the data-dependent prior's ability to harness more global knowledge, combined with its adaptability during training. ### Different client scales Lastly, we assess the influence of varying client scales on our complexity bounds. As depicted in Fig. 2, in the evaluation of both datasets with the Dirichlet generating method, increasing the number of clients \(K\) from 10 to 20 and 50 yields a consistent decrease in the value of the complexity term. This observation aligns with our analysis in Theorem 1. 
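The server-side aggregation and the bound of Corollary 1 admit a compact implementation. The sketch below is our own illustration, assuming diagonal Gaussian client posteriors and hypothetical variable names; it is not the authors' released code:

```python
import numpy as np

def aggregate_gaussians(mus, sigmas, p):
    """Server aggregation of diagonal Gaussian client posteriors using the
    precision-weighted global mean and variance given above."""
    precisions = np.array([pk / sk**2 for pk, sk in zip(p, sigmas)])
    var_bar = 1.0 / precisions.sum(axis=0)
    mu_bar = var_bar * np.sum([pk * mk / sk**2
                               for pk, mk, sk in zip(p, mus, sigmas)], axis=0)
    return mu_bar, np.sqrt(var_bar)

def corollary1_bound(emp_risk, kl_terms, p, n, C=1.0, xi=100, delta=0.05):
    """Right-hand side of (5): K clients, n samples each, loss in [0, C],
    and a lambda grid of size |Xi| = xi."""
    K = len(p)
    complexity = np.dot(p, kl_terms) + np.log(xi / delta)
    return emp_risk + C * np.sqrt(complexity / (2 * K * n))

# Example with three clients holding scalar posteriors (made-up numbers):
p = [0.5, 0.3, 0.2]
mus = [np.array([0.1]), np.array([0.3]), np.array([-0.2])]
sigmas = [np.array([0.5]), np.array([0.4]), np.array([0.6])]
print(aggregate_gaussians(mus, sigmas, p))
print(corollary1_bound(emp_risk=0.25, kl_terms=[2.0, 1.5, 3.0], p=p, n=1000))
```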
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{MedMNIST} & \multicolumn{3}{c}{CIFAR-10} \\ \cline{2-7} Method & Balanced & Unbalanced & Dirichlet & Balanced & Unbalanced & Dirichlet \\ \hline Data-independent & \(53.47\pm 1.12\) & \(49.44\pm 1.10\) & \(55.24\pm 6.92\) & \(50.09\pm 0.62\) & \(47.19\pm 0.92\) & \(57.03\pm 0.55\) \\ Data-dependent & \(\mathbf{77.10\pm 4.25}\) & \(\mathbf{77.34\pm 3.42}\) & \(\mathbf{77.48\pm 4.75}\) & \(\mathbf{84.41\pm 0.94}\) & \(\mathbf{79.39\pm 0.56}\) & \(\mathbf{86.11\pm 0.53}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Model accuracy (%) for the data-independent prior and the data-dependent prior in three data-generating scenarios. Figure 1: The results of the generalization error and model performance of FedPB over the Dirichlet generating method. Figure 2: The impact of different client scales over FedPB on the value of the complexity term.
2309.16490
Active SLAM Utility Function Exploiting Path Entropy
In this article we present a utility function for Active SLAM (A-SLAM) which utilizes map entropy along with D-Optimality criterion metrics for weighting goal frontier candidates. We propose a utility function for frontier goal selection that exploits the occupancy grid map by utilizing the path entropy and favors unknown map locations for maximum area coverage while maintaining low localization and mapping uncertainties. We quantify the efficiency of our method using various graph connectivity metrics and map efficiency indices for an environment exploration task. Using simulation and experimental results against similar approaches we achieve an average of 32% more coverage using publicly available data sets.
Muhammad Farhan Ahmed, Vincent Fremont, Isabelle Fantoni
2023-09-28T14:55:38Z
http://arxiv.org/abs/2309.16490v2
# Active SLAM Utility Function Exploiting Path Entropy ###### Abstract In this article we present a utility function for Active SLAM (A-SLAM) which utilizes map entropy along with D-Optimality criterion metrics for weighting goal frontier candidates. We propose a utility function for frontier goal selection that exploits the occupancy grid map by utilizing the path entropy and favors unknown map locations for maximum area coverage while maintaining low localization and mapping uncertainties. We quantify the efficiency of our method using various graph connectivity metrics and map efficiency indices for an environment exploration task. Using simulation and experimental results against similar approaches we achieve an average of 32% more coverage using publicly available data sets. Active SLAM, Mapping, Information Theory, Entropy. ## I Introduction Simultaneous Localization and Mapping (SLAM) encompasses a range of methods used by robots to determine their own position while simultaneously creating a map of their surroundings during navigation. Most SLAM algorithms are considered passive, wherein the robot moves freely and the navigation or path-planning algorithm plays no active role in directing the robot's movements or trajectory. In contrast, A-SLAM aims to address the optimal exploration of unknown environments by proposing a navigation strategy that generates future target positions and actions. These actions are designed to reduce uncertainty in the map and the robot's pose, enabling a fully autonomous SLAM system. In A-SLAM, the robot's exploration process begins by initially identifying potential target positions within its current map estimate. Once the robot has established a map of its surroundings, it proceeds to locate positions worth exploring. One commonly used method for this task is frontier-based exploration, initially introduced by [1]. Frontier-based exploration defines the 'frontier' as the boundary separating known map locations from unknown ones, as observed by the robot's sensors. After identifying these goal frontiers, the robot computes a cost or utility function. This function relies on the potential reward associated with selecting the optimal action from the set of all possible actions. In an ideal scenario, this utility function should account for the complete joint probability distribution of both the map and the robot's poses. To quantify this uncertainty, we turn to two well-established domains: Information Theory (IT) and the Theory of Optimal Experimental Design (TOED), as detailed in [3]. The subsequent crucial step involves executing the optimal action, guiding the robot towards its goal position using path planning techniques. In this article, we propose a utility function for selecting the goal frontier candidate for autonomous exploration by the robot. Our function takes into account the amount of uncertainty in the map, measured as path entropy, and the Euclidean distance to each frontier candidate. We add this utility function to that of [15], which weights each frontier by the D-optimality criterion, i.e. the maximum number of spanning trees (in the pose-graph) towards it. The proposed utility function brings the advantage that it incorporates not only the SLAM uncertainty but also entropy reduction in the environment, guiding the robot to promising unknown areas for the exploration task. This article is organized as follows: Section II provides an insight into the related work. 
Section III gives preliminary knowledge about the structure of modern graph SLAM, A-SLAM, and how uncertainty is measured and related to graph connectivity. In Section IV we present the approach and formulation of our proposed method. Section V presents and discusses our simulation results. Finally, we conclude in Section VII, summarizing our contributions and motivating future research aspects. ## II Related Work As discussed above, A-SLAM involves frontier detection, utility computation by quantifying and minimizing the uncertainty, and finally generating the action for robot navigation. The approach proposed by [5] formulates a hybrid control-switching exploration method. Within the occupancy grid map, each frontier is segmented, a trajectory is planned for each segment, and the trajectory with the highest map segment covariance is selected from the global cost map, which renders this method computationally expensive and limited to static obstacles. Meanwhile, the approach in [6] deals with dynamic environments with multiple ground robots and uses frontier exploration for autonomous exploration, and a utility function based on Shannon and Rényi entropy [2] is used for the computation of the utility of paths. When dealing with uncertainty quantification from an IT perspective, the authors of [7] address the joint entropy minimization exploration problem and propose two modified versions of Rapidly-exploring Random Trees (RRT) [4], namely dRRT* and eRRT*. dRRT* uses distance, while eRRT* uses entropy change per distance traveled as the cost function. Actions are computed in terms of the joint entropy change per distance traveled. The simulation results proved that a combination of both of these approaches provides the best path-planning strategy. An interesting comparison between IT approaches is given in [8], where frontier-based exploration is deployed to select future candidate target positions. A comparison of joint entropy reduction between the robot path and map is done against the Expected Map Mean (EMM) and Kullback-Leibler divergence. It was concluded that most of these approaches were not able to properly address the probabilistic aspects of the problem and are most likely to fail because of high computational cost and the dependence of performance on map grid resolution. The authors in [9] use entropy reduction only over map features and use an entropy metric based on a Laplacian approximation with a unified quantification of exploration and exploitation gains. Recently, in the works of [11] and [12], the authors exploit the graph SLAM connectivity and pose it as an Estimation over Graph (EoG) problem, where the connectivity of each node (state vector) and edge (measurement) is strongly related to the SLAM estimation reliability. By exploiting spectral graph theory, which deals with the eigenvalues, Laplacian, and degree matrix of the associated Fisher Information Matrix (FIM) and graph connectivity respectively, the authors state that the graph Laplacian is related to the SLAM information matrix and that the weighted number of spanning trees (WST) is directly related to the estimation accuracy in graph SLAM. The authors in [14][15] extend [12] by arguing that the maximum number of WST is directly related to the Maximum Likelihood Estimate (MLE) of the underlying graph SLAM problem formulated over the Lie algebra. 
Instead of computing the D-optimality criterion over the entire sparse SLAM information matrix, a modern D-optimality criterion is computed over the weighted graph Laplacian, where each edge is weighted using its edge D-optimality. Furthermore, it is proven that the maximum number of WST of this weighted graph Laplacian is directly related to the underlying pose-graph uncertainty. ## III Preliminaries ### _Graph SLAM_ Modern SLAM approaches adopt a graphical formulation (a bipartite graph) where each node represents a robot or landmark pose and each edge represents a pose-to-pose or pose-to-landmark measurement. As an example, for a robot with four pose states shown in Fig. 1, \(x_{0:3}\), \(lm_{1:3}\), \(u_{1:3}\), \(m_{1:7}\) represent the robot poses, landmark poses, robot pose measurements, and landmark measurements respectively. The objective is to find the optimal state vector \(x^{*}\) which minimizes the measurement error \(e_{i}(x)\) weighted by the information matrix \(\Omega_{i}\in\mathbb{R}^{l\times l}\), where \(l\) is the dimension of the state vector \(x\), as shown in Equation 1. We direct interested readers to [22] for an introduction and comparison of SLAM methods. \[x^{*}=\arg\min_{x}\sum_{i}\mathbf{e}_{i}^{T}(x)\Omega_{i}\mathbf{e}_{i}(x) \tag{1}\] In A-SLAM the robot has to navigate in an unknown environment by performing actions in the presence of noisy sensor measurements that reduce its state and map uncertainties with respect to the environment. Such a scenario is modeled as an instance of the Partially Observable Markov Decision Process (POMDP) [16][17][18]. Let us consider a scenario where the robot is in state \(x\) and takes an action \(a\) to move to \(x^{{}^{\prime}}\). The robot's goal is to choose the optimal policy \(\chi^{*}\) that maximizes the associated expected reward (\(\mathbb{E}\)) for each state-action pair, which can be modeled as Equation 2: \[\chi^{*}=\operatorname*{argmax}_{\chi}\,\mathbb{E}\sum_{t=0}^{\infty}\Gamma^{t}\alpha(x_{t},a_{t}) \tag{2}\] where \(x_{t}\), \(a_{t}\) and \(\Gamma^{t}\) are respectively the state, action, and discount factor at time \(t\), and \(\alpha(x_{t},a_{t})\) is the reward associated with taking action \(a_{t}\) in state \(x_{t}\). Although the POMDP formulation of A-SLAM is the most widely used approach, it is considered computationally expensive as it considers planning and decision-making under uncertainty. For computational convenience, it is divided into three main steps, which identify the potential goal positions/waypoints, compute the cost to reach them, and then select actions based on a utility criterion that decreases map uncertainty and increases the robot's localization. Fig. 1: Graph SLAM structure ### _Uncertainty Quantification_ The cost or utility is computed based on the reward value of the optimal action selected from the set of all possible actions according to Equation 2. For uncertainty quantification, IT and TOED methods are used. In IT, Shannon entropy measures the amount of uncertainty associated with a random variable or random quantity. Since the robot pose and the map are estimated as multivariate Gaussian distributions, the authors in [19] describe the Shannon entropy of the map \(E\in(0,1)\) as in Equation 3, where the map \(M\) is represented as an occupancy grid and each cell \(c_{i,j}\) is associated with a probabilistic estimation \(P(c_{i,j})\) of its value, \(1\) being occupied and \(0\) free. The objective is to reduce both the robot pose and map entropy. 
However, measuring entropy can be computationally demanding because probabilistic estimates of both the robot's position and the map must be computed over the entire map area, at the associated grid resolution. \[E[p(M)]=-\sum_{i,j}\big(p(c_{i,j})\log(p(c_{i,j}))+(1-p(c_{i,j}))\log(1-p(c_{i,j}))\big) \tag{3}\] Alternatively, if we consider task-driven utility functions, the uncertainty metric is evaluated by reasoning over the propagation of uncertainty associated with the Fisher Information Matrix (FIM) \(\Delta=\Omega^{-1}\) of graph SLAM. TOED provides many optimality criteria, which map the covariance matrix to a scalar value. A smaller covariance contributes to a higher weight of the action set \(\chi\). For a covariance matrix \(\Omega\in\mathbb{R}^{n\times n}\) with eigenvalues \(\zeta_{n}\), the A-, D- and E-optimality criteria are defined, which respectively minimize the average variance, the volume of the covariance ellipsoid, and the maximum eigenvalue. TOED approaches require both the robot pose and map uncertainties to be represented as a covariance matrix and may be computationally expensive, especially in landmark-based SLAM, where the matrix size increases as new landmarks are discovered. Hence, IT-based approaches are often preferred over TOED. In [11] and [12] the authors discuss how the graphical topology of SLAM impacts estimation reliability. They establish a relationship between the weighted number of spanning trees (WST) and the D-optimality criterion and show that the graph Laplacian is closely related to the FIM. Three graph connectivity metrics are discussed in connection with SLAM estimation accuracy, namely: 1) Algebraic connectivity (A.C) captures the robustness/resilience of a graph to stay connected even after the removal of some nodes. For a connected undirected graph \(G=(v,\epsilon)\), having \(v\) vertices (poses) and \(\epsilon\) edges (measurements), the A.C is the second-smallest eigenvalue of the weighted Laplacian, \(\lambda_{2}(L_{0})\). 2) Average degree (\(\bar{d}\)) indicates the number of edges incident upon the vertices. As \(\bar{d}=\frac{1}{v}\sum_{i=1}^{v}deg(i)\) increases, the number of measurements also increases, which eventually improves the MLE estimate. 3) Tree connectivity: it is shown that the WST is directly related to the determinant of the reduced weighted graph Laplacian and to the MLE estimate. It is given as \(t_{w}(G)=det(\tilde{L}_{w})\), where \(\tilde{L}_{w}\) is the reduced weighted Laplacian, obtained by deleting one row and the corresponding column. The above graph connectivity indices provide alternate methods that are computationally less expensive for measuring SLAM uncertainty compared to the TOED and IT approaches discussed previously. In Section V we weigh the effectiveness of our proposed method using these indices. ## IV Methodology The A-SLAM methods outlined in Section II assess uncertainty either through the overall map entropy or through the SLAM covariance matrix in its entirety, resulting in high computational costs. In this article, we propose a utility function that incorporates the path entropy and the distance to the frontier candidate and adds it to the modern D-optimality criterion defined in [15]. Computing path entropy is computationally efficient, and frontiers at large distances are penalized. The final utility function not only provides a reliable SLAM estimate but also maximizes map coverage by minimizing the unknown map area. Figure 2 shows an overview of the proposed utility function. 
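The three connectivity indices above can be computed directly from a weighted pose-graph Laplacian. The following is a minimal sketch (our own illustration, not the authors' implementation); the reduced-Laplacian determinant follows the matrix-tree theorem:

```python
import numpy as np

def connectivity_metrics(L_w):
    """Connectivity indices of a pose graph from its weighted Laplacian L_w.

    - Algebraic connectivity: second-smallest eigenvalue of L_w.
    - Average degree: mean (weighted) degree, i.e. trace(L_w) / v.
    - Tree connectivity: log of the weighted number of spanning trees,
      computed as log-det of the reduced Laplacian (matrix-tree theorem).
    """
    eigvals = np.sort(np.linalg.eigvalsh(L_w))
    algebraic_connectivity = eigvals[1]
    avg_degree = np.trace(L_w) / L_w.shape[0]
    _, log_tree_connectivity = np.linalg.slogdet(L_w[1:, 1:])
    return algebraic_connectivity, avg_degree, log_tree_connectivity

# Example: a 4-pose cycle graph with unit edge weights.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
print(connectivity_metrics(L))  # A.C = 2.0, avg degree = 2.0, log(4 spanning trees)
```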
We add this utility function to that of [15], which uses a Lidar-based Karto SLAM [21] back-end with frontier detection over an Occupancy Grid (OG) map. For each frontier centroid (frontier candidate) we apply Bresenham's line algorithm [20] to compute the number of pixels and their occupancy values on a straight line from the robot's current position. We then compute the path entropy to each frontier candidate and favor the paths that have a higher number of unknown cells, encouraging the robot to explore the unknown environment and improving coverage. Lengthy candidate frontier paths are penalized so that the robot does not favor paths longer than a certain threshold, maintaining its SLAM accuracy. Finally, the utility is computed by adding the proposed utility to that of [15]. Fig. 3 shows the implementation of the proposed utility function in ROS. Fig. 2: Framework and proposed utility function. Fig. 3: Proposed utility function ROS implementation, purple line = Bresenham's line, sphere = frontier candidate, green squares = frontier centroids. For each candidate frontier \(F=\{f_{1},f_{2},\ldots,f_{N}\}\subset\mathbb{R}^{2}\), we get the occupancy values of each path as \(G^{n}=\{m_{0},m_{1},\ldots,m_{L}\},\forall n\in\{1,\ldots,N\}\), where \(m_{0},\ldots,m_{L}\) are the pixel occupancy values of a path of length \(L\). We assign the probability value \(P_{unk}=0.1\) to unknown pixels (with values = -1) to encode low entropy and high information gain (as we are more interested in the unknown area of the environment). Occupancy values of obstacles and free space are mapped to the probability \(P_{occ/free}=0.45\), encoding high entropy, since we are less interested in places already known to the robot. Equation 4 computes the path entropy \(E^{n}\) for each frontier candidate with the above assigned probability values. \[E^{n}=-\sum_{m_{i,j}\in G^{n}}\big(p(m_{i,j})\log_{2}(p(m_{i,j}))+(1-p(m_{i,j}))\log_{2}(1-p(m_{i,j}))\big),\ \forall m_{i,j}\in M \tag{4}\] Once the path entropy is computed, it is normalized by the number of pixels within the path, \(K^{n}\), as shown in Equation 5; \(n=\{n_{x},n_{y}\}\) and \(R=\{R_{x},R_{y}\}\) are the selected frontier and robot positions respectively. \[K^{n}=\sum_{i=R_{x}}^{n_{x}}\sum_{j=R_{y}}^{n_{y}}m_{i,j} \tag{5}\] The path entropy computed in Equation 4 may lead to selecting frontiers at large distances from the robot. Such distant frontiers may degrade the robot's localization once it moves toward them. To penalize these frontiers and contain the SLAM uncertainty, we apply an exponential decay operator \(\gamma^{n}\), as shown in Equation 6, and use it in computing the utility function \(U_{2}^{n}\) in Equation 7. In Equation 6, \(\lambda\) is the decay rate, which acts as a tuning factor for removing frontiers at large distances. We fix \(\lambda=0.6\) since we assume the environment is static and the frontier path entropy remains constant when the robot moves towards the frontier. \(n\) and \(R\) are the frontier and robot locations in \(\mathbb{R}^{2}\) respectively, and \(dist(\cdot)\) measures the Euclidean distance between \(n\) and \(R\). \[\gamma^{n}=\exp\left(-\lambda\cdot dist(R,n)\right) \tag{6}\] The utility \(U_{2}^{n}\), as shown in Equation 7, is computed by weighing the normalized entropy \(E^{n}/K^{n}\) with \(\rho^{n}=10^{\beta}\), where \(\beta\) is a factor which depends on the number of spanning trees of the weighted graph Laplacian \((L_{w}^{n})\) computed in Equation 8. 
More specifically, \(\beta\) is the count of the number of digits before the decimal point of \(U_{1}^{n}\) and acts as a balancing factor between the entropy and the number of spanning trees. Finally, we obtain the proposed utility function \(U_{tot}\) in Equation 9 as the maximum of the sum of \(U_{1}^{n}\) and \(U_{2}^{n}\), where Equation 8 is adopted from [15]. The advantage of \(U_{tot}\) is that it provides a good SLAM estimate while accounting for both the frontier path entropy and the distance to the frontier. \[U_{2}^{n}=(1-E^{n}/K^{n})\cdot\rho^{n}+\gamma^{n} \tag{7}\] \[U_{1}^{n}=\text{Spann}(L_{w}^{n}) \tag{8}\] \[U_{tot}=\max_{n}(U_{1}^{n}+U_{2}^{n}) \tag{9}\] ## V Simulation Results The simulations were carried out on ROS Noetic, Ubuntu 20.04 on an Intel Core i7(r), with 32 GB of system RAM and an NVIDIA RTX 1000 GPU. We used the approach of [15] and implemented the proposed method as described in Section IV, using Open Karto as the SLAM back-end, a Turtlebot(r) equipped with a Lidar, and the Dynamic Window Approach (DWA) [13] and Dijkstra's algorithm as the local and global planners from the ROS navigation stack. We compared the proposed approach against two different methods, which use Frontier Detection based Exploration (FD) [23] and the Active Graph SLAM (AGS) of [15], using two different simulation environments, namely the modified Willow Garage (W.G)3 measuring 2072\(m^{2}\), having no dynamic obstacles, and a modified office (Office) measuring 741\(m^{2}\) with obstacles4. The ground truth occupancy grid maps were generated using the gazebo_2Dmap_plugin5, which uses wavefront exploration. Footnote 3: https://github.com/arpg/Gazebo/. Footnote 4: https://github.com/mhelerd/. Footnote 5: https://github.com/marinaKollmitz/gazebo. For qualitative comparison, we evaluated our approach with performance metrics relating to graph connectivity, map efficiency, and the percentage of area covered. We used graph connectivity metrics such as algebraic connectivity (A.C), average degree (\(\bar{d}\)), normalized tree connectivity (\(\tilde{\tau}(\mathcal{G})\)), and the evolution of uncertainty measured as edge D-optimality in the resulting pose-graph. These metrics are strongly related to SLAM estimation accuracy, as described in Section III. Regarding map efficiency metrics, the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE) are used. SSIM\(\in(0,1)\) indicates the similarity between two maps by computing the covariance of their pixel values. We conducted 15 simulations of 30 minutes each for both the W.G and Office environments using the FD, AGS, and our methods. The results from these simulations, along with the above-mentioned performance metrics, are given in Table I and Fig. 5. 
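Before turning to the results, the per-frontier score of Equations 4-9 can be computed in a few lines. The sketch below is our own reading of the pipeline (using skimage's Bresenham-style line rasterisation, the ROS occupancy convention of -1 for unknown cells, and the pixel count for \(K^{n}\)); it is not the authors' code:

```python
import numpy as np
from skimage.draw import line  # Bresenham-style line rasterisation

P_UNK, P_OCC_FREE = 0.1, 0.45  # cell probabilities assigned in Section IV
DECAY = 0.6                    # decay rate lambda of Equation 6

def binary_entropy(p):
    """Per-cell entropy term of Equation 4."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def select_frontier(grid, robot, frontiers, u1_values):
    """Return the frontier maximizing U1 + U2 (Equation 9).

    grid: occupancy grid (-1 = unknown, otherwise occupied/free)
    robot, frontiers: (row, col) cells; u1_values: Spann(L_w^n) per frontier.
    """
    best, best_u = None, -np.inf
    for (fr, fc), u1 in zip(frontiers, u1_values):
        rr, cc = line(robot[0], robot[1], fr, fc)           # cells on the ray
        probs = np.where(grid[rr, cc] == -1, P_UNK, P_OCC_FREE)
        E_n = binary_entropy(probs).sum()                   # Equation 4
        K_n = len(rr)                                       # path pixel count (Eq. 5)
        gamma = np.exp(-DECAY * np.hypot(fr - robot[0], fc - robot[1]))  # Eq. 6
        beta = len(str(int(abs(u1))))        # digits of U1 before the decimal point
        u2 = (1 - E_n / K_n) * 10**beta + gamma             # Equation 7
        if u1 + u2 > best_u:
            best, best_u = (fr, fc), u1 + u2
    return best, best_u
```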
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **Env.** & **Meth.** & A.C & \(\bar{d}\) & \(\tilde{\tau}(\mathcal{G})\) & **SSIM** & RMSE \\ \hline \multirow{3}{*}{W.G} & FD & 0.104 & **3.290** & 1.016 & 0.05 & 0.70 \\ & AGS & 0.426 & 2.907 & 1.139 & 0.05 & 0.64 \\ & Our & **0.485** & 2.925 & **1.205** & **0.08** & **0.60** \\ \hline \multirow{3}{*}{Office} & FD & 3.061 & **3.179** & 1.229 & 0.09 & 0.83 \\ & AGS & 5.740 & 2.742 & 1.312 & 0.07 & 0.80 \\ & Our & **9.617** & 2.612 & **1.941** & **0.11** & **0.77** \\ \hline \end{tabular} \end{table} TABLE I: Average graph connectivity and map quality comparison over 15 simulations (30 minutes each for every method) The first benchmark is FD, in which the robot is guided towards the nearest frontier; once it reaches the frontier, it is added to a blacklist of chosen frontiers in order to avoid detecting it again. This approach takes into account neither the FIM uncertainty of the pose-graph nor the benefit of revisiting already visited frontiers for loop closing. From Table I, on both environments we can conclude that this approach provides a good \(\bar{d}\), because Open Karto creates many loop closure constraints between nodes, but this does not contribute to uncertainty reduction because the nodes are very close to one another. This method severely lacks in A.C (especially in W.G) as compared to our approach. As described in Section III, A.C is directly related to the accuracy of SLAM estimation, and a higher value is desirable. Further, we observe a lower SSIM and higher RMSE, which renders this approach less suitable for area coverage tasks compared to the other methods. Since this method uses a greedy frontier detection search without any quantification of SLAM uncertainty or loop closure, the SLAM covariance increases and exploration halts after some time. Eventually, the resulting SLAM pose graph has high unreliability and less coverage as compared to our method. The second method in Table I is AGS. We can infer that this method results in good graph connectivity and map quality metrics in both environments compared to FD. Especially in the Office environment the graph connectivity metrics are high. High SSIM and low RMSE indicate better SLAM estimation as compared to W.G, because this environment has more obstacles that bring structure to the SLAM estimation. Our method, when compared to the preceding methods in Table I, outperforms them, especially in A.C, \(\tilde{\tau}(\mathcal{G})\), SSIM, RMSE, and map size for both environments. We can observe that for Willow Garage our method has 100% more A.C, 60% more SSIM, and 6% less RMSE error than the next best value. For the modified Office we get 67% more A.C, 47% more \(\tilde{\tau}(\mathcal{G})\), 22% more SSIM, and 37% less RMSE. These promising values indicate the effectiveness of our proposed method. Figure 4 plots the uncertainty evolution over time (s) in the W.G and Office environments. We quantify the uncertainty as the D-optimality of each edge in the entire pose graph. The circles denote the selected goal frontiers. Each method has a different goal frontier detection frequency depending on whether the robot has reached it or not. From Fig. 4 and Table II we can deduce that initially the uncertainty is high, and as the robot explores the environment it decreases. 
Our approach and AGS manage to keep the uncertainty bounded to 44% and 45% of their maximum threshold respectively, while FD keeps it at 20%, resulting in a poor SLAM estimate due to a lack of loop closure and the use of a greedy frontier search. In Fig. 5 a comparison of the evolution of the percentage of the map discovered is presented. The average values along with the standard deviation of 15 simulations for the FD, AGS, and our methods are shown. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Env.** & **Method** & **D-Opt (Max \& Min)** & Diff & \%R \\ \hline \multirow{3}{*}{W.G} & FD & 3700 \& 2900 & 800 & 20 \\ & AGS & 4800 \& 2600 & 2200 & **45** \\ & Our & 4700 \& 2600 & 2100 & 44 \\ \hline \multirow{3}{*}{Office} & FD & 4600 \& 3400 & 1200 & 26 \\ & AGS & 4100 \& 2700 & 1400 & 34 \\ & Our & 4900 \& 2900 & 2000 & **40** \\ \hline \end{tabular} \end{table} TABLE II: Uncertainty reduction (%R) comparison Fig. 4: Uncertainty evolution of AGS, FD and Our in (a) Willow Garage and (b) Office environments. Fig. 5: Comparison of the evolution of the map discovered with average and standard deviation for our, AGS and FD methods, over 15 simulations (30 minutes each for every method). Since FD uses a greedy frontier detection and exploration approach, it starts with a higher slope than AGS and our method until 1000 and 500 seconds for W.G and Office respectively; after that the SLAM covariance becomes unbounded and the slope decreases, with the final percentage of discovered map at 25.18% and 55.14% respectively. Both AGS and our method manage to keep the slope of the average exploration values constant, but our method eventually manages to explore 54% and 30% more area than AGS, and 20% and 25% more area than FD, for the W.G and Office environments respectively. In Fig. 6 the resulting occupancy grid maps and pose graphs are overlapped on the ground truth maps to show the area covered by our approach in both environments. ## VI Experimental Results The experiments were performed using a four-wheel differential-drive ROSBot 2 robot3, as shown in Fig. 7(a), and ROS (Noetic) on Ubuntu 20.04.6 (LTS) running on an Intel Xeon(r) W-2235 CPU @ 3.80GHz x 12, with 64 GB RAM and an Nvidia Quadro RTX 4000 GPU. Footnote 3: https://husarion.com/manuals/rosbot/. The environment consists of a room (lab environment) with static obstacles and two corridors, measuring 81\(m^{2}\) in total. We used mapping efficiency and exploration time as performance metrics for the experimental results. Fig. 7(b) shows the computed OG map and SLAM pose graph along with the robot start and end positions using our approach. The OG map computed by the robot is overlapped with that of the ground truth (blue). We can observe high similarity between the two maps, indicating the high mapping efficiency of our approach. Figure 8 shows the average coverage percentage of four experiments (2-AGS, 2-Our) for a total duration of 350 seconds. We can observe that initially both AGS and our method map approximately 35% of the environment; this is because of the high range (up to 16 m) of the Lidar sensor4 of the ROSBot 2 robot. With the evolution of time we can observe that, using our utility function, a steep slope is obtained from 60 to 210 seconds (approx.), which helps the robot cover the entire environment in 230 seconds as compared to 280 seconds for AGS. These results are consistent with Section V and indicate that our approach outperforms AGS and can therefore be utilized for efficient exploration of the environment while maintaining good SLAM efficiency. 
Footnote 4: https://www.slamtec.com/en/Lidar/A2/. ## VII Conclusions In this article, we have presented a utility function that selects the most favorable frontier goal location within an occupancy grid map for reliable A-SLAM with an area coverage task. The proposed utility function incorporates path entropy to select the frontier goal location which has the highest number of unknown cells within its path, thus maximizing the area coverage. Fig. 6: Obtained pose graphs using our approach and ground truth maps (blue). Fig. 7: Robot used for experiments and the resulting mapped environment overlapped with the ground truth map (blue). Fig. 8: Average coverage percentage in four experiments. Using simulation and experimental results on publicly available environment maps, we have demonstrated the efficiency of our approach as compared to similar methods. As a future prospect, we plan to incorporate our method in a multi-robot scenario utilizing efficient frontier-sharing for maximum environment exploration. ## Acknowledgment This work was conducted within the framework of the NExT Senior Talent Chair DeepCoSLAM, funded by the French Government through the program "Investments for the Future" administered by the National Agency for Research (ANR-16-IDEX-0007). We also extend our gratitude to the Region Pays de la Loire and Nantes Metropole for their invaluable support in facilitating this research endeavour.
2309.06668
On the uses and abuses of regression models: a call for reform of statistical practice and teaching
Regression methods dominate the practice of biostatistical analysis, but biostatistical training emphasises the details of regression models and methods ahead of the purposes for which such modelling might be useful. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth": that the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective has led to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline a new approach to the teaching and application of biostatistical methods, which situates them within a framework that first requires clear definition of the substantive research question at hand within one of three categories: descriptive, predictive, or causal. Within this approach, the development and application of (multivariable) regression models, as well as other advanced biostatistical methods, should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand.
John B. Carlin, Margarita Moreno-Betancur
2023-09-13T02:07:31Z
http://arxiv.org/abs/2309.06668v3
## On the uses and abuses of regression models: a call for reform of statistical practice and teaching ## Abstract When students and users of statistical methods first learn about regression analysis there is an emphasis on the technical details of models and estimation methods that invariably runs ahead of the purposes for which these models might be used. More broadly, statistics is widely understood to provide a body of techniques for "modelling data", underpinned by what we describe as the "true model myth", according to which the task of the statistician/data analyst is to build a model that closely approximates the true data generating process. By way of our own historical examples and a brief review of mainstream clinical research journals, we describe how this perspective leads to a range of problems in the application of regression methods, including misguided "adjustment" for covariates, misinterpretation of regression coefficients and the widespread fitting of regression models without a clear purpose. We then outline an alternative approach to the teaching and application of regression methods, which begins by focussing on clear definition of the substantive research question within one of three distinct types: descriptive, predictive, or causal. The simple univariable regression model may be introduced as a tool for description, while the development and application of multivariable regression models should proceed differently according to the type of question. Regression methods will no doubt remain central to statistical practice as they provide a powerful tool for representing variation in a response or outcome variable as a function of "input" variables, but their conceptualisation and usage should follow from the purpose at hand. ###### Introduction This article takes as its starting point the fact that regression methods dominate much of biostatistical practice. However, many applications of regression analysis in the medical and health research literature lack clarity of purpose and exhibit misunderstanding of key concepts. Recent developments in epidemiological methodology have highlighted the importance of clearly distinguishing the "three tasks of data science", or three types of research question.[1] Every empirical research study can be seen to have one or more of (i) a descriptive purpose - characterising the distribution of a feature or health outcome in a population,[2] (ii) a predictive purpose - producing a model or algorithm for predicting future values of an outcome given individual characteristics,[3] or (iii) a causal purpose - investigating the extent to which health outcomes in a population would be different if a particular intervention were made.[4] However, we observe that this conceptualisation has barely penetrated the teaching and practice of biostatistics, especially with respect to regression methods. Indeed, biostatisticians continue to teach, and users of biostatistical methods continue to internalise, the idea that regression models provide an all-purpose toolkit that can be implemented more or less agnostically to the actual purpose. A widespread approach is first to "find the best model for the data" and second to develop an appropriate interpretation of the fitted model. Others have pointed to some of the deficiencies and hazards of this approach. 
For example, Greenland and Westreich[5] coined the term "Table 2 fallacy" for the still common practice of presenting a table of estimated regression coefficients from a multivariable model with the implication that these coefficients represent usefully interpretable quantities. Such presentations commonly suggest (explicitly or implicitly) a causal interpretation, i.e. that changes in a variable would lead to changes in the outcome of a magnitude represented by its regression coefficient. As Greenland and Westreich point out, valid estimation of a causal effect requires the delineation of a range of assumptions, both causal and parametric, and there are no reasonable assumptions under which the coefficients of a multivariable regression model simultaneously provide estimates of the causal effects of every variable in the model. If this is understood, it is a short step to ask to what questions, if any, the coefficients of these ubiquitous models provide answers. Related to the last point is the widespread application of regression methods for addressing vaguely framed questions such as "what are the important risk factors for condition \(Y\)?" where what is meant by "risk factor" remains ill-defined, often encompassing a combination of potential causes and predictors.[6, 7] Such applications imply that it is meaningful to fit a multivariable regression and use data-driven variable selection to reduce the candidate list of risk factors to those that are found to have "independent effects" (another ill-defined term), after which process the remaining risk factors are deemed "important". However, the purpose for which they might be "important" is invariably unclear - with this approach, it would not be as intervention targets nor for outcome prediction. These practices reflect what we describe as the "true model myth": the notion that the statistical analyst's primary task is to identify a model that best describes the variation in an outcome in terms of a list of "independent variables". Finding the best model is rapidly conflated with the idea that the identified model provides a useful approximation to the actual data generating process - from which empirical conclusions can then be drawn. Textbooks and courses are dominated by these notions, feeding the traditional pedagogical approach in statistics of presenting a large body of technique and theory ahead of key questions about their practical application. The importance of clarity of purpose in the use of statistical models has been emphasised by many others of course, often with reference to Box's famous aphorism "All models are wrong, but some models are useful..."[8] However, there seem to have been few if any attempts to provide an in-depth examination of how the "usefulness" of a model might be defined and how the teaching and practice of statistical analysis might change accordingly. Hernan et al[1] briefly describe a similar agenda in the broader context of "data science" but do not address the central role of regression methods nor the potentially key role of statisticians in tackling these issues. We begin this paper by briefly introducing some standard notation and then describe the problems with current practice in greater detail by way of three examples from the first author's own practice. After analysing the heart of the problem and its continuing prevalence, via a brief review of mainstream clinical research journals, we present a proposal for the reform of teaching and practice. 
This takes the form of an outline of how regression methods could be taught from a "purpose-driven" perspective, using the taxonomy of the three types of research question, and introducing the technical aspects of regression models and estimation methods if and when relevant to the purpose.

### Notation and core concepts

Acknowledging that regression modelling is clearly relevant to the broadly defined aim of understanding how variation in an "outcome" or "response" variable relates to other "independent" variables or "covariates", for later convenience and clarity of terminology we begin with an overview of basic concepts and notation. For a continuously varying outcome or response variable \(Y\) and a single covariate \(X\) in some well-defined population, the simple linear regression model may be represented as:

\[Y=\beta_{0}+\beta_{1}X+\epsilon\;,\]

where \(\beta_{0}\) represents the expected (average) value of \(Y\) when \(X=0\), \(\beta_{1}\) represents the difference in the expected value of \(Y\) between individuals in the population for whom the values of \(X\) differ by one unit, and \(\epsilon\) represents a zero-mean "error" or deviation of \(Y\) from the expectation \(\beta_{0}+\beta_{1}X\). The covariate \(X\) may be a dichotomous indicator, in which case \(\beta_{1}\) encodes the difference in the mean value of \(Y\) between two subgroups, or it may have multiple values, in which case the model incorporates an assumption of linear increase in the mean value across the values of \(X\). The random error \(\epsilon\) is commonly assumed to follow a normal distribution with constant variance \(\sigma^{2}\), with errors for each individual independent of each other (although the importance of some of these assumptions, especially that of normally distributed errors, is often overstated). The essence of the model is better represented by separating the expression for the expected value (conditional on \(X\)) from the assumptions about the error term or probability distribution:

\[\mu(X)=E(Y|X)=\beta_{0}+\beta_{1}X;\ \ Y|X\sim N(\mu(X),\sigma^{2}). \tag{1}\]

In most contexts, however, there is not just a single \(X\) that may predict or explain the variation in \(Y\), and in addition, not all outcomes of interest are continuous and lend themselves naturally to modelling their mean value in this way. Mathematically, it is appealing to address the former of these issues by making the natural extension to the _multivariable_ (more traditionally termed _multiple_) regression model, at the core of which is the following expression for the expected value:

\[\mu(\mathbf{X})=E\big(Y|X_{1},X_{2},\ldots,X_{p}\big)=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{p}X_{p} \tag{2}\]

(where the bold \(\mathbf{X}\) signifies a vector or list of covariates \(X_{1},X_{2},\ldots,X_{p}\)). A fundamental feature is the _linear_ nature of the model, i.e. linear in the coefficients, which lends itself to convenient matrix representations and a related range of appealing mathematical properties. Importantly, the mean specification in the canonical multiple regression model (2) may include complicated (non-linear) functions of the original measured variables, including interaction terms, i.e. products of original variables.
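To make these definitions concrete, a minimal sketch of fitting models (1) and (2) can be given in a few lines of Python using `statsmodels`; the simulated data and variable names here are our own, purely for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Simulate two covariates and a continuous outcome from a known linear model.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=1.5, size=n)

# Model (1): simple linear regression of Y on X1 alone.
simple = sm.OLS(y, sm.add_constant(x1)).fit()

# Model (2): multivariable regression of Y on X1 and X2.
X = sm.add_constant(np.column_stack([x1, x2]))
multi = sm.OLS(y, X).fit()

print(simple.params)  # estimates of beta_0, beta_1
print(multi.params)   # estimates of beta_0, beta_1, beta_2
```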
The second of the two issues flagged above is readily addressed by introducing the concept of the _link function_, such that the linear predictor on the right-hand side of (2) may be specified as a model for a non-linear "link" function \(g\) of the expected value:

\[g\big(\mu(\mathbf{X})\big)=g\big(E(Y|X)\big)=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{p}X_{p}. \tag{3}\]

The classic example is logistic regression for a binary outcome \(Y\), in which the link function is \(g(\pi)=\text{logit}(\pi)=\log\left(\pi/(1-\pi)\right)\). This extension of the basic regression concept is facilitated by the separation in (1) between the "fixed" part of the model (the expression for the expected value or the link-function-transformed expected value) and the "random" part, which specifies the nature of the variation around the fixed part.

### Common uses and abuses of regression in empirical research

Regression analysis is ubiquitous in applied biostatistics. Data analysts develop the reflexive instinct that their role in many settings is to identify an appropriate, "well-fitting" regression model for the data. Unfortunately, the commonly observed lack of specificity with respect to the actual aim of many analyses leads to a range of pitfalls and difficulties. We illustrate these issues by describing three examples of published analyses on which the first author collaborated early in his career. Following initial description of the examples in this section, the next section further examines the problematic issues that they raise.

**Example 1**: Acute pyelonephritis and kidney enlargement in young children[9]

Young children who acquire a urinary tract infection may also develop an infection of the kidney known as pyelonephritis. Affected kidneys become enlarged during these infections, which makes it difficult to use ultrasonic measurements of kidney size as a reliable baseline for future assessment of growth. The research described here sought to estimate how much affected kidneys were enlarged, compared to normal kidneys. Clinical researchers systematically measured kidney length using ultrasound scans in a consecutive series of 180 children diagnosed with urinary tract infections and referred for imaging between February 1990 and June 1992 at a major paediatric hospital in Melbourne. Of these children, 77 had also developed pyelonephritis (according to a nuclear medicine scan called a scintigram) in one or both kidneys. The children ranged in age from newborn to just under 5 years, and 58% were (biologically) female. Given this variation in age (especially) and sex, it is not immediately clear whether it is meaningful to characterise the extent of kidney enlargement in those who developed pyelonephritis compared to those who did not as a single summary number: perhaps an adequate description would require separate analysis by categories of age and sex. However, an initial examination of the data broken down by age (in years) and sex indicated that the difference in averages between pyelonephritic (infected) and normal (uninfected) kidneys was roughly constant across these categories. This in turn suggested to the biostatistician consulted to assist with analysis (JBC) that a regression model would provide an effective tool for estimating the difference in means between infected and uninfected kidneys at any age, with "adjustment for age" accomplished by including a smooth function of age as a covariate in the model.
The original analysis also "adjusted for sex" and allowed for correlation between kidney lengths in the same child (170 children had measurements on both kidneys while for 10 a single kidney was measured), but for simplicity of exposition we ignore these details. Thus, in essence, the analysis comprised fitting a linear regression with mean specification:

\[\mu(X_{1},X_{2})=E(Y|X_{1},X_{2})=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\beta_{3}X_{2}^{2}+\beta_{4}X_{2}^{3} \tag{4}\]

where \(Y=\) kidney length (mm), \(X_{1}=0/1\) indicator for infection status and \(X_{2}=\) age (months). This model encodes two key assumptions: that the difference in average kidney length between infected and uninfected kidneys is a constant (\(\beta_{1}\)) for all values of \(X_{2}\), while for both values of \(X_{1}\) the average length increases by a cubic function of age, determined by the coefficients \(\beta_{2}\), \(\beta_{3}\), \(\beta_{4}\). The main results can be seen in the form of two smooth curves displaying the fitted values for the infected and uninfected groups (those with and without a "scintigraphic defect") from this regression model overlaid on a scatter plot of the raw data (Figure 1). From the raw data, it is difficult to see a clear difference between the infected and uninfected kidneys (although see Supplementary Material for a clearer representation using colour and a better choice of plotting symbols), but the fitted curves portray, as per the model's underlying assumption, a constant mean difference between the two groups, estimated to be 4.1 mm. The analysis has a certain prima facie appeal even though the rationale for "age adjustment" was unclear in the original article; see next section for further discussion.

Figure 1: Scatter plot of renal length measured on sonograms versus age for kidneys with and without defects shown on scintigrams. Curved lines represent cubic model used in analysis of covariance calculations. Because curves are parallel, there is a similar absolute increase in renal length at all ages.

**Example 2**: Predicting the success of gas enema treatment for intussusception in children[10]

Intussusception is an acute bowel constriction that occurs infrequently in very young children. It is painful and can be dangerous because it blocks the intestines, so surgeons are called upon to intervene and relieve the blockage. The standard treatment (at the time of the research described here) was to use a "gas enema", a simple procedure that injects air into the baby's rectum. The procedure is usually successful, but not always, so the clinical investigators of this study were aiming to understand the extent to which a successful outcome could be predicted using characteristics of the child or their clinical presentation. Data were collected prospectively on 282 consecutive cases of intussusception that were treated with gas enema, after presenting to a major tertiary paediatric hospital between January 1987 and July 1991, and included an indicator for the outcome (success or failure) and a set of clinical covariates that were candidates for predicting the outcome. Given the binary outcome measure, a natural statistical approach to prediction is to build a logistic regression model.
In this case, the biostatistician used a backwards selection procedure to reduce the number of candidate variables in the regression specification to those that seemed to be important, according to conventional (though increasingly deprecated[11, 12]) criteria based on "statistical significance"; here, the usual threshold of p\(<\)0.05 was used. They also followed standard procedures in sticking to simple specifications that ignore potential interaction terms between covariates. The main results were displayed in a table (Figure 2) that presented estimates with confidence intervals and p-values for the coefficients that remained "statistically significant" in the multivariable model. Although the predictive purpose of this paper was clear, there remain a number of issues with the way regression analysis was used to address this; see next section.

Table 4: Results of Logistic Regression Analysis with Successful Gas Enema as Outcome for Children with Intussusception

| Predictor variable\(^{a}\) | \(p\) value\(^{b}\) | Odds ratio\(^{c}\) | 95% confidence interval |
| --- | --- | --- | --- |
| Dehydration level | <.001 | | |
| 1-4% | | 0.32 | (0.13, 0.80) |
| 5% | | 0.13 | (0.05, 0.33) |
| 6-10% | | 0.10 | (0.02, 0.42) |
| Duration of symptoms >12 hr | .03 | 0.42 | (0.02, 0.90) |
| Small-bowel obstruction | .005 | | |
| 1-2 fluid levels | | 0.78 | (0.32, 1.90) |
| >3 fluid levels | | 0.24 | (0.10, 0.57) |
| Palpable mass present | .03 | 2.43 | (1.07, 5.50) |

Note. Baseline odds of successful gas enema for well-hydrated patients who had signs and symptoms for less than 12 hr, no obstruction, and no palpable mass were 10.1.

Figure 2: Reproduction of key table presenting the results of a multivariable logistic regression analysis for the prediction of successful gas enema in children with intussusception (Example 2).[10] See text for discussion.

**Example 3**: Associations between multiple risk factors and risk of childhood asthma[13]

In this final example, the stated aim was to estimate the "strength of association" of numerous potential "risk factors" with the risk of childhood asthma. The data derived from a cohort study of all children born in 1961 who were attending school in Tasmania in 1968 (aged 7 years), with this paper reporting a cross-sectional analysis of data collected from the parents at the time of recruitment (\(n=8585\)). Questionnaires were used to determine both the primary outcome of interest (history of asthma in the child) and the risk factors including child's sex, other atopic conditions (such as hay fever and eczema), family history of allergic disease and parental smoking. As with the previous example, a binary outcome triggered the use of logistic regression. A model was fitted by forward selection of variables, following textbook guidelines. Because of the large sample size, some risk factors with quite small estimated regression coefficients survived the selection process (adjusted to a lower-than-conventional significance threshold of p\(<\)0.01). Results were presented in the table shown in Figure 3. The main-text presentation of results also included the reporting of so-called "crude associations", with some discussion of the way in which the differences between crude and "adjusted" estimates reflected likely "confounding" effects.
It was also reported that "there were no differences between the ORs in males and females", and in fact all 55 potential two-way interactions were reported to be non-significant at the 0.01 level. The opening summary of the paper's Discussion stated that "These atopic conditions [history of various other allergic conditions] were found to be independent risk factors, in that an increased risk of asthma was associated with each factor even though the increased risks associated with all other factors had been taken into consideration by the statistical model." It is unclear what this means, as mentioned in the Introduction and further discussed below.

Table 2: Odds ratios and 99% confidence intervals for child's asthma after adjustment for all other factors in the model

| Risk factor | OR (99% CI), \(n=7368\) |
| --- | --- |
| Maleness | 1.56 (1.30–1.86) |
| Hay fever | 3.86 (3.12–4.78) |
| Eczema | 2.04 (1.63–2.55) |
| Hives | 1.34 (1.09–1.65) |
| Allergy to foods or medicines | 1.70 (1.26–2.30) |
| Maternal asthma | 2.63 (2.08–3.31) |
| Paternal asthma | 2.52 (1.99–3.19) |
| Maternal smoking | 1.26 (1.05–1.51) |

Figure 3: Reproduction of key table presenting the results of a multivariable logistic regression analysis for the risk that a child has asthma (Example 3).[13] See text for discussion. [Reprinted from "...atopy, and parental asthma, hay fever and smoking", Jenkins MA et al, Paediatric and Perinatal Epidemiology. Copyright © 1993 John Wiley & Sons Ltd.]

### The essence and extent of the problem

As mentioned, it has been convincingly argued that the purpose of any data analysis can be classified into three categories, defined by three distinct types of inquiry or research question: descriptive, predictive or causal.[1] Beyond these three categories, it is difficult to conceive of other purposes for which data analysis might be intended, while the trichotomy provides a powerful tool for sharpening the specific purpose of any analysis. How do our three examples fit into this trichotomy?

In the first example, the purpose was fundamentally descriptive, asking whether it was possible to provide an estimate of the difference in average kidney length between children whose kidney had and had not been infected in this population. Although the rationale and underlying assumptions were not clearly articulated in the paper, and the key simplifying assumption underlying the model (constant shift in the distribution between infected and uninfected across the age range) is unlikely to hold exactly in the population, the regression analysis arguably provided a useful, informative answer to the research question. In particular, the model provided enhanced precision in the estimated mean difference of interest, by accounting for extraneous variation due to age (see Supplementary Material).

The second example posed a research question that was essentially predictive: how can clinical factors be used to predict successful gas enema for intussusception in young children? The analysis presented a fitted multivariable logistic regression model identified by a variable selection approach that would no longer be recommended. Importantly, no formal assessment of predictive ability or external validity was conducted, with these aspects only mentioned briefly in the paper's Discussion.
Furthermore, the presentation of estimated regression coefficients in the table shown in Figure 2 (and footnote a) erroneously implied that the analysis demonstrates that the covariates included (and the way they were included, i.e. with no interactions) are the only useful predictors. It also implied that the estimated odds ratios displayed in the table have a meaningful interpretation (see footnote c), which is surely not the case: they are simply coefficients that might be used within a prediction algorithm (as was in fact illustrated briefly in the Discussion). In the final example, the authors' purpose was vague, but when examining the motivation and conclusions of the study, it appeared to be essentially causal[14]: the aim was to identify "risk factors" that might be useful for informing intervention strategies to reduce the risk of asthma. In this context, the reported regression analysis is difficult to interpret, without incurring the "Table 2 fallacy".[5] Critically, a clear causal intent requires attention to the precise definition of putative causal exposures, for example considering the extent to which they might be modifiable and if so how, because interventions to modify exposures are what causal questions seek to inform.[15] A common feature of the uses of regression in these examples is that the analyst's purpose was not fully articulated, with the analysis based on an implicit assumption that statistical methods can be used to construct the "best model" for the outcome given the available covariates, from which relevant conclusions can be drawn. In effect, the model is understood to provide a representation of the true data generating process, revealing how the independent variables combine to "produce" the outcome. This approach might work if it were empirically the case that processes in the real world of population health and medicine obeyed natural laws of the form that regression models represent, and that researchers were able to measure all relevant variables, but this is surely far from the reality. In essence, by fostering this "true model myth", the standard approach to teaching and applying regression methods has allowed users to avoid the key issue of analytic purpose, leading to conclusions that are unclear and potentially erroneous. The examples described so far reflect some of the first author's applied practice in the 1990s, but it is clear that similar statistical practices continue to be widespread today. In fact, over the past several decades the technical complexity of statistical analysis presented in medical journals has increased markedly, with much greater use of multivariable regression analysis. For example, one review noted a ten-fold increase between 1978-1979 and 2004-2005 in the number of articles using multivariable regression in the _New England Journal of Medicine_.[16] Another reported a doubling in the use of "regression models" in four psychiatric research journals between 1996 and 2018.[17] There appear to be several reasons for the popularity of multivariable regression, including the growth in availability of user-friendly software enabling regression analysis, in tandem with a widespread belief that multivariable regression is an omnibus tool for addressing many problems. 
A systematic review of contemporary usage of multivariable regression is beyond the scope of this paper but we provide a brief targeted review covering a single month of publication (June 2022) in three leading journals of clinical research: _Pediatrics_, _Neurology_ and _BMJ Open_. These journals were selected from a listing of the top 20 "most influential medical journals"[18], to represent journals that carry a high proportion of observational clinical studies while leaving aside those (generally the most highly ranked general medical journals e.g. _NEJM_, _Lancet_, _JAMA_) that carry a predominance of large randomised trials and others that publish a greater proportion of laboratory studies. For articles published in the (arbitrarily) selected month, we identified the proportion that reported any form of multivariable regression analysis (while briefly characterising the remaining articles) and we classified the articles according to (i) whether regression analysis was used, (ii) if so, whether it was used for a clear purpose (descriptive, predictive, or causal), and (iii) whether key misuses of regression methods were apparent.

Overall, we examined 57 papers (18-20 per journal), in 36 (63%) of which regression methods were used. Among these papers, 25 (69%, or 44% of all papers) exhibited a type of misuse of regression along the lines that we have identified above (see Supplementary Material for details). Although the mix of types of question was quite different across the three journals, the proportion of papers in which misuse could be identified was similar. The most commonly observed problem was the fitting of multivariable regression models without full consideration of the precise aims of the study, in a manner that exemplifies the "true model myth". Specifically, we found 10 instances of multiple regression applied to ill-posed questions along the lines of "can we identify the [most important] risk factors for [condition Y]?". Furthermore, even when a clear research question was identified, we observed frequent misuse of regression, such as inadequately justified "adjustment for covariates" and erroneous interpretation of estimated coefficients.

The problems identified have clear roots in the way that regression methods are taught. In particular, the multivariable model is invariably introduced as a "natural" extension of simple univariate regression, with the implication that it has widespread inherent applicability and usefulness, while key details of how and why it might be used remain vague. For example, in the introductory chapter of a popular text on logistic regression the authors state that "the goal of an analysis using this model is the same as that of any other regression model used in statistics, that is, to find the best fitting and most parsimonious, clinically interpretable model to describe the relationship between an outcome variable and a set of independent (predictor or explanatory) variables."[19] Chapter 2 is entitled "The Multiple Logistic Regression Model", while in the second sentence of Chapter 3 ("Interpretation of the Fitted Logistic Regression Model") we find that "After fitting a model the emphasis shifts from the computation and assessment of significance of the estimated coefficients to the interpretation of their values." The logic here seems entirely backwards: unless the model is believed to provide a full and faithful representation of the true data generating process, its coefficients may be completely uninterpretable.
The meaning of model coefficients and indeed the entire model should be clear at the point of model specification, _before_ any fitting and estimation takes place. Against this background, the next section maps out a new approach to the teaching of regression methods, which emphasises the purpose of analysis ahead of the specification and fitting of models.

### A proposed reform program: teaching regression with a purpose

We propose that the teaching of regression methods should explicitly follow the framework of the "three types of research question". This would imply minimal emphasis on discussing regression models as mathematical entities outside of the context defined by the purpose for which they are to be used. Many, though not all, of the concepts and technical details that are covered in the conventional teaching of regression methods would still be important to learn, but they would be presented in a different way.

#### Descriptive purpose

Examples of studies with descriptive aims include those that examine the prevalence or incidence of diseases or health conditions in a population or across subgroups of a population; prominent recent examples of this type of study are those that have sought to understand the extent of the COVID-19 pandemic.[20, 21] Example 1 also pursued a descriptive question. We distinguish descriptive _research questions_, our main focus here, from the descriptive analysis of study participants that is invariably provided in study reports (e.g. "Table 1" of the typical paper, providing summary statistics such as means and standard deviations along with percentage breakdowns into key categories of interest). In the latter context, the purpose is literally to describe the study participants, with no intention to draw inferences about a broader population. In contrast, in studies with descriptive scientific aims, the principles of statistical inference are relevant.

How can regression methods help to answer descriptive research questions? It is instructive to begin with what may be described as the "null model", which is the version of (1) (or its generalised version with a non-identity link function as in (3)) in which \(X\) is dropped (or, equivalently, fixed at \(1\) for the entire population). In that case, only one parameter is identifiable: \(\beta_{0}\), representing the mean (or \(g()\)-transformed mean) in the population. Standard regression estimation technology readily provides inference for this mean value, a common target estimand in descriptive questions. If the descriptive question instead seeks to describe the difference in mean (or risk, or prevalence, for a binary outcome) across subgroups, say in the simplest case of a binary subgroup indicator variable \(X\), coded \(0/1\), then the regression in (1) can be used as it stands, with coefficient \(\beta_{1}\) representing that difference, while \(\beta_{0}\) represents the mean in the subgroup \(X=0\). Inference for each of these parameters may be obtained by appropriate variance calculations, which could be examined mathematically in advanced classes, and can be performed in practice by fitting this regression in statistical packages (with the inference for \(\beta_{1}\) equivalent to the classical \(t\)-test with an estimation focus).
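As a concrete classroom illustration (a hypothetical sketch of our own, not drawn from the examples above), the equivalence between the regression coefficient \(\beta_{1}\) and the classical pooled two-sample comparison can be demonstrated directly:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical outcome in two subgroups (X = 0/1), 50 individuals each.
x = np.repeat([0, 1], 50)
y = rng.normal(loc=10 + 2 * x, scale=3)

# Regression of Y on the subgroup indicator: beta_0 = mean in group 0,
# beta_1 = difference in means, with a pooled within-group variance estimate.
fit = sm.OLS(y, sm.add_constant(x)).fit()
print(fit.params, fit.bse)

# The inference for beta_1 matches the classical pooled t-test:
t, p = stats.ttest_ind(y[x == 1], y[x == 0], equal_var=True)
print(t, p)  # t equals fit.tvalues[1]; p equals fit.pvalues[1]
```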
Alternatively, a more natural descriptive focus might be to estimate each of the subgroup means separately, inferences for which could be obtained using the regression just described, augmented with a method for obtaining the variance of the estimate of the mean in subgroup \(X=1\), i.e. \(\beta_{0}+\beta_{1}\), _or_ by reparametrising the regression model by dropping the intercept (with its coefficient \(\beta_{0}\)) and including indicators for both subgroups, with the corresponding coefficients then representing the subgroup means. A useful teaching point is that the regression approach enables inferences for the subgroup means using a pooled estimate of the variance within groups, rather than relying on separate estimates of variance from each group.

A natural extension is to the case where \(X\) takes multiple values. If these represent \(k\) unordered categories (a "nominal" variable), then description of the difference in mean for each of the \(k-1\) subgroups relative to a reference group may be achieved in a regression framework by including \(k-1\) indicator variables in the regression specification with each of the resulting coefficients encoding those differences. Alternatively, as for the two-group case, the intercept may be dropped, and a coefficient included for all indicators, representing the subgroup means. By this point in the curriculum, students should have been introduced to commonly used software tools for fitting regression models, with applications emphasising that much of the standard output of these tools (such as estimates of model coefficients that do not correspond to target parameters of interest, and breakdowns of sums of squares, including R-squared) is rarely of value for practical purposes.

Another type of descriptive aim is to characterise the variation of an outcome with a continuous covariate. This provides a point of contact with the traditional introduction to linear regression that focuses on continuous \(X\) and continuous \(Y\). Univariate linear regression provides a method for describing the joint variation of \(X\) and \(Y\), focussing on the (asymmetrical) question of how the expected value of \(Y\) changes according to the value of \(X\). Under the assumption of a linear relationship, the regression equation provides a summary of the average rate of change of \(Y\) with each unit of difference in \(X\), smoothing over the variability of individual values to allow the essential strength of (linear) relationship to be estimated. Of course, there are many examples in which the simple straight-line relationship does not provide a good fit. These provide an opportunity for students to learn about "curve fitting", which may be accomplished by using polynomial functions, as in (4), or by more modern alternatives, from parametric fractional polynomials to semi-parametric splines etc.[22, 23] Several techniques can thus be discussed within the descriptive context, although it may emerge that their true value becomes debatable as they become more complex - as the "summary measure" describing the data may become scarcely less complex than the raw data themselves. It will be important to flag the greater value of such techniques when the analyst's purpose is more sophisticated, as in prediction or causal inference, and further details can be deferred to appear under those headings.
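For teaching the curve-fitting idea, a brief sketch along the following lines (again with simulated data of our own invention) lets students compare a straight-line fit with a simple polynomial fit:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical nonlinear relationship between a continuous X and Y.
x = rng.uniform(0, 10, size=150)
y = 5 + 1.2 * x - 0.08 * x**2 + rng.normal(scale=1.0, size=150)

# Straight-line fit versus a quadratic "curve fit" via a polynomial term.
line = sm.OLS(y, sm.add_constant(x)).fit()
quad = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()

# The residual standard deviation summarises how well each curve describes
# the data; more flexible fits (splines etc.) extend the same idea.
print(np.sqrt(line.mse_resid), np.sqrt(quad.mse_resid))
```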
Importantly, to this point we have overlooked a key component of the descriptive research question: the notion of the target population, which is necessary for clear definition of the target parameter(s). The inferences alluded to for the simple analyses described thus far only make sense if the analytic sample is (equivalent to) a random sample of the population. This is rarely the case, of course, so where possible analysis planning should seek to develop an approach that adjusts for differences between the study sample and the population.[2] In epidemiological studies, data are often available on covariates that characterise the sample (in terms of age, sex, geography and other sociodemographics), and if the corresponding characteristics of the population are also available then there is scope for reweighting the sample mean values to the population distribution of covariates. This can be done using classical sample survey methods, as discussed in the literature on standardisation of rates in epidemiology[24; 25] and seen for example in a recent COVID-19 study[21]. Standardisation or reweighting to a population can also be facilitated by multivariable regression models[26; 27], although most discussions of this topic focus on analysis for causal questions[25; 28], and the details are too complex for this early stage of our proposed regression curriculum. We have seen, however, that the kidney lengths study (Example 1) used a multivariable regression analysis. Why was this done, and could this be a valuable teaching example? First, as in many clinical studies, the definition of the relevant population was implicit rather than explicit, and no population reference data were available. So, there appears to be little scope for reweighting or "adjustment". On the other hand, it is clear from basic biology, as well as a cursory look at the data, that the outcome depends strongly on age, so it is natural to ask whether the research question is well-posed without considering age. If the infection-related kidney enlargement changed with age, then its extent should be described as a function of age. If, as appeared to be the case, there was a near-constant difference across the age range, then an analysis that _smooths out_ the additional outcome variation due to age, by obtaining the mean difference within age categories and averaging these over the categories, can be shown to produce an estimate with lower variance than the crude mean difference ignoring age (see Supplementary Material). The multivariable linear regression model shown in (4) can therefore be introduced as a convenient and effective tool for producing an estimate of the difference between the groups that has considerably less variance than the crude mean difference, because it removes the variance associated with age. In the teaching context, one should emphasise that this tool is essentially a convenient way of containing the variability of the data in order to estimate more precisely what is _assumed to be_ a constant mean difference (represented by the parameter \(\beta_{1}\) in (4)) across age, which itself is associated with rapid change in the outcome. The assumption of a constant difference across the age range can be checked to some extent, limited by sample size, from the data. The key "outputs" for teaching purposes are the estimate and standard error (or confidence interval) for the coefficient \(\beta_{1}\) in model (4) and in the "unadjusted" model (1). 
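The precision argument can be demonstrated with a small simulation in the spirit of Example 1 (a sketch with entirely invented numbers, not the study data): the age-adjusted estimate of the constant group difference has a visibly smaller standard error than the crude difference in means.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 180

# Invented data mimicking the structure of Example 1: kidney length grows
# strongly with age; infection adds a constant shift of 4 mm.
age = rng.uniform(0, 60, size=n)       # age in months
infected = rng.integers(0, 2, size=n)  # 0/1 infection indicator
length = 45 + 0.3 * age + 4.0 * infected + rng.normal(scale=4.0, size=n)

# Crude (unadjusted) comparison: model (1) with the indicator only.
crude = sm.OLS(length, sm.add_constant(infected)).fit()

# Age-adjusted comparison: model (4), with polynomial terms in age.
X = sm.add_constant(np.column_stack([infected, age, age**2, age**3]))
adjusted = sm.OLS(length, X).fit()

# Both estimate the same constant difference, but the adjusted analysis
# removes the outcome variance associated with age, shrinking the SE.
print(crude.params[1], crude.bse[1])
print(adjusted.params[1], adjusted.bse[1])
```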
An important aside is that this rationale of improved precision via regression adjustment only applies to continuous outcomes, for which variation is independent of the mean. Leaving aside the potentially bigger issue of differences between sample and population (generalisability bias), it should be noted that poor modelling of the dependence of the outcome on age may lead to a biased estimate of the group difference of interest. This would be an example of trading increased bias for a reduction in variance. In fact, for teaching purposes it might be interesting to explore this by fitting a simpler model in which the age relationship is represented with a simple linear trend. In this regard, the example also provides an opportunity to observe the value of nonlinear functions in a regression expression, with the interesting twist in this example that the authors used a cubic polynomial to represent the age dependence. This was done not because the cubic term in the regression specification was "statistically significant" (it wasn't), but to avoid the unrealistic shape of the simplest alternative to a straight line: the quadratic.

Finally, although an example such as the kidney study provides nice opportunities for the teaching of fundamental regression ideas, a full discussion of descriptive research questions should highlight that the role of multivariable regression analysis in this arena is often exaggerated, with frequent examples of "regression adjustment" that is not clearly justified.

#### Predictive purpose

Analysis for predictive purposes provides the most natural application of multivariable regression models, which are inherently designed to represent or "predict" the expected value of \(Y\) as a function of one or more \(X\)'s. Full clarity in the specification of a prediction question requires definition of the target population, outcome and predictors, with analysis planning requiring consideration of sampling/recruitment and outcome and predictor measurements. Genuine prediction problems invariably involve multiple predictors and seek to develop an algorithm for reliably forecasting the future value of \(Y\) for individuals for whom only the values of the \(X\)'s are available. Formal prediction tasks often (but not always) concern binary outcomes: for example, the question of interest may involve developing an algorithm or formula with which to predict the risk of dying or the risk of relapse (or cure), given a set of available covariates. Multivariable regression models in the form of (3) provide a natural starting point for such tasks, lending themselves in the teaching context to further development of the theory of least-squares estimation (for the mean of a continuous outcome) and, importantly, to models for binary outcome variables, such as logistic regression. In the latter vein, if the aim is to predict a binary outcome from a set of input variables, the essential target parameter of interest is the probability of the outcome (equal to its expected value), so the essential task of a predictive analysis is to find ways of developing a formula (or algorithm) that expresses this probability as a function of input variables. If we start with the classical linear regression formulation (2), we quickly observe that the unbounded linear expression is not well suited to representing the variation in the range-restricted probability. A mathematically natural alternative is to _transform_ the probability to the log-odds or logistic scale (i.e.
use model (3) with \(g(\pi)=\text{logit}(\pi)\)), and investigate if linear combinations of predictors can be identified to accurately represent the variation in this parameter across the multidimensional domain of (vector) \(X\) (according to measurements of predictive performance as outlined below). Thus, the purpose-based regression curriculum might well introduce the basic concepts of logistic regression at an early stage. However, it should also be apparent that the standard regression specification of parallel linear terms, with no interactions, should not be expected to perform optimally in many applications. Assuming the same prediction function for males and females (for example) may result in poor predictive performance. In general, it is natural to expect that complex predictor functions, at least including nonlinearities and interactions, but also potentially including splines and extending to other flexible semiparametric models ("generalized additive models" etc) will be useful. The key realisation here is that the regression coefficients and other aspects of the form of model used are not of substantive interest. Instead, what is important is to assess the quality of the predictions produced by the model, in terms of the difference between predicted outcomes and actual outcomes. Given this, a further natural extension is to the world of "machine learning", within which prediction is appropriately seen as an essentially algorithmic problem, amenable to purely computational solutions, once crucial design aspects have been considered (target population and sample, outcome and predictors and their measures). Methodology for assessing the quality of predictions in the context of a continuous outcome can be related to an understanding of residual variance and the classical decomposition of sums of squares: considering the data to which the model has been fitted, how much of the original variability in the outcome can be "removed" by using the prediction model? A harder question is to judge how much "variance explained" is necessary or desirable in order for a prediction model to be useful. This can only be answered in the context of the specific research question. A range of more specific tools is applicable in the common context of predicting binary outcomes.[3] Overall predictive performance can be seen to be a combination of "discrimination" and "calibration". _Discrimination_ measures how well the predicted probabilities distinguish between those who experience the outcome and those who do not, depending on the choice of threshold value used to convert predicted probabilities into binary predictions. _Calibration_ measures how well the predicted distribution matches the observed outcome distribution. Again, context-specific information such as the relative costs and benefits of false positive and false negative predictions are important in the practical evaluation and implementation of prediction models. Furthermore, it should be clear that using the same data for model development and for evaluation of predictive ability (often referred to as validation) will in general lead to over-optimistic performance measures. It is therefore important to use sample-splitting methods such as cross-validation for so-called internal validation, which assesses performance within the same dataset that was used to develop the model. 
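These ideas are naturally demonstrated with standard machine-learning tooling. The following sketch (using `scikit-learn`, with simulated data and settings of our own choosing) contrasts "apparent" discrimination, evaluated on the development data, with internally validated discrimination from cross-validation:

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n, p = 300, 10

# Simulated predictors; only the first two carry any signal.
X = rng.normal(size=(n, p))
prob = expit(-0.5 + 0.8 * X[:, 0] - 0.6 * X[:, 1])
y = rng.binomial(1, prob)

model = LogisticRegression(max_iter=1000)

# "Apparent" AUC: discrimination assessed on the data used for fitting.
apparent = roc_auc_score(y, model.fit(X, y).predict_proba(X)[:, 1])

# Internally validated AUC: 10-fold cross-validation, so each prediction
# comes from a model that never saw that observation.
cv_pred = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
validated = roc_auc_score(y, cv_pred)

print(apparent, validated)  # the apparent AUC is typically optimistic
```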
Ideally there should also be a plan for _external_ validation, an assessment of how well the prediction model performs in similar but not identical contexts (whether defined temporally, geographically, or by other aspects of the population definition).

A key feature of the use of standard regression methods for prediction is that the estimated regression coefficients are rarely if ever useful, beyond determining the prediction algorithm. It is sometimes suggested that the "importance" of an individual predictor can be gauged by examining the size of its regression coefficient, but such interpretation again begs the question of "importance for what purpose?". If the purpose is prediction, then it is unclear how the size of the coefficient indicates its importance; for that we would need to examine predictive performance measures with and without that predictor.

#### Causal purpose

Hernan and others have argued persuasively that a majority of epidemiological and clinical research studies have a causal purpose - they seek to answer a causal or "What if..." question.[15, 30, 31] Such questions are ideally answered by experimentation, in which treatments or exposures are assigned by the researcher, but of course this is not always possible. A vast body of literature in epidemiology, social science and elsewhere shows that observational studies can also be used to address such questions, subject to inevitable limitations. And indeed a vast majority of published literature in these fields uses regression modelling to address such questions. How are regression models useful for causal inference purposes? Clearly there is not enough space here to provide a systematic account of causal inference, especially given the various flavours that have developed and are still debated, but we believe the following outline may be useful. It has formed the basis of successful short courses that we have delivered to health researchers and biostatisticians.

The randomised experiment or trial provides a powerful paradigm for understanding causal questions and the assumptions required to answer them. When experimental control of treatment allocation is not possible (as in the vast majority of research based on observational data), causal questions can still be framed in terms of the hypothetical "target trial" that would provide an answer if it were possible to conduct.[14, 32] This powerful paradigm can be used to provide a formal definition of a "causal effect", as a contrast (difference or ratio) between average outcomes that _would be observed_ in a population with and without the treatment or intervention of interest. Beginning with this definition one must then consider the assumptions under which a proposed non-randomised study (or indeed an "imperfect" randomised trial subject to non-adherence etc) may be able to provide a valid estimate of the causal effect of interest. Once we are clear on the essential concepts, the resulting "roadmap" (box) provides a framework within which the potential role of regression methods to address causal questions may be defined.

**Roadmap to causal inference**

1. Define the estimand or causal effect of interest, best achieved by description of the hypothetical perfect "target trial" that would provide the value of this quantity.
2. Describe how one could emulate the target trial using the study at hand, delineating assumptions that can plausibly be made, so as to minimise the risk of bias posed by departures of the emulation from the target trial.
3. Plan an analysis according to the stated assumptions, potentially using regression methods.

The suggested roadmap highlights the importance of beginning with absolute clarity about the estimand or target parameter, which essentially defines the research question we are seeking to answer, before thinking about techniques, including regression models, for analysing the data. Considerations in step 1 of the roadmap include defining the population of interest (by way of eligibility criteria), the treatment or interventions to compare and the primary outcome measure, including the summary effect measure that constitutes the causal effect (e.g. the difference in means for a continuous outcome, the risk ratio for a binary outcome, etc). In step 2, the delineation of assumptions that are necessary to ensure _identifiability_, i.e. that the target effect of interest can be appropriately emulated or, more formally, consistently estimated from the observable data, is critical. At least three assumptions are needed, so-called consistency, positivity and exchangeability (given confounders),[33] as well as possibly assumptions about missing data and measurement error.[34] This step can be facilitated by using a causal diagram or directed acyclic graph (DAG) to depict relationships between the analysis variables and other factors that may be relevant, for instance because of their role either as confounders or as so-called colliders, giving rise to selection bias.[33]

Of central importance to the role of regression methods, the delineation of assumptions should produce a list of variables that need to be _adjusted for_ in order to remove confounding biases that would be present in a simple comparison between those who were exposed and those who were not. Statistical adjustment for confounders and other sources of bias is, however, a complex concept. Regression provides a specific approach to adjustment that corresponds to _conditioning on_ the values of the adjustment variables, and then assuming that the causal effect of interest is constant within all subsets of the population defined by combinations of the adjustment variables.

In teaching these ideas, it is helpful to begin by pointing out that the simplest possible regression model (1), with \(X\) a binary indicator for treatment or exposure, provides a parametric representation of the causal effect of interest (as \(\beta_{1}\)) in an actual randomised trial with perfect follow-up and adherence. Inference may be performed by fitting this regression in statistical packages, just as we outlined above in the context of a descriptive comparison of two sub-populations. In an observational study, to address confounding bias we then extend the regression specification to include confounding adjustment variables as well as the treatment indicator. Beginning for pedagogical purposes with the case of a continuous outcome \(Y\) in which the difference in means is the causal effect measure of interest and there is a single confounding variable \(Z\), a possible regression model is:

\[E(Y|X,Z)=\beta_{0}+\beta_{1}X+\gamma Z. \tag{5}\]
_If_ this model provides a reasonable representation of how the outcome actually varies across the population with changing values of the treatment or exposure variable and the covariate (in particular, there is no treatment-covariate interaction), _and_ the single covariate is assumed (from subject-matter knowledge) to be the only source of confounding (or other) bias, along with the other causal assumptions of consistency and positivity, _then_ the coefficient of the treatment variable in this model, \(\beta_{1}\), can be seen to equal the population causal effect of interest. So, fitting this regression model provides an avenue for estimating the causal effect, under the specified assumptions.

Pursuing the details a little further, if the single confounder \(Z\) in (5) is dichotomous, the model simply assumes that the treatment effect within each of the subpopulations represented by the values of \(Z\) is the same for both values of \(Z\). If the confounder \(Z\) is continuous, there is not only the same assumption of constant treatment effect irrespective of \(Z\), but also the default assumption of a common, linear relationship with the outcome for each value of treatment. The latter parametric assumption may or may not provide a good approximation to the data. In teaching these ideas, mention can be made of ways in which the linear assumption can be checked, although it is important not to suggest that it can be confirmed or rejected from the data (especially in small sample sizes).

The regression formulation readily extends to a larger adjustment set (multiple confounders) and adapts to handling other target estimands. For multiple confounders, the outcome regression specification (5) for estimating a causal mean difference extends to:

\[E(Y|X,\mathbf{Z})=\beta_{0}+\beta_{1}X+\gamma_{1}Z_{1}+\cdots+\gamma_{k}Z_{k} \tag{6}\]

where initially \(\mathbf{Z}\) might represent a vector of \(k\) independent covariates, but more generally may be developed as a specification involving fewer than \(k\) covariates but also including various "non-linear" terms such as polynomial functions to represent curved relationships and product terms to represent interaction effects. It should be emphasised that full adjustment for confounding will only be achieved when the adjustment component of (6) is sufficiently rich. Note that with this model we are assuming that the causal effect of interest is constant across all the confounder substrata defined by every combination of values of \(Z_{1},...,Z_{k}\). Emphasising the causal constant-effect assumption and the parametric assumptions encoded in the specification of the confounder adjustments provides an opportunity to mention that there are more general approaches to causal effect estimation that require fewer assumptions, such as g-computation and inverse-probability weighting, and the more recent doubly robust methods.[33; 35]

Finally, there is the important issue of outcome regression for estimating target estimands for which (6) does not provide a good model. When the outcome is binary, attention often focuses on ratio effect measures such as the risk ratio or odds ratio. It is instructive to demonstrate how standard approaches to these target quantities extend in a simple way from (5) by introducing an appropriate link function. For instance, the risk ratio can be studied under similar assumptions as above, by considering the following modification of (5) obtained by taking logs:

\[\log E(Y|X,Z)=\beta_{0}+\beta_{1}X+\gamma Z. \tag{7}\]

The log link recovers the causal effect of interest here, the risk ratio, which is represented by \(\exp\left(\beta_{0}+\beta_{1}\times 1+\gamma Z\right)/\exp\left(\beta_{0}+\beta_{1}\times 0+\gamma Z\right)=\mathrm{e}^{\beta_{1}}\), highlighting the now-familiar assumption of a constant causal effect for every value of \(Z\). Expression (7) immediately extends to a multiple-confounder version analogous to (6). For practical implementation, we face the challenge of appropriate estimation methods for such models, but these fall into the broad category of "generalised linear models", for which estimation theory and computational tools abound.
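For practical illustration, a sketch of fitting model (7) and its logistic counterpart might look as follows (a simulated example of our own, with a single binary confounder \(Z\); as discussed below, log-link binomial fits can fail to converge in real data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 2000

# Simulated confounded exposure: Z affects both exposure X and outcome Y.
z = rng.binomial(1, 0.5, size=n)
x = rng.binomial(1, 0.3 + 0.4 * z)
risk = 0.05 * np.exp(0.7 * x + 0.5 * z)  # true causal risk ratio exp(0.7)
y = rng.binomial(1, risk)

design = sm.add_constant(np.column_stack([x, z]))

# Model (7): log-link binomial regression; exp(beta_1) estimates the
# confounder-adjusted risk ratio, assuming it is constant across Z.
log_fit = sm.GLM(y, design,
                 family=sm.families.Binomial(
                     link=sm.families.links.Log())).fit()
print(np.exp(log_fit.params[1]))  # adjusted risk ratio

# Logistic regression: exp(beta_1) instead estimates an adjusted odds ratio.
logit_fit = sm.GLM(y, design, family=sm.families.Binomial()).fit()
print(np.exp(logit_fit.params[1]))  # adjusted odds ratio
```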
Of note, however, the log-link binary outcome regression has been used much less often in practice than logistic regression, which in the causal context provides an approach to estimating the odds ratio exposure effect (using a similar development to that outlined above for the risk ratio).

Two major areas of discussion for the use of regression in causal inference are opened here. First, why choose one effect measure over another? Students of epidemiology traditionally consider this question in rather more detail than students of biostatistics, but many of the issues involved benefit from a strong grasp of mathematical and statistical issues (e.g. effect heterogeneity [36] and collapsibility across strata [37]). Second, if one is clear about the preference for (say) a risk ratio, or indeed risk difference (over odds ratio), as the target effect, what practical problems may be encountered? A key one is the likelihood of numerical difficulties arising in the fitting of models with standard software. Handy tools for working around these are available [38], while more satisfying alternatives may lie in the use of the more general approaches to causal inference mentioned above.

## Discussion

There is no doubt that regression analysis is by far the most important general method in the applied statistician's toolkit, so the importance of establishing strong foundations for its appropriate application seems undeniable. We began this article by highlighting the problem of lack of clarity of purpose in the application of regression methods. Too rarely do researchers or their statistician collaborators explicitly clarify their research question as either descriptive, predictive or causal, and too often do we see examples of multivariable regression being used without a clear purpose at all, with the vague aim of identifying "important risk factors" for an outcome of interest. Even when the purpose is clearly identifiable as one of the three types of question, we observe many examples of misuse of regression. In descriptive questions, a reflexive reliance on regression with insufficient awareness of model assumptions often drives the way in which the data are described, with covariate adjustments that may be unwarranted or unhelpful. In prediction questions, analysts will erroneously ascribe meaning to regression coefficients and related sample-dependent inferences. In causal questions, we often observe a lack of clarity of a range of concepts, beginning with the key one of the target estimand, but also including confounding and the meaning of adjustment (often shrouded in a hazy concept of "isolating independent effects"), resulting in issues such as the Table 2 fallacy.
We have argued that this abundance of problems with the use of regression analysis stems from the way in which these methods have been traditionally taught, both to professional statisticians and to non-specialists. Students learn very early that the role of the statistical analyst is to "fit the right model", from which interpretation and conclusions will flow. Much emphasis is placed on "goodness of fit" and on methods for model checking, but this tends to reinforce the "true model myth", the notion that identifying the best model is the primary task. Unless, however, one believes that a "true" model that can be represented by a regression model underlies every problem (which is implausible in health and medical research), it is scarcely surprising that major problems of usage and interpretation arise.

The concerns raised here relate to broader themes in the history of applied statistics. George Box's famous aphorism "all models are wrong, but some models are useful" may be traced to a 1976 paper with the grand title "Science and Statistics" [8], which used the career of R.A. Fisher as an exemplar of how statistics contributes to science by being fully engaged in the substance of scientific enquiry. Fisher worked in the natural sciences, where experiments and their design are central and where, arguably, models may help to reduce complexity and reveal underlying structure, in an essentially descriptive vein. Paradoxically, as the power of statistical ideas became more evident, largely because of Fisher's leadership in the mid-20\({}^{\text{th}}\) century, a body of techniques started to become codified as widely useful, but in a general sense that lost connection with the specific types of scientific enquiry for which they were relevant. A widely cited reflection on these issues by Leo Breiman (2001)[39] criticised what he identified as a statistical culture that assumed that "the data are generated by a given stochastic data model". In essence, Breiman focused on prediction problems and identified many of the same mistakes and misconceptions that we have described above in this context. More recently, Shmueli (2010)[40] recognised the three types of empirical question and sought to distinguish between statistical modelling for "explaining" and for "predicting", but failed to ground her discussion of causality in clear definitions of target estimands.

Unfortunately, causal inference has until recently been something of a taboo topic in (bio)statistics. The truism that "correlation does not imply causation" has long been emphasised and has no doubt led statisticians to shy away from the notion that any useful statements about causation can be made from non-experimental data. However, as outlined here, recent work has highlighted how a formal theory based on "potential" or "counterfactual" outcomes, which can be further distilled into the target trial concept, provides a way around the traditional taboo.[41; 42; 30] Unfortunately, the teaching and practice of mainstream biostatistical methods have largely failed to keep up with these developments, despite considerable advances in the world of epidemiology, in which biostatisticians have played a major role.[43] We referred earlier to a classic textbook on logistic regression but other texts display versions of the same problems, to varying extents.
A relatively recent and widely used book providing a broad introduction to regression methods in biostatistics states its purpose as being to describe "a family of statistical techniques that we call _multipredictor_ regression modeling" [44], thus immediately preferencing the techniques ahead of potential purposes. The book goes on to distinguish between different types of "application", including prediction and "isolating the effect of a single predictor" (pseudo-causal), but fails to explain the full implications of these different applications for issues of model specification and interpretation. A less traditional text [45] emphasises the importance of purpose and the tentativeness of model specifications but remains ambiguous about where regression models come from: in particular, whether a model specification may precede a purpose.

Against this background, we have proposed a substantial rethinking of the way regression analysis is traditionally conceived in statistics. Essentially, we emphasise that a regression model provides a simplification of reality that must be specifically constructed for the purpose at hand. That purpose may be descriptive, predictive or causal, and the approach to developing and interpreting regression models that are useful within each of these contexts differs. The proposed framework emphasises the commonality between regression methods used for different data types or (in more helpful terms) target parameters; thus all of the usual catalogue of linear regression, logistic regression and other forms of generalised linear model should be seen as relatively minor variations of each other, with the development and application of each driven primarily by the purpose or question at hand rather than by the scale of the outcome variable.

We note connections between the issues discussed here and other persistent problems in the mainstream use of statistical methods, such as the ubiquitous notion that the essence of statistical analysis is to test (point) hypotheses, resulting in dichotomous declarations of differences found (or not), irrespective of the broader underlying purpose of the research. Many of the papers examined in our brief review exhibit this issue, often even in the descriptive presentation of study groups (the typical "Table 1"). In this light, our suggested purpose-focused approach to the teaching of regression methods could well be extended to the teaching of statistical methods more generally. Introductory courses emphasise the distinction between parameter and estimate but spend far too little time on defining the parameter, before a statistical model ("assume the outcome is normally distributed...") is inevitably introduced out of thin air. Parameters need to be defined from research questions, not by way of conventional statistical models. There is invariably no research question underlying the typical Table 1, so there is no role for statistical inference there. The suggested approach is radical, in the sense of requiring significant change to long-entrenched practices and course curricula, but unless something like this is attempted it is difficult to see significant improvement in the widespread poor practices that we have described.
Additionally, unless those at the coalface of biostatistical teaching and practice join the challenge of reform, we are likely to see a growing gap emerge between the rapid progress of new ideas and new methods for causal inference and prediction modelling, on the one hand, and the mass production of poorly conceived multivariable regression analyses in medical journals, on the other. In summary, we believe it is time the (bio)statistics profession paid more serious attention to the ways in which key statistical methods are used and abused in practice. Reform is essential to ensure our continuing relevance as engaged collaborators in the pursuit of scientific knowledge.

## Acknowledgements

This paper has arisen from the President's Invited Speaker lecture delivered by the first author at the International Society for Clinical Biostatistics (ISCB43) conference, Newcastle, U.K., 22 August 2022. The Murdoch Children's Research Institute is supported by the Victorian Government's Operational Infrastructure Support Program. This work was supported by funding from the Australian National Health and Medical Research Council (Investigator Grant 2009572 to MMB).
2306.01782
Capacity Constrained Influence Maximization in Social Networks
Influence maximization (IM) aims to identify a small number of influential individuals to maximize the information spread and finds applications in various fields. It was first introduced in the context of viral marketing, where a company pays a few influencers to promote the product. However, apart from the cost factor, the capacity of individuals to consume content poses challenges for implementing IM in real-world scenarios. For example, players on online gaming platforms can only interact with a limited number of friends. In addition, we observe that in these scenarios, (i) the initial adopters of promotion are likely to be the friends of influencers rather than the influencers themselves, and (ii) existing IM solutions produce sub-par results with high computational demands. Motivated by these observations, we propose a new IM variant called capacity constrained influence maximization (CIM), which aims to select a limited number of influential friends for each initial adopter such that the promotion can reach more users. To solve CIM effectively, we design two greedy algorithms, MG-Greedy and RR-Greedy, ensuring the $1/2$-approximation ratio. To improve the efficiency, we devise the scalable implementation named RR-OPIM+ with $(1/2-\epsilon)$-approximation and near-linear running time. We extensively evaluate the performance of 9 approaches on 6 real-world networks, and our solutions outperform all competitors in terms of result quality and running time. Additionally, we deploy RR-OPIM+ to online game scenarios, which improves the baseline considerably.
Shiqi Zhang, Yiqian Huang, Jiachen Sun, Wenqing Lin, Xiaokui Xiao, Bo Tang
2023-05-31T07:37:21Z
http://arxiv.org/abs/2306.01782v1
# Capacity Constrained Influence Maximization in Social Networks

###### Abstract.

Influence maximization (IM) aims to identify a small number of influential individuals to maximize the information spread and finds applications in various fields. It was first introduced in the context of viral marketing, where a company pays a few influencers to promote the product. However, apart from the cost factor, the _capacity_ of individuals to consume content poses challenges for implementing IM in real-world scenarios. For example, players on online gaming platforms can only interact with a limited number of friends. In addition, we observe that in these scenarios, (i) the initial adopters of promotion are likely to be the friends of influencers rather than the influencers themselves, and (ii) existing IM solutions produce sub-par results with high computational demands. Motivated by these observations, we propose a new IM variant called _capacity constrained influence maximization_ (CIM), which aims to select a limited number of influential friends for each initial adopter such that the promotion can reach more users. To solve CIM effectively, we design two greedy algorithms, MG-Greedy and RR-Greedy, ensuring the \(1/2\)-approximation ratio. To improve the efficiency, we devise the scalable implementation named RR-OPIM+ with \((1/2-\epsilon)\)-approximation and near-linear running time. We extensively evaluate the performance of 9 approaches on 6 real-world networks, and our solutions outperform all competitors in terms of result quality and running time. Additionally, we deploy RR-OPIM+ to online game scenarios, which improves the baseline considerably.
the APs are more likely to be _the friends of influencers rather than the influencers themselves_, highlighting the importance of choosing influential friends for APs. In contrast, IM and most of its variants assume that each selected influencer unconditionally adopts the promotion from the merchant without relying on friends, which contradicts this insight. Moreover, it is rather challenging to utilize IM and its corresponding solutions (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2019), since independently selecting friends for each AP with an IM solver can incur immense computational overhead, and the result quality remains unclear.

To this end, we propose a new IM variant called the _capacity constrained influence maximization_ (CIM) problem. Given a social network \(G\), a diffusion model \(\mathsf{M}\), a set \(A\) of APs and a constant \(k\), CIM aims to find \(k\) influential friends (seeds) for each user in \(A\), such that the number of influenced users starting from all distinct seeds is maximized under \(\mathsf{M}\). To solve CIM, we design a vanilla solution MG-Greedy, which employs a greedy strategy to select the best feasible seed from all neighbors of \(A\) and provides a \(1/2\)-approximation guarantee. In addition, we propose the solution RR-Greedy to select seeds for each user of \(A\) in a round-robin manner, which improves the time complexity of MG-Greedy by a factor of \(|A|\) and ensures at least \(1/2\)-approximation. To improve the efficiency, we further propose the scalable implementation RR-OPIM+, which shares the same framework as the state-of-the-art IM solution OPIM-C (Zhang et al., 2018) but is carefully redesigned to ensure correctness for CIM. Most notably, RR-OPIM+ achieves \((1/2-\epsilon)\)-approximation in near-linear running time w.r.t. the network scale.
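To fix ideas, the following is a minimal Python sketch of how a CIM instance and its feasibility constraint might be encoded; all names here (`Graph`, `candidates`, `is_feasible`) are our own illustrative choices, not part of the paper.

```python
from collections import defaultdict

class Graph:
    """Directed graph as adjacency sets; out[u] is the out-neighbor set N_u."""
    def __init__(self):
        self.out = defaultdict(set)

    def add_edge(self, u, v):
        self.out[u].add(v)

def candidates(G, A):
    """C_u = N_u \\ A: the passive out-neighbors of each AP u."""
    return {u: G.out[u] - A for u in A}

def is_feasible(assignment, C, k):
    """assignment maps each AP u to its chosen seed set S_u. CIM requires
    S_u to be a subset of C_u with |S_u| <= k (a partition matroid over
    AP-seed pairs)."""
    return all(S <= C[u] and len(S) <= k for u, S in assignment.items())

# Toy instance: APs 1 and 4 share the passive friend 3 (an overlapping seed).
G = Graph()
for u, v in [(1, 2), (1, 3), (4, 3), (4, 5)]:
    G.add_edge(u, v)
A = {1, 4}
C = candidates(G, A)
print(is_feasible({1: {2}, 4: {3, 5}}, C, k=2))  # True: within capacity
```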
In experiments, we first provide an empirical configuration for CIM based on incentive propagation events. Subsequently, we extensively evaluate the performance of 9 approaches on 6 real-world networks with up to 3 billion relationships. Notably, our proposals outperform all competitors by up to 39% in terms of result quality. Besides, RR-OPIM+ speeds up the greedy algorithms by at least 4 orders of magnitude. In addition, we deploy our solution RR-OPIM+ to an online gaming scenario, where it improves the baseline by up to 5.39% in the corresponding evaluation metric. To summarize, we make the following contributions in this work:

* We conduct an empirical study to verify the difference between in-game incentive propagation and IM. Motivated by these observations, we propose a new IM variant called CIM. (Section 3)
* For effectiveness, we propose two CIM solutions MG-Greedy and RR-Greedy with approximation guarantees. (Section 4)
* For efficiency, we provide a scalable implementation with rigorous theoretical analysis for these greedy algorithms. (Section 5)
* We derive detailed empirical settings for CIM and conduct experiments to show the superiority of our proposals. (Section 7)
* We deploy the proposal to an in-game incentive propagation event, achieving considerable improvement. (Section 8)

## 2. Preliminaries

We abstract a social network as a graph \(G=(V,E)\), where \(V\) is a set of \(n\) nodes (representing users) and \(E\) is a set of \(m\) edges (representing relationships). We assume that \(G\) is a directed graph and each edge \(e_{u,v}\in E\) indicates that \(v\) is a follower of and can be influenced by \(u\). We call \(u\) (resp. \(v\)) the in-neighbor (resp. out-neighbor) of \(v\) (resp. \(u\)). Furthermore, we use \(N_{u}\) to denote the set of out-neighbors of \(u\). For an undirected graph, we replace each undirected edge \(e_{u,v}\) with two directed ones in opposing directions, i.e., \(e_{u,v}\) and \(e_{v,u}\). In the sequel, we elaborate on the background of influence maximization (IM), followed by in-game incentive propagation scenarios.

### Diffusion Models

This work focuses on two well-accepted diffusion models, named Independent Cascade (IC) and Linear Threshold (LT) (Kempe et al., 2003).

### Influence Maximization

To avoid computing \(\sigma_{G,\mathsf{M}}(v|S)\) for \(O(n)\) nodes in each iteration, Leskovec et al. (Leskovec et al., 2017) exploit the submodularity and propose a practical implementation called CELF, which skips \(v\) while selecting the \(j\)-th seed if \(v\)'s marginal gain is sufficiently small in the iteration prior to \(j\).

**RR set sampling.** Borgs et al. (Borgs et al., 2017) propose to estimate the spread by sampling random RR sets, defined as follows.

Definition 2.1 (RR Set). Given a graph \(G\) and a model \(\mathsf{M}\), a random RR set \(R_{G,\mathsf{M}}\) is a set of nodes, generated by (i) first selecting a node \(v\) at random, (ii) then sampling a subgraph \(g\) from \(G\) in terms of \(\mathsf{M}\), and (iii) finally preserving the nodes that can reach \(v\) in \(g\). We denote by \(\mathcal{R}_{G,\mathsf{M}}\) a set of random RR sets.

In Definition 2.1, the subgraph \(g\) is sampled based on the influence process of \(\mathsf{M}\). For IC, \(g\) is induced by removing each edge \(e_{u,v}\) in \(G\) with probability \(1-p_{u,v}\). Borgs et al. (Borgs et al., 2017) prove that \(\sigma_{G,\mathsf{M}}(S)=n\cdot\Pr[S\cap R_{G,\mathsf{M}}\neq\emptyset]\). In other words, \(n\cdot\frac{\Lambda_{\mathcal{R}_{G,\mathsf{M}}}(S)}{\theta}\) is an unbiased estimator of \(\sigma_{G,\mathsf{M}}(S)\), where \(\mathcal{R}_{G,\mathsf{M}}\) is a set of \(\theta\) random RR sets and \(\Lambda_{\mathcal{R}_{G,\mathsf{M}}}(S)\) is the coverage of \(S\) in \(\mathcal{R}_{G,\mathsf{M}}\), i.e., the number of RR sets \(R_{G,\mathsf{M}}\in\mathcal{R}_{G,\mathsf{M}}\) satisfying \(S\cap R_{G,\mathsf{M}}\neq\emptyset\).
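A minimal Python sketch of Definition 2.1 under IC and of the coverage-based spread estimator; the graph encoding and function names are our own illustrative choices.

```python
import random

def sample_rr_set(n, in_neighbors):
    """One random RR set under IC (Definition 2.1): pick a target v uniformly,
    then traverse edges *backwards*, keeping each incoming edge e_{u,w}
    alive with its probability p_uw. in_neighbors[w] = list of (u, p_uw)."""
    v = random.randrange(n)
    rr, frontier = {v}, [v]
    while frontier:
        w = frontier.pop()
        for u, p_uw in in_neighbors.get(w, []):
            if u not in rr and random.random() < p_uw:
                rr.add(u)
                frontier.append(u)
    return rr

def estimate_spread(S, rr_sets, n):
    """sigma(S) ~ n * Lambda(S) / theta, where Lambda(S) counts RR sets hit by S."""
    covered = sum(1 for R in rr_sets if S & R)
    return n * covered / len(rr_sets)

# Toy usage: a 4-node path 0 -> 1 -> 2 -> 3 with probability 0.5 per edge.
in_nbrs = {1: [(0, 0.5)], 2: [(1, 0.5)], 3: [(2, 0.5)]}
R = [sample_rr_set(4, in_nbrs) for _ in range(10000)]
print(estimate_spread({0}, R, 4))  # ~ 1 + 0.5 + 0.25 + 0.125 = 1.875
```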
By this connection, Borgs et al. (Borgs et al., 2017) sample a sufficient number of random RR sets as \(\mathcal{R}_{G,\mathsf{M}}\), and employ the greedy framework in (Leskovec et al., 2017) to iteratively select the next node \(v\) with the largest _marginal coverage_
\[\Lambda_{\mathcal{R}_{G,\mathsf{M}}}(v|S)=\Lambda_{\mathcal{R}_{G,\mathsf{M}}}(S\cup\{v\})-\Lambda_{\mathcal{R}_{G,\mathsf{M}}}(S).\]
The related solutions (Gillet and Barabasi, 2007; Borgs et al., 2017; Borgs et al., 2017; Borgs et al., 2017) follow the greedy strategy of Borgs et al. (Borgs et al., 2017) and improve its efficiency by reducing \(\theta\) while ensuring the same approximation ratio. On this front, OPIM-C (Gillet and Barabasi, 2007) is the state of the art and has been applied in subsequent IM solutions and variants (Gillet and Barabasi, 2007; Borgs et al., 2017; Borgs et al., 2017). In addition, one line of solutions (Borgs et al., 2017; Borgs et al., 2017) estimates the spread by sampling graph instances, and another line either leverages centrality scores (Gillet and Barabasi, 2007; Borgs et al., 2017) or simplifies the diffusion model (Bordes et al., 2017) to generate seeds heuristically. We refer interested readers to (Kendal et al., 2017) for details.

### In-Game Incentive Propagation

**Event procedure.** The online gaming platform designs the incentive propagation event to boost user engagement; its procedure is as follows. Given a social network \(G=(V,E)\), the service provider first selects a set of users, named _active participants_ (APs), who are more likely to engage in this event. We denote this set of APs as \(A\) with \(|A|=d\ll n\). We say a user \(v\) is a _passive participant_ (PP) if \(v\in V\backslash A\). After \(A\) is chosen, the service provider then distributes this event to \(A\), including the mission details, the incentive \(T\) (e.g., extra credits), and a list of PP friends (named _passive seeds_). \(T\) is initially possessed by \(A\) and is shared via the following two steps, where we say a user is _activated_ if attaining \(T\).

* _Seed activation._ Each AP \(u\) is an initial active user and can invite the recommended seed \(v\), who becomes active automatically after playing with one of its AP inviters for the first time.
* _Daily contamination._ Starting from the activated seeds, \(T\) can be recursively shared from an active PP to an inactive PP friend if they play together for the first time during the event.

Notice that the provider can recommend at most \(k\) passive seeds for each AP, since a user has a large number of friends (from gaming and messaging platforms) but only a limited capacity to play with them. The passive seed set for each AP is defined as follows.

Definition 2.2 (Passive Seed Set). Given a graph \(G\), an AP set \(A\), an AP \(u\), and a constant \(k\), let \(C_{u}=N_{u}\backslash A\) be the set of \(u\)'s passive out-neighbors. The passive seed set of \(u\) is \(S_{u}\subseteq C_{u}\) with \(|S_{u}|\leq k\).
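Stepping back to Section 2.2 for a moment, the marginal-coverage greedy described there can be sketched as below (our own naming, not the paper's code); the local framework discussed next essentially invokes such a routine once per AP, while the CIM algorithms of Sections 4-5 replace the unconstrained top-\(k\) selection with per-AP budgets.

```python
def greedy_max_coverage(rr_sets, k, candidates):
    """Standard RR-set greedy for plain IM: repeatedly add the node with the
    largest marginal coverage, i.e., the one hitting the most RR sets that
    are not yet covered by the current seed set S. candidates is a set."""
    hits = {}  # node -> indices of RR sets containing it
    for i, R in enumerate(rr_sets):
        for v in R:
            hits.setdefault(v, []).append(i)
    S, covered = set(), [False] * len(rr_sets)
    for _ in range(k):
        best, best_gain = None, -1
        for v in candidates - S:
            gain = sum(1 for i in hits.get(v, []) if not covered[i])
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            break
        S.add(best)
        for i in hits.get(best, []):
            covered[i] = True
    return S
```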
**Existing and possible selection approaches.** The linchpin of the incentive propagation event is selecting \(k\) seeds for each AP. Existing strategies can be generalized into a local framework, which independently ranks the passive friends of each AP \(u\in A\) in descending order of a heuristic score, e.g., degree, PageRank, the number of historical interactions, and so on (Gillet and Barabasi, 2007; Borgs et al., 2017; Borgs et al., 2017; Borgs et al., 2017; Borgs et al., 2017), and selects the top \(k\) friends with the highest scores as seeds. To improve the number of engaged users, the idea of IM could be utilized by invoking an IM solver as mentioned in Section 2.2 and selecting \(k\) seeds from \(C_{u}\) for each AP \(u\). However, this local framework yields numerous _overlapping seeds_, each of which is assigned to more than one AP, compromising the resulting spread. In the meantime, although the IM solver can output an approximate seed set w.r.t. \(C_{u}\) for each AP \(u\), the quality of the overall seed set is unclear.

## 3. The CIM Problem

In this part, we first conduct an empirical study to (i) clarify the difference between IM and the in-game incentive propagation scenario (Observation 1), and (ii) verify the drawback of the local framework (Observation 2). Motivated by these observations, we then formulate capacity constrained influence maximization (CIM).

### Motivating Insights

We collect user logs from an incentive propagation event of a Tencent role-playing game, which follows the procedure in Section 2.3, and call this dataset _TXG_. In particular, the service provider selects the monthly active users as APs, each of whom is randomly assigned one of the following strategies: PageRank, degree, or the number of historical interactions, to obtain at most \(k=40\) recommended friends. _TXG_ consists of (i) a social network \(G\) with 243.4 thousand users and 5.9 million undirected friendships, (ii) 7.6 thousand APs and 3.9 thousand seeds involved in seed activation, and (iii) 18.5 thousand PPs activated in daily contamination.

In the first set of observations, we explore whether influencers tend to participate in the event through the service provider or via invitations from friends. To reduce bias during the exploration of user influence, we excluded APs and their seeds recommended by the influence-based strategies (degree or PageRank). For each AP and seed, we define the one-hop spread as the number of seeds that played with each AP in seed activation and the number of PPs directly activated by each seed in daily contamination, respectively.

Figure 1. The distribution of one-hop spread of APs and their seeds and the number of in-neighbor APs of seeds on _TXG_.

Figure 1(a) reports the distribution of the one-hop spread of the remaining APs and their seeds, providing the insight below.

Observation 1 (Less-Influential APs). _The APs of in-game incentive propagation events are less influential than their seeds._

Specifically, the average one-hop spread of seeds is 7% larger than that of APs, and the fraction of passive friends with a one-hop spread larger than 50 is 4\(\times\) that of APs. To demonstrate the existence of overlapping seeds under existing strategies, Figure 1(b) reports the distribution of the number of in-neighbor APs of each seed and the distribution of the number of APs who have invited this seed. We call a seed an overlapping seed if it has more than one in-neighbor AP.
As shown in Figure 1(b), since seeds are independently recommended to each AP, 46.3% of seeds are overlapping seeds, which results in 22.1% of passive seeds being invited by more than one AP. However, the engagement likelihood of a seed is only weakly related to the frequency of being invited. Specifically, for seeds invited once, twice, and three times, the fraction of engaged passive seeds is 41.5%, 45.4%, and 42.6%, respectively. This leads to the second observation.

Observation 2 (Overlapping Seeds). _Due to the overlapped neighborhoods and the independent recommendation strategy, a passive seed can be invited by multiple APs; however, repeated invitations have only a slight impact on the engagement willingness of seeds._

To ensure the robustness of our findings, we expand our exploration to two additional in-game incentive propagation events, which confirm our previous observations. These observations can also find support from Epinions (Stein

**RR-Greedy.** Notice that MG-Greedy yields a 1/2-approximate output by iteratively selecting only one AP-seed pair after evaluating the marginal gain of \(O(|C|)\) pairs. To improve the result quality and reduce the number of spread-oracle invocations, we propose a greedy algorithm called RR-Greedy, which ensures an at least 1/2-approximate result. The main idea is to select an edge from the local candidate space \(C_{u}\) for each AP \(u\). Let \(L\subseteq A\) be the set of APs each of which has selected fewer than \(k\) seeds. As illustrated in Algorithm 2, at each iteration, RR-Greedy selects the edge \(e_{u,v}\) with the largest marginal gain \(\sigma(e_{u,v}|\mathcal{S})\) from \(C_{u}\) for each \(u\in L\) (Lines 3-4), where ties are settled arbitrarily. RR-Greedy adds the selected \(e_{u,v}\) to \(\mathcal{S}\), and removes \(u\) from \(L\) if it has enough seeds or no more candidates exist (Lines 5-6). RR-Greedy accesses the spread oracle \(O\left(k\cdot|C|\right)\) times, improving over MG-Greedy by a factor of \(O(d)\), and its correctness is as follows.

**Theorem 4.3**. _Given a graph \(G\), a model \(\mathsf{M}\), an AP set \(A\), and a constant \(k\), let \(\mathcal{S}^{*}\) be the optimal solution of CIM. The output \(\mathcal{S}\) of RR-Greedy satisfies \(\sigma(\mathcal{S})\geq\frac{1}{1+\gamma}\cdot\sigma(\mathcal{S}^{*})\), where \(\gamma\) is a constant in \([0,1]\) satisfying \(\sigma(e_{u,v}|\mathcal{T})\geq(1-\gamma)\cdot\sigma(e_{u,v}|\mathcal{S})\) for all \(\mathcal{S}\subseteq\mathcal{T}\) such that \(|(\mathcal{T}\backslash\mathcal{S})\cap C_{u}|\leq k\;\forall u\in A\), and all \(e_{u,v}\in C\backslash\mathcal{T}\)._

Hence, RR-Greedy is \(\frac{1}{1+\gamma}\)-approximate, which is superior to MG-Greedy. The constant \(\gamma\) in Theorem 4.3 is also known as the _curvature_ (Cumm and Stolle, 2009) and is further bounded by
\[\gamma\leq\gamma_{max}=1-\min_{e_{u,v}\in C}\frac{\sigma(C)-\sigma(C\backslash\{e_{u,v}\})}{\sigma(\{e_{u,v}\})}.\]
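To make the round-robin selection concrete, here is a coverage-based Python sketch of Algorithm 2, reusing the RR-set machinery sketched earlier; using the coverage \(\Lambda\) as the spread oracle anticipates Section 5, and all helper names are our own.

```python
def rr_greedy(A, C, k, rr_sets):
    """Round-robin greedy (Algorithm 2) with RR-set coverage as the spread
    oracle: while some AP u still has budget and candidates, give u the
    candidate seed with the largest marginal coverage (ties broken arbitrarily)."""
    hits = {}  # node -> indices of RR sets containing it
    for i, R in enumerate(rr_sets):
        for v in R:
            hits.setdefault(v, []).append(i)
    covered = [False] * len(rr_sets)
    chosen = {u: set() for u in A}   # S_u for each AP u
    seeds = set()                    # distinct seeds across all APs

    def gain(v):
        # A seed already selected (possibly for another AP) adds no new coverage.
        if v in seeds:
            return 0
        return sum(1 for i in hits.get(v, []) if not covered[i])

    live = set(A)
    while live:
        for u in list(live):
            pool = C[u] - chosen[u]
            if len(chosen[u]) >= k or not pool:
                live.discard(u)
                continue
            v = max(pool, key=gain)
            chosen[u].add(v)
            if v not in seeds:
                seeds.add(v)
                for i in hits.get(v, []):
                    covered[i] = True
    return chosen
```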
## 5. Scalable Implementations

To estimate the spread, we propose a scalable implementation, RR-OPIM+, for RR-Greedy, extending the state-of-the-art OPIM-C (Zhu et al., 2017). In the sequel, we introduce its main idea and clarify its differences from OPIM-C, followed by its implementation and analysis.

### Main Idea

Given a graph \(G\), an AP set \(A\) and the induced subgraph \(P\) with \(n_{P}\) nodes, let \(\mathcal{R}_{P,\mathsf{M}}\) be a set of random RR sets constructed from \(P\). Akin to Section 2.2, the connection between a random RR set \(R_{P,\mathsf{M}}\in\mathcal{R}_{P,\mathsf{M}}\) and the spread \(\sigma_{P,\mathsf{M}}(\cdot)\) is \(\sigma_{P,\mathsf{M}}(S)=n_{P}\cdot\Pr[S\cap R_{P,\mathsf{M}}\neq\emptyset]\). Hence, the objective of CIM can be pursued by finding the seed set \(S\) with the maximum coverage \(\Lambda_{\mathcal{R}_{P,\mathsf{M}}}(S)\) in \(\mathcal{R}_{P,\mathsf{M}}\), and the greedy solutions in Section 4 can be efficiently implemented by replacing the evaluation of \(\sigma_{P,\mathsf{M}}(v|S)\) with \(\Lambda_{\mathcal{R}_{P,\mathsf{M}}}(v|S)\). In analogy to \(\sigma_{P,\mathsf{M}}(\cdot)\), we omit the subscripts \(P\) and \(\mathsf{M}\) of \(\mathcal{R}_{P,\mathsf{M}}\) and \(\Lambda_{\mathcal{R}_{P,\mathsf{M}}}(\cdot)\) in what follows. Furthermore, the coverage \(\Lambda_{\mathcal{R}}(S)=\Lambda_{\mathcal{R}}(\mathcal{S})\) and the marginal coverage \(\Lambda_{\mathcal{R}}(v|S)=\Lambda_{\mathcal{R}}(e_{u,v}|\mathcal{S})\) in the edge notation.

The pseudocode of RR-OPIM+ is illustrated in Algorithm 3. Initially, RR-OPIM+ defines the constants \(\theta_{max}\) and \(\theta\) (Lines 1-2), representing the worst-case and the initial number of random RR sets, respectively, and then constructs two sets \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\), both containing \(\theta\) random RR sets (Line 3). After that, it runs in an iterative manner to verify whether the selected seed pairs satisfy the approximation guarantee using fewer than \(\theta_{max}\) random RR sets. At each iteration, it first invokes RR-Greedy using \(\Lambda_{\mathcal{R}_{1}}(\cdot)\) as the evaluation function and selects the set \(\mathcal{S}\) of AP-seed edges (Line 5). To verify whether the selected \(\mathcal{S}\) provides the desired approximation guarantee, it then computes the upper bound \(\sigma^{u}(\mathcal{S}^{*})\) of \(\sigma(\mathcal{S}^{*})\) (resp. the lower bound \(\sigma^{l}(\mathcal{S})\) of \(\sigma(\mathcal{S})\)) using \(\mathcal{R}_{1}\) (resp. \(\mathcal{R}_{2}\)) (Lines 6-7). RR-OPIM+ terminates early with the current \(\mathcal{S}\) if
\[\frac{\sigma(\mathcal{S})}{\sigma(\mathcal{S}^{*})}\geq\frac{\sigma^{l}(\mathcal{S})}{\sigma^{u}(\mathcal{S}^{*})}\geq\frac{1}{2}-\epsilon.\]
Otherwise, it doubles the sizes of \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) and continues (Lines 8-9).

```
Algorithm 2: RR-Greedy(G, A, k, σ)
1  S ← ∅; L ← A; i_u ← 0, ∀u ∈ A;
2  while L ≠ ∅ do
3      foreach u ∈ L do
4          e_{u,v} ← argmax_{e_{u,v'} ∈ C_u \ S} σ(e_{u,v'} | S);
5          S ← S ∪ {e_{u,v}}; i_u ← i_u + 1;
6          if i_u ≥ k or C_u \ S = ∅ then L ← L \ {u};
7  return S;
```

Although RR-OPIM+ and OPIM-C (Zhu et al., 2017) share the framework shown in Algorithm 3, three challenges remain when considering CIM and RR-Greedy: (i) a new \(\theta_{max}\) must be devised to ensure the approximation guarantee in the worst case; (ii) \(\sigma^{u}(\mathcal{S}^{*})\) and \(\sigma^{l}(\mathcal{S})\) must be recomputed to secure the approximation for any of the \(i_{max}\) possible early terminations; and (iii) the result returned under any of the above criteria must be correct with high probability. To address these issues, we implement the following three major modifications.
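A compact Python skeleton of the Algorithm 3 control flow as described above; `sample_rr`, `select`, `lower`, and `upper` are stand-ins for the paper's components (RR-set sampling, RR-Greedy on \(\mathcal{R}_{1}\), and the bounds of Lemma 5.2), so this shows only the doubling-and-check structure, not the exact constants.

```python
import math

def rr_opim_plus(sample_rr, select, lower, upper, theta0, theta_max, eps):
    """OPIM-C-style loop: select on R1, validate on R2, and double both
    RR-set pools until the certified ratio lower(S; R2) / upper(S; R1)
    reaches 1/2 - eps (bounds assumed positive)."""
    R1 = [sample_rr() for _ in range(theta0)]
    R2 = [sample_rr() for _ in range(theta0)]
    i_max = max(1, math.ceil(math.log2(theta_max / theta0)))
    for _ in range(i_max):
        S = select(R1)                     # RR-Greedy with coverage on R1
        if lower(S, R2) / upper(S, R1) >= 0.5 - eps:
            return S                       # early termination (Line 8)
        R1 += [sample_rr() for _ in range(len(R1))]  # double both pools
        R2 += [sample_rr() for _ in range(len(R2))]
    return select(R1)                      # worst case: theta_max reached
```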
### Detailed Modifications

**Computing \(\theta_{max}\).** As shown in Line 1 of Algorithm 3, RR-OPIM+ first generates a random seed set \(\mathcal{S}\) by assigning each candidate PP to an arbitrary AP friend while respecting the partition matroid \(\mathcal{I}\). It then records the number of distinct seeds in \(\mathcal{S}\) as \(\chi\), which is a lower bound of the optimal spread \(\sigma(\mathcal{S}^{*})\) and is required for deriving \(\theta_{max}\). The following lemma provides the setting of \(\theta_{max}\), ensuring the correctness of RR-OPIM+ when \(i=i_{max}\).

**Lemma 5.1**. _Let \(\mathcal{R}\) be a set of random RR sets, \(\chi\) be defined as in Line 1, and \(\mathcal{S}\) be the result obtained by Line 8 of Algorithm 3. For fixed \(\epsilon\) and \(\delta\), if \(|\mathcal{R}|\geq\theta_{max}\) and_
\[\theta_{max}=\frac{2n_{P}\left(\frac{1}{2}\sqrt{\ln\frac{3}{\delta}}+\sqrt{\frac{1}{2}\left(\ln\left(\prod_{u\in A}\binom{|C_{u}|}{k}\right)+\ln\frac{3}{\delta}\right)}\right)^{2}}{\epsilon^{2}\cdot\chi}, \tag{1}\]
_then \(\mathcal{S}\) is \((1/2-\epsilon)\)-approximate with at least \(1-\delta/3\) probability._

**Bounding \(\sigma(\mathcal{S}^{*})\) and \(\sigma(\mathcal{S})\).** We next derive the lower bound \(\sigma^{l}(\mathcal{S})\) of \(\sigma(\mathcal{S})\) and the upper bound \(\sigma^{u}(\mathcal{S}^{*})\) of \(\sigma(\mathcal{S}^{*})\) such that the approximation ratio \(\frac{\sigma(\mathcal{S})}{\sigma(\mathcal{S}^{*})}\geq\frac{\sigma^{l}(\mathcal{S})}{\sigma^{u}(\mathcal{S}^{*})}\). The settings are as follows.

**Lemma 5.2**. _Given a graph \(P\) with \(n_{P}\) nodes, \(\mathcal{R}_{1}\) with \(|\mathcal{R}_{1}|=\theta_{1}\), and \(\mathcal{R}_{2}\) with \(|\mathcal{R}_{2}|=\theta_{2}\), for any \(p_{f}\in(0,1)\), by setting_
\[\sigma^{u}(\mathcal{S}^{*})=\left(\sqrt{2\cdot\Lambda_{\mathcal{R}_{1}}(\mathcal{S})+\frac{\ln(1/p_{f})}{2}}+\sqrt{\frac{\ln(1/p_{f})}{2}}\right)^{2}\cdot\frac{n_{P}}{\theta_{1}}, \tag{2}\]
\[\sigma^{l}(\mathcal{S})=\left(\left(\sqrt{\Lambda_{\mathcal{R}_{2}}(\mathcal{S})+\frac{2\cdot\ln(1/p_{f})}{9}}-\sqrt{\frac{\ln(1/p_{f})}{2}}\right)^{2}-\frac{\ln(1/p_{f})}{18}\right)\cdot\frac{n_{P}}{\theta_{2}}, \tag{3}\]
_we have \(\Pr\left[\sigma(\mathcal{S})<\sigma^{l}(\mathcal{S})\right]<p_{f}\) and \(\Pr\left[\sigma(\mathcal{S}^{*})>\sigma^{u}(\mathcal{S}^{*})\right]<p_{f}\)._

In Lemma 5.2, the derivation of \(\sigma^{u}(\mathcal{S}^{*})\) requires an upper bound on the coverage \(\Lambda_{\mathcal{R}_{1}}(\mathcal{S}^{*})\) of the unknown optimal set \(\mathcal{S}^{*}\). To this end, we need the following corollary of Theorem 4.3, since \(\Lambda_{\mathcal{R}}(\cdot)\) is also non-decreasing and submodular.

**Corollary 2**. _Let \(\mathcal{S}^{*}\) be the optimal solution of CIM. RR-Greedy with \(\Lambda_{\mathcal{R}}(\cdot)\) outputs an \(\mathcal{S}\) with \(\Lambda_{\mathcal{R}}(\mathcal{S})\geq\frac{1}{1+\gamma}\Lambda_{\mathcal{R}}(\mathcal{S}^{*})\geq\frac{1}{2}\Lambda_{\mathcal{R}}(\mathcal{S}^{*})\)._

Accordingly, Lemma 5.2 employs \(2\cdot\Lambda_{\mathcal{R}_{1}}(\mathcal{S})\) as a vanilla upper bound of \(\Lambda_{\mathcal{R}_{1}}(\mathcal{S}^{*})\), which might be loose in practice and motivates us to design a tightened upper bound of \(\Lambda_{\mathcal{R}_{1}}(\mathcal{S}^{*})\) as follows.
**Lemma 5.3**. _For any seed set \(\mathcal{S}\) under the partition matroid, i.e., \(|\mathcal{S}_{u}|\leq k\ \forall u\in A\), and any set \(\mathcal{R}\) of random RR sets,_
\[\Lambda_{\mathcal{R}}(\mathcal{S}^{*})\leq\Lambda_{\mathcal{R}}^{\phi}(\mathcal{S}^{*})=\Lambda_{\mathcal{R}}(\mathcal{S})+\sum_{u\in A}\sum_{e_{u,v}\in\Phi_{k}(\mathcal{S},u)}\Lambda_{\mathcal{R}}(e_{u,v}|\mathcal{S}),\]
_where \(\Phi_{k}(\mathcal{S},u)\) denotes the set of at most \(k\) AP-seed pairs in \(C_{u}\) with the \(k\) largest coverage gains on \(\mathcal{R}\) w.r.t. \(\mathcal{S}\)._

Accordingly, a tightened upper bound \(\sigma^{u}(\mathcal{S}^{*})\) is
\[\sigma^{u}(\mathcal{S}^{*})=\left(\sqrt{\Lambda_{\mathcal{R}_{1}}^{u}(\mathcal{S}^{*})+\frac{\ln(1/p_{f})}{2}}+\sqrt{\frac{\ln(1/p_{f})}{2}}\right)^{2}\cdot\frac{n_{P}}{\theta_{1}}, \tag{4}\]
where
\[\Lambda_{\mathcal{R}_{1}}^{u}(\mathcal{S}^{*})=\min\left\{2\cdot\Lambda_{\mathcal{R}_{1}}(\mathcal{S}),\ \min_{0\leq t<k}\Lambda_{\mathcal{R}_{1}}^{\phi}(\mathcal{S}^{t})\right\}\]
and \(\mathcal{S}^{t}\) is the seed set with \(|\mathcal{S}^{t}\cap C_{u}|=\min(|C_{u}|,t)\ \forall u\in A\).

**Putting it together.** As per Algorithm 3 and Lemma 5.2, by setting \(p_{f}=\delta/(3\,i_{max})\), we obtain an invalid \(\sigma^{l}(\mathcal{S})>\sigma(\mathcal{S})\) (or \(\sigma^{u}(\mathcal{S}^{*})<\sigma(\mathcal{S}^{*})\)) with probability at most \(\delta/(3\,i_{max})\) in a given iteration \(i\) of the \(i_{max}\) iterations. Moreover, as illustrated in Lemma 5.1, the failure probability for the result in the worst case is at most \(\delta/3\). By the union bound, the correctness of RR-OPIM+ is as follows.

**Theorem 5.4**. _Given a graph \(G\), a set of APs \(A\), a model \(\mathsf{M}\), and a constant \(k\), let \(\mathcal{S}^{*}=\bigcup_{u\in A}\mathcal{S}^{*}_{u}\) be the optimal solution of CIM. For every \(\epsilon>0\) and \(\delta>0\), RR-OPIM+ yields a \((\frac{1}{2}-\epsilon)\)-approximate output \(\mathcal{S}=\bigcup_{u\in A}\mathcal{S}_{u}\) with probability at least \(1-\delta\)._

Moreover, we have the following theorem guaranteeing the expected running time of RR-OPIM+.

**Theorem 5.5**. _With \(\delta<1/2\), RR-OPIM+ runs in an expected time of \(O\left(\epsilon^{-2}\cdot\left(k\cdot d\cdot\ln|C|+\ln\frac{1}{\delta}\right)\cdot\left(n_{P}+m_{P}\cdot\frac{\sigma(\{v^{*}\})}{\sigma(\mathcal{S}^{*})}\right)\right)\), where \(\mathcal{S}^{*}\) is the optimal seed set and \(v^{*}\) is the node with the largest spread in \(P\)._

## 6. Additional Related Works

In this part, we revisit problems that appear germane to our work at first glance and distinguish them from in-game scenarios and CIM. Among them, a line of works employs IM for link prediction. For example, active friending (Sidney et al., 2016) and IM variants based on edge insertion (Sidney et al., 2016; Li et al., 2017; Li et al., 2018) recommend people-you-may-know to increase the acceptance probability and to boost the influence spread, respectively. In contrast, the in-game scenario sorts the existing friends of APs, and the objective of CIM can fundamentally be treated as the ranking measure. Lu et al. (Lu et al., 2019) focus on the comparative IM problem and consider scenarios where two products are promoted simultaneously, which is beyond the scope of our study. Li et al.
(Li et al., 2018) leverage the Hawkes process to infer the diffusion, which is orthogonal to our work. Prior works in (Li et al., 2018; Li et al., 2018) consider adaptive seeding and assume a two-stage seed-selection framework, which first selects a set \(S\) from a given subset of \(V\), followed by selecting another seed set \(T\) from the influenced neighboring nodes of \(S\). The objective is to maximize the expected influence spread of \(T\) under a cardinality constraint on \(|S|+|T|\). Distinct from adaptive seeding, in-game incentive propagation focuses only on the second stage but requires a partition matroid constraint. Self-activation influence maximization (Shen et al., 2017) introduces a concept called the self-activated user, which is similar to the AP, but it also fails to consider the capacity of each AP. Another related work is multi-round influence maximization (Shen et al., 2017). It considers scenarios requiring multiple rounds of promotion and may repeatedly select the same influential user, which contradicts Observation 2. Huang et al. (Huang et al., 2019) study influence maximization on online gaming platforms but likewise consider the summation of the influences of single seeds. In addition, targeted influence maximization (Shen et al., 2017) aims to select seeds that influence more users from a targeted subset, whereas CIM describes a reverse problem starting from a subset.

## 7. Experiments

We first introduce the experimental settings, followed by an empirical study on _TXG_ to explore the configurations for CIM. Finally, we evaluate the performance of the proposed algorithms in terms of quality and efficiency. All experiments are conducted on a Linux machine with an Intel Xeon(R) Gold 6240 @2.60GHz CPU and 377GB RAM in single-thread mode; none of the experiments comes close to needing all of this memory. Due to space constraints, we refer interested readers to Appendix A for more experiments, e.g., AP selection and sensitivity analysis w.r.t. other constants.

### Experimental Setups

**Datasets.** Besides the incentive propagation dataset _TXG_, we include 5 real-world social networks: _DNC_ [(24)], _Blog_ [(48)], _Twitch_ [(40)], _Orkut_ [(54)] and _Twitter_ [(55)], whose statistics are shown in Table 1. All datasets are collected from KONECT [(24)] and SNAP [(26)] and have been used in previous IM works [(17; 18; 49)].

**Algorithms and parameters.** We test the performance of 9 algorithms, which fall into the following 3 categories (a sketch of the local framework appears after this list):

* _Local competitors_: Degree, PageRank, IMM, OPIM-C. We follow the local framework in Section 2.3 to select \(k\) seeds for each AP by degree, PageRank [(36)], IMM [(49)], and OPIM-C [(46)].
* _Greedy solutions_: MG-Greedy, RR-Greedy. We utilize the lazy-evaluation trick of CELF for the seed selection and conduct \(r=10,000\) MC simulations for spread estimation [(25)].
* _Scalable solutions_: RR-OPIM+, RR-OPIM, MG-OPIM. To verify the tightened bound in RR-OPIM+, we replace Eq. (4) with Eq. (2) in Line 6 of Algorithm 3 and call the result RR-OPIM. To evaluate RR-Greedy within RR-OPIM+, we replace RR-Greedy with MG-Greedy in Line 5 of RR-OPIM and call the result MG-OPIM. Notice that the results of RR-OPIM and MG-OPIM are also \((1/2-\epsilon)\)-approximate. For algorithms extending OPIM-C, we set \(\epsilon=0.1\) and \(\delta=1/n\) [(17; 18; 46)].

For a fair comparison, all algorithms are implemented in C++ and compiled with -O3 optimization; our code is available at: [https://github.com/waetr/Capacity_Constrained_IM](https://github.com/waetr/Capacity_Constrained_IM).
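The local framework shared by the heuristic competitors can be summarized in a few lines; the function below is our own hypothetical rendering, parameterized by any per-node score (degree, PageRank, interaction counts, and so on).

```python
def local_top_k(A, C, k, score):
    """Local framework: independently rank each AP's passive friends C[u] by a
    heuristic per-node score and keep the top k. The same seed may be
    recommended to many APs; this overlap is exactly the weakness that
    Observation 2 and CIM target."""
    return {u: set(sorted(C[u], key=score, reverse=True)[:k]) for u in A}

# Example with degree scoring, assuming out_degree maps node -> out-degree:
# seeds = local_top_k(A, C, k=20, score=lambda v: out_degree[v])
```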
### Empirical Configurations for CIM

**Selecting \(\mathsf{M}\).** First, we bridge the gap between the model \(\mathsf{M}\) (IC or LT) and the actual dissemination from the engaged seeds. To reconstruct the diffusion among PPs, we use the co-playing logs with tuples \((u,v,T_{u,v})\), each representing that an active PP \(u\) played together with the passive friend \(v\) at timestamp \(T_{u,v}\), and clean the logs by preserving the earliest timestamp for each distinct co-playing relationship. We construct _diffusion trees_ [(38)] from the co-playing logs. In particular, we treat the seed as the root of each tree and add the directed edge \((u,w)\) to the tree if (i) there exists an edge \((v,u)\) on the tree satisfying \(T_{v,u}<T_{u,w}\) and (ii) the tree remains acyclic after the insertion. To capture the diffusion of co-playing behaviors, we follow prior works [(22; 49; 50)] and normalize the influence probability and weight in IC and LT by leveraging the daily co-playing times between friends. Analogously, the model-predicted diffusion can also be preprocessed into a diffusion tree.

Figure 2(a) reports the RMSE between the numbers of predicted and true active PPs in each hop \(t\), where active users at hop \(t\) are those with the same shortest distance \(t\) from the seed set \(S\). We find that IC has a better RMSE in each hop, indicating that the actual diffusion is more similar to IC than to LT and motivating us to adopt IC for CIM. To explain, an inactive \(v\) acquires the incentive automatically once \(v\) plays with one of its active friends \(u\) for the first time, which resembles the process of IC.

**Selecting \(k\).** We next explore the setting of \(k\). Specifically, we collect each invited seed \(v\)'s rank (i.e., position) in the recommendation list of an AP \(u\). Figure 2(b) reports the distribution of the ranks of all invited seeds. Notably, APs prefer inviting the seeds in the top ranks. For instance, 80.1% of seeds are invited when they rank in the _top 20_. We also analyzed the rank distribution w.r.t. the overlapped and invited seeds, which is similar to Figure 2(b), indicating that the overlapping phenomenon is ubiquitous in the top positions. To summarize, an AP is more likely to invite seeds in the top 20 positions, where seeds are usually overlapping.

### Performance Evaluation

In the second set of experiments, we compare the performance of each algorithm in terms of effectiveness and efficiency. Regarding the game dataset _TXG_, we retain the seeds that continued to daily contamination and their preceding APs, resulting in 794 APs and 1.7 thousand seeds. These 1.7 thousand seeds are treated as candidates, and each algorithm is asked to choose 1 seed for each of the 794 APs from its respective candidates. The actual spread of a subset \(S\) of candidates is the number of PPs activated when those seeds are chosen. To configure CIM on the remaining public datasets, we adopt the AP fraction explored in Section 3.1 and uniformly sample 5% of the users from \(V\) as the AP set \(A\), i.e., \(d/n=5\%\). After determining \(A\), we use IC as \(\mathsf{M}\) on the derived subgraph \(P\) and choose the constant \(k\) ranging from 2 to 20. We treat \(A\) and the corresponding \(P\) as a query set, and report the average score after repeating on 5 random query sets. We exclude an algorithm if it fails to return within 24 hours.
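Since both the "actual spread" replay and the MC-based greedy baselines boil down to forward IC simulations, a minimal sketch (our own naming, not the paper's code) is:

```python
import random

def simulate_ic(out_neighbors, seeds):
    """One forward IC cascade: each newly activated u gets a single chance to
    activate each inactive out-neighbor v with probability p_uv.
    out_neighbors[u] = list of (v, p_uv)."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p_uv in out_neighbors.get(u, []):
            if v not in active and random.random() < p_uv:
                active.add(v)
                frontier.append(v)
    return len(active)

def mc_spread(out_neighbors, seeds, r=10000):
    """Monte-Carlo spread estimate over r cascades (the r used by the greedy
    baselines above)."""
    return sum(simulate_ic(out_neighbors, seeds) for _ in range(r)) / r
```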
**Effectiveness analysis.** Table 2 shows the actual spread of the \(S\) selected by each method (the timed-out ones are omitted), where the proposed RR-OPIM+ outperforms the other methods by up to 9.87%.

| **Name** | **#nodes (\(n\))** | **#edges (\(m\))** |
| --- | --- | --- |
| _DNC_ | 0.9\(K\) | 24.2\(K\) |
| _Blog_ | 10.3\(K\) | 668.0\(K\) |
| _Twitch_ | 168.1\(K\) | 13.6\(M\) |
| _TXG_ | 243.4\(K\) | 11.8\(M\) |
| _Orkut_ | 3.1\(M\) | 234.2\(M\) |
| _Twitter_ | 41.7\(M\) | 2.9\(B\) |

Table 1. Dataset statistics (\(K\)=\(10^{3}\), \(M\)=\(10^{6}\), \(B\)=\(10^{9}\)).

| **Algorithm** | RR-OPIM+ | MG-OPIM | RR-OPIM | Degree | PageRank |
| --- | --- | --- | --- | --- | --- |
| **Spread** | 1,632 | 1,625 | 1,609 | 1,488 | 1,471 |

Table 2. The actual spread on _TXG_.

Figure 2. The RMSE of estimating spreads in each hop; the distribution of the rank of all invited passive seeds on _TXG_.

Figure 3 reports the spread of each approach on the public datasets by fixing \(d/n=5\%\) and varying \(k\). Regarding MG-Greedy and RR-Greedy, RR-Greedy is slightly better than MG-Greedy, and both greedy solutions are superior to the other solutions on _DNC_. Notably, the seeds of RR-Greedy infect 26.8% more PPs than those of Degree when \(k=10\). Regarding the scalable solutions, we find that their seed qualities are comparable to the greedy solutions and better than the local competitors. For instance, RR-OPIM+ improves the local heuristics by up to 39% on _Orkut_. Regarding the local competitors, they are inferior to all greedy solutions and their scalable versions due to the overlapping of seeds mentioned in Section 2.2. In particular, we report the number of distinct seeds returned by the different solutions in Figure 4, where the seed size of RR-OPIM+ is at least \(3\times\) that of Degree. Furthermore, we observe that OPIM-C and IMM have better results than the remaining local solutions on _DNC_, showing the usefulness of approximation within the local framework. We also report the spread by fixing \(k=10\) and varying \(d/n\in\{5,10,20,50\}\%\). As shown in Figure 5, the results under different \(d\) settings have the same tendency as those under different \(k\) settings.

Figure 3. The spread on various graphs by varying \(k\) (\(d/n=5\%\)).

Figure 4. The seed size by varying \(k\) (\(d/n=5\%\)).

Figure 5. The spread by varying \(d\) (\(k=10\)).

**Efficiency analysis.** We next compare the running time of each solution. Here, we ignore the running time of Degree, which can be recorded while reading the graph. As illustrated in Figures 6 and 7, the scalable solution RR-OPIM+ outperforms the other solutions in all cases. Most notably, RR-OPIM+ improves over RR-OPIM and MG-OPIM by two orders of magnitude on _Twitch_, which signifies the superiority of the proposed tightened bound. Furthermore, we find that RR-OPIM costs less time than MG-OPIM, which spends more time in seed selection and requires more iterations. For instance, MG-OPIM is about \(2\times\) slower than RR-OPIM on _Orkut_ when \(k\geq 10\). For the greedy solutions, due to the inefficiency of MC simulations, both are only feasible on _DNC_ and fail on the rest. Notice that the running time of MG-Greedy is slightly better than that of RR-Greedy, which might be caused by the pruning procedure in CELF. We also evaluate the running time on a smaller co-authorship graph with \(1.5K\) nodes and \(2.7K\) edges (Kalal and Triggs, 2017), where RR-Greedy is up to \(200\times\) faster than MG-Greedy when varying \(d\).
Regarding the other local competitors, they cost much more time than RR-OPIM+. For instance, OPIM-C is \(3\) to \(4\) orders of magnitude slower than RR-OPIM+ on _DNC_ and _Blog_. PageRank is about one order of magnitude slower than RR-OPIM+ on _Twitch_, _Orkut_, and _Twitter_. Moreover, IMM and OPIM-C time out on _Twitch_, _Orkut_, and _Twitter_, due to the \(O(d)\) invocations required.

**Sensitivity analysis.** Finally, we explore how the error constant \(\epsilon\) affects the scalable implementations. Figure 8 reports the time-spread curve obtained by fixing \(k=10,d/n=5\%\) and varying \(\epsilon\in\{0.1,0.05,0.02,0.01\}\). The results are sorted in descending order of \(\epsilon\); RR-OPIM and MG-OPIM time out when \(\epsilon=0.01\) in Figure 8(b), and they have only one point when \(\epsilon=0.1\) in Figure 8(c). As shown, the running time of RR-OPIM+ is low and its spread is robust as \(\epsilon\) varies. In contrast, the running times of RR-OPIM and MG-OPIM are more sensitive to \(\epsilon\). In particular, as \(\epsilon\) is halved, the running time of RR-OPIM and MG-OPIM increases by \(2\times\) to \(10\times\).

## 8. Deployments

We have deployed RR-OPIM+ on an incentive propagation event of Tencent's battle royale game X, with 88.2 million quarter-active users and 3.2 billion relationships, whose procedure follows the description in Section 2.3. This deployment is conducted on an in-house cluster consisting of hundreds of machines, each of which has 16GB memory and 12 Intel Xeon Processor E5-2670 CPU cores. In the friendship network of X, the weight of each edge is described by the _intimacy_ score, which records the number of historical interactions from one user to the other, e.g., co-playing, gifting, and so on. We implement our proposal by first invoking RR-OPIM+ to select \(k\) passive seeds for each AP and then ranking each seed in descending order of its intimacy value with the AP. This encourages interaction willingness between APs and the selected seeds. For the sake of fairness, we compare the performance of our proposal with the strategy Intimacy, which directly ranks friends by intimacy scores and is widely accepted for in-game friend ranking (Shen et al., 2017). Each approach is initially computed on a subgraph instance obtained ahead of the event and is then updated daily using the latest graph snapshot. The APs are selected according to the active week, returning 373.46 thousand and 382.52 thousand APs for our proposal and Intimacy, respectively. Due to the network effect, we follow (Wang et al., 2018) and partition all users into communities with high connectivity and feature similarity. We then conduct online A/B testing that randomly assigns the live traffic in the same community to the treatment (i.e., ours) or control (i.e., "Intimacy") group. When the event ended, we evaluated the performance based on the _total spread_, i.e., the number of APs and seeds engaged in seed activation, as well as the activated PPs in daily contamination, which is 60.69 thousand for the treatment group and 58.28 thousand for the control group. This improvement is statistically significant, and we refer interested readers to Appendix A. We also evaluate the playing hours of the total spread and break them down by role, i.e., AP, seed, and PP.
For the sake of fairness, we compare the performance of our proposal with the strategy Intimacy, which directly ranks friends by intimacy scores and is widely adopted for in-game friend ranking (Shen et al., 2017). Each approach is initially computed on the subgraph instance ahead of the event and is then updated daily using the latest graph snapshot. The APs are selected according to the active week, returning 373.46 thousand and 382.52 thousand APs for our proposal and Intimacy, respectively. Due to the network effect, we follow (Wang et al., 2018) and partition all users into communities with high connectivity and feature similarity. We then conduct online A/B testing that randomly assigns the live traffic in the same community to the treatment (i.e., ours) or control (i.e., Intimacy) group. After the event ended, we evaluated the performance based on the _total spread_, i.e., the number of APs and seeds engaged in seed activation, as well as the activated PPs in daily contamination, which is 60.69 thousand for the treatment group and 58.28 thousand for the control group. This improvement is statistically significant, and we refer interested readers to Appendix A. We also evaluate the playing hours of the total spread and break them down by roles, i.e., AP, seed, and PP. As shown in Figure 9(a), the treatment group yields more playing hours for all user roles. Notably, the treatment group improves over the control group by 2.33%, 5.39%, and 4.56% for APs, seeds, and PPs, respectively. In addition, we measure the distribution of each active user's playing hours. Figure 9(b) reports the distribution of playing hours ranging from 1 to 10, from which we observe that (i) for both the treatment and control groups, the number of active users decreases as the playing hours increase, and (ii) our proposal attracts more users than the baseline at all playing hours. It is worth noting that a similar result holds for the remaining hours. ## 9. Conclusions Motivated by in-game insights, we present CIM and offer two greedy solutions, MG-Greedy and RR-Greedy. For the sake of scalability, we further design an approximation algorithm RR-OPIM+ with near-linear running time. We conduct extensive experiments to demonstrate the superiority of our proposal in terms of effectiveness and efficiency. In addition, we deploy RR-OPIM+ in an in-game incentive propagation scenario, achieving considerable improvement. As a future direction, we aim to improve the approximation ratio of our proposal and to consider the dynamic setting. ## 10. Ethics Statement While the proposed CIM problem and its solutions may boost user engagement and revenue for the online platform, they also pose privacy risks due to the collection of user data and may expose users to negative side effects, such as addiction. As researchers, we recognize the importance of balancing the potential benefits of studying these algorithms with the potential risks to user well-being. To ensure privacy and confidentiality, we have strictly anonymized the data used in our study and have no access to detailed user profiles. Additionally, we have followed the ethical guidelines (Zeng et al., 2018) set forth by Tencent Inc. in conducting this research.
2310.03673
Do Internal Software Metrics Have Relationship with Fault-proneness and Change-proneness?
Fault-proneness is a measure that indicates the possibility of programming errors occurring within a software system. On the other hand, change-proneness refers to the potential for modifications to be made to the software. Both of these measures are crucial indicators of software maintainability, as they influence internal software metrics such as size, inheritance, and coupling, particularly when numerous changes are made to the system. In the literature, research predicting change- and fault-proneness from internal software metrics is almost a decade old. However, given the continuous evolution of software systems, it is essential to revisit and update our understanding of these relationships. Therefore, we have conducted an empirical study to revisit the relationship between internal software metrics and both change-proneness and fault-proneness, aiming to provide current and relevant insights. In our study, we identified 25 internal software metrics along with the measures of change-proneness and fault-proneness within well-known open-source systems from the Apache and Eclipse ecosystems. We then analyzed the relationships between these metrics using statistical correlation methods. Our results revealed that most of the metrics have little to no correlation with fault-proneness. However, metrics related to inheritance, coupling, and comments showed a moderate to high correlation with change-proneness. These findings will assist developers in minimizing the highly correlated software metrics to enhance maintainability in terms of change- and fault-proneness. Additionally, these insights can guide researchers in developing new approaches for predicting changes and faults by incorporating the metrics that have been shown to have stronger correlations.
Md. Masudur Rahman, Toukir Ahammed, Kazi Sakib
2023-09-23T07:19:41Z
http://arxiv.org/abs/2310.03673v2
# Do Internal Software Metrics Have Relationship with Fault-proneness and Change-proneness? ###### Abstract. Fault-proneness is a measure indicating programming errors, while change-proneness indicates the possibility of changes to a software system. Both of these measures are related to software maintainability, and they impact internal software metrics such as size, inheritance, coupling, etc., as many changes are made to the system. In the literature, change- and fault-proneness have been predicted using internal software metrics, but those studies are almost a decade old. Therefore, as software systems and structures are evolving in nature, we present an empirical study to revisit the relationship of the internal software metrics with change- and fault-proneness to provide up-to-date insights. In particular, we identify 25 internal software metrics, change-proneness and fault-proneness in well-known open-source systems from the _Apache_ and _Eclipse_ ecosystems. Then we analyse the relationship based on the statistical correlation method. The results show that almost all of the metrics have no or low relationship with fault-proneness, while inheritance-, coupling- and comments-related metrics have a moderate or high relationship with change-proneness. These findings will assist developers in minimizing the highly correlated software metrics to enhance maintainability in terms of change- and fault-proneness. In addition, these also help researchers to innovate change and fault prediction approaches by incorporating the more strongly correlated metrics. change proneness, fault proneness, internal software metrics
## 2. Related Work There exist several works on the relationship of internal software metrics with the change- and fault-proneness of a system (Lu et al., 2017; Li et al., 2018). In this section, the existing literature is discussed. Lu et al. (Lu et al., 2017) examined the ability of object-oriented (OO) metrics to predict software change-proneness. They showed that size-related metrics have a moderate effect in predicting change-proneness, while coupling-related metrics have a lower effect, and inheritance-related metrics have a poor effect. Generally speaking, our work revisits their findings on current software systems. Zhou et al. (Zhou et al., 2018) showed that size-related class metrics have a confounding effect on the association between internal OO metrics (such as cohesion-, coupling-, and inheritance-related metrics) and change-proneness. After conducting an empirical study on three open-source systems, Eski et al. (Eski et al., 2019) showed that relationships exist between metrics and change-proneness; however, the significant metrics differ across software domains, which makes it necessary to re-investigate the relationship. According to Rahman et al. (Rahman et al., 2017), software metrics are important factors for predicting fault-proneness, and a careful choice of metrics is therefore significant for improving prediction accuracy. However, they showed that internal code-related metrics (e.g., size, complexity) are less useful in fault prediction, whereas process-related metrics (e.g., number of changes, number of developers) are significant. Our investigation revisits their finding on whether the internal software metrics affect fault-proneness by analysing current software systems. In a systematic literature review, Radjenovic et al. (Radjenovic et al., 2019) showed that object-oriented metrics are reported to be better at predicting fault-proneness than complexity and size metrics. In addition, coupling- and size-related metrics are effective in fault prediction models (Radjenovic et al., 2019). Since different change and fault prediction models use different software metrics, it is necessary to revisit the relationship of internal software metrics with change- and fault-proneness in the current software context. ## 3.
Empirical Study Design The study aims to assess the relationship between each of the 25 internal software metrics (also referred to as object-oriented (OO) software metrics) and change- and fault-proneness by analysing their occurrences in 11 well-known open-source software systems, in order to identify the impactful metrics. ### Formulating the Research Goal To investigate the relationship, the study particularly aims to answer the following two research questions: **RQ1: _What is the relationship between internal software metrics and change-proneness?_** This research question investigates which internal software metrics have a relationship with change-proneness by analysing their occurrences in the software systems. Software metrics having a higher correlation with change-proneness are considered highly 'impactful metrics' for RQ1. Therefore, a correlation-based analysis is performed to identify the relationship between them and to show whether these metrics have a relationship with change-proneness. The finding of RQ1 will help practitioners focus on the impactful metrics to detect and minimize change-proneness. **RQ2: _What is the relationship between internal software metrics and fault-proneness?_** This research question investigates which internal software metrics have a relationship with fault-proneness by analysing their occurrences in the software systems. Similar to RQ1, the finding of RQ2 will help practitioners focus on the impactful metrics to detect and minimize fault-proneness. ### Systems under Study In order to conduct the empirical study and answer the research questions, we have analysed 11 well-known open-source Java systems which belong to two major ecosystems: Apache1 and Eclipse2. Table 1 summarizes the analysed systems, their latest stable versions, and their size in terms of the number of classes (NOCs), number of methods (NOMs), and lines of code (LOCs). These systems have been selected because (i) the systems of these ecosystems are well-known in the software engineering research domain (Bradjenovic et al., 2019); (ii) these systems use Bugzilla3 or Jira as an issue tracker for identifying fault-related information; (iii) these systems are of different sizes (ranging from 75,724 to 1,483,583 LOCs) and are large enough, having on average 4,685 NOCs, 41,488 NOMs, and 489,093 LOCs. Footnote 1: [https://www.apache.org](https://www.apache.org) Footnote 2: [https://www.eclipse.org](https://www.eclipse.org) Footnote 3: [https://www.bugzilla.org](https://www.bugzilla.org) ### Detection and Selection of Internal Software Metrics For this study, we consider 25 internal quality attributes and refer to these attributes as internal software metrics. The list of the metrics with a short description is shown in Table 2.
We choose these metrics for the study because (i) they cover six factors of an OO software system, that is, complexity, coupling, cohesion, abstraction, encapsulation, and documentation, and (ii) these factors can measure different aspects of software maintainability (Han et al., 2017). These metrics are computed by a well-known tool designed for research purposes, called Understand4 (Han et al., 2017). Footnote 4: [https://www.scitools.com/](https://www.scitools.com/) \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **System** & **Version** & **\#Classes** & **\#Methods** & **LOCs** \\ \hline Activemq & 5.17.0 & 4,747 & 42,349 & 421,979 \\ \hline Ant & 1.9.0 & 1,595 & 13,768 & 138,391 \\ \hline Cassandra & 4.0.0 & 8,101 & 76,237 & 1,483,583 \\ \hline Cayenne & 4.1 & 6,656 & 41,339 & 485,781 \\ \hline CXF & 3.5.3 & 11,109 & 103,490 & 1,120,148 \\ \hline Drill & 1.10.0 & 6,414 & 60,203 & 533,478 \\ \hline Jackrabbit & 2.9.0 & 3,209 & 29,053 & 332,826 \\ \hline Jena & 4.5.0 & 7,077 & 66,138 & 576,575 \\ \hline Pig & 0.9.2 & 2,222 & 15,433 & 231,371 \\ \hline Poi & 5.2.2 & 4,291 & 40,032 & 413,657 \\ \hline Xerces & 2.9.1 & 800 & 9,809 & 131,326 \\ \hline **Total** & & **56,221** & **497,851** & **5,869,115** \\ \hline **Average** & & **4,685** & **41,488** & **489,093** \\ \hline \end{tabular} \end{table} Table 1. Software systems involved in the study ### Detection of Change-proneness The change-proneness of a class is computed as the total number of changes between two releases (Han et al., 2017). Both additions and deletions are considered as changes to a class. So, the number of changes of a class \(C_{i}\) is calculated using (1). \[\#changes(C_{i})=added(C_{i})+deleted(C_{i}) \tag{1}\] where \(added(C_{i})\) refers to the number of added lines and \(deleted(C_{i})\) refers to the number of deleted lines between two consecutive releases. To compute \(added(C_{i})\) and \(deleted(C_{i})\) for each class, the commit history is analyzed from the version control system. ### Detection of Fault-proneness The fault-proneness of a class is measured as the number of bug-fixing changes in a class, since after a fault is fixed, the fix can be detected from the source code. These changes include both added and deleted statements in the source code through commits which are referred to as bug-fixing commits. Bug-fixing commits are identified by searching for commits that contain an _Issue ID_ or _Bug ID_ in their commit messages. The search is performed using regular expressions. For example, the following commit message from the _Cayenne_ project5 can be identified with the regular expression "_CAY-\d+_": Footnote 5: [https://github.com/apache/cayenne](https://github.com/apache/cayenne) "CAY-2732 Exception when creating ObjEntity from a DbEntity" Using the _Issue ID_ found in commit messages, the corresponding issue report is extracted from the issue tracker such as _Jira_ or _Bugzilla_. Issues related to bugs are separated, i.e., the type of issue is _bug_, excluding _enhancement_ type issues. To exclude duplicated or false-positive bugs that could bias the result, only bugs that have the status _Closed_ or _Resolved_ and the resolution _Fixed_ are considered (Han et al., 2017). Finally, the number of bug-fixing changes of a class \(C_{i}\) is calculated as the sum of added and deleted lines made only through bug-fixing commits using (2). \[\#bugFixingChanges(C_{i})=added_{bfc}(C_{i})+deleted_{bfc}(C_{i}) \tag{2}\] where \(added_{bfc}(C_{i})\) and \(deleted_{bfc}(C_{i})\) denote the lines added to and deleted from class \(C_{i}\) through bug-fixing commits.
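To make the two measures concrete, the following is a minimal sketch of how (1) and (2) can be computed from a git history. The project key `CAY-\d+`, the release tags, and the file-level granularity (as a proxy for classes) are illustrative assumptions, and the issue-tracker cross-check of status and resolution described above is omitted for brevity.

```python
import re
import subprocess
from collections import defaultdict

BUG_ID = re.compile(r"CAY-\d+")  # project-specific issue-key pattern (assumption)

def churn_per_file(repo: str, rev_range: str, bug_fix_only: bool = False) -> dict:
    """Sum added+deleted lines per Java file between two releases, as in (1);
    with bug_fix_only=True, count only commits whose message contains an
    issue key, approximating the bug-fixing changes of (2)."""
    log = subprocess.run(
        ["git", "-C", repo, "log", "--numstat", "--pretty=format:@%s", rev_range],
        capture_output=True, text=True, check=True).stdout
    churn, counting = defaultdict(int), False
    for line in log.splitlines():
        if line.startswith("@"):  # one subject line per commit
            counting = (not bug_fix_only) or bool(BUG_ID.search(line))
        elif counting and line.strip():
            added, deleted, path = line.split("\t")
            if path.endswith(".java") and added != "-":  # "-" marks binary files
                churn[path] += int(added) + int(deleted)
    return churn

# e.g. (tags are assumptions): changes = churn_per_file("cayenne", "4.0..4.1")
#       faults = churn_per_file("cayenne", "4.0..4.1", bug_fix_only=True)
# These per-class counts can then be correlated with metric values, e.g.
# scipy.stats.spearmanr(metric_values, proneness_values).
```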
Our observation that internal software metrics show little relationship with fault-proneness is consistent with Rahman et al. (Rahman et al., 2013), who showed that internal software metrics have lower fault-predictive ability. ## 5. Threats to Validity Potential threats to external validity have been identified in this study with respect to generalizing its findings. First, we have used Java systems in our study, and there is a possibility that the results would be different for other object-oriented languages, like C#. Second, we have used 25 internal software metrics, and the results cannot be generalized to other software metrics. Finally, we cannot extrapolate our results to other open-source and industrial systems. The potential threats to internal validity concern factors that could influence our observations. We are aware that we cannot claim a direct cause-effect relationship between the internal software metrics and software change- and fault-proneness. In particular, our observations may be influenced by different factors related to the development phases (e.g., experience of developers, workload, etc.). However, we have focused only on the source-code-related metrics of software, and other external factors are out of the scope of this study. ## 6. Conclusion We have observed that internal software metrics have no or low impact on fault-proneness, except for _comments_. On the other hand, inheritance-, coupling- and comments-related metrics have a moderate or high impact on change-proneness, whereas most of the size-related metrics have a low impact. Our findings help software practitioners optimize the impactful metrics to enhance software maintainability in terms of change- and fault-proneness. In addition, these findings also help researchers use the impactful internal software metrics to detect these two maintainability measures. As for future research, we will focus on other maintainability aspects, such as understandability and testability, to measure their impact on change- and fault-proneness.
2309.08645
Hawking temperature of black holes with multiple horizons
There are several well-established methods for computing thermodynamics in single-horizon spacetimes. However, understanding thermodynamics becomes particularly important when dealing with spacetimes with multiple horizons. Multiple horizons raise questions about the existence of a global temperature for such spacetimes. Recent studies highlight the significant role played by the contribution of all the horizons in determining the Hawking temperature. Here we explore the Hawking temperature of a rotating and charged black hole in four spacetime dimensions and of a rotating BTZ black hole. We also find that each horizon of those black holes contributes to the Hawking temperature. The effective Hawking temperature for a four-dimensional rotating and charged black hole depends only on its mass. This temperature is the same as the Hawking temperature of a Schwarzschild black hole. In contrast, for a rotating BTZ black hole, the effective Hawking temperature depends on both the black hole's mass and its angular momentum.
Chiranjeeb Singha, Pritam Nanda, Pabitra Tripathy
2023-09-15T09:29:15Z
http://arxiv.org/abs/2309.08645v1
# Hawking temperature of black holes with multiple horizons ###### Abstract There are several well-established methods for computing thermodynamics in single-horizon spacetimes. However, understanding thermodynamics becomes particularly important when dealing with spacetimes with multiple horizons. Multiple horizons raise questions about the existence of a global temperature for such spacetimes. Recent studies highlight the significant role played by the contribution of all the horizons in determining the Hawking temperature. Here we explore the Hawking temperature of a rotating and charged black hole in four spacetime dimensions and of a rotating BTZ black hole. We also find that each horizon of those black holes contributes to the Hawking temperature. The effective Hawking temperature for a four-dimensional rotating and charged black hole depends only on its mass. This temperature is the same as the Hawking temperature of a Schwarzschild black hole. In contrast, for a rotating BTZ black hole, the effective Hawking temperature depends on both the black hole's mass and its angular momentum. ## 1 Introduction Even after a considerable period since its discovery [1], Hawking radiation has retained its significance and relevance. Its importance stems from various reasons: not only has its existence been observed in systems that are far from resembling black holes, but it has also raised several crucial questions pertaining to black holes. The original derivation by Hawking demonstrated that a black hole emits radiation similar to that of a perfect black body, with a temperature directly proportional to the surface gravity of its outer horizon. Since Hawking's original calculation is global in nature, attempts have been made at a local derivation that does not require knowledge of the future geometry of spacetime. One such approach is the tunneling formalism [2], which considers the creation of particle-antiparticle pairs near or inside the horizon. As one particle tunnels across the horizon, the other escapes to infinity, with the negative energy of the infalling particle balanced by the positive energy of the escaping one. The tunneling probability falls off exponentially in energy, and comparison with the Boltzmann probability distribution yields a temperature. The derivation of the tunneling probability involves evaluating the imaginary component of the action for the classically forbidden emission of s-wave particles across the horizon, and the nonzero contribution comes from the pole that occurs at the horizon. In the original calculation of [2], the contribution was considered solely from the outer horizon. This leads us to ask what would happen if we included contributions from all the physical horizons in a multi-horizon spacetime. Recently, the existence of a global temperature for multi-horizon spacetimes has been proposed [3, 4, 5, 6, 7, 8, 9, 10]. Contributions from all horizons determine this global temperature. Previous works primarily focused on scalar particle tunneling to compute the Hawking temperature in such spacetimes. Here, we investigate the tunneling of a Dirac particle to determine whether the Hawking temperature depends on the contributions of both horizons. We consider the tunneling of a Dirac particle from a rotating and charged black hole in four spacetime dimensions and from a rotating BTZ black hole. We also find that a global temperature can indeed exist for these black holes.
Interestingly, for a rotating and charged black hole in four spacetime dimensions, the global temperature depends only on its mass; it does not depend on its angular momentum or charge. Thus, in four spacetime dimensions, all rotating and charged black holes with the same mass have the same global temperature, regardless of their differing angular momenta and charges. Moreover, we show that the effective temperature is the same as the Hawking temperature of a Schwarzschild black hole [1]. In a recent study, it has been demonstrated that the effective Hawking temperature for a charged black hole in four spacetime dimensions, _i.e._, the Reissner-Nordstrom black hole, depends only on the black hole's mass [3]; it is independent of its charge. Interestingly, in this scenario, the effective temperature also matches the Hawking temperature of a Schwarzschild black hole. In contrast, for rotating BTZ black holes, the global temperature depends not only on the black hole's mass but also on its angular momentum. In this article, we consider the tunneling of a Dirac particle from a rotating and charged black hole in four spacetime dimensions and from a rotating BTZ black hole. In Sec. 2, we briefly review the Dirac equation in curved spacetime. Using the Dirac equation, in Sec. 3 we derive the Hawking radiation for a rotating and charged black hole in four spacetime dimensions. Similarly, in Sec. 4 we derive the Hawking radiation for a rotating BTZ black hole. We discuss our results in Sec. 5. We will set \(c=G=\hbar=1\) in our calculations. ## 2 Dirac particle in a curved background In this section, we provide a concise overview of the behavior of a Dirac particle in curved spacetime, closely following the framework outlined in references [11, 12, 13]. Here we also consider a gauge field \(A_{\mu}\) coupled, along with gravity, to the Dirac field. The Dirac equation in curved spacetime extends the original Dirac equation formulated in flat Minkowski spacetime, which governs the dynamics of spinor fields. In Minkowski spacetime field theory, the spin of a field can be categorized based on how the field's properties change under infinitesimal Lorentz transformations. We aim to extend these considerations to curved spacetime, i.e., a general Lorentzian manifold \((\mathcal{M},g)\), while maintaining the connection with the Lorentz group locally. This can be accomplished by utilizing the tetrad (\(e_{\alpha}=e^{\mu}_{\alpha}\partial_{\mu}\)) and co-tetrad (\(w^{\alpha}=e^{\alpha}_{\mu}dx^{\mu}\)), also known as the vierbein formalism. The fundamental principle of this approach is to establish a system of normal coordinates, denoted \(e^{\alpha}_{\mu}(p)\), at every point \(p\) in spacetime; in a more general coordinate system the metric tensor becomes more intricate, but it remains connected to the flat spacetime metric \(\eta_{\alpha\beta}\) through the specific relationship \[g_{\mu\nu}=e^{\alpha}_{\mu}e^{\beta}_{\nu}\eta_{\alpha\beta}\ ;\ \ \ \ \ \eta_{\alpha\beta}=e^{\mu}_{\alpha}e^{\nu}_{\beta}g_{\mu\nu}\, \tag{1}\] where \(e^{\alpha}_{\mu}\) and \(e^{\mu}_{\alpha}\) are the vielbein. The indices \((\alpha,\beta)\) refer to the local Lorentz frame, while \((\mu,\nu)\) are spacetime indices. In a d-dimensional Riemannian manifold, the metric tensor \(g_{\mu\nu}\) possesses \(d(d+1)/2\) degrees of freedom, whereas the vielbein \(e^{\mu}_{\alpha}\) has \(d^{2}\) degrees of freedom.
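The mismatch between these two counts is exactly the dimension of the local Lorentz group, which quantifies the redundancy of the vielbein description; as a quick check, \[d^{2}-\frac{d(d+1)}{2}=\frac{d(d-1)}{2}=\dim SO(d-1,1)\,,\] so for \(d=4\) one finds \(16-10=6\), corresponding to three rotations and three boosts.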
Numerous non-coordinate bases yield the same metric \(g\), with each basis being interconnected to the others through local orthogonal rotations \(w^{\alpha}=\Lambda^{\alpha}_{\beta}w^{\beta}\). This transformation induces a transformation of the vielbein as \(e^{\alpha}_{\mu}=\Lambda^{\alpha}_{\beta}e^{\beta}_{\mu}\). Considering these facts, we can derive the transformation rule for the connection one-form \(\omega^{\alpha\beta}_{\mu}\) from the definition of the torsion two-form (\(T^{\alpha}=dw^{\alpha}+\omega^{\alpha}_{\beta}\wedge w^{\beta}\)) as follows, \[\omega^{\alpha}_{\mu\beta}=\Lambda^{\alpha}_{\gamma}\omega^{\gamma}_{\mu\delta }\big{(}\Lambda^{-1}\big{)}^{\delta}_{\beta}+\Lambda^{\alpha}_{\gamma}\big{(} \partial_{\mu}\Lambda^{-1}\big{)}^{\gamma}_{\beta}. \tag{2}\] Now, as we know, the presence of gamma matrices in the Dirac equation is crucial because they ensure that the equation retains its symmetry under Lorentz transformations. The inclusion of gamma matrices in the Dirac equation is also essential for accounting for the phenomenon of spin: these matrices establish a connection between the different components of a spinor and a particle's momentum and energy. This relationship between spin and the gamma matrices is a fundamental aspect of quantum field theory; however, when dealing with curved spacetime, we need to construct a modified version of the gamma matrices that maintains covariance, and we can achieve this using a normal coordinate system. In curved spacetime, we can define the gamma matrices as \(\gamma^{\mu}=e_{\alpha}^{\mu}\gamma^{\alpha}\), where \(\gamma^{\alpha}\) are the usual flat-space Dirac matrices. In flat spacetime, the gamma matrices satisfy the relation \[\{\gamma^{\alpha},\gamma^{\beta}\}=2\eta^{\alpha\beta}\mathbf{I}\;, \tag{3}\] where the gamma matrices are \[\gamma^{0} = \begin{pmatrix}i&0\\ 0&-i\end{pmatrix},\;\;\gamma^{1}=\begin{pmatrix}0&\sigma^{3}\\ \sigma^{3}&0\end{pmatrix},\;\;\gamma^{2}=\begin{pmatrix}0&\sigma^{2}\\ \sigma^{2}&0\end{pmatrix},\] \[\gamma^{3} = \begin{pmatrix}0&\sigma^{1}\\ \sigma^{1}&0\end{pmatrix}\;. \tag{4}\] Now, using the definition of \(\gamma^{\mu}\), the above relation can be generalized to a curved spacetime \((\mathcal{M},g)\) as \[\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}\mathbf{I}\;. \tag{5}\] Under a local Lorentz transformation \(\Lambda\), a Dirac spinor at a point \(p\) (\(p\in\mathcal{M}\)) transforms as \[\psi(p)\rightarrow\mathcal{R}(\Lambda)\psi(p),\;\;\;\;\;\bar{\psi}(p) \rightarrow\bar{\psi}(p)\mathcal{R}(\Lambda)^{-1}\;, \tag{6}\] where \(\bar{\psi}(p)=\psi(p)^{\dagger}\gamma^{0}\) and \(\mathcal{R}(\Lambda)\) is the spinor representation of the Lorentz transformation. In order to formulate an invariant action, we aim to find a covariant derivative that acts as a local Lorentz vector and undergoes spinor-like transformations as \[\mathcal{D}_{\alpha}\psi(p)=\mathcal{R}(\Lambda)\Lambda_{\alpha}^{\beta} \mathcal{D}_{\beta}\psi(p)\;, \tag{7}\] where \(\mathcal{D}_{\alpha}=\nabla_{\alpha}+i\frac{q}{\hbar}A_{\alpha}\). Here \(A_{\alpha}=e_{\alpha}^{\mu}A_{\mu}\) is the gauge field. If we identify such a covariant derivative, we can express an invariant Lagrangian as follows, \[\mathcal{L}=\bar{\psi}(-i\hbar\gamma.\mathcal{D}+m)\psi\;. \tag{8}\]
Now one can check that the quantity \(e_{\alpha}^{\mu}\partial_{\mu}\psi(p)\) transforms under \(\mathcal{R}(\Lambda)\) as follows, \[\begin{split} e_{\alpha}^{\mu}\partial_{\mu}\psi(p)& \rightarrow\Lambda_{\beta}^{\alpha}e_{\alpha}^{\mu}\partial_{\mu} \big{(}\mathcal{R}(\Lambda)\psi(p)\big{)}\\ &=\Lambda_{\beta}^{\alpha}e_{\alpha}^{\mu}\big{(}\partial_{\mu} \mathcal{R}(\Lambda)\psi(p)+\mathcal{R}(\Lambda)\partial_{\mu}\psi(p)\big{)} \;.\end{split} \tag{9}\] Here we choose the covariant derivative as \[\nabla_{\alpha}\psi(p)=e^{\mu}_{\alpha}\big{(}\partial_{\mu}+\varPi_{\mu}\big{)} \psi(p)\, \tag{10}\] where \(\varPi_{\mu}\) is the connection necessary to make the derivative covariant. By utilizing equations (6) and (8), we can ascertain that \(\varPi_{\mu}\) satisfies the following transformation, \[\varPi_{\mu}=\mathcal{R}(\varLambda)\varPi_{\mu}\mathcal{R}(\varLambda)^{-1}- \mathcal{R}(\varLambda)^{-1}\partial_{\mu}\mathcal{R}(\varLambda). \tag{11}\] To determine the specific form of \(\varPi_{\mu}\), we examine an infinitesimal local Lorentz transformation given by \(\varLambda_{\beta}^{\alpha}=\delta_{\beta}^{\alpha}+\epsilon_{\beta}^{\alpha}\). Under this transformation, the Dirac spinor transforms as \[\psi(p)\rightarrow\exp\left(\frac{i}{2}\epsilon^{\alpha\beta}\varSigma_{\alpha\beta} \right)\psi(p)\approx\left(1+\frac{i}{2}\epsilon^{\alpha\beta}\varSigma_{\alpha \beta}\right)\psi(p). \tag{12}\] Here, we define \(\varSigma_{\alpha\beta}=\frac{i}{4}[\gamma_{\alpha},\gamma_{\beta}]\), representing the spinor representation of the Lorentz transformation generators. The quantity \(\varSigma_{\alpha\beta}\) satisfies the Lie algebra \[[\varSigma_{\alpha\beta},\varSigma_{\gamma\delta}]=\eta_{\gamma\beta} \varSigma_{\alpha\delta}-\eta_{\gamma\alpha}\varSigma_{\beta\delta}+\eta_{ \delta\beta}\varSigma_{\gamma\alpha}-\eta_{\delta\alpha}\varSigma_{\gamma \beta}. \tag{13}\] Under the same Lorentz transformation, \(\varPi_{\mu}\) undergoes the following transformation, \[\begin{split}\varPi_{\mu}&\rightarrow\left(1+\frac{i}{2}\epsilon^{\alpha\beta} \varSigma_{\alpha\beta}\right)\varPi_{\mu}\left(1-\frac{i}{2}\epsilon^{\gamma \delta}\varSigma_{\gamma\delta}\right)-\frac{i}{2}\left(\partial_{\mu}\epsilon ^{\alpha\beta}\right)\varSigma_{\alpha\beta}\left(1-\frac{i}{2}\epsilon^{ \gamma\delta}\varSigma_{\gamma\delta}\right)\\ &=\varPi_{\mu}+\frac{i}{2}\epsilon^{\alpha\beta}[\varSigma_{ \alpha\beta},\varPi_{\mu}]-\frac{i}{2}\big{(}\partial_{\mu}\epsilon^{\alpha \beta}\big{)}\varSigma_{\alpha\beta}\.\end{split} \tag{14}\] Now, considering the transformation of the connection one-form under an infinitesimal Lorentz transformation (the infinitesimal version of equation (2)) and the transformation rule of \(\varPi_{\mu}\), along with equation (13), one can show \[\varPi_{\mu}=\frac{i}{2}\omega_{\mu}^{\alpha\beta}\varSigma_{\alpha\beta}. \tag{15}\] Here we arrive at the Lagrangian, which possesses scalar properties under both coordinate transformations and local Lorentz rotations,
\[\begin{split}\mathcal{L}&=\bar{\psi}\left(-i\hbar \gamma.\mathcal{D}+m\right)\psi\\ &=\bar{\psi}(p)\big{(}-i\hbar\gamma^{\alpha}\nabla_{\alpha}+ \gamma^{\alpha}qA_{\alpha}+m\big{)}\psi(p)\\ &=\bar{\psi}(p)\Bigg{[}-i\hbar\gamma^{\alpha}e^{\mu}_{\alpha} \Bigg{(}\partial_{\mu}+\frac{i}{2}\omega_{\mu}^{\gamma\delta}\varSigma_{ \gamma\delta}+\frac{iq}{\hbar}A_{\mu}\Bigg{)}+m\Bigg{]}\psi(p)\.\end{split} \tag{16}\] Taking the variation of the Lagrangian with respect to the Dirac field, we get the Dirac equation as
By dividing the equation by the exponential term and disregarding terms involving \(\hbar\), we obtain the following set of four equations (for more details, see appendix A), \[\begin{split}\alpha\bigg{\{}i(e^{t}_{0}\partial_{t}+e^{\phi}_{0} \partial_{\phi})\mathcal{I}+ie^{t}_{0}qA_{t}+ie^{\phi}_{0}qA_{\phi}\bigg{\}}+ \beta e^{r}_{1}\partial_{r}\mathcal{I}&=0\\ \beta(ie^{\theta}_{2}\partial_{\theta}\mathcal{I}+e^{\phi}_{3} \partial_{\phi}\mathcal{I}+qe^{\phi}_{3}A_{\phi})&=0\\ \alpha e^{r}_{1}\partial_{r}\mathcal{I}-\beta\bigg{\{}i(e^{t}_{0} \partial_{t}+e^{\phi}_{0}\partial_{\phi})\mathcal{I}+ie^{t}_{0}qA_{t}+ie^{ \phi}_{0}qA_{\phi}\bigg{\}}&=0\\ \alpha(ie^{\theta}_{2}\partial_{\theta}\mathcal{I}+e^{\phi}_{3} \partial_{\phi}\mathcal{I}+qe^{\phi}_{3}A_{\phi})&=0\.\end{split} \tag{25}\] Please be aware that here \(\alpha\) and \(\beta\) are not constant, their derivatives and the components of spin connections all have a factor of \(\hbar\). Hence, in the WKB approximation, these terms can be neglected to the lowest order. Since we only consider the Dirac field outside the event horizon, the above equations always fulfill the \(\varDelta>0\) condition. The second and fourth equations indicate that a nontrivial solution is only possible when \((\alpha,\ \beta)\neq 0\). Then from the second and fourth equations, we get, \[ie^{\theta}_{2}\partial_{\theta}\mathcal{I}+e^{\phi}_{3}\partial_{\phi} \mathcal{I}+qe^{\phi}_{3}A_{\phi}=0. \tag{26}\] By examining the first and third equations, it becomes evident that these two equations possess a non-trivial solution for \(\alpha\) and \(\beta\) only when the determinant of the coefficient matrix becomes zero. Subsequently, we can obtain, \[\left(e_{0}^{t}\partial_{t}\mathcal{I}+e_{0}^{\phi}\partial_{\phi}\mathcal{I}+e _{0}^{t}qA_{t}+e_{0}^{\phi}qA_{\phi}\right)^{2}-\left(e_{1}^{r}\partial_{r} \mathcal{I}\right)^{2}=0. \tag{27}\] Since the Kerr-Newman spacetime contains two Killing vectors, \((1,0,0,0)\) and \((0,0,0,1)\), we can employ variable separation for \(\mathcal{I}\) in the following manner, \[\mathcal{I}\left(t,r,\theta,\phi\right)=-\omega t+\mathcal{J}\phi+\mathcal{R} (r,\theta)\, \tag{28}\] where \(\omega\) and \(\mathcal{J}\) are the Dirac particle's energy and angular momentum. Now by substituting the given expression for \(\mathcal{I}\left(t,r,\theta,\phi\right)\) into equation (27), we can derive the following result, \[\left(e_{0}^{t}\omega-e_{0}^{\phi}\mathcal{J}-e_{0}^{t}qA_{t}-e_{0}^{\phi}qA_ {\phi}\right)^{2}-\left(e_{1}^{r}\partial_{r}\mathcal{R}\right)^{2}=0. \tag{29}\] Now we solve the above equation for \(\theta=\frac{\pi}{2}\) and get, \[\begin{split}\mathcal{R}_{\pm}&=\pm\int\frac{ \left(e_{0}^{t}\omega-e_{0}^{\phi}\mathcal{J}-e_{0}^{t}qA_{t}-e_{0}^{\phi}qA_{ \phi}\right)}{e_{1}^{r}}dr\\ &=\pm\int\frac{1}{\mathcal{I}}\left(\sqrt{\Sigma}(\omega-qA_{t}) -\frac{(2Mr-Q^{2})a}{\sqrt{\Sigma}}(\mathcal{J}+qA_{\phi})\right)dr\\ &=\pm\int\frac{1}{\mathcal{I}}\left(\sqrt{\Sigma}\left(\omega+q \frac{Q}{r}\right)-\frac{(2Mr-Q^{2})a}{\sqrt{\Sigma}}\left(\mathcal{J}+q\frac {Qa}{r}\right)\right)\.\end{split} \tag{30}\] As \(\mathcal{I}=r^{2}-2Mr+a^{2}+Q^{2}=(r-r_{+})(r-r_{-})\), the integrand has two poles at the inner and outer horizons. First, we consider the pole \(r=r_{+}\). Then the imaginary part of \(\mathcal{R}_{\pm}\) is given by, \[Im\mathcal{R}_{\pm}=\pm\pi\left(\frac{r_{+}^{2}+a^{2}}{r_{+}-r_{-}}\left( \omega+\frac{qQ}{r_{+}}\right)-\frac{a}{r_{+}-r_{-}}\left(\mathcal{J}+\frac{ qQa}{r_{+}}\right)\right). 
\tag{31}\] Similarly, if we consider the other pole, i.e., pole at \(r=r_{-}\), then the imaginary part of \(\mathcal{R}_{\pm}\) is, \[Im\mathcal{\tilde{R}}_{\pm}=\pm\pi\left(\frac{r_{-}^{2}+a^{2}}{r_{-}-r_{+}} \left(\omega+\frac{qQ}{r_{-}}\right)-\frac{a}{r_{-}-r_{+}}\left(\mathcal{J}+ \frac{qQa}{r_{-}}\right)\right). \tag{32}\] Using the Hamilton-Jacobi method of tunneling [21] now, we can calculate the tunneling probability. The probabilities of Dirac particles to cross the outer horizon from inside to outside and from outside to inside are respectively \(\mathcal{P}^{+}_{out}\) and \(\mathcal{P}^{+}_{in}\), where, \[\begin{split}\mathcal{P}^{+}_{out}&=exp\left[-\frac{2} {\hbar}Im\mathcal{I}\right]=exp\left[-\frac{2}{\hbar}Im\mathcal{R}_{+}\right]\\ \mathcal{P}^{+}_{in}&=exp\left[-\frac{2}{\hbar}Im \mathcal{I}\right]=exp\left[-\frac{2}{\hbar}Im\mathcal{R}_{-}\right]\.\end{split} \tag{33}\] Similarly, \(\mathcal{P}^{-}_{out}\) and \(\mathcal{P}^{-}_{in}\) are, respectively, probabilities of crossing the inner horizon towards outward and inward. Then we can write, \[\begin{split}\mathcal{P}^{-}_{out}&=exp\left[-\frac{2 }{\hbar}Im\mathcal{I}\right]=exp\left[-\frac{2}{\hbar}Im\mathcal{\tilde{R}}_{+ }\right]\\ \mathcal{P}^{-}_{in}&=exp\left[-\frac{2}{\hbar}Im \mathcal{I}\right]=exp\left[-\frac{2}{\hbar}Im\mathcal{\tilde{R}}_{-}\right]\.\end{split} \tag{34}\] The probability that a Dirac particle emits when it is incident on the outer and inner horizon from inside, respectively, \[\begin{split}\Gamma_{1}&=exp\left[-\frac{4}{\hbar} Im\mathcal{R}_{+}\right]\\ \Gamma_{2}&=exp\left[-\frac{4}{\hbar}Im\mathcal{ \tilde{R}}_{+}\right]\.\end{split} \tag{35}\] The total probability of particle emission via tunneling from two horizons is given by, \[\begin{split}\Gamma=\Gamma_{1}\Gamma_{2}&=exp\left[- \frac{4}{\hbar}\bigg{(}Im\mathcal{R}_{+}+Im\mathcal{\tilde{R}}_{+}\bigg{)} \right]\\ &=exp\left[-\frac{4\pi}{\hbar}\bigg{(}\omega(r_{+}+r_{-})+qQ \bigg{)}\right]\\ &=exp\left[-\frac{4\pi(r_{+}+r_{-})}{\hbar}\bigg{(}\omega+\frac{ qQ}{(r_{+}+r_{-})}\bigg{)}\right]\\ &=exp\left[-\frac{8M\pi(\omega-\omega_{0})}{\hbar}\bigg{]}\, \end{split} \tag{36}\] where \(\omega_{0}=-\frac{qQ}{r_{+}+r_{-}}=qV_{em}\). This probability function can be compared with Boltzmann distribution, and one can extract the corresponding temperature as, \[T_{H}=\frac{\hbar}{8\pi M}. \tag{37}\] Here, \(T_{H}\) denotes the effective Hawking temperature, considering contributions from both horizons. We observe that this effective temperature depends only on the mass of the black hole. It does not depend on the black hole's charge and angular momentum. Also, we show that the effective temperature is the same as the Hawking temperature of a Schwarzschild's black hole [1]. ## 4 Hawking radiation from a rotating BTZ black hole In this section, we consider a rotating black hole in three spacetime dimensions, specifically the rotating BTZ black hole spacetime. The metric describing the rotating BTZ black hole spacetime is given by [22, 23, 24, 25, 26, 27], \[ds^{2}=-\mathcal{N}^{2}dt^{2}+\frac{1}{\mathcal{N}^{2}}dr^{2}+r^{2}(d\phi+ \mathcal{N}^{\phi}dt)^{2}\, \tag{38}\] \(\mathcal{N}^{2}\) be defined as \(\left(-M+\frac{r^{2}}{l^{2}}+\frac{J^{2}}{4r^{2}}\right)=\frac{(r^{2}-r_{\pm} ^{2})(r^{2}-r_{-}^{2})}{l^{2}r^{2}}\), and \(\mathcal{N}^{\phi}\) as \(-\frac{J}{2r^{2}}\). Here, M represents the mass of the black hole, a dimensionless quantity, while J denotes the angular momentum. 
Additionally, the cosmological constant \(\Lambda\) is related to the AdS radius \(l\) as \(\Lambda\equiv-(1/l^{2})\). The spacetime given by the metric (38) exhibits two coordinate singularities located at \(r=r_{\pm}\), which define the horizons of the rotating BTZ black hole, where, \[r_{\pm}=\sqrt{\frac{Ml^{2}}{2}\left(1\pm\left[1-\frac{J^{2}}{M^{2}l^{2}}\right] ^{\frac{1}{2}}\right)}. \tag{39}\] Here also we compute the tetrads. The tetrads for this spacetime are given by [25], \[\begin{split} e_{0}^{\mu}&=\left(\frac{1}{\mathcal{ N}},0,-\frac{\mathcal{N}^{\phi}}{\mathcal{N}},\right)\\ e_{1}^{\mu}&=\left(0,\mathcal{N},0\right)\\ e_{2}^{\mu}&=\left(0,0,\frac{1}{r}\right)\.\end{split} \tag{40}\] Similar to the four-dimensional spacetime, we get a set of two Dirac equations (for more details, see appendix B) \[\begin{split}\alpha e_{2}^{\phi}\partial_{\phi}\mathcal{I}+ \beta(e_{1}^{r}\partial_{r}\mathcal{I}+e_{0}^{t}\partial_{t}\mathcal{I}+e_{0}^ {\phi}\partial_{\phi}\mathcal{I})=0\\ \alpha(e_{1}^{r}\partial_{r}\mathcal{I}-e_{0}^{t}\partial_{t} \mathcal{I}-e_{0}^{\phi}\partial_{\phi}\mathcal{I})-\beta e_{2}^{\phi} \partial_{\phi}\mathcal{I}=0\.\end{split} \tag{41}\] It becomes apparent that these two equations have a non-trivial solution for \(\alpha\) and \(\beta\) only when the determinant of the coefficient matrix equals zero. Consequently, we can derive, \[-(e_{2}^{\phi}\partial_{\phi}\mathcal{I})^{2}-(e_{1}^{r}\partial_{r}\mathcal{I} )^{2}+(e_{0}^{t}\partial_{t}\mathcal{I}+e_{0}^{\phi}\partial_{\phi}\mathcal{I}) ^{2}=0. \tag{42}\] Here, we apply a similar procedure of Kerr-Newman spacetime to calculate the effective temperature. At first, we calculate the \(\mathcal{R}\) and get, \[\begin{split}\mathcal{R}_{\pm}&=\pm\int\frac{ \sqrt{\left((\omega+\mathcal{J}\mathcal{N}^{\phi})^{2}-\frac{\mathcal{J}^{2} \mathcal{N}^{2}}{r^{2}}\right)}}{\mathcal{N}^{2}}dr\\ &=\pm\int\frac{l^{2}r^{2}\sqrt{\left((\omega-\mathcal{J}\frac{J}{ 2r^{2}})^{2}-\frac{\mathcal{J}^{2}\mathcal{N}^{2}}{r^{2}}\right)}}{(r^{2}-r_{ +}^{2})(r^{2}-r_{-}^{2})}dr\.\end{split} \tag{43}\] Then we calculate the imaginary part of \(\mathcal{R}_{\pm}\) both the pole at \(r=r_{+}\) and \(r=r_{-}\) similarly. We now define as \(Im\mathcal{R}_{+}+Im\mathcal{\bar{R}}_{+}\) (equation (36)) \(Im\mathcal{R}^{eff}\). Then for rotating BTZ black hole spacetime \(Im\mathcal{R}^{eff}\) is given by, \[\begin{split} Im\mathcal{R}^{eff}&=\frac{\pi}{2} \left[\frac{l^{2}r_{+}(\omega-\mathcal{J}\frac{J}{2r_{+}^{2}})}{(r_{+}^{2}-r_{ -}^{2})}+\frac{l^{2}r_{-}(\omega-\mathcal{J}\frac{J}{2r_{-}^{2}})}{(r_{-}^{2}- r_{+}^{2})}\right]\\ &=\frac{\pi l^{2}}{2(r_{+}+r_{-})}\left(\omega+\frac{J}{2r_{+}r_ {-}}\mathcal{J}\right)\\ &=\frac{\pi l^{2}}{2(r_{+}+r_{-})}\left(\omega+\mathcal{Q}_{eff} \mathcal{J}\right)\,\end{split} \tag{44}\] where \(\mathcal{Q}_{eff}=\frac{J}{2r_{+}r_{-}}\) is the effective angular velocity of the two horizons. Using equation (36), it is shown that the total probability of particle emission via tunneling from two horizons is given by, \[\Gamma=exp\left[-\frac{2\pi}{\hbar}\frac{l^{2}}{(r_{+}+r_{-})}\left(\omega+ \mathcal{Q}_{Heff}\mathcal{J}\right)\right]. \tag{45}\] This probability function can be likened to the Boltzmann distribution, allowing one to derive the associated temperature as follows: \[T_{H}=\frac{\hbar\ (r_{+}+r_{-})}{2\pi l^{2}}. \tag{46}\] Here, \(T_{H}\) represents the effective Hawking temperature, considering contributions from both horizons. 
From the above equations (46) and (39), it is clear that this temperature depends on the mass as well as the angular momentum of the black hole.

## 5 Conclusion

In this article, we have calculated the tunneling of a Dirac particle from black holes with multiple horizons. This calculation allows us to comment on the Hawking temperature for those black holes. Here, we studied two types of black hole spacetimes: a rotating and charged black hole in four spacetime dimensions, described by the Kerr-Newman metric, and a rotating black hole in three dimensions, described by the rotating BTZ black hole metric. We have shown that the effective Hawking temperature for the rotating and charged black hole in four spacetime dimensions depends only on the black hole's mass. It is independent of the charge and angular momentum of the black hole. Interestingly, this effective temperature matches the Hawking temperature of a Schwarzschild black hole. On the other hand, the effective Hawking temperature depends on the black hole's mass and angular momentum for the rotating black hole in three spacetime dimensions. It would be interesting to extend this formalism to higher-dimensional charged and rotating black holes and check whether the effective Hawking temperature depends on the angular momentum and charge of the black hole or not. These we leave for the future.

## Acknowledgments

CS thanks the Saha Institute of Nuclear Physics (SINP) Kolkata for financial support. We thank the reviewer for all the valuable comments and suggestions that helped us to improve the manuscript's quality.

## Appendix A Derivation of equation (25)

Equation (17) is the exact Dirac equation in curved spacetime. To solve it we apply the Hamilton-Jacobi method: we take the limit \(\hbar\to 0\) and consider the equation up to \(O(\hbar)\). Here we again consider a massless charged particle, so in our case \(m=0\). Now, upon substituting the ansatz (24) into equation (17), it becomes evident that within an approximation up to \(O(\hbar)\) we can neglect the spin coefficient \(\omega_{\mu}^{\alpha\beta}\). So, we start with an approximated Dirac equation by neglecting the spin coefficient, \[-i\hbar\gamma^{\alpha}e_{\alpha}^{\mu}\bigg{(}\partial_{\mu}+\frac{iq}{\hbar}A_{\mu}\bigg{)}\psi=0.
\tag{47}\] If we consider only the nonzero tetrads, then the above equation reduces to the following \[-i\hbar\bigg{(}\gamma^{0}e_{0}^{t}\partial_{t}+\gamma^{0}e_{0}^{\phi}\partial_{\phi}+\gamma^{1}e_{1}^{r}\partial_{r}+\gamma^{2}e_{2}^{\theta}\partial_{\theta}+\gamma^{3}e_{3}^{\phi}\partial_{\phi}+\gamma^{0}e_{0}^{t}\frac{iq}{\hbar}A_{t} \tag{48}\] \[+\gamma^{0}e_{0}^{\phi}\frac{iq}{\hbar}A_{\phi}+\gamma^{3}e_{3}^{\phi}\frac{iq}{\hbar}A_{\phi}\bigg{)}\psi=0\.\] The four gamma matrices are \[\gamma^{0}= \begin{pmatrix}i&0&0&0\\ 0&i&0&0\\ 0&0&-i&0\\ 0&0&0&-i\end{pmatrix}\quad\gamma^{1}= \begin{pmatrix}0&0&1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&-1&0&0\end{pmatrix} \tag{49}\] \[\gamma^{2}= \begin{pmatrix}0&0&0&-i\\ 0&0&i&0\\ 0&-i&0&0\\ i&0&0&0\end{pmatrix}\quad\gamma^{3}= \begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}\,.\] By inserting the values of the gamma matrices, we can write equation (48) as, \[-i\hbar\begin{pmatrix}A&0&B&C\\ 0&A&D&-B\\ B&C&-A&0\\ D&-B&0&-A\end{pmatrix}\begin{pmatrix}\alpha(t,r,\theta,\phi)\\ 0\\ \beta(t,r,\theta,\phi)\\ 0\end{pmatrix}e^{\frac{i}{\hbar}\mathcal{I}\,(t,r,\theta,\phi)}= \begin{pmatrix}0\\ 0\\ 0\\ 0\end{pmatrix}\,, \tag{50}\] where A, B, C, and D are \[A=i\bigg{(}e_{0}^{t}\partial_{t}+e_{0}^{t}\frac{iq}{\hbar}A_{t}+e_{0}^{\phi}\partial_{\phi}+e_{0}^{\phi}\frac{iq}{\hbar}A_{\phi}\bigg{)}, \tag{51}\] \[B=e_{1}^{r}\partial_{r},\] \[C=-ie_{2}^{\theta}\partial_{\theta}+e_{3}^{\phi}\partial_{\phi}+e_{3}^{\phi}\frac{iq}{\hbar}A_{\phi},\] \[D=ie_{2}^{\theta}\partial_{\theta}+e_{3}^{\phi}\partial_{\phi}+e_{3}^{\phi}\frac{iq}{\hbar}A_{\phi}\.\] Now, using the expressions for A, B, C, and D in equation (50), we get \[\begin{pmatrix}\alpha\bigg{\{}i(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})\mathcal{I}+ie_{0}^{t}qA_{t}+ie_{0}^{\phi}qA_{\phi}\bigg{\}}+\beta e_{1}^{r}\partial_{r}\mathcal{I}+O(\hbar)\\ \beta(ie_{2}^{\theta}\partial_{\theta}\mathcal{I}+e_{3}^{\phi}\partial_{\phi}\mathcal{I}+qe_{3}^{\phi}A_{\phi})+O(\hbar)\\ \alpha e_{1}^{r}\partial_{r}\mathcal{I}-\beta\bigg{\{}i(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})\mathcal{I}+ie_{0}^{t}qA_{t}+ie_{0}^{\phi}qA_{\phi}\bigg{\}}+O(\hbar)\\ \alpha(ie_{2}^{\theta}\partial_{\theta}\mathcal{I}+e_{3}^{\phi}\partial_{\phi}\mathcal{I}+qe_{3}^{\phi}A_{\phi})+O(\hbar)\end{pmatrix}e^{\frac{i}{\hbar}\mathcal{I}\,(t,r,\theta,\phi)}= \begin{pmatrix}0\\ 0\\ 0\\ 0\end{pmatrix}\,.\] Thus, we arrive at the following four equations, \[\alpha\Bigg{\{}i(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})\mathcal{I}+ie_{0}^{t}qA_{t}+ie_{0}^{\phi}qA_{\phi}\Bigg{\}}+\beta e_{1}^{r}\partial_{r}\mathcal{I} =0 \tag{53}\] \[\beta(ie_{2}^{\theta}\partial_{\theta}\mathcal{I}+e_{3}^{\phi}\partial_{\phi}\mathcal{I}+qe_{3}^{\phi}A_{\phi}) =0\] \[\alpha e_{1}^{r}\partial_{r}\mathcal{I}-\beta\Bigg{\{}i(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})\mathcal{I}+ie_{0}^{t}qA_{t}+ie_{0}^{\phi}qA_{\phi}\Bigg{\}} =0\] \[\alpha(ie_{2}^{\theta}\partial_{\theta}\mathcal{I}+e_{3}^{\phi}\partial_{\phi}\mathcal{I}+qe_{3}^{\phi}A_{\phi}) =0\.\]

## Appendix B Derivation of equation (41)

Here, we apply a similar procedure for writing an approximated Dirac equation for a rotating BTZ black hole.
The approximated Dirac equation for the rotating BTZ black hole spacetime is then given by, \[-i\hbar\gamma^{\alpha}e_{\alpha}^{\mu}\partial_{\mu}\psi =0 \tag{54}\] \[\implies-i\hbar\left(\gamma^{0}e_{0}^{\mu}\partial_{\mu}+\gamma^{1}e_{1}^{\mu}\partial_{\mu}+\gamma^{2}e_{2}^{\mu}\partial_{\mu}\right)\psi =0\.\] There are three gamma matrices in three dimensions, \(\gamma^{i}=(i\sigma^{2},\sigma^{1},\sigma^{3})\), where \((\sigma^{1},\sigma^{2},\sigma^{3})\) are the three Pauli spin matrices. Now, considering the nonzero tetrads for the BTZ black hole, we can write equation (54) as, \[-i\hbar\begin{pmatrix}e_{2}^{\phi}\partial_{\phi}&(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})+e_{1}^{r}\partial_{r}\\ e_{1}^{r}\partial_{r}-(e_{0}^{t}\partial_{t}+e_{0}^{\phi}\partial_{\phi})&-e_{2}^{\phi}\partial_{\phi}\end{pmatrix}\begin{pmatrix}\alpha(t,r,\phi)\\ \beta(t,r,\phi)\end{pmatrix}e^{\frac{i}{\hbar}\mathcal{I}\left(t,r,\phi\right)}=\begin{pmatrix}0\\ 0\end{pmatrix} \tag{55}\] \[\implies\begin{pmatrix}\alpha e_{2}^{\phi}\partial_{\phi}\mathcal{I}+\beta(e_{1}^{r}\partial_{r}\mathcal{I}+e_{0}^{t}\partial_{t}\mathcal{I}+e_{0}^{\phi}\partial_{\phi}\mathcal{I})+O(\hbar)\\ \alpha(e_{1}^{r}\partial_{r}\mathcal{I}-e_{0}^{t}\partial_{t}\mathcal{I}-e_{0}^{\phi}\partial_{\phi}\mathcal{I})-\beta e_{2}^{\phi}\partial_{\phi}\mathcal{I}+O(\hbar)\end{pmatrix}e^{\frac{i}{\hbar}\mathcal{I}\left(t,r,\phi\right)}=\begin{pmatrix}0\\ 0\end{pmatrix}\.\] So we get a set of two equations as follows, \[\alpha e_{2}^{\phi}\partial_{\phi}\mathcal{I}+\beta(e_{1}^{r}\partial_{r}\mathcal{I}+e_{0}^{t}\partial_{t}\mathcal{I}+e_{0}^{\phi}\partial_{\phi}\mathcal{I})=0 \tag{56}\] \[\alpha(e_{1}^{r}\partial_{r}\mathcal{I}-e_{0}^{t}\partial_{t}\mathcal{I}-e_{0}^{\phi}\partial_{\phi}\mathcal{I})-\beta e_{2}^{\phi}\partial_{\phi}\mathcal{I}=0\.\]
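As a consistency check on Appendix B (our own sketch, not part of the paper), the following verifies symbolically that the radial derivative implied by Eq. (43) satisfies the determinant condition of Eq. (42). The symbol names are ours: `J_bh` is the black hole spin \(J\) and `J_p` the particle's angular momentum \(\mathcal{J}\).

```python
import sympy as sp

r, l, Jbh, M = sp.symbols('r l J_bh M', positive=True)
omega, Jp = sp.symbols('omega J_p', positive=True)

# Metric functions of the rotating BTZ spacetime, Eq. (38)
N2 = -M + r**2 / l**2 + Jbh**2 / (4 * r**2)
Nphi = -Jbh / (2 * r**2)

# Nonzero tetrad components, Eq. (40)
e0t, e0phi = 1 / sp.sqrt(N2), -Nphi / sp.sqrt(N2)
e1r, e2phi = sp.sqrt(N2), 1 / r

# Action ansatz I = -omega*t + J_p*phi + R(r): dI/dt = -omega, dI/dphi = J_p
Rp = sp.Symbol('Rp')  # stands for dR/dr
det_cond = -(e2phi * Jp)**2 - (e1r * Rp)**2 + (e0t * (-omega) + e0phi * Jp)**2

# Radial derivative claimed by the integrand of Eq. (43)
Rp_expected = sp.sqrt((omega + Jp * Nphi)**2 - Jp**2 * N2 / r**2) / N2

# The determinant condition (42) should be satisfied identically
print(sp.simplify(det_cond.subs(Rp, Rp_expected)))  # expect 0
```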
2309.15172
Unbalanced Job Approximation using Taylor Series Expansion and Review of Performance Bounds
Unbalanced Job Approximation - UJA is a family of low-cost formulas to obtain the throughput of Queueing Networks - QNs with fixed-rate servers using Taylor series expansion of job loadings with respect to the mean loading. UJA with one term yields the same throughput as the optimistic Balanced Job Bound - BJB, which at some point exceeds the maximum asymptotic throughput. The accuracy of the estimated throughput increases with more terms in the Taylor series. UJA can be used in parametric studies by reducing the cost of solving large QNs by aggregating stations into a single Flow-Equivalent Service Center - FESC defined by its throughput characteristic. While UJA has been extended to two classes, it may be applied to more classes by job class aggregation. BJB has been extended to QNs with delay servers and multiple job classes by Eager and Sevcik; throughput bounds by Eager and Sevcik and by Kriz, the Proportional Bound - PB and Proportional Approximation Method - PAM by Hsieh and Lam, and the Geometric Bound - GB by Casale et al. are reviewed.
Alexander Thomasian
2023-09-26T18:13:34Z
http://arxiv.org/abs/2309.15172v1
# Unbalanced Job Approximation using Taylor Series Expansion and Review of Performance Bounds

###### Abstract

Unbalanced Job Approximation - UJA is a family of low-cost formulas to obtain the throughput of Queueing Networks - QNs with fixed-rate servers using Taylor series expansion of job loadings with respect to the mean loading. UJA with one term yields the same throughput as the optimistic Balanced Job Bound - BJB, which at some point exceeds the maximum asymptotic throughput. The accuracy of the estimated throughput increases with more terms in the Taylor series. UJA can be used in parametric studies by reducing the cost of solving large QNs by aggregating stations into a single Flow-Equivalent Service Center - FESC defined by its throughput characteristic. While UJA has been extended to two classes, it may be applied to more classes by job class aggregation. BJB has been extended to QNs with delay servers and multiple job classes by Eager and Sevcik; throughput bounds by Eager and Sevcik and by Kriz, the Proportional Bound - PB and Proportional Approximation Method - PAM by Hsieh and Lam, and the Geometric Bound - GB by Casale et al. are reviewed.

## 1 Introduction

_Queueing Network - QN_ modeling is used to model the realistic situation where customers (resp. jobs) require processing at more than one service station (resp. device in a computer system). QNs have been used in many application domains such as manufacturing systems, computer communication networks, and multiprogrammed computer systems. While efficient solution methods have been developed for a subcategory of QN models known as product-form, the solution cost increases with the number of stations, the number of job classes, and the number of jobs. QN models can be categorized into open, closed, and mixed. In open QNs there are external job arrivals and completed jobs leave the system, while in closed QNs completed jobs are immediately replaced with new jobs, as if there is an infinite backlog of jobs. We consider product-form QNs, which lend themselves to an exact solution and can be analyzed relatively efficiently Kleinrock 1975 [24]. The analysis of open product-form QNs with Poisson arrivals is trivial in that each station can be analyzed separately Jackson 1957 [22]. Closed product-form QNs require as inputs the degree of concurrency and, similarly to open QNs, the service demands or loadings, i.e., the mean total time jobs spend being served at the service stations.

_Unbalanced Job Approximation - UJA_ Thomasian and Nadji 1981 [38] is a family of approximations based on Taylor series expansions of job loadings with respect to the average loading \(X_{0}\). UJA starts with an upper bound on throughput, which equals the optimistic throughput of _Balanced Job Bounds - BJBs_ Zahorjan et al. 1982 [43]. UJA obtains more accurate throughput estimates by using a Taylor series expansion with respect to \(X_{0}\). UJA was presented with BJB at SIGMETRICS'81 Zahorjan et al. 1981 [42], but unlike BJB, which was publicized in Lazowska et al. 1984 [28], UJA is little known, hence this publication. UJA was applied in the analysis of the QN model of a computer system running CAD applications originated by users of _Time-Sharing Option - TSO_ on an IBM mainframe with the MVS operating system. A performance study based on the BEST/1 capacity planning tool [5] is reported by Silvester and Thomasian 1981 [36]. Given the large number of disks, measurement results were used to aggregate disks with almost balanced utilizations, while treating disks with very low utilizations as delay servers [28].
The paper is organized as follows. Product-form QNs are described in Section 2. The GF method for analyzing closed QNs is presented in Section 3. Aggregation of balanced stations to a _Flow Equivalent Service Center - FESC_ is described in Section 3.1. This is followed by UJA for a single job class and for two job classes in Sections 4 and 5, based on Nadji and Thomasian 1981/1984 [38, 31]. Conclusions are drawn in Section 6. Appendix I presents _Performance Bound Hierarchies - PBH_ by Eager and Sevcik 1983/86 [13, 14]. Appendix II presents extensions to BJB by Kriz 1984 [26]. Appendix III discusses asymptotic expansions with multiple bottlenecks by George et al. 2012 [15]. Appendix IV presents Geometric Bounds - GB by Casale et al. [6, 7] and compares them with Proportional Bounds - PB by Hsieh and Lam [19]. Appendix V presents the Proportional Approximation Method - PAM for multichain QNs by Hsieh and Lam 1989 [21]. We have preserved the notation of the original papers in the Appendices to make it easier for readers to refer to them.

## 2 Closed QNs With Product-Form Solutions

Jackson 1957 [22] allowed stations with single and multiple servers with exponential service times and FCFS scheduling. Arrivals to open QNs are Poisson, and jobs are routed from station to station according to probabilistic routing. The analysis of such QN models is inexpensive, since each station can be analyzed separately as if it is subjected to Poisson arrivals. The external arrival rates of jobs to the \(N\) stations are given by: \[\underline{\gamma}=(\gamma_{1},\gamma_{2},\ldots,\gamma_{N})\]
Given mean service time at \(\mathcal{S}_{n}\) per visit is \(\bar{x}_{n}\), the mean loading per job is \(X_{n}=v_{n}\bar{x}_{n},1\leq n\leq N\). Inputs required for the analysis of closed QNs are the number of jobs or the degree of concurrency, probabilistic routings among stations which lead to the mean number of visits to the stations and mean service times per visit, which yields loadings. Performance metrics of interest in modeling closed QNs are system throughput, mean residence time, device utilizations and mean queue-lengths. Jackson theorem which only allowed service stations with exponential servers with FCFS scheduling was extended by the BCMP theorem to four types of stations Baskett et al. 1975 [3]. \[F_{n}(k)=\begin{cases}X_{n}^{k}&\text{FCFS, PS, LCFSPR}\\ X_{n}^{k}/k&\text{Infinite-Server - IS}\\ X_{n}^{k}/\prod_{j=1}^{k}a(j)&\text{Queue-dependent}\end{cases} \tag{2}\] In a queue-dependent station the service rate varies with the number of jobs: \[F_{n}(k)=X_{n}^{k}/a(k),\text{ where }a(k)=\mu(k)/\mu(1)\] \(a(k)\) is the ratio of the service rate with \(k\) versus one job at the station. Multiserver and IS or delay server queues are special cases with \(a(k)=k,1\leq k\leq m\) for \(m\)-server. For IS or delay servers \(m=\infty\). General service times are allowed for _Processor Sharing - PS_ and IS service stations. Buzen 1973 [4] developed the _Convolution - CA_ algorithm to analyze closed QNs and applied it to CSM. We consider mostly fixed rate single server and delay servers in our discussion. The solution of open product-form QN models is trivial. Given the arrival rate to each station its mean residence time can be obtained by solving the corresponding station independently. Consider the processing of \(K\) jobs in a closed QN with a single jobs with \(N\) stations. Job service demands or loadings are the products of mean service time \(\bar{x}_{n}\) and the mean number of visits (\(v_{n}\)) to station \(\mathcal{S}_{n}\), i.e., \(X_{n}=v_{n}\bar{x_{n}}\). The steady-state state probability in a closed QN with \(K\) jobs with \(N\) fixed-rate single server-stations with job distribution: \[\underline{k}=(k_{1},k_{2},\ldots,k_{N})\text{ with }\sum_{n=1}^{N}=K.\] \[P[\underline{k}]=\frac{X_{1}^{k_{1}}X_{2}^{k_{2}}\ldots X_{N}^{k_{N}}}{G(K)} \ G(K)=\sum_{\underline{k}\in\mathcal{K}}\prod_{n=1}^{N}X_{n}^{k_{n}}. \tag{3}\] The number of terms grows rapidly with the size of the network such that a direct summation of multinomial expressions may be computationally intractable. Multiple job classes or chains yield a more realistic representations of QN models, where each class has its own routings probabilities and service times per visit and hence different loadings at the stations. Given \(R\) job classes and \(k_{r},1\leq r\leq R\) jobs in class \(r\), the number of states is. \[\prod_{r=1}^{R}\binom{K_{r}+N-1}{N-1}\] There are several efficient computational methods to analyze product-form closed QNs. The _Convolution Algorithm - CA_ was proposed in the context of CSM with a single job class Buzen 1973 [4] and was extended as part of the BEST/1 capacity planning package by Buzen et al. [5]. IBM's Reiser and Kobayashi 1975 [32] used generating functions, see e.g., [24], in extending BCA to multiple job classes. 
The application of GFs to analyze closed QNs is presented in a tutorial manner in Williams and Bhandiwad 1976 [41], Thomasian and Nadji 1981 [38], and Trivedi 2002 [40].

## 3 Generating Functions for Analyzing Product-Form QNs

The GF of station \(\mathcal{S}_{n},1\leq n\leq N\) is given by: \[x_{n}(t)=\sum_{k=0}^{\infty}F_{n}(k)t^{k}, \tag{4}\] where \(F_{n}(k)\) was given by Eq. (2). For stations with fixed-rate single servers \(F_{n}(k)=X_{n}^{k}\). \[x_{n}(t) =\sum_{k\geq 0}(X_{n}t)^{k}=1+(X_{n}t)+(X_{n}t)^{2}+\ldots \tag{5}\] \[=(1-X_{n}t)^{-1},|X_{n}t|<1.\] If the \(n^{th}\) station is IS: \[x_{n}(t)=1+X_{n}t+\frac{(X_{n}t)^{2}}{2!}+\cdots=e^{X_{n}t}. \tag{6}\] The GF for the QN is the product of the GFs for the individual stations, \[g(t)=\prod_{n=1}^{N}x_{n}(t), \tag{7}\] which can be rewritten as: \[g(t)=1+G(1)t+G(2)t^{2}+\ldots+G(K)t^{K}+\ldots \tag{8}\] In the case of single-server stations with \(N=2\) stations, as \(K\) increases: \[\begin{cases}G(1)=X_{1}+X_{2},\\ G(2)=X_{1}^{2}+X_{2}^{2}+X_{1}X_{2},\\ G(3)=X_{1}^{3}+X_{1}^{2}X_{2}+X_{1}X_{2}^{2}+X_{2}^{3},\\ \vdots\end{cases}\] Given \[g_{n}(t)=g_{n-1}(t)x_{n}(t),1\leq n\leq N\] and noting that \(x_{n}(t)=(1-X_{n}t)^{-1}\), it follows \[g_{n}(t)=g_{n-1}(t)+X_{n}t\,g_{n}(t),\ \ 1\leq n\leq N\] Equating the coefficients of \(t^{k}\) on both sides we have: \[\boxed{G_{n}(k)=G_{n-1}(k)+X_{n}G_{n}(k-1),} \tag{9}\] \[1\leq k\leq K,1\leq n\leq N.\] The utilization of the single server at station \(n\) is the sum of the state probabilities with at least one job at the station: \[\boxed{U_{n}(K)=X_{n}\frac{G(K-1)}{G(K)}.} \tag{10}\] The system throughput follows from the utilization law [24]: \[\boxed{T(K)=\frac{U_{n}(K)}{X_{n}}=\frac{G(K-1)}{G(K)}.} \tag{11}\] If the \(X\)'s are expressed in seconds, \(T(K)\) will be in jobs/second. The mean queue length at the \(n^{th}\) single-server station is: \[\boxed{Q_{n}(K)=\frac{1}{G(K)}\sum_{k=1}^{K}X_{n}^{k}G(K-k).} \tag{12}\] The CA to compute the normalization constant \(G(K)\) in a single class QN is:

Initializations: \[\begin{cases}G_{0}(k)=0&1\leq k\leq K\\ G_{n}(0)=1&0\leq n\leq N\end{cases}\]
for \(k=1\) to \(K\) do
  for \(n=1\) to \(N\) do
    \(G_{n}(k)=G_{n-1}(k)+X_{n}G_{n}(k-1)\)
  end
end

### 3.1 Aggregation of Balanced Stations

When \(M\leq N\) stations have balanced loads, i.e., \(X_{n}=X,1\leq n\leq M\), it follows from Eq. (7) that: \[v(t)=\left[1+Xt+(Xt)^{2}+\ldots\right]^{M}=\left(1-Xt\right)^{-M} \tag{13}\] Applying the Binomial Theorem we have: \[v(t)=\sum_{k=0}^{\infty}\binom{M+k-1}{k}(Xt)^{k}. \tag{14}\] The GF for the aggregate station can be written as: \[v(t)=\sum_{k=0}^{\infty}(MXt)^{k}/\prod_{j=1}^{k}a(j),\ \ a(j)=\frac{jM}{M+j-1}. \tag{15}\] The throughput using Eq. (11) is: \[T(k)=\frac{\binom{M+k-2}{k-1}X^{k-1}}{\binom{M+k-1}{k}X^{k}} \tag{16}\] It follows that the throughput with \(k\) jobs is: \[\boxed{T(k)=\frac{k}{M+k-1}\frac{1}{X}} \tag{17}\] As \(k\rightarrow\infty\) the maximum throughput is \(T_{max}=1/X\).
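The CA recursion of Eq. (9) and the throughput of Eq. (11) can be coded in a few lines; the sketch below (ours, with invented loadings) also checks the balanced-station throughput of Eq. (17).

```python
from math import isclose

def convolution_G(X, K):
    """Normalization constants G(0..K) for fixed-rate single servers, Eq. (9)."""
    G = [1.0] + [0.0] * K          # G_0(0) = 1, G_0(k) = 0 for k >= 1
    for Xn in X:                   # fold stations in one at a time
        for k in range(1, K + 1):  # G_n(k) = G_{n-1}(k) + X_n G_n(k-1)
            G[k] += Xn * G[k - 1]
    return G

def throughput(X, K):
    G = convolution_G(X, K)
    return G[K - 1] / G[K]         # Eq. (11): T(K) = G(K-1)/G(K)

# Balanced network: M stations, each with loading X0
M, X0, K = 5, 0.2, 6
T = throughput([X0] * M, K)
assert isclose(T, K / ((M + K - 1) * X0))   # Eq. (17)
print(f"T({K}) = {T:.4f} jobs/second")
```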
We can use the symmetry due to balancedness to obtain the mean residence time at single-server stations using the key equation in _Mean Value Analysis - MVA_ Reiser and Lavenberg 1980 [33], which is based on the Arrival Theorem Lavenberg and Reiser 1980 [27]: in a QN with \(k\) jobs, a job arriving at a station sees the mean queue-length with one job less. The \(k-1\) jobs are equally distributed among the \(M\) stations, hence: \[r_{n}(k)=\bar{x}_{n}\left(1+\frac{k-1}{M}\right)\] Multiplying both sides by \(v_{n}\): \[\boxed{R_{n}(k)=X_{n}\left(1+\frac{k-1}{M}\right).} \tag{18}\] The mean residence time in the system is obtained by multiplying by \(M\): \[R(k)=MR_{n}(k)=(M+k-1)X,\] from which the throughput \(T(k)=k/R(k)\) as given by Eq. (17) follows. The aggregation of IS or delay stations in a QN is particularly simple: all such stations whose indices are in the set \(\mathcal{I}\) can be replaced by a single IS station with service demand \[\boxed{X_{I}=\sum_{i\in\mathcal{I}}X_{i}.}\]

## 4 Aggregation of Unbalanced Stations

Aggregation of balanced stations serves as a starting point to obtain the throughput of an FESC with unbalanced stations. Define the mean service demand over \(M\) fixed-rate single-server stations: \[X_{0}=\sum_{n=1}^{M}X_{n}/M.\] The deviations of the service demands from the mean and their moments are as follows: \[e_{n}=\frac{X_{n}-X_{0}}{X_{0}},\ \ 1\leq n\leq M\ \text{and}\ E_{j}=\sum_{n=1}^{M}e_{n}^{j}. \tag{19}\] The GF of the \(n^{th}\) station can be given as its Taylor series expansion as follows: \[x_{n}(t)=x_{0}(t)+(X_{n}-X_{0})x_{0}^{(1)}(t)+ \tag{20}\] \[\frac{1}{2!}(X_{n}-X_{0})^{2}x_{0}^{(2)}(t)+\ldots,\] Making the substitution \((X_{n}-X_{0})=e_{n}X_{0}\) and noting that the \(j^{th}\) derivative with respect to \(X_{0}\) is \[x_{0}^{(j)}(t)=j!t^{j}[x_{0}(t)]^{j+1}\] \[x_{n}(t)=x_{0}(t)+(e_{n}X_{0}t)x_{0}^{2}(t)+ \tag{21}\] \[(e_{n}X_{0}t)^{2}x_{0}^{3}(t)+(e_{n}X_{0}t)^{3}x_{0}^{4}(t)+\ldots\] Substituting \(x_{n}(t)\) into Eq. (13) and noting that \(E_{1}=0\) we have \[v(t)=x_{0}^{M}(t)+\frac{1}{2}(X_{0}t)^{2}E_{2}x_{0}^{M+2}(t)+ \tag{22}\] \[\frac{1}{3}(X_{0}t)^{3}E_{3}x_{0}^{M+3}(t)+\ldots\] In the above expression \(x_{0}^{M}(t)\) represents the GF for \(M\) balanced stations. It follows from the definition of the \(E_{j}\)'s that this series converges when the stations are not highly unbalanced and the \(e_{n}\)'s are small. The coefficient of \(t^{k}\) in Eq. (22) is given as: \[V(k)=V_{0}(k)+\frac{X_{0}^{2}}{2}V_{0}(k-2)E_{2} \tag{23}\] \[+\frac{X_{0}^{3}}{3}V_{0}(k-3)E_{3}+\ldots\] \(V_{0}(k)=\binom{M+k-1}{k}X_{0}^{k}\) is the coefficient of \(t^{k}\) in \(x_{0}^{M}(t)\). We can rewrite Eq. (23) as \[V(k)=V_{0}(k)[1+R_{0}(k)],\ \text{where} \tag{24}\] \[R_{0}(k)=\sum_{j=2}^{\infty}\frac{E_{j}}{j}\left[\prod_{i=0}^{j-1}\frac{k-i}{M+i}\right]\] The throughput of the subnetwork with \(k\) jobs is given as \[T(k)=\frac{V(k-1)}{V(k)}=T_{0}(k)\frac{1+R_{0}(k-1)}{1+R_{0}(k)} \tag{25}\] where \(T_{0}(k)\) is the throughput of the \(M\) balanced stations. We can obtain a family of approximations for \(T(k)\) according to the number of terms in the summation for \(R_{0}(k)\). Maintaining multiple terms in Eq. (23): \[V(k)=V_{2}(k)+V_{0}(k)R_{2}(k) \tag{26}\] \[V_{2}(k)=V_{0}(k)\left[1+\frac{1}{2}\frac{k(k-1)}{M(M+1)}E_{2}+\frac{1}{3}\frac{k(k-1)(k-2)}{M(M+1)(M+2)}E_{3}\right].\] \[R_{2}(k)=\sum_{j=4}^{\infty}\frac{E_{j}}{j}\prod_{i=0}^{j-1}\frac{k-i}{M+i}\] The expressions for throughput with one and two summation terms are: \[T_{1}(k)=T_{0}(k)\left[1-\frac{\frac{(k-1)E_{2}}{M(M+1)}}{1+\frac{k(k-1)E_{2}}{2M(M+1)}}\right] \tag{27}\] \[T_{2}(k)=T_{0}(k)\left[1-\frac{\frac{k-1}{M(M+1)}\left(E_{2}+\frac{k-2}{M+2}E_{3}\right)}{1+\frac{k(k-1)}{M(M+1)}\left(\frac{1}{2}E_{2}+\frac{1}{3}\frac{k-2}{M+2}E_{3}\right)}\right] \tag{28}\] Given the coefficient of variation \(c\), which is the standard deviation divided by the mean, and the coefficient of skewness \(\beta\), which is the third central moment divided by \(c^{3}\), we have \(E_{2}=Mc^{2}\) and \(E_{3}=M\beta c^{3}\).¹ Eq. (27) and Eq. (28) can be rewritten using this notation as: \[\boxed{T_{1}(k)=T_{0}(k)\left[1-\frac{\frac{k-1}{M+1}c^{2}}{1+\frac{k(k-1)}{2(M+1)}c^{2}}\right]} \tag{29}\] \[T_{2}(k)=T_{0}(k)\left[1-\frac{\frac{k-1}{M+1}c^{2}\left(1+\frac{k-2}{M+2}c\beta\right)}{1+\frac{k(k-1)}{M+1}c^{2}\left(\frac{1}{2}+\frac{k-2}{3(M+2)}c\beta\right)}\right] \tag{30}\]

Footnote 1: https://grapherhelp.goldensoftware.com/WTOPICS/WKS_Skew.htm

In [39] we determine the relative errors \(T_{0}(k)/T_{2}(k)-1\) (resp. \(T_{1}(k)/T_{2}(k)-1\)) for varying \(c\) (resp. \(c\) and \(\beta\)), with respect to \(T_{2}(k)\) since it is quite accurate. \(T_{0}(k)\), which is the throughput based on the average service demand \(X_{0}\), yields the maximum throughput as shown in [43]. Given that there are \(m_{n}\) servers at station \(n\) with mean service demand \(X_{n}\), it follows from the fact that the utilization factor should not exceed one: \[U_{n}(k)=\frac{T(k)X_{n}}{m_{n}}<1\] It is shown in [30] that the maximum throughput of a closed QN is given as the minimum of the service station throughputs, hence as \(k\) is increased \(T(k)\) is bounded as: \[T_{max}=\min\left(\frac{m_{1}}{X_{1}},\frac{m_{2}}{X_{2}},\ldots,\frac{m_{N}}{X_{N}}\right)\geq T(k),k\geq 1. \tag{31}\] Since we are concerned with single-server queues, \(m_{n}=1,\forall n\) and \(T_{max}=1/D_{max}\). In fact, for lower values of \(k\) the throughput is bounded by the line through the origin with slope \(T(1)=[\sum_{n=1}^{N}X_{n}]^{-1}\). Throughput convexity is shown by Dowdy et al. 1984 [11] for QNs with fixed-rate and delay servers. Optimistic (resp. pessimistic) _Balanced Job Bound - BJB_ analysis of QNs utilizes the average (resp. maximum) service demand, yielding an upper (resp. lower) bound on throughput [43],[28]. Since the optimistic BJB bound may exceed \(T_{max}\), it should only be used up to the point where it intersects \(T_{max}\): \[\frac{K}{(K+N-1)D_{max}}\leq T(K)\leq\min\left(\frac{1}{D_{max}},\frac{K}{(K+N-1)D_{avg}}\right).\] It follows easily from Eq. (31) that when the sum of the service demands is fixed, the throughput is maximized when the service demands at the stations equal the mean. From a pragmatic viewpoint, there is a drop from the maximum throughput as the _MultiProgramming Level - MPL_ is increased in virtual memory systems. This is due to increased paging, which may result in a severe performance degradation. The frequency of paging is given by the lifetime curve, which is the time between page faults as a function of the number of page frames dedicated to a program (see e.g., Figure 9.5 in [28]). As the MPL is increased, fewer page frames can be allocated to each program. A QN model is used to obtain the throughput characteristic vs the MPL, which shows a drop beyond a certain maximum throughput. Increasing the MPL further may result in a severe degradation in throughput referred to as thrashing Denning 1970 [9].
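Returning to the UJA formulas, the sketch below (ours; the service demands are invented, chosen close to the accuracy experiment reported later) evaluates \(T_{0}\) (Eq. (17)), \(T_{1}\) (Eq. (29)), and \(T_{2}\) (Eq. (30)) against the exact convolution throughput.

```python
def exact_T(X, k):                       # convolution, Eqs. (9) and (11)
    G = [1.0] + [0.0] * k
    for Xn in X:
        for j in range(1, k + 1):
            G[j] += Xn * G[j - 1]
    return G[k - 1] / G[k]

def uja_T(X, k, terms=2):
    """T_0, T_1, T_2 of Eqs. (29)-(30) from the loading moments."""
    M = len(X)
    X0 = sum(X) / M
    e = [(Xn - X0) / X0 for Xn in X]
    c2 = sum(en**2 for en in e) / M                       # c^2, E_2 = M c^2
    cb = sum(en**3 for en in e) / M / c2 if c2 else 0.0   # c*beta = E_3/(M c^2)
    T0 = k / ((M + k - 1) * X0)                           # Eq. (17)
    if terms == 0:
        return T0
    if terms == 1:                                        # Eq. (29)
        num = (k - 1) / (M + 1) * c2
        den = 1 + k * (k - 1) / (2 * (M + 1)) * c2
        return T0 * (1 - num / den)
    num = (k - 1) / (M + 1) * c2 * (1 + (k - 2) / (M + 2) * cb)       # Eq. (30)
    den = 1 + k * (k - 1) / (M + 1) * c2 * (0.5 + (k - 2) / (3 * (M + 2)) * cb)
    return T0 * (1 - num / den)

X = [0.25, 0.23, 0.20, 0.17, 0.15]       # invented demands, X0 = 0.2
for k in (1, 3, 6):
    print(k, round(uja_T(X, k, 0), 4), round(uja_T(X, k, 1), 4),
          round(uja_T(X, k, 2), 4), round(exact_T(X, k), 4))
```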
## 5 Aggregation of Balanced Stations With Two Classes

Consider two job classes with loadings \(X_{n}\) and \(Y_{n}\) at station \(\mathcal{S}_{n}\). For fixed-rate single-server stations the GF is \[h_{n}(t,u)=\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}F_{n}(k,\ell)t^{k}u^{\ell}=\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}\binom{k+\ell}{k}(X_{n}t)^{k}(Y_{n}u)^{\ell}=(1-X_{n}t-Y_{n}u)^{-1}.\] In the case of IS stations \[F_{n}(k,\ell)=\frac{(X_{n}t)^{k}}{k!}\frac{(Y_{n}u)^{\ell}}{\ell!}\] \[h_{n}(t,u)=\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}\frac{(X_{n}t)^{k}}{k!}\frac{(Y_{n}u)^{\ell}}{\ell!}= \tag{35}\] \[\sum_{m=0}^{\infty}\frac{(X_{n}t+Y_{n}u)^{m}}{m!}=e^{X_{n}t+Y_{n}u}.\] The GF for the \(M\leq N\) stations to be aggregated is \[v(t,u)=\prod_{i=1}^{M}h_{i}(t,u).\] When the QN is balanced, \(X_{n}=X,Y_{n}=Y,1\leq n\leq N\). The GF for the \(M\) stations to be aggregated is: \[v(t,u)=[x(t,u)]^{M}=(1-Xt-Yu)^{-M} \tag{36}\] \[=\sum_{j=0}^{\infty}\binom{M+j-1}{j}(Xt+Yu)^{j}.\] \[v(t,u)=\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}V(k,\ell)(Xt)^{k}(Yu)^{\ell}. \tag{37}\] Equating the coefficients of \(t^{k}u^{\ell}\) of \(v(t,u)\), \[V(k,\ell)=\binom{M+k+\ell-1}{k+\ell}\binom{k+\ell}{k}. \tag{38}\]
The throughputs in the two classes are given as: \[T^{(1)}(K,L)=\frac{V(K-1,L)}{V(K,L)}=\frac{K}{M+K+L-1}\cdot\frac{1}{X_{0}} \tag{39}\] \[T^{(2)}(K,L)=\frac{V(K,L-1)}{V(K,L)}=\frac{L}{M+K+L-1}\cdot\frac{1}{Y_{0}}\] If the QN is not balanced, we expand the GF for \(\mathcal{S}_{n}\) around the mean loadings \(X_{0}\) and \(Y_{0}\): \[h_{n}(t,u)=\sum_{j=0}^{\infty}\frac{1}{j!} \tag{40}\] \[\left[\left((X_{n}-X_{0})\frac{\partial}{\partial X_{0}}+(Y_{n}-Y_{0})\frac{\partial}{\partial Y_{0}}\right)^{j}h_{0}(t,u)\right]=\] \[\sum_{j=0}^{\infty}\left[(X_{n}-X_{0})t+(Y_{n}-Y_{0})u\right]^{j}h_{0}^{j+1}(t,u).\] \[v(t,u)/h_{0}^{M}(t,u)= \tag{41}\] \[1+\frac{1}{2}h_{0}^{2}(t,u)\sum_{n=1}^{M}[(X_{n}-X_{0})t+(Y_{n}-Y_{0})u]^{2}+\] \[\frac{1}{3}h_{0}^{3}(t,u)\sum_{n=1}^{M}[(X_{n}-X_{0})t+(Y_{n}-Y_{0})u]^{3}+\ldots\] Equating the coefficients of \(t^{k}u^{\ell}\), \[V(K,L)=V_{0}(K,L)+ \tag{42}\] \[X_{0}^{K}Y_{0}^{L}\sum_{i=2}^{\infty}\frac{1}{i}\sum_{j=0}^{i}\left[\binom{M+K+L-1}{K+L-1}\binom{K+L-i}{K-j}\binom{i}{j}E_{j,i-j}\right]\] \[E_{i,j}=\sum_{n=1}^{M}\left(\frac{X_{n}-X_{0}}{X_{0}}\right)^{i}\left(\frac{Y_{n}-Y_{0}}{Y_{0}}\right)^{j}\] The first order approximation can be expressed as: \[V_{1}(K,L)=V_{0}(K,L)[1+\frac{K(K-1)}{2M(M+1)}E_{2,0}+ \tag{43}\] \[\frac{L(L-1)}{2M(M+1)}E_{0,2}+\frac{KL}{M(M+1)}E_{1,1}].\] Given that \(c_{X}\), \(c_{Y}\), and \(c_{X,Y}\) are the coefficients of variation and correlation for \(X_{n}\) and \(Y_{n}\), then: \[E_{2,0}=Mc_{X}^{2}\ \ E_{0,2}=Mc_{Y}^{2}\ \ E_{1,1}=Mc_{X,Y}c_{X}c_{Y}.\] The first-order throughput approximation for class 1 then follows from Eq. (43) as \[T_{1}^{(1)}(K,L)=T_{0}^{(1)}(K,L)\, \tag{44}\] \[\frac{1+\frac{1}{2(M+1)}\left[(K-1)(K-2)c_{X}^{2}+L(L-1)c_{Y}^{2}+2(K-1)Lc_{X}c_{Y}c_{X,Y}\right]}{1+\frac{1}{2(M+1)}\left[K(K-1)c_{X}^{2}+L(L-1)c_{Y}^{2}+2KLc_{X}c_{Y}c_{X,Y}\right]}\]

### Accuracy of Unbalanced Job Approximation

A small experiment to assess the accuracy of UJA is given below. The service demands at the \(N=5\) stations are as follows: \[\underline{X}=\{0.25,0.23,0.19,0.17,0.15\},\] such that \(X_{0}=0.2\), \(c=0.179\), and \(\beta=0.079\). Exact results were obtained using the CA. A larger experiment with 100,000 random service demands in a QN with \(N=18\) stations is reported in [31]. More than 99% of the results obtained by the \(T_{1}\) and \(T_{2}\) throughputs are within 10% accuracy for the range of numbers of jobs considered. \(T_{0}\) is accurate within 15% in 97% of the examples when the average utilization was 40%. An experiment similar to Table 1 with two job classes is reported in [31].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(k\) & \(T_{0}(k)\) & \(T_{1}(k)\) & \(T_{2}(k)\) & \(T(k)\) \\ \hline 1 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ 2 & 1.6667 & 1.6578 & 1.6578 & 1.6578 \\ 3 & 2.1429 & 2.1204 & 2.1203 & 2.1203 \\ 4 & 2.5000 & 2.4612 & 2.4611 & 2.4610 \\ 5 & 2.7778 & 2.7215 & 2.7212 & 2.7209 \\ 6 & 3.0000 & 2.9259 & 2.9254 & 2.9245 \\ \hline \end{tabular} \end{table} Table 1: Comparison of approximate and exact throughputs for a single job class.
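A small sketch (ours) of the balanced two-class throughputs of Eq. (39) and the deviation moments \(E_{i,j}\) of Eq. (42), with invented loadings:

```python
def balanced_two_class_T(M, K, L, X0, Y0):
    """Per-class throughputs of an aggregate of M balanced stations, Eq. (39)."""
    T1 = K / ((M + K + L - 1) * X0)
    T2 = L / ((M + K + L - 1) * Y0)
    return T1, T2

def moments_E(X, Y, i, j):
    """Joint deviation moments E_{i,j} of Eq. (42)."""
    M = len(X)
    X0, Y0 = sum(X) / M, sum(Y) / M
    return sum(((x - X0) / X0)**i * ((y - Y0) / Y0)**j for x, y in zip(X, Y))

# Invented two-class loadings at M = 4 stations
X = [0.20, 0.25, 0.15, 0.20]   # class-1 loadings
Y = [0.10, 0.05, 0.15, 0.10]   # class-2 loadings
print(balanced_two_class_T(M=4, K=3, L=2, X0=0.2, Y0=0.1))
print("E20 =", round(moments_E(X, Y, 2, 0), 4),
      " E02 =", round(moments_E(X, Y, 0, 2), 4),
      " E11 =", round(moments_E(X, Y, 1, 1), 4))
```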
## 6 Conclusions

_Unbalanced Job Approximation - UJA_ is a family of low-cost methods starting with an upper bound for the throughput based on the mean service demand \(X_{0}\); more accurate throughput estimates are obtained using additional terms in a Taylor series expansion. Given a large number of stations, instead of aggregating all stations at once, aggregation can be applied to groups of single-server stations with close utilizations. Given that UJA is restricted to two classes and that it is difficult to extend it to more classes, more job classes can be dealt with by clustering multiple chains into one, see e.g., deSouza et al. 1986 [10]. The error introduced by applying such methods is discussed in Cheng and Muntz 1986 [8]. Performance bound hierarchies for single and multiple job classes are given by Eager and Sevcik [13, 14]. Kerola 1986 [23] is a less influential work on this topic.

## Appendix I: Performance Bound Hierarchies

Eager and Sevcik 1983 [13] extended BJB to _Performance Bound Hierarchies - PBH_. Consider a closed QN with \(K\) stations, \(N\) jobs, and loadings \(L_{k},1\leq k\leq K\). According to the arrival theorem [27] \[R_{k}(N)=L_{k}[1+\bar{n}_{k}(N-1)] \tag{45}\] where \(\bar{n}_{k}(N-1)\) is the mean number of jobs at station \(k\). It follows that the mean residence time of jobs is \(R(N)=\sum_{k=1}^{K}R_{k}(N).\) An application of Little's result yields \[\bar{n}_{k}(N-1)=\frac{R_{k}(N-1)}{Z+R(N-1)}(N-1) \tag{46}\] Substituting \(\bar{n}_{k}(N-1)\) in Eq. (45), \[R_{k}^{(i)}(N)=L_{k}\left[1+\frac{R_{k}^{(i-1)}(N-1)}{Z+R^{(i-1)}(N-1)}(N-1)\right] \tag{47}\] The optimistic hierarchy starts with the initialization: \[R_{k\text{ opt}}^{(0)}=\frac{1}{K}\max[NL_{b}-Z,1] \tag{48}\] where \(b\) is the index of the bottleneck station. It follows: \[R_{\text{opt}}^{(0)}(N)=\max[NL_{b}-Z,1] \tag{49}\] Using \(a(N)=\max[NL_{b}-Z,1]\), \[R_{\text{opt}}^{(1)}=1+\frac{1}{K}\left(\frac{a(N-1)}{Z+a(N-1)}\right)(N-1). \tag{50}\] When \(Z=0\) and \(S=\sum_{k=1}^{K}L_{k}^{2}\), \[R_{\text{opt}}^{(2)}(N)=1+S(N-1). \tag{51}\] For the pessimistic hierarchy we have \[R_{k\text{ pess}}^{(0)}=\begin{cases}N\text{ for }k=b\\ 0\text{ for }k\neq b\end{cases} \tag{52}\] \[R_{\text{pess}}^{(0)}(N)=N. \tag{53}\] For \(k\neq b\), \(R_{k\text{ pess}}^{(1)}=L_{k}\), and for \(k=b\) \[R_{\text{pess}}^{(1)}(N)=1+L_{b}\left(\frac{N-1}{Z+N-1}\right)(N-1) \tag{54}\] When \(Z=0\) the bound corresponds to the BJB pessimistic bound. The Level 2 pessimistic bound for \(Z=0\) is \[R_{\text{pess}}^{(2)}(N)=1+\left(\frac{L_{b}^{2}(N-2)}{1+L_{b}(N-2)}\right)(N-1) \tag{55}\] A measure of the error magnitude is \[\frac{R_{\text{pess}}^{(i)}(N)-R_{\text{opt}}^{(i)}(N)}{R_{\text{pess}}^{(i)}(N)+R_{\text{opt}}^{(i)}(N)}\times 100\% \tag{56}\] Eager and Sevcik 1986 [14] is an extension of the single class PBH bounding algorithm to QNs with multiple classes of customers. The resulting algorithm is applicable to all of the types of product-form QNs that are commonly used in computer system and computer-communication network applications. The _Asymptotic Expansions - AE_ algorithm proposed by McKenna and Mitra 1984 [29] is also capable of providing reasonably tight bounds for a practically useful subset of multiple class closed QNs. The AE algorithm can only treat QNs in which each class has a significant service demand at one or more IS servers. Both algorithms offer a hierarchy of bounds with differing accuracy levels and computational cost requirements.
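Note that Eqs. (45)-(46) are the exact MVA recursion that PBH brackets from both sides. For reference, a minimal sketch (ours) of the exact recursion, with loadings normalized so that \(\sum_{k}L_{k}=1\) and an invented think time \(Z\):

```python
def mva(L, N, Z=0.0):
    """Exact MVA, Eqs. (45)-(46): returns mean residence times R(1..N)."""
    K = len(L)
    nbar = [0.0] * K                                     # queue lengths, 0 jobs
    R_total = []
    for n in range(1, N + 1):
        Rk = [L[k] * (1 + nbar[k]) for k in range(K)]    # Eq. (45)
        R = sum(Rk)
        X = n / (Z + R)                                  # Little's result
        nbar = [X * Rk[k] for k in range(K)]             # Eq. (46)
        R_total.append(R)
    return R_total

L = [0.4, 0.3, 0.2, 0.1]      # invented loadings summing to 1
print([round(R, 4) for R in mva(L, N=5, Z=1.0)])
```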
## Appendix II: Kriz's Extension to BJB

The discussion follows Kriz 1984 [26] and preserves its notation. The number of jobs is \(N\). Single-server or waiting stations are denoted by \(\mathcal{W}\). IS or delay stations are denoted by \(\mathcal{D}\) and the sum of their service demands by \(Z\). The set of all servers is denoted by \(\mathcal{Q}\) with \(M=|\mathcal{Q}|\). The loading at server \(m\) is \(t_{m}=v_{m}s_{m}\), where \(v_{m}\) is the number of visits and \(s_{m}\) the service time. The sum of all loadings at the \(\mathcal{W}\) servers is \(T\) and the sum of all loadings is \(r=Z+T\). The maximum and average loadings at \(\mathcal{W}\) servers are \(t^{\prime}\) and \(t^{\prime\prime}\). \(L_{m}(N)\) is the queue-length, \(R_{m}(N)\) the response time, and \(X_{m}(N)\) the throughput at the \(m^{th}\) station. The system throughput is \(X(N)=X_{m}(N)/v_{m}\) and the mean residence time in the system is \(R(N)=N/X(N)\). The throughput ABA and BJB can be expressed succinctly as: \[X(N)\leq\min(N/r,1/t^{\prime})\] \[\frac{N}{r+(N-1)t^{\prime}}\leq X(N)\leq\frac{N}{r+(N-1)t^{\prime\prime}}\] Given that the load at the \(\mathcal{W}\) stations is balanced with loadings given by \(t\), \[X(1)=1/r,\quad X(n)=n/[r+(n-1-ZX(n-1))t],\ n\geq 2\] With the initialization \(\underline{Y}_{0}(N)=0\) we have: \[\underline{Y}_{i}(N)=N[r+(N-1-Z\underline{Y}_{i-1}(N-1))t^{\prime}]^{-1},\ 1\leq i\leq N\] Given \(\bar{Y}_{0}(N)=\min\{N/r,1/t^{\prime}\}\), \[\bar{Y}_{i}(N)=\min\left(N[r+(N-1-Z\bar{Y}_{i-1}(N-1))t^{\prime}]^{-1},1/t^{\prime}\right)\] Then \(\underline{Y}_{i-1}(N)\leq\underline{Y}_{i}(N)\leq X(N)\leq\bar{Y}_{i}(N)\leq\bar{Y}_{i-1}(N)\).
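A sketch (ours) of the iteration just described: the same recursion is run from the two initializations \(\underline{Y}_{0}=0\) and \(\bar{Y}_{0}=\min\{N/r,1/t^{\prime}\}\), yielding lower and upper throughput bounds; the parameters are invented.

```python
def kriz_bounds(r, Z, t_max, N, levels=3):
    """Lower/upper bounds on X(N) after `levels` iterations of the recursion."""
    lo = [0.0] * (N + 1)                          # underline-Y_0(n) = 0
    hi = [min(n / r, 1.0 / t_max) if n else 0.0 for n in range(N + 1)]
    for _ in range(levels):
        lo = [0.0] + [n / (r + (n - 1 - Z * lo[n - 1]) * t_max)
                      for n in range(1, N + 1)]
        hi = [0.0] + [min(n / (r + (n - 1 - Z * hi[n - 1]) * t_max), 1.0 / t_max)
                      for n in range(1, N + 1)]
    return lo[N], hi[N]

# Invented example: total loading r = Z + T, maximum loading t', N jobs
print(kriz_bounds(r=1.5, Z=0.5, t_max=0.4, N=8, levels=4))
```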
## Appendix III: Asymptotic Expansions with Multiple Bottlenecks

Considered in George et al. 2012 [15] is a closed QN with \(N\) jobs and \(M\) stations, with \(\mathcal{M}\) denoting the set of stations. \(\mathcal{S}\) denotes single-server, \(\mathcal{L}\) multiserver, and \(\mathcal{I}\) IS stations. Given the routing matrix \(P=[p_{i,j}]\), the relative throughputs are: \[\pi_{i}=\sum_{j\in\mathcal{M}}\pi_{j}p_{j,i}.\] For any sequence \(\{f(n),n\geq 0\}\) its z-transform in alternate form is: \[\bar{f}(z)=\sum_{n=0}^{\infty}f(n)z^{-n}\] Three theorems regarding the properties of \(\bar{f}(z)\) and an assumption are followed by the main theorem.

Assumption 1. We assume that the relative throughputs \(\pi_{i},i\in\mathcal{M}\), are chosen such that \(\max_{i}\rho_{i}=1\). Stations are labeled by their relative utilization, such that: \[1=\rho_{1}\geq\rho_{2}\geq\ldots\geq\rho_{M}\] The set of bottleneck stations is defined as \(\mathcal{B}:=\{i\in\mathcal{M}:\rho_{i}=1\}\). Clearly \(|\mathcal{B}|\geq 1\). The main theorem characterizes the asymptotic behavior of the normalization constant \(G(N)\) in exact order.

Theorem 4. For any \(M\) station closed QN with \(N\) jobs, the normalization constant \(G(N)\) satisfies the exact asymptotics: \[G(N)\sim C_{B}N^{|B|-1}\text{ where }\] \[C_{B}=\frac{1}{(|B|-1)!}\prod_{i=1}^{|B|}\frac{\gamma_{i}^{s_{i}}}{s_{i}!}\prod_{j=|B|+1}^{M}\tilde{f}_{j}(1) \tag{57}\] As \(N\) grows, the rate at which \(G(N)\) grows is on the order of \(N^{|B|-1}\). The actual throughput \(\Lambda_{i}(N)\) and utilization \(U_{i}(N)\) satisfy the following exact-order asymptotics: \[\Lambda_{i}(N)\sim\pi_{i}(1-\frac{1}{N})^{|B|-1},\forall i\in\mathcal{M}\] \[U_{i}(N)\sim\rho_{i}(1-\frac{1}{N})^{|B|-1},\forall i\in\mathcal{M}^{\prime}\] where \(\mathcal{M}\) is the set of all stations and \(\mathcal{M}^{\prime}\) excludes the IS stations (\(\mathcal{I}\)). An upper bound on the system throughput is given as: \[\Lambda(N)\leq\min\left[\frac{N}{D+Z},s_{1}\mu_{1}\right]:=\Lambda^{ABA}(N)\] where \(D=\sum_{i\in\mathcal{M}^{\prime}}\pi_{i}/\pi_{1}\mu_{i}\) and \(Z=\sum_{i\in\mathcal{I}}\pi_{i}/\pi_{1}\mu_{i}\). As \(N\) increases, a saturation point is reached where the bottleneck resource reaches 100% utilization. Noting that \(\Lambda_{i}(N)=(\pi_{i}/\pi_{1})\Lambda(N)\), \[\Lambda(N)\leq\min\left[\frac{N}{D+Z},s_{1}\mu_{1}\left(1-\frac{1}{N}\right)^{|B|-1}\right] \tag{58}\] Numerical examples are given showing why the AE bound is more accurate than others. We note that in the case of a network with a single bottleneck, Eq. (58) reduces to the previous ABA bound, and the proposed approximation provides the greatest improvements when there are multiple bottlenecks.

## Appendix IV: Proportional and Geometric Bounds

The Geometric Bounds - GBs proposed by Casale et al. 2006/08 [6, 7] are fast and accurate noniterative bounds on closed QN metrics. Compared to BJB, GB achieves higher accuracy at similar computational cost, limiting the worst-case bounding error typically to within 5-13%, while for BJB the error is usually in the range of 15-35%.² Footnote 2: Application of GB as an approximation to fork-join processing in closed QNs is beyond the scope of this discussion.

We restate the steps of MVA [33] in a closed QN with \(M\) queues with service demands or loadings \(L_{i},1\leq i\leq M\), with \(L=\sum_{i=1}^{M}L_{i}\). The delay at the delay server is \(Z\). The network throughput with \(N\) jobs is \(X(N)\). The utilization at the \(i^{th}\) queue is \(U_{i}(N)\) and the queue-length \(Q_{i}(N)\). It follows from Little's result \[U_{i}(N)=L_{i}X(N),1\leq i\leq M\] According to the arrival theorem [27] the mean residence time at the \(i^{th}\) station is: \[W_{i}(N)=L_{i}[1+Q_{i}(N-1)],1\leq i\leq M\] The residence time in the network is: \[R(N)=\sum_{i=1}^{M}W_{i}(N)\] The network throughput follows from Little's result: \[X(N)=N/(Z+R(N))\] The mean number of jobs at the \(i^{th}\) queue is:³ \[Q_{i}(N)=X(N)W_{i}(N)=U_{i}(N)[1+Q_{i}(N-1)],1\leq i\leq M\] Footnote 3: The paper uses \(R_{i}(N)\), which is undefined; it should be \(W_{i}(N)\).

By recursively expanding the above equation: \[Q_{i}(N)=U_{i}(N)+U_{i}(N)U_{i}(N-1)+\] \[U_{i}(N)U_{i}(N-1)U_{i}(N-2)+\ldots+\] \[\prod_{n=0}^{N-1}U_{i}(N-n)\] The geometric summation yields: \[\sum_{i=1}^{N}y^{i}=\frac{y(1-y^{N})}{1-y} \tag{59}\] The throughput is bounded as follows, where \(X_{max}\) is the maximum throughput obtained by ABA: \[N/(Z+LN)\leq X(N)\leq \tag{60}\] \[\min(N/(Z+L),X_{max})\text{ ABA}\] The throughput bounds for BJB are: \[N/(Z+L+L_{max}(N-1-ZX^{-}))\leq X(N)\leq \tag{61}\] \[N/(Z+L+L(N-1-ZX^{+})/M)\text{ BJB}\] The throughput bounds for _Proportional Bounds - PB_ Hsieh and Lam 1987 [19] are: \[N/\left(Z+L+\frac{\sum_{i=1}^{M}L_{i}^{N}(N-1-ZX^{-})}{\sum_{j=1}^{M}L_{j}^{N-1}}\right)\leq X(N) \tag{62}\] \[\leq N/\left(Z+L+\frac{\sum_{i=1}^{M}L_{i}^{2}(N-1-ZX^{+})}{\sum_{j=1}^{M}L_{j}}\right)\text{ PB}\] BJB always offers greater accuracy than ABA. The bounds hold true for any \(X^{+}\) and \(X^{-}\) such that \[X(N-1)\leq X^{+}\leq X_{max},\ \ \ \ \ 0\leq X^{-}\leq X(N-1)\] The following formula is exact: \[X(N)=N/[Z+L+L_{max}(N-1-ZX(N-1))-D(N)] \tag{63}\] where \[D(N)=\sum_{i=1}^{M}(L_{max}-L_{i})Q_{i}(N-1).\] **Theorem 4.** \(X(N)\) for \(Z>0\) is bounded by \[2N/\left(b(N)+\sqrt{b^{2}(N)-4ZL_{max}(N-1)}\right)\leq X(N)\leq \tag{64}\] \[2N/\left(b(N)+\sqrt{b^{2}(N)-4ZL_{max}N}\right)\] \[b(N)=Z+L+L_{max}(N-1)-\sum_{i:L_{i}<L_{max}} \tag{65}\] \[(L_{max}-L_{i})Q_{i}(N-1).\] \(Q_{i}\) is bounded by \(Q_{i}^{-}\) and \(Q_{i}^{+}\), given in Theorems 1 and 2 below, respectively.
**Theorem 1.** The queue length of station \(i\) is bounded from below by \[Q_{i}^{-}(N)=\begin{cases}\frac{y_{i}(N)-y_{i}(N)^{N+1}}{1-y_{i}(N)}&\text{if }L_{i}<L_{max},\\ \frac{1}{M_{max}}\left(N-ZX^{+}-\sum_{k:L_{k}<L_{max}}Q_{k}^{+}(N)\right)&\text{if }L_{i}=L_{max}\end{cases}\] for any \(X^{+}\) such that \(X(N)\leq X^{+}\), and where \[y_{i}(N)=L_{i}N/(Z+L+L_{max}N)\] is the ratio of the underlying geometric sum in Eq. (59), \(M_{max}\) is the number of queues with service demand \(L_{max}\), and \(Q_{k}^{+}\) is the upper bound in Theorem 2 for \(L_{i}<L_{max}\).

**Theorem 2.** The queue-length \(Q_{i}(N)\) is bounded from above as follows: \[Q_{i}^{+}(N)=\begin{cases}\frac{Y_{i}(N)[1-Y_{i}(N)^{N}]}{1-Y_{i}(N)}&\text{if }L_{i}<L_{max}\\ \frac{1}{M_{max}}\left(N-ZX^{-}-\sum_{k:L_{k}<L_{max}}Q_{k}^{-}(N)\right)&\text{if }L_{i}=L_{max}\end{cases}\] for any \(X^{+}\) and \(X^{-}\) such that \(X(N)\leq X^{+}\leq X_{max}\) and \(0\leq X^{-}\leq X(N)\), and where \(Y_{i}(N)=L_{i}X^{+}\) is the ratio \(y\) of the underlying geometric sum. \[\frac{N-1}{N}X(N)\leq X(N-1)\leq X(N)\] \(X^{-}\) and \(X^{+}\) are used to represent lower and upper bounds. The _Geometric Square Bound - GSB_ is obtained by replacing the queue-lengths by \(Q_{i}^{-}(N-1)\). Two classes of performance bounds, one for single-chain and the other for multichain QNs, are proposed in [19]. _Proportional Bounds - PBs_ assume that mean queue-lengths are proportional to server loads. PBs are more accurate than BJBs in that individual server loads are retained as parameters in the bound formulas. Several tables are given in [7] comparing the accuracy of various methods. GSB is shown to be the most accurate even when one or two iterations are allowed for BJB and PB.
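To compare the ABA, BJB, and PB bounds of Eqs. (60)-(62) numerically, a small sketch (ours; demands invented). Here \(X^{-}\) and \(X^{+}\) are instantiated from the ABA bounds at \(N-1\), which satisfy the stated validity conditions.

```python
def bounds(L, Z, N):
    M, Ltot, Lmax = len(L), sum(L), max(L)
    aba = lambda n: (n / (Z + Ltot * n), min(n / (Z + Ltot), 1.0 / Lmax))
    Xm, Xp = aba(N - 1)                      # valid: X^- <= X(N-1) <= X^+
    lo_a, hi_a = aba(N)                      # ABA, Eq. (60)
    lo_b = N / (Z + Ltot + Lmax * (N - 1 - Z * Xm))          # BJB, Eq. (61)
    hi_b = N / (Z + Ltot + Ltot * (N - 1 - Z * Xp) / M)
    lo_p = N / (Z + Ltot + sum(l**N for l in L) * (N - 1 - Z * Xm)
                / sum(l**(N - 1) for l in L))                # PB, Eq. (62)
    hi_p = N / (Z + Ltot + sum(l**2 for l in L) * (N - 1 - Z * Xp) / Ltot)
    return (lo_a, hi_a), (lo_b, hi_b), (lo_p, hi_p)

L = [0.3, 0.25, 0.25, 0.2]                   # invented loadings
for name, (lo, hi) in zip(("ABA", "BJB", "PB"), bounds(L, Z=1.0, N=6)):
    print(f"{name}: {lo:.4f} <= X(6) <= {hi:.4f}")
```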
## Appendix V: Proportional Approximation Method - PAM

The _Proportional Approximation Method - PAM_ is the latest bounding method for closed multichain QNs, by Hsieh and Lam 1988/89 [20, 21]. Approximate MVA algorithms for separable queueing networks are based upon an iterative solution of a set of modified MVA formulas. Let \(M\) denote the number of queues and \(K\) the number of chains or job classes. Each iteration has a computational cost of O(\(MK^{2}\)) or less; many iterations are typically needed to attain convergence. Presented are faster approximate noniterative solution algorithms, which are suitable for the analysis and design of communication networks requiring thousands of closed chains. Three PAM algorithms of increasing accuracy are presented. Two of them have time and space requirements of O(\(MK\)). The third algorithm has a time requirement of O(\(MK^{2}\)) and a space requirement of O(\(MK\)). The three PAMs developed in this study consider QNs with fixed-rate and delay servers. Let \(\tau_{mk}\) denote the loading of chain \(k\) at queue \(m\).

**Algorithm PAM_BASIC**

**Step 1.** Calculate proportional approximations of the mean queue lengths: \[\gamma_{mk}=\tau_{mk}/\sum_{i=1}^{M}\tau_{ik}\] \[\text{for }m=1,\ldots,M,\text{ and }k=1,\ldots,K\] \[q^{\prime}_{\text{ }mh}(\underline{\text{N}})=\gamma_{mh}N_{h}\] \[q^{\prime}_{\text{ }mh}(\underline{\text{N}}-\underline{1}_{k})=\begin{cases}q^{\prime}_{\text{ }mh}(\underline{\text{N}})&\text{if }h\neq k,\\ q^{\prime}_{\text{ }mh}(\underline{\text{N}})-\gamma_{mh}&\text{if }h=k.\end{cases}\] \[\text{for }m=1,\ldots,M,\text{ and }h,k=1,\ldots,K\]

**Step 2.** Calculate the approximate mean delay of chain \(k\) at server \(m\) and the approximate throughput of chain \(k\): \[D_{mk}(\underline{\text{N}})=\begin{cases}\tau_{mk}[1+\sum_{h=1}^{K}q^{\prime}_{\text{ }mh}(\underline{\text{N}}-\underline{1}_{k})]&\text{if fixed rate}\\ \tau_{mk}&\text{if delay server}\end{cases}\] \[T_{k}(\underline{\text{N}})=\frac{N_{k}}{\sum_{m=1}^{M}D_{mk}(\underline{\text{N}})}\text{ for }k=1,\ldots,K\]

The accuracy improvement of PAM_IMPROVED over PAM_BASIC is obtained by a simple scaling operation to ensure that server utilizations do not exceed one. Algorithm PAM_IMPROVED calculates the server utilizations and finds the server with maximum utilization visited by chain \(k\).

**Step 3.** \[U_{m}(\underline{\text{N}})=\sum_{k=1}^{K}\tau_{mk}T_{k}(\underline{\text{N}}),\quad m=1,\ldots,M\]

**Step 4.** Find the largest utilization \(S_{k}\) among the servers visited by chain \(k\): \[S_{k}=\max_{m:\tau_{mk}>0}\{U_{m}(\underline{\text{N}})\}\quad k=1,\ldots,K\]

**Step 5.** Scale down the throughputs of individual chains if necessary: for \(k=1,\ldots,K\), \[\text{if }S_{k}>1\text{ then }T_{k}(\underline{\text{N}})=T_{k}(\underline{\text{N}})/S_{k}.\]

**Step 6.** Calculate the total throughput and utilizations: \[T(\underline{\text{N}})=\sum_{k=1}^{K}T_{k}(\underline{\text{N}})\] \[U_{m}(\underline{\text{N}})=\sum_{k=1}^{K}\tau_{mk}T_{k}(\underline{\text{N}}),\quad m=1,\ldots,M\]

**Algorithm PAM_TWO**

The accuracy of the PAM algorithm is improved by executing the last two steps of the MVA recursion instead of just the last step, using the proportional approximation to get initial mean queue length estimates. The reader is referred to [20, 21]. Relative errors in chain throughputs for 500 networks calculated by PAM_IMPROVED and PAM_TWO are 2.3% vs 0.8% for the average error and 40.3% vs 30.8% for the maximum. The additional accuracy of PAM_TWO is obtained by executing the final two steps of the MVA recursion instead of just the last step. The computational time requirements are O(\(MK\)) for PAM_BASIC and PAM_IMPROVED, and O(\(MK^{2}\)) for PAM_TWO. All three PAM algorithms have space requirements of O(\(MK\)).

## Acknowledgement

The discussion of UJA is based on joint publications with Dr. Behzad (Brad) Nadji at the EE-Systems Dept. at USC.
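Finally, a minimal sketch (ours) of PAM_BASIC, Steps 1-2 of Appendix V, with an invented loading matrix \(\tau\) (rows are queues, columns are chains) and one delay server:

```python
import numpy as np

def pam_basic(tau, Nvec, is_delay):
    """PAM_BASIC, Steps 1-2: chain throughputs from proportional queue lengths."""
    M, K = tau.shape
    gamma = tau / tau.sum(axis=0)            # Step 1: gamma_mk
    q = gamma * Nvec                         # q'_mh(N) = gamma_mh * N_h
    T = np.empty(K)
    for k in range(K):                       # Step 2, one chain at a time
        qk = q.copy()
        qk[:, k] -= gamma[:, k]              # q'(N - 1_k): remove one chain-k job
        D = np.where(is_delay[:, None], tau,
                     tau * (1 + qk.sum(axis=1, keepdims=True)))
        T[k] = Nvec[k] / D[:, k].sum()       # T_k = N_k / sum_m D_mk
    return T

tau = np.array([[0.30, 0.10],                # invented loadings tau[m, k]
                [0.20, 0.25],
                [0.10, 0.15]])
Nvec = np.array([4, 3])                      # chain populations
is_delay = np.array([False, False, True])    # last station is a delay server
print(pam_basic(tau, Nvec, is_delay).round(4))
```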
2303.00524
Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach
Prediction of taxi service demand and supply is essential for improving customers' experience and providers' profit. Recently, graph neural networks (GNNs) have shown promise for this application. This approach models city regions as nodes in a transportation graph and their relations as edges. GNNs utilize local node features and the graph structure in the prediction. However, more efficient forecasting can still be achieved by following two main routes: enlarging the scale of the transportation graph, and simultaneously exploiting different types of nodes and edges in the graphs. However, both approaches are challenged by the scalability of GNNs. An immediate remedy to the scalability challenge is to decentralize the GNN operation. However, this creates excessive node-to-node communication. In this paper, we first characterize the excessive communication needs of the decentralized GNN approach. Then, we propose a semi-decentralized approach utilizing multiple cloudlets, moderately sized storage and computation devices that can be integrated with cellular base stations. This approach minimizes inter-cloudlet communication, thereby alleviating the communication overhead of the decentralized approach while promoting scalability due to cloudlet-level decentralization. Also, we propose a heterogeneous GNN-LSTM algorithm for improved taxi-level demand and supply forecasting that handles dynamic taxi graphs where nodes are taxis. Extensive experiments over real data show the advantage of the semi-decentralized approach as tested over our heterogeneous GNN-LSTM algorithm. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about an order of magnitude compared to centralized and decentralized inference schemes.
Mahmoud Nazzal, Abdallah Khreishah, Joyoung Lee, Shaahin Angizi, Ala Al-Fuqaha, Mohsen Guizani
2023-02-28T00:21:18Z
http://arxiv.org/abs/2303.00524v2
Semi-decentralized Inference in Heterogeneous Graph Neural Networks for Traffic Demand Forecasting: An Edge-Computing Approach ###### Abstract Prediction of taxi service demand and supply is essential for improving customers' experience and providers' profit. Recently, graph neural networks (GNNs) have shown promise for this application. This approach models city regions as nodes in a transportation graph and their relations as edges. GNNs utilize local node features and the graph structure in the prediction. However, more efficient forecasting can still be achieved by following two main routes: enlarging the scale of the transportation graph, and simultaneously exploiting different types of nodes and edges in the graphs. However, both approaches are challenged by the scalability of GNNs. An immediate remedy to the scalability challenge is to decentralize the GNN operation. However, this creates excessive node-to-node communication. In this paper, we first characterize the excessive communication needs of the decentralized GNN approach. Then, we propose a semi-decentralized approach utilizing multiple cloudlets, moderately sized storage and computation devices that can be integrated with cellular base stations. This approach minimizes inter-cloudlet communication, thereby alleviating the communication overhead of the decentralized approach while promoting scalability due to cloudlet-level decentralization. Also, we propose a heterogeneous GNN-LSTM algorithm for improved taxi-level demand and supply forecasting that handles dynamic taxi graphs where nodes are taxis. Extensive experiments over real data show the advantage of the semi-decentralized approach as tested over our heterogeneous GNN-LSTM algorithm. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about an order of magnitude compared to centralized and decentralized inference schemes. GNN, hetGNN, taxi demand forecasting, taxi supply forecasting, ITS, decentralized inference. ## I Introduction An intelligent transportation system (ITS) is an essential element of modern city planning. A key component of an ITS is the means of public transportation, such as taxis, buses, and ride-hailing vehicles. As the importance of these services grows, there is a corresponding need for accurately and efficiently forecasting the travel needs of passengers and the corresponding available supplies from vacant taxis ready to serve them. This forecasting enables efficient management of transportation resources and allows for dynamic allocation of taxis such that customer waiting time is minimized and taxi occupancy times are maximized. It can also help optimize routes, urban development, traffic flow, and public transportation planning. Taxi demand forecasting has been receiving increasing amounts of attention in the recent transportation engineering literature [1, 2, 3, 4, 5, 6, 7, 8, 9]. Similar to other prediction problems, approaches to taxi demand and supply forecasting can be broadly categorized into two main categories. The first is model-based approaches, where a statistical model of traffic patterns is employed. Examples along this line include autoregressive integrated moving average (ARIMA) [3] and linear regression [10] models. Despite their simplicity, these methods focus only on temporal dependencies and overlook spatial dependencies in the prediction.
The other category is deep learning (DL)-based methods, where data-driven techniques are shown to exploit spatiotemporal correlations well for improved prediction. Along the DL line, recurrent neural networks (RNNs) such as long short-term memory (LSTM) are especially important for taxi demand and supply forecasting as they can well address time dependency. Accordingly, there is a series of works on using RNNs for taxi demand and supply forecasting [11, 12, 13, 7, 14]. Recently, there has been a growing interest in the use of graph neural networks (GNNs) for taxi demand and supply prediction. This approach models city regions as nodes in a graph and their relations as the edges linking these nodes. Along this line, several works have shown the advantage of GNNs in utilizing local region information and the relationships across non-Euclidean regions to improve the forecasting performance [2, 5, 6, 8, 9, 14, 15]. Despite the promising success of GNNs for taxi demand and supply forecasting, there are still outstanding challenges hindering their potential. First, as for graph representation, it is advantageous to simultaneously exploit several node and relation types in the representation learning of transportation graphs [17]. Specifically, the existence of multiple types of nodes and edges in current transportation graphs calls for adopting a heterogeneous information network (HIN) approach to seamlessly exploit them. This requires the development of corresponding heterogeneous GNNs (hetGNNs) to handle their representation learning. Second, inferring taxi demand and supply predictions at the level of the whole transportation system incurs a huge amount of computation, as it is necessary to utilize the existing salient relationships. This calls for developing solutions to improve the scalability of the GNN approach to cope with city-wide or even larger graphs [18]. Based on the above discussion, in this paper, we propose a hetGNN-LSTM-based algorithm for taxi demand and supply prediction. Compared to the existing GNN-based approaches, our algorithm defines taxis as nodes in a graph. This allows taxi graphs to be dynamic, as opposed to existing approaches assuming nodes as fixed geographical regions. On the other hand, this allows for predicting the demands and supplies for each taxi. Operating this algorithm in a centralized way is computationally intensive. Therefore, we develop a decentralized GNN inference approach. However, we show theoretically and through experiments that this decentralized approach incurs a large message-passing delay that grows quadratically with the number of communication hops. To reduce this delay, we propose a semi-decentralized approach. This approach uses multiple cloudlet devices, each handling a subgraph of the transportation graph. A cloudlet is a device of moderate computing capability placed at a base station (BS) and able to communicate with the taxis in its coverage area. From now on, we refer to the cloudlet with its network as a cloudlet network (CLN). As for the BS, we assume a 5G small cell such as the eNodeB BS architecture detailed in [19]. The contributions of this paper can be summarized as follows. * We consider taxi demand and supply forecasting at a taxi level. Predicting on a taxi-node level provides drivers with immediate information on the availability of pick-ups and the supply of other taxis in the region surrounding them. * We propose a novel heterogeneous graph-based algorithm for taxi demand and supply prediction utilizing hetGNNs.
We model the transportation system as a heterogeneous graph of taxis as nodes linked with three relationship types derived from road connectivity, location proximity, and target-destination proximity. The proposed algorithm exploits these relationships to improve the prediction. * We propose a _semi-decentralized_ approach to GNN-based traffic prediction. This approach is proposed to mitigate the scalability limitation of centralized GNN inference and the large message-passing delay of decentralized GNN inference. * We propose an adaptive node-CLN assignment to minimize inter-CLN communication. We develop a heuristic protocol for this assignment in a distributed manner across cloudlet devices. Experiments on real-world taxi data show that the proposed taxi demand and supply forecasting algorithm achieves high prediction accuracy compared to the state of the art, represented by DCRNN [14], Graph WaveNet [15], and CCRNN [16] as leading GNN-based approaches. Experiments also show that the inference time delay in decentralized GNN operation grows quadratically with the number of message-passing hops. Also, the proposed semi-decentralized GNN approach is shown to reduce the overall inference time by about 10 times compared to centralized and decentralized inference. The source code and datasets used in this paper are available on [https://github.com/mahmoudkanazal/SemidecentralizedhetGNNLSTM](https://github.com/mahmoudkanazal/SemidecentralizedhetGNNLSTM). The rest of this work is organized as follows. Section II reviews related work. The proposed taxi demand and supply algorithm and the semi-decentralized GNN approach are detailed in Section III. Section IV presents experiments and results, with the conclusions in Section V. ## II Related Work ### _DL methods for traffic demand forecasting_ Similar to their use in other application areas, DL models achieve performance gains in a variety of traffic forecasting problems. This is due to their ability to leverage dependencies among training data. Along this line, recurrent neural networks such as LSTM [20] are used to exploit time correlations [21], whereas convolutional neural networks (CNNs) utilize spatial dependencies [22]. An LSTM is particularly well-suited for prediction tasks that involve temporal dependency. This is because LSTM networks can remember and use previous information over extended periods. A more recent research trend combines LSTMs and CNNs to exploit spatiotemporal correlations in what is known as ConvLSTM [7, 23]. However, these DL approaches share a common restriction: they overlook existing and potentially useful relationships across entities in a traffic system, such as taxis, roads, and customers. Such relationships model other types of dependencies such as road connectivity and proximity. For instance, the demands at two railway stations in the same city are very likely to be correlated [6], even though the two stations may be distant. ### _GNN models for traffic demand forecasting_ Graph data structures appear naturally in many fields, such as user accounts in a social network and vehicles in a traffic system [24]. GNNs extend DL to graph data by combining graph structure and node information through message passing and aggregation [25, 26]. This combination enables GNNs to produce node embeddings to serve multiple downstream graph tasks such as node classification (inferring the class label of a node), link prediction (estimating the likelihood of a link to exist between given nodes), and graph classification (inferring a property of the graph as a whole). Each node in a graph has a computational graph composed of its \(L\)-hop neighboring nodes. Node embeddings are obtained by alternating between message passing, i.e., communicating local information across nodes, and aggregation, where received messages along with previous node information are used to obtain an updated node embedding. Message passing is done according to the topology of the graph, whereas aggregation is done by the neural network layers of the GNN model obtained by training over graph data [27, 28].
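To make the message-passing/aggregation alternation concrete, the following is a minimal sketch of a single GNN layer on a homogeneous graph, using a mean aggregator and a concatenate-then-project update; the shapes and the specific update rule are illustrative choices, not the particular models discussed later.

```python
import numpy as np

def gnn_layer(H, A, W, b):
    """One message-passing/aggregation round.

    H : (N, d) current node messages (embeddings).
    A : (N, N) binary adjacency matrix.
    W : (2d, d_out) layer weight; b : (d_out,) bias.
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid divide-by-zero
    msg = (A @ H) / deg                   # message passing: mean over neighbors
    Z = np.concatenate([H, msg], axis=1)  # combine self and neighborhood info
    return np.maximum(Z @ W + b, 0.0)     # aggregation through a ReLU layer
```

Stacking \(L\) such layers realizes an \(L\)-hop computational graph: each additional layer pulls in information from one hop further away.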
As DL methods overlook useful relationships, recent literature considers modeling the traffic system as a graph and applying a GNN approach to problems such as taxi demand and supply forecasting and flow prediction [29]. A city area is divided into many regions, and each region is represented by a node in the graph. Several works assume different edges linking these nodes, such as a common origin-destination relationship between two regions if there is a taxi moving from one region (the origin) to the other (the destination) [6]. A more recent example work [8] considers dividing a city into non-uniform regions serving as the graph nodes, linked with edges representing their road connectivity. Despite the clear advantages of GNNs, the existing research body on their usage for taxi demand and supply forecasting assumes homogeneous graphs. So, (homogeneous) GNNs along with other DL models such as LSTM are mainly used to perform the forecasting. Still, when modeling the traffic system as a graph, it may contain different types of nodes with different relation types. So, restricting the use of GNNs to homogeneous models overlooks this richness of relation types and limits the potential of GNNs. Therefore, it is advantageous to model traffic systems as HINs processed by hetGNNs [30], where the prediction can be improved by incorporating multiple relationships between the graph nodes. The main challenge hetGNNs face is handling heterogeneity. While some primitive hetGNNs project the HIN onto the graph space to eliminate its heterogeneity [31], others use the metapath concept [32]\({}^{1}\) to maintain and utilize the heterogeneity [33, 34, 35]. This is achieved by decomposing the HIN into multiple metapaths encoded to get node representations under graph semantics. hetGNNs have been shown to outperform their homogeneous GNN predecessors in many applications such as malicious payment detection [36], drug/illegal trade detection [37], and network intrusion attack detection [38]. Footnote 1: A metapath is a composition of relations linking two nodes. ### _Decentralizing GNN inference_ Training and testing of GNN models over large graphs require huge memory and processing costs. This is because graph nodes are mutually dependent. Thus, they cannot be arbitrarily divided into smaller subgraphs. Techniques such as neighborhood sampling [39] may ease the problem to some extent. Still, even a sampled computational graph and the associated features may not fit in the memory of a single device [40]. Thus, centralized GNN operation faces a scalability limitation [40, 41].
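As a rough illustration of the neighborhood sampling mentioned above, the sketch below caps the number of neighbors expanded per node when building an \(L\)-hop computational graph; the `fanout` parameter and the dictionary-based adjacency are assumptions made for illustration.

```python
import random

def sample_computational_graph(adj, root, L, fanout):
    """GraphSAGE-style sampling of an L-hop computational graph,
    keeping at most `fanout` neighbors per expanded node.

    adj : dict mapping a node id to the list of its neighbors.
    """
    layers, frontier = [{root}], {root}
    for _ in range(L):
        nxt = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            kept = nbrs if len(nbrs) <= fanout else random.sample(nbrs, fanout)
            nxt.update(kept)
        layers.append(nxt)   # nodes needed at this hop to embed `root`
        frontier = nxt
    return layers
```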
To mitigate the scalability limitation of centralized GNNs, decentralized (peer-to-peer) node inference has been applied to a few GNN applications like robot path control [42, 43] and resource optimization in wireless communication [18]. Decentralization naturally promotes GNNs' scalability. Still, it requires excessive communication overhead between nodes [40]. In turn, this communication delay significantly slows down the operation of a GNN because the progression of calculations across GNN layers needs to wait for two-way message passing to deliver messages from \(L\)-hop neighbors. Another disadvantage of decentralized inference is the difficulty of coordinating and synchronizing the operation of all nodes. Another, less significant, disadvantage is the need for each node to maintain and operate a copy of the GNN model. While the literature considers either centralized or decentralized GNNs, we became aware while developing this work of another work [41] on distributed GNN operation. [41] proposes an adaptive node-to-cloud-server assignment minimizing a general cost function over servers of different computational powers. However, this is done in a centralized manner; a solver needs to know the graphs of nodes and cloud servers to do the assignment. In our work, we optimize the assignment in a distributed manner at the cloudlet level. Also, according to [41], any node may be assigned to any cloud server, while our work focuses on the boundary nodes at each CLN and assigns them to their CLN or an adjacent one, taking the geometry into account. Also, while our work focuses on minimizing the communication delay, [41] adopts a general cost function where the challenge is mapping nodes to cloud servers of varying computational capabilities while optimizing the other costs, including the delay. It is also noted that [41] does not compare centralized and decentralized GNN implementations or consider their trade-offs. ## III The proposed work ### _System model_ The system model considered in this paper is a taxi service system composed of many taxis operating in a certain region/city, as shown in Fig. 1-a. We set the objective of the system as providing future predictions for the demand (e.g., passengers) and supply (e.g., vacant taxis) in the vicinity of each taxi, represented by the red circles in this figure. The approach assumed in this work is a GNN approach based on a graph representation of the taxis, as shown in Fig. 1-b.
Figure 1: Taxis in a city (a) and their corresponding graph representation (b).
To this end, we study and compare the following approaches to GNN inference with the taxi graph. * A fully centralized approach: as represented in Fig. 2-a, a server or cloud is placed at a BS with a communication range covering the entire operation area. Taxis upload their local messages to the server, which also keeps track of their locations. The server uses this information to obtain nodes' computational graphs and uses a local GNN to obtain updated messages. According to the graph structure, the server performs message passing by computation instead of communication. Next, the server sends the updated node messages to their taxi nodes. For taxi-server communication, a communication network under the ITS-G5 standard [44] is assumed. * Our fully decentralized approach: taxis have GNN models locally and can only communicate with the taxis in their network's coverage area, as represented in Fig. 2-b. This way, each taxi forms its computational graph.
Taxi-to-taxi communication is done through a wireless ad-hoc network such as the one in [45]. * Our proposed _semi-decentralized_ approach, as shown in Fig. 2-c: This approach uses cloudlets centered at CLNs. Taxis in the coverage area of each CLN use it to send their messages to its cloudlet device. Similar to the centralized setting, the cloudlet uses the uploaded taxi messages along with their locations to obtain node computational graphs and uses a local GNN to obtain updated messages. Message passing for the edges in the CLN is done by computation instead of communication, as represented by the dashed lines in Fig. 2-c. However, the messages between connected taxis in adjacent cloudlets are shared through cloudlet-cloudlet communication, as represented by the solid lines in Fig. 2-c.
Figure 2: Three possible computation settings. (a) centralized: taxis upload their messages to a central server that performs the computations and returns the results to the taxis. (b) decentralized: taxis exchange messages with their neighbors and perform computations locally. (c) semi-decentralized: taxis in a CLN upload their messages to a cloudlet. The cloudlet performs the computations and returns them to taxis.
### _Graph construction and problem formulation_ We represent the transportation system as a HIN composed of taxis as its nodes linked with edges of three types. The first is a _connectivity_ edge [8] representing road connectivity. The second is a _proximity_ edge [9] linking nearby taxis. Third, we define a _destination-similarity_ edge linking taxis going to nearby destinations. Accordingly, the _connectivity_ adjacency matrix is as follows [8]. \[\mathbf{A}_{c}[i,j]=\begin{cases}1&\text{if there is a road connecting nodes }i\text{ and }j,\\ 0&\text{otherwise.}\end{cases} \tag{1}\] The _proximity_ adjacency matrix is as follows [9]. \[\mathbf{A}_{p}[i,j]=\begin{cases}\operatorname{dist}(p_{i},p_{j})&\text{if }\operatorname{dist}(p_{i},p_{j})<th_{p},\\ 0&\text{otherwise.}\end{cases} \tag{2}\] where \(\operatorname{dist}(p_{i},p_{j})\) is a function of the Euclidean distance between taxi positions \(p_{i}\) and \(p_{j}\), and \(th_{p}\) is a certain threshold. Next, our proposed _destination-similarity_ adjacency matrix is as follows. \[\mathbf{A}_{d}[i,j]=\begin{cases}\operatorname{dist}(d_{i},d_{j})&\text{if }\operatorname{dist}(d_{i},d_{j})<th_{d},\\ 0&\text{otherwise.}\end{cases} \tag{3}\] where \(\operatorname{dist}(d_{i},d_{j})\) is a measure of the Euclidean distance between destinations \(d_{i}\) and \(d_{j}\), and \(th_{d}\) is a prescribed threshold. The structure of the proposed HIN is represented in the network schema in Fig. 3-a.
Figure 3: The network schema of the proposed HIN in (a) and the proposed system architecture in (b).
We denote the HIN at a time instant \(t\) by \(G^{t}=(\mathcal{V}^{t},E_{c}^{t},E_{p}^{t},E_{d}^{t})\), or equivalently, \(G^{t}=(\mathcal{V}^{t},\mathbf{A}_{c}^{t},\mathbf{A}_{p}^{t},\mathbf{A}_{d}^{t})\), where \(\mathcal{V}^{t}\) is the node set, \(E_{c}^{t}\), \(E_{p}^{t}\), and \(E_{d}^{t}\) represent the connectivity, proximity, and destination-similarity edges, respectively, and \(\mathbf{A}_{c}^{t}\), \(\mathbf{A}_{p}^{t}\), and \(\mathbf{A}_{d}^{t}\) denote the connectivity, proximity, and destination-similarity adjacency matrices, respectively. For simplicity, we represent the operation of the system on a time-slot basis and assume the graph is fixed during a time slot.
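For one time slot, the three adjacency matrices of Eqs. (1)-(3) can be assembled as in the sketch below, taking \(\operatorname{dist}(\cdot,\cdot)\) to be the plain Euclidean distance and treating the road-connectivity matrix \(\mathbf{A}_{c}\) as given from map data; the threshold values are illustrative.

```python
import numpy as np

def build_hin_adjacency(pos, dest, A_c, th_p=0.1, th_d=0.1):
    """Build the proximity (Eq. 2) and destination-similarity (Eq. 3)
    adjacency matrices for one time slot.

    pos, dest : (N, 2) arrays of taxi positions and target destinations.
    A_c       : (N, N) road-connectivity matrix (Eq. 1), taken as given.
    """
    def pairwise_dist(X):
        diff = X[:, None, :] - X[None, :, :]
        return np.linalg.norm(diff, axis=-1)   # (N, N) Euclidean distances

    Dp, Dd = pairwise_dist(pos), pairwise_dist(dest)
    A_p = np.where(Dp < th_p, Dp, 0.0)         # keep only nearby taxis
    A_d = np.where(Dd < th_d, Dd, 0.0)         # keep similar destinations
    np.fill_diagonal(A_p, 0.0)
    np.fill_diagonal(A_d, 0.0)
    return A_c, A_p, A_d
```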
At a time step \(t\), each taxi knows the \(P\)-step historical demand and supply values of its current region, an \(m\times n\) grid of taxi positions, which serves as the node message (attributes). The objective at each node is to predict the demand and supply values for the next \(Q\) time slots. In this sense, each taxi driver will be informed of both the availability of future passengers and of other vacant taxis in an \(m\times n\) vicinity around their taxi. At a time instant \(t\), the graph \(G^{t}\) has an overall feature matrix \(\mathbf{X}_{t}\in\mathbb{R}^{N^{t}\times d}\), where \(d=m\times n\) is the input feature dimension and \(N^{t}\) is the number of nodes. So, we intend to obtain a mapping function \(\mathcal{F}\) as follows. \[\left[\mathbf{X}_{t-P+1:t},G^{t}\right]\overset{\mathcal{F}}{\longrightarrow}\mathbf{X}_{t+1:t+Q}, \tag{4}\] where \(\mathbf{X}_{t+1:t+Q}\in\mathbb{R}^{Q\times N^{t}\times d}\) and \(\mathbf{X}_{t-P+1:t}\in\mathbb{R}^{P\times N^{t}\times d}\). ### _A hetGNN-LSTM algorithm for taxi demand and supply prediction in a semi-decentralized approach_ #### III-C1 The proposed hetGNN-LSTM algorithm To incorporate multiple edge types and time dependency in the prediction, we propose a hetGNN-LSTM algorithm as described in Fig. 3-b. For simplicity, we first present its operation in a centralized setting and then in a decentralized setting. After discussing the shortcomings of the centralized and decentralized settings, we present its use in the proposed semi-decentralized setting. In the centralized approach, first, the server constructs a HIN according to the network schema in Fig. 3-a. For each node, messages are shared across the HIN; then the hetGNN layer outcomes are calculated accordingly, and the process continues. Eventually, the final node embedding is obtained after \(L\)-hop messages are exchanged from the neighbors to the node in question. After that, these embeddings are fed to an LSTM model to produce the eventual demand predictions. The predictions are then sent to their respective nodes.
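A compact sketch of this server-side forward pass is given below, under simplifying assumptions: one graph-convolution weight per relation type fused by summation, dense unnormalized adjacency matrices, and a single LSTM over the \(P\) historical slots, with dimensions as in Eq. (4). The layer sizes and names are illustrative and do not reproduce the exact architecture.

```python
import torch
import torch.nn as nn

class HetGNNLSTM(nn.Module):
    """Sketch: per-relation graph convolutions fused by summation,
    followed by an LSTM over the P historical time slots."""
    def __init__(self, d, hidden, Q, num_relations=3):
        super().__init__()
        self.rel_lin = nn.ModuleList(
            [nn.Linear(d, hidden) for _ in range(num_relations)])
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, Q * d)
        self.Q, self.d = Q, d

    def forward(self, X, adjs):
        # X: (P, N, d) node features; adjs: list of (N, N) adjacency tensors,
        # one per relation (connectivity, proximity, destination similarity).
        P, N, _ = X.shape
        H = torch.stack([
            sum(A @ self.rel_lin[r](X[t]) for r, A in enumerate(adjs)).relu()
            for t in range(P)
        ])                                      # (P, N, hidden)
        out, _ = self.lstm(H.permute(1, 0, 2))  # per-node sequences (N, P, hidden)
        return self.head(out[:, -1]).view(N, self.Q, self.d)  # (N, Q, d)
```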
However, this centralized operation lacks scalability due to the huge amount of computation done at the server. This suggests decentralizing the operation. In the decentralized setting, represented in Fig. 2-b, each taxi maintains a copy of the hetGNN-LSTM model and is assumed to do two main tasks. The first task is establishing the connection to receive the messages shared from its \(L\)-hop taxi nodes and using them along with its local information to obtain its final predictions, whereas the second task is sending its messages to these neighbors so that they can operate their GNNs. Due to the absence of a central server, a node needs to identify its \(L\)-hop neighbors and communicate with them. However, this is restricted by the communication abilities of these nodes and may not fully make use of the HIN structure between distant nodes. More importantly, the node-to-node message-passing delay limits the number of communication hops achievable at a reasonable inference delay. Intuitively, the delay in decentralized GNN inference mainly depends on the message-passing delay, which significantly increases with the number of communication hops. To quantify this dependency, in Theorem 1, we derive approximate bounds for the overall inference time delay in decentralized GNNs. We show that this delay increases quadratically with the number of communication hops. Furthermore, this increase is determined by the topology of the computational graph of a node. The lower and upper bounds of the GNN inference delay are determined mainly by the maximal degree\({}^{2}\) of nodes in each communication hop and by the summation of node degrees in each communication hop, respectively. Footnote 2: A node's degree is the number of edges connected to it [46]. **Theorem 1**.: _In decentralized GNN inference, the overall inference delay with \(L\)-hop message passing, denoted by \(\Delta\), is within the following topology-dependent bounds, which grow quadratically with \(L\):_ \[\sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}\big]+(l+1)t_{p}\Big)\\ \leq\Delta\leq\\ \sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\big[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}\big]+(l+1)t_{p}\Big) \tag{5}\] _where \(L\) is the number of message-passing hops (equivalently, the number of GNN layers), \(t_{r}\) is the packet transmission delay of the wireless network used for message passing, \(d_{i}\) is the degree of node \(i\), \(N_{l}(i)\) is the set of \(l\)-hop-away neighbors of node \(i\), and \(t_{p}\) is the GNN layer processing delay._ The proof of Theorem 1 is in the Appendix. To investigate the relationship between the number of hops and the overall GNN inference delay in real-world scenarios, we present the following experiment. A total of 255 taxis in a city region are considered to work in a decentralized GNN setting. We assume an ad hoc wireless network connecting the taxi nodes. We calculate the overall inference time as specified in Section IV. For \(L\)-hop values ranging from 1 to 5, an \(L\)-hop computational graph of a node is obtained, and the overall message-passing delay is calculated. We also calculate the GNN inference delay bounds presented in Theorem 1. We repeat this experiment for 10 trials and plot the average values of the overall GNN inference delay and its bounds versus hops in Fig. 4. As shown in this figure, the overall inference delay grows quadratically with the number of communication hops. Also, the actual delay is within the bounds specified in Theorem 1. This result shows the message-passing delay bottleneck in decentralized GNN inference. To resolve the limitations of scalability and excessive message-passing delay in the centralized and decentralized schemes, respectively, we propose a semi-decentralized approach. As shown in Fig. 2-c, this approach uses a set of CLNs that span the work area of the taxis. Each CLN is associated with a certain city area and thus establishes a sub-HIN accordingly. Then, taxis in each CLN's region (sub-HIN) communicate their messages to the cloudlet, which performs predictions using its copy of the hetGNN-LSTM model, and then communicates these predictions back to their respective taxis. It is noted that some boundary taxi nodes may have edges connecting them to taxi nodes in an adjacent CLN. In this case, the CLNs serving these taxis need to exchange messages about these nodes for each GNN layer. Adjacent CLNs exchange information on their boundary taxis that have edges across these CLNs, as represented by the solid lines in Fig. 2-c. The main steps of the operation in the semi-decentralized approach are outlined in Algorithm 1.
```
1:Input: initial CLN region boundaries for each cloudlet; a trained hetGNN-LSTM model at each cloudlet.
2:Output: the next \(Q\) future predictions of taxi demand and supply in an \(m\times n\) vicinity region around each taxi node.
3:Each cloudlet \(u\) senses the existing \(n_{u}\) taxis in its CLN region.
4:Each cloudlet \(u\) obtains a sub-HIN \(G_{u}^{t}=(\mathbf{A}_{u,c}^{t},\mathbf{A}_{u,p}^{t},\mathbf{A}_{u,d}^{t}\in\mathbb{R}^{n_{u}\times n_{u}})\) based on the taxis in its CLN region.
5:Each cloudlet \(u\) obtains the feature matrix of the nodes in its CLN, \(\mathbf{X}_{u}^{t}\in\mathbb{R}^{n_{u}\times d}\).
6:Each cloudlet \(u\) determines the boundary nodes in its CLN connected to nodes in an adjacent cloudlet \(v\).
7:Each cloudlet \(u\) computes the messages passed across its nodes.
8:Each cloudlet \(u\) sends the messages of its boundary nodes to the CLNs containing their connected nodes.
9:Each cloudlet \(u\) receives the messages from the nodes connected to its boundary nodes through their CLN.
10:The updated node messages are aggregated and fed to the \(l\)-th GNN layer to produce new messages (steps 7-10 are repeated for each GNN layer \(l\)).
11:Each cloudlet \(u\) sends the eventual embeddings to their respective nodes in the CLN.
```
**Algorithm 1** Semi-decentralized GNN inference.
#### III-C2 Adaptive node-CLN assignment In semi-decentralized operation, inter-CLN communication needs to happen for boundary nodes at each GNN layer. The way the nodes are assigned to CLNs governs the number of inter-CLN edges. Therefore, an adaptive assignment can be employed to minimize the number of inter-CLN edges. We propose using minimum edge-cut graph partitioning to divide the shared subgraph between each pair of adjacent CLNs to serve this goal. As an example, in Fig. 5-a, uniformly assigning the nodes to CLNs results in 3 edges across the CLNs (denoted by the red dashed lines). However, by applying minimum-cut graph partitioning to the nodes in the shared boundary region (\(\mathcal{V}_{b}\)) enclosed by the dashed lines in Fig. 5-b, a new assignment may add nodes \(v_{7}\), \(v_{8}\), and \(v_{9}\) to CLN 1, as seen in Fig. 5-c. This means that inter-CLN message passing needs to happen only across one node pair (\(v_{8}\)-\(v_{10}\)). This partitioning has to be done in a distributed manner between the CLNs. To achieve this, CLN 1 and CLN 2 share information on the nodes and their positions in the boundary subgraph \(\mathcal{V}_{b}\). Nodes in this shared boundary region are then to be partitioned between the two CLNs in a distributed 2-way manner. This means that one CLN will do the partitioning and instruct the other. For example, let us assume it is CLN 2. So, the partitioning problem is to optimize an assignment operator \(\Psi\) that assigns each node in the shared boundary subgraph (\(\mathcal{V}_{b}\)) to belong either to the set of nodes assigned to CLN 1 (\(\mathcal{V}^{1}\)) or to that of CLN 2 (\(\mathcal{V}^{2}\)), minimizing inter-set edges. The node assignment problem can be formulated as follows. \[\begin{array}{ll}\operatorname{argmin}_{\Psi}&\sum_{u}\sum_{v}\mathds{1}\{\mathcal{V}_{u}^{1},\mathcal{V}_{v}^{2}\}\\ \text{subject to}&(\mathcal{V}^{1},\mathcal{V}^{2})=\Psi(\mathcal{V}_{b})\\ &\mathcal{V}^{1}\cap\mathcal{V}^{2}=\varnothing\end{array} \tag{6}\] where the indicator function \(\mathds{1}\{\mathcal{V}_{u}^{1},\mathcal{V}_{v}^{2}\}\) is 1 if there is an edge between the \(u\)-th node in \(\mathcal{V}^{1}\) and the \(v\)-th node in \(\mathcal{V}^{2}\), and is 0 otherwise. The node assignment problem in (6) can be solved as a minimum edge-cut graph partitioning process. The overall graph is divided into multiple subgraphs with minimum edge cuts, and each subgraph is assigned to a CLN. The graph partitioning problem is known to be NP-complete [47]. However, multi-level graph partitioning, as in the METIS algorithm [48], provides suitable solutions by operating in three stages: coarsening the graph into smaller graphs, bisecting the smallest graph, and gradually projecting the nodes back onto each graph partition. Still, the nodes in the taxi graph are aligned in a two-dimensional space. This eases the graph partitioning problem by restricting partitioning to the boundary nodes of neighboring CLNs. So, in this work, we apply k-means clustering between each pair of adjacent CLNs to partition their boundary nodes in a distributed manner. We propose a protocol for achieving this adaptive assignment between each pair of adjacent CLNs in a distributed manner in Algorithm 2.
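A minimal sketch of the 2-way split that the partitioning CLN could run on the shared boundary nodes is shown below. Mapping each k-means cluster to the CLN whose center is nearer to the cluster centroid is one plausible rule; degenerate cases (e.g., both clusters mapping to the same CLN) are ignored in this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_boundary_nodes(bpos, c1, c2):
    """2-way split of boundary nodes between two adjacent CLNs via k-means.

    bpos   : (B, 2) positions of the nodes in the shared boundary region V_b.
    c1, c2 : (2,) centers of CLN 1 and CLN 2.
    Returns a boolean mask: True -> assign to CLN 1, False -> CLN 2.
    """
    km = KMeans(n_clusters=2, n_init=10).fit(bpos)
    centroids = km.cluster_centers_
    # Map each cluster to the CLN whose center is nearer to its centroid.
    to_cln1 = (np.linalg.norm(centroids - c1, axis=1)
               < np.linalg.norm(centroids - c2, axis=1))
    return to_cln1[km.labels_]
```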
#### III-C3 Computational complexity analysis The proposed algorithm assumes taxi nodes and generates predictions in the \(m\times n\) vicinity of each node, whereas the existing approaches predict aggregate demand and supply values for each region treated as a node. Assuming a city having \(N\) taxis operating in \(K\) regions, the proposed algorithm retrieves \(2\times m\times n\times P\times Q\) data values for \(N\) taxis, whereas the existing approaches retrieve \(2\times P\times Q\) data values for \(K\) regions. It is interesting to compare the inference time complexity of the proposed algorithm to that of the existing approaches.
Fig. 4: Actual values of the overall GNN inference delay versus the number of communication hops and their corresponding bounds of Theorem 1.
The inference time complexity of the existing approaches depends on the following factors. * Time complexity of passing \(2PQ\)-dimensional messages across \(K\) (region) nodes within \(L\) GNN layers: assuming a message packet delay \(t_{r}\), this is proportional to \(P\), \(Q\), \(K\), and \(L^{2}\) (as concluded in the proof of Theorem 1). * Time complexity of processing \(2PQ\)-dimensional messages for \(L\) layers per node: assuming a processing delay \(\tau\) for each message, this is proportional to \(P\), \(Q\), \(L\), \(K\), and \(\tau\). Therefore, the inference time complexity of the existing approaches is \(O(KPQ(L^{2}t_{r}+L\tau))\). The inference time complexity of the proposed algorithm in the semi-decentralized setting is calculated at a cloudlet level since cloudlets operate concurrently. Let us assume that the number of nodes per CLN is \(N/K\), on average. The cloudlet inference time depends on the following factors. * Message passing by computation of \(mnPQ\)-dimensional messages for \(L\) GNN layers per node: let us denote by \(t_{c}\) the time to pass an \(mnPQ\)-dimensional message by computation. The time complexity due to this delay is proportional to \(N/K\), \(m\), \(n\), \(P\), \(Q\), \(t_{c}\), and \(L^{2}\). * Layer processing time of \(mnPQ\)-dimensional messages for \(L\) layers per node: again, denoting this processing time by \(\tau\), the time complexity due to this delay is proportional to \(L\tau\). * Cross-CLN message passing for boundary nodes: let us assume a fraction \(\gamma\) of cloudlet nodes are boundary nodes connected to nodes in other CLNs, and let us denote by \(t_{CLN}\) the message packet transmission delay across CLNs. Then, this delay will be \(\gamma(N/K)L^{2}t_{CLN}\). So, the overall per-cloudlet time complexity is \(O(mnPQ\,t_{c}\,N/K+mnPQ\,(N/K)\,L\tau+\gamma\,(N/K)\,L^{2}t_{CLN})\).
## IV Experiments and Performance Analysis We present experiments on real-world data to analyze the operation of the proposed hetGNN-LSTM taxi demand and supply forecasting algorithm and the semi-decentralized GNN approach from the perspectives of prediction accuracy and GNN inference delay. ### _The setup and dataset_ The dataset used in this work is adopted from the NYC dataset [4]. This dataset consists of 35 million taxi trip records in New York City from April 1st, 2016 to June 30th, 2016. For each trip, the following information is recorded: pick-up time, drop-off time, pick-up longitude, pick-up latitude, drop-off longitude, and trip distance. As for the hetGNN, we use a heterogeneous graph convolutional network (HeteGCN). We implement the models using PyTorch Geometric (PyG) [49] version 2.0.3 built on PyTorch version 1.10.0. Experiments are conducted on a Lambda GPU workstation with 128 GB of RAM, two GPUs each with 10 GB of RAM, and a 10-core i9 CPU at a clock speed of 3.70 GHz. Table I lists key hyperparameters of the hetGNN-LSTM model. ### _Performance of the proposed hetGNN-LSTM forecasting algorithm_ In this experiment, we compare the accuracy of taxi demand and supply values predicted by the proposed hetGNN-LSTM algorithm with the following approaches representing the state of the art. * DCRNN [14]: uses a combination of diffusion convolutional layers and RNNs to make predictions. * Graph WaveNet [15]: uses a combination of graph convolutional layers and dilated causal convolutional layers to capture the complex patterns in the data and uses an adaptive adjacency matrix. * CCRNN [16]: uses a GCN model with coupled layer-wise graph convolution, which allows for more effective modeling of the spatial-temporal dependencies in transportation demand data with learnable adjacency matrices. Comparisons are conducted in terms of the following quality metrics. * Root mean squared error (RMSE) \[\mathrm{RMSE}(\mathbf{x},\hat{\mathbf{x}})=\sqrt{\frac{1}{n}\sum_{i}\left(x_{i}-\hat{x}_{i}\right)^{2}}\] * Mean absolute percentage error (MAPE) \[\mathrm{MAPE}(\mathbf{x},\hat{\mathbf{x}})=\frac{1}{n}\sum_{i}\left|\frac{x_{i}-\hat{x}_{i}}{x_{i}}\right|\] * Mean absolute error (MAE) \[\mathrm{MAE}(\mathbf{x},\hat{\mathbf{x}})=\frac{1}{n}\sum_{i}\left|x_{i}-\hat{x}_{i}\right|\] where \(\mathbf{x}=x_{1},\cdots,x_{n}\) denotes the ground-truth values of taxi demand and supply, and \(\hat{\mathbf{x}}=\hat{x}_{1},\cdots,\hat{x}_{n}\) represents their predicted values.
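These three metrics translate directly into NumPy; the small `eps` guard in MAPE against zero ground-truth cells is our addition.

```python
import numpy as np

def rmse(x, x_hat):
    return np.sqrt(np.mean((x - x_hat) ** 2))

def mape(x, x_hat, eps=1e-8):
    # eps avoids division by zero for cells with no recorded demand/supply
    return np.mean(np.abs((x - x_hat) / (x + eps)))

def mae(x, x_hat):
    return np.mean(np.abs(x - x_hat))
```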
There are two major differences between the operation of the proposed algorithm and the existing algorithms in the literature [14, 15, 16, 2, 5, 6]. First, our hetGNN-LSTM algorithm considers dynamically evolving graphs since its node granularity is at the taxi level. Thus, taxis can enter or leave the system, generating a dynamic graph. In contrast, the existing approaches model city regions as graph nodes and, therefore, have static graphs. Second, our algorithm predicts the demands and supplies in a vicinity surrounding each taxi node, whereas the existing approaches only predict at the locations of the nodes (since each node is a region). Therefore, it is unfair to directly compare our algorithm to the existing approaches, as this would compare different quantities. We thus include the following comparison cases, where our algorithm is also operated on a region level and the other algorithms are also operated on a taxi level. * A _taxi vs. region_ comparison case: in this case, we record the taxi demand and supply values obtained by the proposed algorithm at a 3x3 region surrounding each taxi, and record the demand and supply predictions of the baselines at specific city regions. This is an unfair comparison since the proposed algorithm predicts more information than the baselines. * A _region vs. region_ comparison case: from the demand and supply predictions obtained with the proposed algorithm at taxi locations, we calculate the overall predictions made in each city region. We compare these values to the predictions at the same regions obtained by each of the baselines. This is a fair comparison since we compare corresponding predictions. * A _taxi vs. taxi_ comparison case: we record the taxi demands and supplies obtained by the proposed algorithm at taxi locations. For the baselines, we use them to predict demands and supplies at the same taxi locations (rather than on a region level). This is also a fair comparison case since we compare predictions of the same information. For each of the above-mentioned comparison cases, we operate the proposed hetGNN-LSTM model in the three decentralization settings, namely the centralized, fully decentralized, and semi-decentralized settings, denoted by \(SC1\), \(SC2\), and \(SC3\), respectively. Table II lists the results of this experiment. Considering Table II, let us first compare the performances of the proposed algorithm in the centralized, fully decentralized, and semi-decentralized settings (\(SC1\), \(SC2\), and \(SC3\), respectively). The centralized approach \(SC1\) has the best performance compared to \(SC2\) and \(SC3\). This is because it has access to all \(L\)-hop information for each node. Since \(SC3\) uses inter-cloudlet communication to achieve message passing between boundary nodes in adjacent CLNs, its performance is close to that of the centralized setting (\(SC1\)). It has some degradation compared to \(SC1\) because some nodes may have dependencies across non-adjacent CLNs, which are not accounted for in the inter-cloudlet communication. However, the fully decentralized setting \(SC2\) shows more performance degradation compared to \(SC1\) and \(SC3\). This is because the number of communication hops in \(SC2\) is restricted by the communication ranges of the taxis in the wireless ad hoc network (a distance of 100 m). Now, let us compare the performance of the proposed algorithm in the three settings to that of the baselines according to Table II. First, for the _taxi-region_ comparison case, the proposed algorithm has higher RMSE and MAE values compared to the other baselines. However, it has a lower MAPE than the baselines. This result is not conclusive since this comparison case compares different quantities in different settings (the proposed algorithm predicts on a taxi level, whereas the baselines predict on a region level); we include this result only for the reader's reference. The other two comparison cases are more reasonable in the sense that similar quantities are compared. In the region-region comparison case, the proposed algorithm significantly outperforms the baselines. For example, the proposed algorithm in the centralized setting achieves reductions of 45.4%, 56.0%, and 58.5% in RMSE, MAE, and MAPE compared to CCRNN [16]. Similar reductions are obtained with the proposed algorithm in the decentralized and semi-decentralized settings.
A similar result is seen in the taxi-taxi comparison case, where the three metrics of the baselines become larger as they are operated at a taxi-node level. As an example, the proposed algorithm in the centralized setting achieves reductions of 17.46%, 28.79%, and 50.31% in RMSE, MAE, and MAPE compared to CCRNN [16]. Similar performance is obtained with the proposed algorithm in the decentralized and semi-decentralized settings. ### _The impact of decentralization on the GNN inference delay_ In this experiment, the objective is to compare the overall GNN inference time in the centralized, decentralized, and semi-decentralized settings, calculated as detailed below. * In the centralized GNN: the overall inference time equals the summation of the time for nodes to upload their messages to the server, the time of inference for the nodes using message passing by calculation and layer processing, and the time to send the eventual messages to their respective nodes. The communication medium between the nodes and the server is assumed to follow the ITS-G5 standard from [44], and we adopt a packet transmission delay of 3.3 ms as reported in [44]. As for the GNN layer computation time, we measure it as the execution time of one GNN layer. Similarly, we measure the time required to pass the messages by computation instead of communication. * In the decentralized GNN: the overall inference time for the nodes in the graph is calculated on a node level, where each node has its \(L\)-hop computational graph. We calculate the overall inference time for a given node by adding the transmission delays required to send and receive the messages to/from the nodes in its \(L\)-hop computational graph and the processing time at each GNN layer. The communication medium assumed in this setting is an ad hoc wireless network, and we adopt empirical values for the transmission delay from [45]. In this network, a source node forwards its message to relay nodes that forward it until it reaches the destination node. As described in [45], the processing delay for a source node is about 16.55 ms, which accounts for queuing and processing delays, whereas the link transmission delay is 7 ms. However, the processing delay at a relay node is 11.65 ms. Thus, an empirical value for the transmission delay calculated in this manner is about (16.55+7) ms for source nodes and (11.65+7) ms for relay nodes. As for the layer processing time, a decentralized device is assumed to have a 100-times longer processing time compared to a centralized cloud server, as commonly assumed in the literature [50]. * In the semi-decentralized GNN: the overall inference time is calculated as the time for the slowest CLN to obtain the embeddings of its nodes. This is calculated as the sum of the time required to upload the node messages to the CLN, the time for the CLN to produce latent embeddings based on the initial node messages, and the time to send these latent embeddings to their nodes. Furthermore, if a CLN has boundary nodes connected to nodes in an adjacent CLN, then the two CLNs have to exchange the updated messages of their connected nodes for each GNN layer. Thus, the overall message-passing delay per GNN layer is the sum of the message passing by calculation (relatively small) and the inter-CLN message passing (relatively large). As for the layer processing time, a cloudlet in the semi-decentralized setting is assumed to have an order of magnitude slower computation compared to the cloud server in the centralized setting. It is noted that if an adaptive assignment is applied, then we add its time delay. This is the sum of the time to send information from a CLN to its neighbor, the time for the receiving CLN to perform the adaptive k-means assignment, and the time for sending the result back to the other CLN.
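The semi-decentralized accounting just described can be summarized in a small sketch: the overall time is set by the slowest CLN, with inter-CLN boundary exchanges repeated once per GNN layer. The per-CLN delay terms are inputs measured as above.

```python
def semi_decentralized_inference_time(upload, download, layer_proc,
                                      inter_cln_mp, L):
    """Overall inference time in the semi-decentralized setting.

    upload, download : per-CLN delays to collect node messages and to
                       return the final embeddings.
    layer_proc       : per-CLN, per-layer processing delay at the cloudlet
                       (in-CLN message passing is done by computation).
    inter_cln_mp     : per-CLN, per-layer delay to exchange boundary-node
                       messages with adjacent CLNs.
    """
    per_cln = [u + L * (p + x) + dn
               for u, dn, p, x in zip(upload, download, layer_proc, inter_cln_mp)]
    return max(per_cln)   # the slowest CLN determines the overall time
```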
With the above specifications, we assume a total of 255 nodes spread in a city area. Then, we quantify the overall inference time versus the number of communication hops for the following scenarios. * _Cent._: centralized inference, as shown in Fig. 2-a. * _Decent._: decentralized inference, as shown in Fig. 2-b. * _Semi-decent._: semi-decentralized inference, as shown in Fig. 2-c, where decentralization is obtained using 10 CLNs with uniform node assignment. We also consider the case of 20 CLNs to observe the effect of the number of CLNs on performance. * _Semi-decent.-adaptive_: semi-decentralized inference where decentralization is obtained by splitting the nodes into the coverage of 10 non-uniform CLNs with the proposed adaptive assignment according to Algorithm 2. Similar to the previous scenario, we also include the case of using 20 CLNs. Fig. 6 shows the results of the above experiment, with the GNN model being the proposed hetGNN-LSTM model in (a) and CCRNN [16] in (b).
Figure 6: Comparison of the overall inference time versus the number of message passing hops in four decentralization scenarios, with the proposed model in (a) and CCRNN in (b).
Several conclusions can be made from this figure. First, decentralization reduces the overall inference time (by comparing _Cent._ and _Decent._). Second, the added benefit of semi-decentralization is shown in the significant reduction, of about an order of magnitude, in inference time attained by the proposed setting (_Semi-decent._ and _Semi-decent.-adaptive_) compared to either centralization or decentralization. Furthermore, finer decentralization (20 CLNs) is shown to be more advantageous than coarser decentralization (10 CLNs). This is consistently seen in both _Semi-decent._ and _Semi-decent.-adaptive_. Moreover, the advantage of adaptive node-CLN assignment is shown by a significant reduction, of around 2 times, in inference time in _Semi-decent.-adaptive_ compared to the uniform-assignment case _Semi-decent._. This is consistent with the expected reduction in inter-CLN edge communication. Fig. 7 visually illustrates adaptive assignment by comparing a sample uniform assignment to an adaptive one in parts (a) and (b), respectively. In part (b), we obtain adaptive assignments using the proposed distributed adaptive assignment approach in Algorithm 2. CLN boundaries are denoted by solid yellow lines and nodes are denoted by yellow dots. It can be seen that adaptive assignment can help reduce the number of taxis connected across CLNs and, thus, the volume of inter-CLN communication. This explains the reduction in inference delay shown in the _Semi-decent.-adaptive_ scenario in Fig. 6. ### _The generalizability of the hetGNN model_ A key advantage of GNNs is that even when trained with moderately sized graphs, they can generalize to larger graphs with unforeseen structures [42]. In this experiment, we investigate the impact of training graph sizes on the performance of the proposed hetGNN-LSTM algorithm. For this purpose, we consider a sample test graph of 1916 nodes and test it with models trained on graphs of gradually increasing size, from 220 to 1916 nodes.
Fig. 8 shows the testing RMSE, MAE, and MAPE performance metrics versus the training graph size. In view of Fig. 8, it can be seen that the drop in training-set size does not incur a proportional drop in performance as one moves from 1916 nodes down to 220 nodes. With about one-tenth of the graph size used for training, the performance loss is still relatively marginal. This result establishes the generalizability of the hetGNN model.
Figure 8: Quality metrics for testing with a fixed graph size while varying training graph sizes.
Figure 7: Uniform and adaptive assignment samples in (a) and (b), respectively.
### _Training the model using federated learning_ The above experiments focus on the inference of the proposed model. Still, it is interesting to examine its operation with distributed learning such as federated learning (FL) [51, 52]. We assume a server-free FL framework due to the absence of a central server. This framework is similar to the server-free FL approach in [53], where neighboring clients exchange models, aggregate them to obtain a model initialization, train on local data, exchange the trained models again, and so on. We further assume that neighboring clients work with mutual trust. In this setting, each cloudlet is considered an FL client. Starting from the initial model states, each client exchanges its model with its adjacent neighbors, aggregates the received models along with its own local model, and trains the aggregated model over its local data. Next, the trained models are again shared across direct neighbors, and so on. This process is repeated for a predetermined number of FL rounds. In the following experiment, we divide the city area into 10 areas covered by 10 CLNs and apply this decentralized FL across them for 10 rounds. Table III lists several hyper-parameters and specifications for this experiment. We repeat this experiment with varying CLN graph sizes. For each case, we test the aggregated models over a test graph of 1916 nodes and plot the average performance metrics across the 10 clients. Fig. 9 shows the results of this experiment. One can make the following observations in view of Fig. 9. First, an FL-trained model exhibits a slight degradation in its prediction quality compared to a model trained in a centralized manner. This can be seen by comparing the RMSE, MAE, and MAPE in Fig. 9 with those in Fig. 8, where the FL-trained values are on average about 7% worse than their centralized-training counterparts. Second, it can be seen that the size of the client graph has a small impact on the quality of the aggregated model. This result is consistent with the generalizability of the GNN model established in the previous experiment. Therefore, one can still train with moderately sized graphs, possibly at the cloudlet-area level, without sacrificing performance.
Figure 9: Quality metrics for testing with a fixed graph size and different average client graph sizes in distributed FL.
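A sketch of one round of this server-free FL scheme is given below, assuming plain weight averaging over each client and its direct neighbors; `local_train` stands in for the local optimization loop and is not the exact procedure of [53].

```python
import copy
import torch

def federated_round(models, neighbors, local_train):
    """One server-free FL round across cloudlet clients.

    models      : dict cln_id -> torch.nn.Module (one model per cloudlet).
    neighbors   : dict cln_id -> list of adjacent cln_ids.
    local_train : callable(model, cln_id) training the model on local data.
    """
    # 1) Aggregate: each client averages its own and its neighbors' weights.
    new_states = {}
    for u in models:
        group = [models[u]] + [models[v] for v in neighbors[u]]
        avg = copy.deepcopy(group[0].state_dict())
        for key in avg:
            avg[key] = torch.stack(
                [m.state_dict()[key].float() for m in group]).mean(dim=0)
        new_states[u] = avg
    # 2) Load the aggregated weights, then train locally.
    for u, model in models.items():
        model.load_state_dict(new_states[u])   # copy_ casts dtypes as needed
        local_train(model, u)
```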
## V Conclusion and Future Work In this paper, we propose a hetGNN-LSTM algorithm for taxi demand and supply prediction making use of several edge types between graph nodes. To enable this approach for large-scale training and inference, we also propose a semi-decentralized GNN approach that resolves the scalability and excessive-communication-delay limitations of centralized and decentralized GNNs, respectively. We propose the use of multiple CLNs to enable this semi-decentralized approach. Through experiments using real data, the proposed hetGNN-LSTM algorithm is shown to be effective in predicting taxi demand and supply on a taxi level. This allows for dynamic graph evolution over time, as opposed to the existing approaches assuming static graphs. The proposed hetGNN-LSTM model is shown to generalize well to larger graphs with unforeseen structures. Moreover, the proposed semi-decentralized approach allows for cloudlet-level federated learning without sacrificing performance. Future extensions to this work will include incorporating other node and edge types in the constructed HIN. Besides, extensions will also include the development of custom-made hetGNN models that better fit traffic demand prediction, namely through a more explicit incorporation of time dependency. It is also interesting to devise better ways of node-CLN assignment. ## VI Acknowledgment This work is supported in part by the National Science Foundation under Grant No. 2216772. ### _The proof of Theorem 1_
Fig. 10: A sample node's computational graph in the decentralized GNN setting.
Proof.: To derive the delay bounds, we quantify the inference delay bounds on a small graph with 2 hops and then generalize the derived bounds to \(L\)-hop graphs with arbitrary numbers of nodes. Consider the computational graph of node \(i\) shown in Fig. 10. This node has nodes \(j\) and \(k\) as its 1-hop neighbors, and nodes \(j_{1}\) through \(j_{3}\) and \(k_{1}\) through \(k_{4}\) as its 2-hop neighbors. Let us express the overall inference delay with messages from the 1-hop and 2-hop nodes to node \(i\) according to the topology of this computational graph. We assume that nodes communicate through an ad hoc wireless network. Also, a node can only communicate with one node at a time. Let \(t_{s}\) and \(t_{r}\) denote the (per-node) sending and receiving transmission delays, respectively. For simplicity, let us further assume that nodes have similar separations and that \(t_{s}=t_{r}\). Also, let \(t_{p}\) denote the processing time for messages received at a node through a GNN layer. The total 2-hop inference delay (\(\Delta_{tot,2}\)) can be written as \[\Delta_{tot,2}=\Delta_{1}+\Delta_{2} \tag{7}\] where \(\Delta_{1}\) is the time required to pass the 1-hop messages from \(j\) and \(k\) to \(i\), process them by the first GNN layer, and send the processed message back to \(j\) and \(k\). So, the total delay of hop 1 is: \[\Delta_{1}=2(t_{s}+t_{r})+t_{p}=2\times 2\,t_{r}+t_{p} \tag{8}\] In general, for node \(i\) with degree \(d_{i}\), \[\Delta_{1}=d_{i}(t_{s}+t_{r})+t_{p}=2d_{i}t_{r}+t_{p} \tag{9}\] where \(d_{i}\) is the degree of node \(i\). Let us now consider the delay in the second communication hop. \(\Delta_{2}\) is the time required to receive the messages from the neighbors of \(j\) and \(k\), send them to \(i\), process them by the second GNN layer, and then send the result back to the neighbors of \(j\) and \(k\). The time required for collecting the messages from the neighbors of \(j\) (to itself) is \(\Delta_{2}^{j}=d_{j}(t_{s}+t_{r})+t_{p}=2d_{j}t_{r}+t_{p}\). Similarly, the time required to collect the messages from the neighbors of \(k\) (to itself) is \(\Delta_{2}^{k}=2d_{k}t_{r}+t_{p}\). These two messages then need to be sent to \(i\), which requires another time delay of \(\Delta_{j,k\to i}=2d_{i}t_{r}\). To this end, the topology of the graph determines how the delays of \(j\) and \(k\) contribute to the overall delay.
Specifically, if \(j\) and \(k\) have no common neighbors in their 1-hop neighborhoods (other than \(i\)), then they can work concurrently on receiving their messages, so their contribution is determined by the slower of the two; this yields the minimum overall delay. However, if they have common neighbors, then they must communicate with these nodes sequentially, which elongates the delay; it can at most equal the sum of the delays of \(j\) and \(k\). So, the delay for hop-2 message passing is

\[\Delta_{tot,2}=\Delta_{1}+\Delta_{2}^{j}+\Delta_{2}^{k}-\Delta_{conc} \tag{10}\]

where \(\Delta_{conc}\) is the time duration during which \(j\) and \(k\) concurrently receive messages (from their exclusive neighbors). The minimum value of \(\Delta_{conc}\) is zero, attained when \(j\) and \(k\) cannot work simultaneously, i.e., when they share the same 1-hop neighborhood. This maximizes the delay to \(\Delta_{2,max}=\Delta_{1}+\Delta_{2}^{j}+\Delta_{2}^{k}\). On the other hand, the maximum value of \(\Delta_{conc}\) is \(\min(\Delta_{2}^{j},\Delta_{2}^{k})\), attained when \(j\) and \(k\) can work completely simultaneously (except for communicating with node \(i\)). In that case, the delay is minimized to \(\Delta_{2,min}=\Delta_{1}+\max\{\Delta_{2}^{j},\Delta_{2}^{k}\}\). Expanding \(\Delta_{2}^{j}\) and \(\Delta_{2}^{k}\) and collecting common terms, the delay has the following bounds:

\[2t_{r}\Big[2d_{i}+\max_{x\in N_{2}(i)}\{d_{x}\}\Big]+3t_{p}\leq\Delta_{tot,2}\leq 2t_{r}\Big[2d_{i}+\sum_{x\in N_{2}(i)}d_{x}\Big]+3t_{p} \tag{11}\]

Following the same logic, we can generalize the above bounds to \(l\) hops, since an \(l\)-hop message received at node \(i\) is an \((l-1)\)-hop message received at \(j\) and \(k\) and then forwarded to \(i\):

\[2t_{r}\Big[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}\Big]+(l+1)t_{p}\leq\Delta_{tot,l}\leq 2t_{r}\Big[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}\Big]+(l+1)t_{p} \tag{12}\]

where \(N_{l}(i)\) denotes the set of nodes connected to \(i\) at \(l\) hops. In an \(L\)-hop computational graph, the hop-1 messages will be exchanged \(L\) times, the hop-2 messages will be exchanged \(L-1\) times, and so on. So, the total \(L\)-hop delay (\(\Delta_{tot,L}\)) is within the following bounds:

\[\sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\Big[ld_{i}+\max_{x\in N_{l}(i)}\{d_{x}\}\Big]+(l+1)t_{p}\Big)\leq\Delta_{tot,L}\leq\sum_{l=1}^{L}(L-l+1)\Big(2t_{r}\Big[ld_{i}+\sum_{x\in N_{l}(i)}d_{x}\Big]+(l+1)t_{p}\Big) \tag{13}\]

These bounds grow quadratically with the number of hops.
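As a small numeric companion, the bounds in (13) can be evaluated directly from the node degrees. In the sketch below, `degrees_by_hop[l-1]` lists the degrees \(d_{x}\) of the nodes in \(N_{l}(i)\); all inputs and names are illustrative assumptions.

```python
def delay_bounds(L, d_i, degrees_by_hop, t_r, t_p):
    """Lower and upper L-hop inference delay bounds, following (13)."""
    lo = hi = 0.0
    for l in range(1, L + 1):
        w = L - l + 1                      # hop-l messages are exchanged L - l + 1 times
        d_hop = degrees_by_hop[l - 1]      # degrees of nodes in N_l(i)
        lo += w * (2 * t_r * (l * d_i + max(d_hop)) + (l + 1) * t_p)
        hi += w * (2 * t_r * (l * d_i + sum(d_hop)) + (l + 1) * t_p)
    return lo, hi

# e.g. a 2-layer GNN at a node of degree 3, with illustrative neighbor degrees:
print(delay_bounds(2, 3, [[2, 4, 3], [3, 3, 2, 5, 4, 1, 2]], t_r=1e-3, t_p=5e-4))
```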
2309.15614
Delta-alpha cross-frequency coupling for different brain regions
Neural interactions occur on different levels and scales. It is of particular importance to understand how they are distributed among different neuroanatomically and physiologically relevant brain regions. We investigated neural cross-frequency couplings between different brain regions according to the Desikan-Killiany brain parcellation. The adaptive dynamic Bayesian inference method was applied to EEG measurements of healthy resting subjects in order to reconstruct the coupling functions. It was found that even after averaging over all subjects, the mean coupling function showed a characteristic waveform, confirming the direct influence of the delta-phase on the alpha-phase dynamics in certain brain regions, and that the shape of the coupling function changes for different regions. While the averaged coupling function within a region was of similar form, the region-averaged coupling function was averaged out, which implies that there is a common dependence within separate regions across the subjects. It was also found that for certain regions the influence of delta on alpha oscillations is more pronounced, and that oscillations that influence others are more evenly distributed across brain regions than the influenced oscillations. When presenting the information on brain lobes, it was shown that the influence of delta emanating from the brain as a whole is greatest on the alpha oscillations of the cingulate frontal lobe, and at the same time the influence of delta from the cingulate parietal brain lobe is greatest on the alpha oscillations of the whole brain.
Dushko Lukarski, Spase Petkoski, Peng Ji, Tomislav Stankovski
2023-09-27T12:22:56Z
http://arxiv.org/abs/2309.15614v1
# Delta-alpha cross-frequency coupling for different brain regions

###### Abstract

Neural interactions occur on different levels and scales. It is of particular importance to understand how they are distributed among different neuroanatomically and physiologically relevant brain regions. We investigated neural cross-frequency couplings between different brain regions according to the Desikan-Killiany brain parcellation. The adaptive dynamic Bayesian inference method was applied to EEG measurements of healthy resting subjects in order to reconstruct the coupling functions. It was found that even after averaging over all subjects, the mean coupling function showed a characteristic waveform, confirming the direct influence of the delta-phase on the alpha-phase dynamics in certain brain regions, and that the shape of the coupling function changes for different regions. While the averaged coupling function within a region was of similar form, the region-averaged coupling function was averaged out, which implies that there is a common dependence within separate regions across the subjects. It was also found that for certain regions the influence of delta on alpha oscillations is more pronounced, and that oscillations that influence others are more evenly distributed across brain regions than the influenced oscillations. When presenting the information on brain lobes, it was shown that the influence of delta emanating from the brain as a whole is greatest on the alpha oscillations of the cingulate frontal lobe, and at the same time the influence of delta from the cingulate parietal brain lobe is greatest on the alpha oscillations of the whole brain.

**The delta-alpha cross-frequency coupling is proving to be a valuable descriptor in an increasing number of brain states and domains. Here, by applying adaptive dynamic Bayesian inference to EEG signals of subjects at rest, we reconstructed the neural cross-frequency delta-to-alpha coupling functions that describe the interaction mechanisms of different regions of the brain. With this analysis framework we found a number of significant brain connections, as well as several characteristic differences between the brain regions.**

## I Introduction

The interactions in the brain are fundamental to the human ability to perceive and interact with the world. The brain is a heavily connected dynamical network system [1], with interactions that are very complex and involve a vast network of neurons and synapses. Such a complex system can mediate a vast number of functions from a relatively static structure. Importantly, the brain can evolve with time, and different changes and transitions can occur [2; 3]. Because not all the neurons and network processes in the brain are active at all times, and because they can exhibit collective, clustered, and synchronized behaviour [4; 5; 6], different types of changes, disruptions and transitions in the neural activity can occur [7; 8]. Since the functions of the brain are highly dependent on its structure, and different functions are probably performed by different brain regions with different architecture, it is essential to identify the different regions of the brain in order to better understand its functions.
For that reason, a significant effort has been invested by the scientific community in the direction of parcellation of the brain, starting from the classic Brodmann map, through the widely used Desikan-Killiany atlas [9], all the way to the recently published human Brainnetome atlas [10] and the Human Connectome Project (HCP) multi-modal parcellation [11] using _in vivo_ MRI data. Brain connectivity is crucial for understanding how the neurons and the brain dynamics evolve. A particularly accessible and useful approach has been the study of neural cross-frequency coupling, usually extracted from electroencephalographic (EEG) recordings [12; 13; 14; 15; 16]. Neural cross-frequency coupling refers to the interaction between different frequencies of neural brainwave oscillations in the brain. Cross-frequency coupling occurs when the amplitude or phase of one frequency band of oscillations is modulated by another frequency band. Thus, there are different types of cross-frequency coupling, such as amplitude-amplitude coupling, phase-phase coupling, and amplitude-phase coupling. Neural cross-frequency coupling can be studied between different combinations of brainwave oscillations. In this work, we will focus on the delta-to-alpha neural cross-frequency coupling. Namely, it is well known that delta and alpha brainwave oscillations play an important role in the brain dynamics [17; 18; 19; 20; 21; 22]. For instance, there are differences in frequency and power during different sleep stages which appear in the separate delta and alpha brainwave dynamics [23; 24; 25; 26] and in their related delta-alpha effect [27; 28]. In another example, a previous study on general anesthesia [29] found that the delta-alpha coupling function is statistically significant and strong during anesthesia. Similarly, previous works observe a prominent delta-alpha coupling in the resting state [15; 30], during the orienting response [32] and during sleep within the network physiology approach [20]. A characteristic form of the delta-alpha coupling functions was also established [29; 30; 31]. These works indicate that investigating the delta-to-alpha coupling among the brain regions is a relevant choice for the present study of the resting state.

To perform the analysis needed, we used a comprehensive set of methods. First, to observe the oscillatory content of the brainwave oscillations we used wavelet time-frequency analysis. Then, we used the fact that the delta and alpha brainwaves have pronounced oscillating dynamics in order to study the interactions through their reduced phase dynamics [33], thus observing phase-phase cross-frequency coupling. Here, we applied a method based on adaptive dynamical Bayesian inference to reconstruct from the measured data a dynamical phase model describing the systems and their interactions [34; 35; 36]. The method of dynamical inference reconstructs effective connectivity [1; 37] and reveals the underlying dynamical mechanisms. In particular, we reconstruct the phase _coupling functions_ which describe how the interaction occurs and manifests, thus revealing a functional mechanism [38]. The design of powerful methods and the explicit assessment of coupling functions have led to applications in different scientific fields including chemistry [39], climate [40], secure communications [41], mechanics [42], social sciences [43], and oscillatory interactions in physiology, such as the cardiorespiratory interaction [44; 45].
Arguably, the greatest recent interest for coupling functions is coming from neuroscience [46], where works have encompassed the theory and inference of a diversity of neural phenomena, physical regions, and physiological conditions [30; 47; 48; 49; 50; 51; 52; 53].

## II Materials and Methods

### Adaptive Dynamical Bayesian Inference

When investigating a complex dynamical oscillatory system, such as the oscillatory behaviour of the brain, one way to gain new insights is by modeling the system with differential equations. Usually, by measuring signals originating from the oscillatory time evolution of the system, one can infer the parameters of a model that describes the system under certain conditions. According to the phase reduction theory, in the case when the interactions between the oscillators are sufficiently weak, the behaviour of the system can be approximated by its phase dynamics [33; 54; 55]. If the phases of the system can be considered as monotonically changing variables, the partial dynamics of node \(i\) as influenced by another node \(j\) can be represented by the differential equation:

\[\dot{\varphi}_{i}=\omega_{i}+q_{i,j}(\varphi_{i},\varphi_{j})+\xi_{i}, \tag{1}\]

where \(\varphi_{i}\) is the phase of the \(i\)-th oscillator, \(\omega_{i}\) is its angular frequency parameter, \(q_{i,j}\) is the coupling function which describes the influence of the \(j\)-th oscillator on the \(i\)-th oscillator, and \(\xi_{i}\) represents the noise. Usually, the noise is assumed to be white Gaussian noise given by \(\langle\xi_{i}(t)\xi_{j}(\tau)\rangle=\delta(t-\tau)E_{ij}\), where the information about the correlation between the noises of the different oscillators is included in the symmetric matrix \(E_{ij}\). In theory, the full model for the phase dynamics of a brain region oscillator should contain all the connections at once, in a single phase equation. However, due to the high dimensionality and computational expense, with equation (1) we infer only the partial dynamics related to the two brain regions involved in a coupling connection. This procedure is then applied for each pair of brain regions. Because of the periodic nature of the system, the coupling function can be represented by a Fourier decomposition:

\[q_{i,j}(\varphi_{i},\varphi_{j})=\sum_{k=-\infty}^{\infty}\sum_{s=-\infty}^{\infty}c_{ij;k,s}e^{i2\pi k\varphi_{i}}e^{i2\pi s\varphi_{j}}. \tag{2}\]

For a system of two coupled oscillators, reduction to a finite number \(K\) of Fourier terms gives:

\[\dot{\varphi}_{i}=\sum_{k=-K}^{K}c_{k}{}^{i}\Phi_{i,j,k}(\varphi_{i},\varphi_{j})+\xi_{i}(t), \tag{3}\]

where \(\Phi_{i,j,0}=1\), \(c_{0}{}^{i}=\omega_{i}\), and the rest of the \(\Phi_{i,j,k}\) and \(c_{k}{}^{i}\) are the \(K\) most important Fourier components (in this work we use \(K=2\)). With the assumption of white Gaussian noise, the task is then reduced to inference of the unknown parameters of the model:

\[M=\left\{c_{k}{}^{i},E_{ij}\right\}. \tag{4}\]

When the parameters of the model are inferred, one can then determine the coupling functions \(q_{i,j}\) which describe the underlying mechanisms of the interaction of the oscillators [38].
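To make the structure of the phase model concrete, the following minimal sketch estimates the coefficients of equation (3) by ordinary least squares on a two-dimensional Fourier basis (phases in radians, \(K=2\)). The aDBI method used in this paper is Bayesian, adaptive, and windowed, so this is only an illustration of the base functions involved, under an assumed name (`fit_phase_model`):

```python
import numpy as np

def fit_phase_model(phi_i, phi_j, dt, K=2):
    """Least-squares sketch of model (3): regress dphi_i/dt on Fourier base functions."""
    dphi = np.diff(phi_i) / dt                 # phase velocity from unwrapped phases
    pi, pj = phi_i[:-1], phi_j[:-1]
    cols = [np.ones_like(pi)]                  # constant term: natural frequency omega_i
    for k in range(0, K + 1):
        for s in range(-K, K + 1):
            if (k, s) <= (0, 0):
                continue                       # skip (0, 0) and the redundant mirrored half
            cols += [np.cos(k * pi + s * pj), np.sin(k * pi + s * pj)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, dphi, rcond=None)
    return coeffs                              # c_0 ~ omega_i; the rest parametrize q_{i,j}
```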
In this work we employ the method of adaptive dynamical Bayesian inference (aDBI) [35; 36; 57] in order to gain new insights into the oscillatory behaviour of the brain regions and the brain lobes. In this method, the time series of the phases of the oscillators are considered as time sequences of blocks of samples. In each block, the samples from a certain time interval are included, and the duration of this time interval is specified by the time window \(t_{w}\). In the inference procedure, the initial assumptions for the parameters of the model are that \(c_{k}{}^{i}=0\), and therefore at least a few inference blocks are required to obtain appropriate estimates of the model parameter values and the corresponding coupling functions. To obtain improved inference in every subsequent block, part of the information from the previous block is included in the following one. The so-called propagation parameter \(p_{w}\) controls how much of the information of the previous block is included in the following one. In the aDBI method, both the time window \(t_{w}\) and the propagation parameter \(p_{w}\) are adaptively determined, based on the time variabilities present in the signal. After determination of \(t_{w}\) and \(p_{w}\), the final inference is performed. This final inference provides the values of the parameters and the coupling functions for each block of inference, thus observing the time evolution of the system and the interactions, with a temporal resolution defined by the time window \(t_{w}\). Further technical details about the parametrization, convergence and robustness of the aDBI method can be found in previous papers [35; 36; 56; 57]. Even though the aDBI was introduced for studies of coupled phase oscillators with oscillating frequencies in the cardiorespiratory range, the procedure is applicable to the frequency range of the brain waves as well. The application of the aDBI method on the subject dataset used in this study yielded a time window of \(t_{w}=10\) s and a propagation parameter of \(p_{w}=0.2\). The aDBI method represents a further improvement of the DBI method [35; 56], by minimizing the covariance matrix, which is an indicator of the quality of the inference. The details of the method are given elsewhere [36]; in summary, it leads to an improved inference of the model parameters without losing information about the temporal changes in the behavior of the oscillators.

### Dataset

The dataset used in this study is publicly available [58] ([https://osf.io/mndt8/](https://osf.io/mndt8/)). The data source contains the empirical region-average fMRI (functional magnetic resonance imaging), EEG source activity and structural connectomes of the 68 parcellated cortical regions of the brain of 15 healthy human subjects, age 18-31, eight of whom are female. The data consists of resting-state time series, where the subjects were asked to just stay awake and keep their eyes closed. In this study we used the empirical EEG source activity and the structural connectomes. The time series of the source activity for each subject and each cortical region have a duration of 21.6 minutes with a sampling frequency of 200 Hz. The description of the 68 parcellated cortical regions is given in the Appendix.

### Wavelet transform and the phase extraction procedure

In order to check the existence of brain wave oscillations and their frequency content, the EEG time-series signals were first analyzed using a continuous wavelet transform [59; 60; 61].
The continuous wavelet transform is defined by the equation

\[WT(\omega,t)=\int_{0}^{\infty}\psi(\omega(u-t))x(u)\omega du, \tag{5}\]

where \(x(t)\) is the signal, \(\omega\) denotes the angular frequency, \(t\) is the time and

\[\psi(u)=\frac{1}{2\pi}\left(e^{i2\pi f_{0}u}-e^{-\frac{(2\pi f_{0})^{2}}{2}}\right)e^{-\frac{u^{2}}{2}}\]

is the complex Morlet wavelet, with central frequency \(f_{0}=1\), \(\int\psi(t)dt=0\), and with \(i\) being the imaginary unit. The continuous wavelet transform is a time-frequency representation which contains both the phase and the amplitude dynamics of the oscillatory elements of the analyzed signal. The initial wavelet observation of the oscillations contained in the corresponding EEG signals was carried out for several brain regions. After the initial wavelet observations, a phase extraction procedure was performed for the delta and alpha waves of the EEG signal in order to obtain the instantaneous phase time-series. These phase time-series then act as input to the aDBI method. The oscillatory intervals were first extracted by a standard digital filtering procedure, using an FIR filter followed by zero-phase filtering ("filtfilt") to ensure that no time or phase lags are introduced by the filtering. The delta-wave band limits were from 1 to 4 Hz, while the alpha-wave band limits were from 8 to 12 Hz [62]. The phases of the filtered signals were estimated via the Hilbert transform, thus obtaining the protophases. On these protophases, the protophase-to-phase transformation was applied [42] in order to obtain the independent phases which act as input signals for the Bayesian inference.

### Surrogate data testing

When oscillatory signals are analyzed, the inferred coupling between the signals is always positive and non-zero, even if the oscillators are uncoupled or unrelated. Therefore, it is necessary to establish a significance threshold in order to determine whether the obtained coupling strength indicates a genuine connection and interdependence of the phenomena. A threshold is usually defined by constructing randomized surrogates of the original signals [63; 64] and calculating the coupling strength for these surrogates; such surrogate data is used for statistical testing of the coupling strength. The coupling strength obtained in this manner represents a baseline for the confirmation of the coupling of the oscillators. In this work, surrogates were constructed for each of the \(68\times 68\) delta-alpha couplings going from, and to, each of the 68 regions of the brain by using a procedure called cyclic phase permutation surrogates [64], based on rearrangement of the cycles within the extracted phase. The surrogate threshold taken in this work is the mean plus two standard deviations (mean + 2STD) of the surrogate couplings.
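As a rough illustration of the pipeline described above, the sketch below band-passes a signal, extracts Hilbert phases, and builds one cyclic-permutation surrogate. A Butterworth filter and the raw Hilbert protophase are substituted for the paper's FIR filter and protophase-to-phase transformation, so all details here are simplifying assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def extract_phase(x, fs, band):
    """Zero-phase band-pass filtering followed by the Hilbert phase."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.unwrap(np.angle(hilbert(filtfilt(b, a, x))))

def cyclic_phase_permutation(phi, rng):
    """Surrogate phase: cut the unwrapped phase into 2*pi cycles and permute them."""
    starts = np.flatnonzero(np.diff(np.floor(phi / (2 * np.pi)))) + 1
    cycles = np.split(np.diff(phi), starts)    # permute increments, then re-integrate
    rng.shuffle(cycles)
    return phi[0] + np.concatenate([[0.0], np.cumsum(np.concatenate(cycles))])

fs = 200.0                                     # sampling frequency of the dataset
t = np.arange(0, 60, 1 / fs)
eeg = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)   # toy signal
phi_delta = extract_phase(eeg, fs, (1, 4))     # delta band, 1-4 Hz
phi_alpha = extract_phase(eeg, fs, (8, 12))    # alpha band, 8-12 Hz
surrogate = cyclic_phase_permutation(phi_delta, np.random.default_rng(0))
```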
## III Results

Fig. 1 shows the wavelet transform of the measured signal for one of the 68 brain regions in one of the subjects.

Figure 1: Wavelet transform of the measured signal for one of the regions of one subject. The measured signal is shown in (a), while the corresponding time-frequency wavelet transform is shown in (b). The time-averaged intensity of the wavelet is shown in (c).

The signal itself is presented in Fig. 1 (a), while the corresponding time-frequency wavelet transform is shown in Fig. 1 (b). To show the oscillatory frequencies present in the signal more clearly, the time-averaged intensity of the wavelet is presented in Fig. 1 (c). The frequency intervals of the corresponding brain waves are given with the dashed lines. From the figure, one can clearly see the strong alpha wave, as well as the delta wave with a slightly lower wavelet power.

The delta-alpha coupling functions are presented in Fig. 2. The coupling functions are evaluated first on individual subjects for specific regions - Fig. 2 (a)-(c). Here, they show the characteristic functional form where the delta-alpha phase coupling function depends predominantly on the delta dynamics; in other words, it reflects the direct influence that the delta phase dynamics exert on the alpha phase dynamics by accelerating or decelerating the alpha brainwave oscillations. This specific form belongs to the direct category, within the separation into self, direct and common coupling functions [65; 67]. By comparing the three plots of the coupling functions, Fig. 2 (a), (b) and (c), one can notice that this direct influence is like a wave that shifts from left to right, from 0 to \(2\pi\), along the delta axis. In general, it keeps the direct delta dependence (i.e., it still changes predominantly along the delta axis) but the maximum of the function shifts along the delta axis. When we average the coupling functions for the same region over all the subjects, as shown in Fig. 2 (d), the remaining average delta-alpha coupling function still reflects the direct delta dependence, albeit with slightly reduced amplitude due to the averaging. Appendix A shows how this coupling function compares with those of all the other regions. Furthermore, when we averaged all the coupling functions across regions and subjects, the average coupling function disappeared, i.e., it was insignificantly low, without a common form of the function. In other words, the region-averaged coupling function averaged out, because there was no specific common form between regions.

Fig. 3 shows a \(68\times 68\) matrix representing the significant delta-alpha coupling functions for all the 68 brain regions. The vertical axis shows the number of the region for the delta brainwaves, while on the horizontal axis the number of the region of the alpha brainwaves is given. The matrix is not symmetrical, and the coupling indicated by the columns is different from the one in the rows. The figure indicates that for some regions of the brain (e.g. columns \(9,19,30,40\), etc.) there is a stronger influence from the delta waves to the alpha waves. This is also shown in Fig. 4, where these 68 regions are marked as circles on a cross-section of the brain. Fig. 4 (a) shows the summarized delta-alpha coupling strengths coming from specific regions with red circles, while Fig. 4 (b) shows the sum of the delta-alpha coupling strengths for the alpha of a specific region with blue circles. The radii of the circles are proportional to the sum of the corresponding coupling strengths. One can notice that while the significant delta-alpha couplings emanate from various different brain regions, they end up in a much smaller number of brain regions.

In order to obtain more tangible information about the overall interactions between the different brain lobes (frontal, cingulate frontal, cingulate parietal, parietal, occipital and temporal lobe), a summation of the significant coupling functions by brain lobes was performed. The sums obtained were normalized by the number of regions involved in each of the brain lobes. The results are presented in Fig. 5.
Figure 3: \(68\times 68\) matrix showing the delta-alpha couplings from region to region. Only those couplings which are statistically significant with respect to the surrogate threshold are shown in color (values above 0).

Figure 2: The delta-alpha neural coupling functions. Examples of individual-subject delta-alpha coupling functions between different regions (a)-(c). In particular, (a) shows a coupling function from subject 10 between delta region 60 and alpha region 60, (b) from subject 12 between delta region 50 and alpha region 23, and (c) from subject 6 between delta region 54 and alpha region 19. The last plot (d) shows a subject-averaged delta-alpha coupling function between delta region 58 and alpha region 21.

The normalized sum of the delta-alpha couplings, where the delta is from a specific brain lobe and the alpha from any lobe of the brain (whole-brain alpha), is shown with a blue line, while the normalized sum of the delta-alpha couplings, where the delta comes from any lobe of the brain (whole-brain delta) and the alpha from a specific brain lobe, is shown with a red line. From the spider plot (Fig. 5) it can be seen that the cingulate parietal delta has the greatest influence on the whole-brain alpha and, at the same time, the whole-brain delta has its greatest influence on the cingulate frontal alpha.

## IV Discussion and Conclusions

The influence of delta brain waves on alpha brain waves for a resting subject has previously been determined at the whole-brain level. In this paper we try to gain deeper insight by investigating this delta-alpha influence for different brain regions according to the Desikan-Killiany anatomical parcellation of the brain [15; 20; 30; 32]. As presented in the results, it can be concluded that this influence is clearly visible for different regions, because even after averaging the delta-alpha coupling functions for a particular region across all the subjects, the mean coupling function still shows the characteristic shape (Fig. 2), confirming the direct influence of the delta-phase dynamics on the alpha-phase dynamics in certain brain regions. This influence consists in the acceleration or deceleration of alpha oscillations under the influence of delta oscillations.

In terms of analysis, we have applied a comprehensive methodological framework for interacting brainwave oscillations. The nature of the delta and alpha oscillations was observed with the wavelet transform using standard parameters, with central frequency \(f_{0}=1\). This is a simple, standard, and widely used procedure for time-frequency analysis. For the reconstruction of the phase model we used the adaptive Bayesian inference. It is a well-established method which has been widely used and tested for robustness and convergence, and whose parametrization has been systematically investigated on different numerical and biological systems [35; 36; 57]. For verifying the statistical significance of the inferred delta-alpha coupling we have applied surrogate data testing [64]. The model equation (1) assumes pairwise interaction between two regions and includes only a coupling function with two phase variables. This is a simplified approximation, as the brain regions form parts of a complex network, and the full model of a phase oscillator should include all the brain connections in a single equation. With equation (1) we have thus separated the inference of the partial dynamics on a two-by-two basis for all the pairs of brain regions.
This was possible because the Bayesian method allows such partial dynamical filtering. The reason to separate the inference in this way was the high dimensionality (\(68\times 68\) regions) and the computational complexity, which otherwise could have led to problems such as parameter overfitting. Also, we have used only pairwise coupling functions. Thus, a natural extension of this work could also include non-pairwise multivariate coupling functions [65; 66]. This is the case where the coupling function in the dynamics of one region has more than two phase variables, from the phases of other regions.

The coupling function results demonstrated that there is a common waveform, predominantly due to the direct influence from the delta oscillations, but the wave shifts along the delta axis for different regions - Fig. 2 (a-c). We present three characteristic regions here, but the general observation from all the regions was that the wave shifts for different regions. The quantitative analysis in Appendix A also supported this by showing relative variations of the form for different regions. The wave of the coupling function form might shift for different regions because the structural pathways through which different regions interact have different lengths. This most likely implies different time delays for the signal propagation [68; 69], which is known to impact the synchronization and phase arrangement between brain regions [70] and is crucial for the information transfer [71].

Figure 4: Significant delta-alpha coupling strengths for the different brain regions. (a) The radii of the red circles correspond to the sum of coupling strengths of all the delta exiting the corresponding region. (b) The radii of the blue circles correspond to the sum of coupling strengths of all the alpha entering the corresponding region of the brain.

Figure 5: Spider plot showing the influence of the separate brainwaves of the interaction into the specific brain lobes. The influence of the delta from the entire brain to the alpha of specific brain lobes (blue curve) and the influence of delta of specific brain lobes to the alpha from the entire brain (red curve).

This time delay manifests itself as a phase shift of the oscillatory activity, i.e. a \(\Delta\phi\) within the phase coupling functions (e.g. as in \(q_{\alpha}(\phi_{\delta},\phi_{\alpha})=\varepsilon\sin(\phi_{\alpha}-\phi_{\delta}-\Delta\phi)\)), which in turn can be the cause of the phase shift of the wave observed in the figures. Our current initial observation of the structural and time-delay information in this direction can stimulate future systematic analysis quantifying how the space-time structure of the brain regions, as defined by the weights and time-delays of the connectome [72], impacts the resulting coupling functions. Such a question is even more valid because the averaged coupling function within a region was of similar form, Fig. 2 (d), while the region-averaged coupling function was averaged out. The latter implies that there is a common deterministic dependence within regions across the subjects, which is different for the separate brain regions. This kind of analysis would first require better identification of the propagation velocities and the time-delays on a personalized level, which is still not established besides promising results on MRI as a myelin biomarker [73] and proposed _in vivo_ techniques [68].
However, our results indicate that even aggregated atlases for time-delays [69] could be useful, since some of the patterns are consistent across the subjects. We have seen that this influence is not evenly distributed across brain regions; rather, for certain brain regions the influence of delta oscillations on alpha oscillations is more pronounced, as is the case for the isthmus cingulate, the pars triangularis (associated with verbal and non-verbal communication [74, 75, 76]) and the supramarginal region of the left hemisphere (involved in language processing [77, 78] and tool use actions [79, 80]), and the fusiform region of the right hemisphere (involved in object and face recognition [81, 82, 83]). To a lesser extent this is also noticeable for the lateral orbitofrontal and the rostral middle frontal region of the left hemisphere and the inferior temporal, pars triangularis, posterior cingulate, superior parietal, frontal pole, temporal pole and transverse temporal region of the right hemisphere (see Fig. 3). These regions are located in different lobes of the brain, most of them in the frontal and temporal lobes, but some in the parietal and cingulate parietal as well. No clear distinctions can be made in terms of the brain hemisphere, as is expected, since the different brain centers responsible for different actions are located in the different hemispheres. This uneven distribution of delta influence on alpha oscillations from different regions is less pronounced on the delta side of the regions, as shown in Fig. 3 and more specifically in Fig. 4 (a). Fig. 4 shows that while the delta-alpha influence is more concentrated for the alpha waves in certain regions, the distribution of significant couplings in terms of delta waves is more even across brain regions. This means that the influencing oscillations are more evenly distributed across the brain regions than the influenced oscillations, which are more concentrated in certain regions.

Additional insights into the delta-alpha influences in the brain can be gained by condensing this information down to the level of brain lobes, as shown in the spider plot (Fig. 5). These results indicate that the influence of the delta oscillations of all brain regions is greatest on the alpha oscillations of the cingulate frontal lobe. At the same time, the influence of the cingulate parietal brain lobe's delta oscillations on the alpha oscillations is greatest among all the regions of the brain. This influence of the cingulate frontal and cingulate parietal regions of the brain on other brain regions and on the brain as a whole should be further investigated and put into the context of the functioning of the brain from a neurological point of view.

Finally, it is worth noting that we presented the methodological framework for interactions in the brain region network for the resting state; however, the framework carries important implications and can readily be used for other neural states, or for interacting oscillatory networks more generally.

###### Acknowledgements.

D.L., T.S. and P.J. acknowledge support from the bilateral Macedonian-Chinese project for scientific and technological cooperation. P.J. acknowledges support from STI2030-Major Projects (2021ZD0204500), the NSFC (62076071), and the Shanghai Municipal Science and Technology Major Project (2018SHZDZX01).

## Data Availability Statement

The data that support the findings of this study are publicly available.
2309.08123
Multivariate Fibonacci-like Polynomials and their Applications
The Fibonacci polynomials are defined recursively as $f_{n}(x)=xf_{n-1}(x)+f_{n-2}(x)$, where $f_0(x) = 0$ and $f_1(x)= 1$. We generalize these polynomials to an arbitrary number of variables with the $r$-Fibonacci polynomial. We extend several well-known results such as the explicit Binet formula and a Cassini-like identity, and use these to prove that the $r$-Fibonacci polynomials are irreducible over $\mathbb{C}$ for $n \geq r \geq 3$. Additionally, we derive an explicit sum formula and a generalized generating function. Using these results, we establish connections to ordinary Bell polynomials, exponential Bell polynomials, Fubini numbers, and integer and set partitions.
Sejin Park, Etienne Phillips, Peikai Qi, Ilir Ziba, Zhan Zhan
2023-09-15T03:23:20Z
http://arxiv.org/abs/2309.08123v1
# Multivariate Fibonacci-like polynomials and their applications

###### Abstract.

The Fibonacci polynomials are defined recursively as \(f_{n}(x)=xf_{n-1}(x)+f_{n-2}(x)\), where \(f_{0}(x)=0\) and \(f_{1}(x)=1\). We generalize these polynomials to an arbitrary number of variables with the \(r\)-Fibonacci polynomial. We extend several well-known results such as the explicit Binet formula and a Cassini-like identity, and use these to prove that the \(r\)-Fibonacci polynomials are irreducible over \(\mathbb{C}\) for \(n\geq r\geq 3\). Additionally, we derive an explicit sum formula and a generalized generating function. Using these results, we establish connections to ordinary Bell polynomials, exponential Bell polynomials, Fubini numbers, and integer and set partitions.

The authors would like to thank Aklilu Zeleke for his suggestions and mentorship. This work is partially supported by grants from the National Science Foundation (NSF Award No. 1852066) and Michigan State University's SURIEM REU.

**Definition 1.1**.: _The **r-Fibonacci Polynomial** is defined as_

\[F_{n}^{[r]}(x_{1},x_{2},\ldots,x_{r})=\begin{cases}0&0\leq n<r-1\\ 1&n=r-1\\ \sum_{i=1}^{r}x_{i}F_{n-i}^{[r]}&n\geq r\end{cases}\]

Notice that specific cases of the \(r\)-Fibonacci polynomial generate well-known sequences of numbers and polynomials. For example, \(F_{n}^{[2]}(x,1)\) are the aforementioned standard Fibonacci polynomials, and \(F_{n}^{[r]}(1,1,\ldots,1)\) gives the famous Fibonacci, Tribonacci and Tetranacci number sequences when \(r=2,3\) and \(4\), respectively. We also generalize this family of polynomials further to include a family with unfixed initial conditions.

**Definition 1.2**.: _The **Generic r-Fibonacci Polynomial** is defined as_

\[\mathcal{F}_{n}^{[r]}=\begin{cases}\ell_{n}(x_{1},\ldots,x_{r})&0\leq n\leq r-1\\ \sum_{i=1}^{r}x_{i}\mathcal{F}_{n-i}^{[r]}&n\geq r\end{cases}\]

_where \(\ell_{i}\in\mathbb{Q}[x_{1},\ldots,x_{r}]\) is an arbitrary polynomial._

In Section 2, we generalize Binet's formula and Cassini's identity to the \(r\)-Fibonacci polynomials and use the former to give a complete classification of the irreducibility of all \(r\)-Fibonacci polynomials. In Section 3, we draw a connection between the \(r\)-Fibonacci polynomials and integer partitions and use this to derive an explicit combinatorial formula. Then, we study the generating functions of the \(r\)-Fibonacci polynomials in Section 4 and derive several identities relating to combinatorial sequences, such as the Fubini numbers, Pell numbers, and Stirling numbers of the second kind. Finally, using the combinatorial sum formula and generating functions, we prove a relationship to Bell polynomials in Section 5.

## 2. Explicit Forms and Identities

The standard Fibonacci polynomials have a well-known closed form known as the Binet form (e.g., see [2]). This formula is given by

\[f_{n}(x)=F_{n}^{[2]}(x,1)=\frac{\alpha^{n}-\beta^{n}}{\alpha-\beta} \tag{2.1}\]

with

\[\alpha=\frac{x+\sqrt{x^{2}+4}}{2}\qquad\beta=\frac{x-\sqrt{x^{2}+4}}{2}\]

In [6], this formula is generalized to the two-variate case. In this section we generalize the Binet formula to the \(r\)-Fibonacci polynomials.
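As a computational companion to Definition 1.1, the recursion can be implemented directly; the hypothetical helper below (a sketch, reused by later snippets) builds \(F_{n}^{[r]}\) symbolically and reproduces, e.g., \(F_{5}^{[3]}=x_{1}^{3}+2x_{1}x_{2}+x_{3}\) and the Fibonacci numbers at \(x_{1}=x_{2}=1\):

```python
import sympy as sp

def r_fibonacci(n, r):
    """F_n^{[r]} from the recursion, with initial values 0, ..., 0, 1."""
    x = sp.symbols(f"x1:{r + 1}")              # the variables x1, ..., xr
    F = [sp.Integer(0)] * (r - 1) + [sp.Integer(1)]
    for _ in range(r, n + 1):
        F.append(sp.expand(sum(x[i] * F[-1 - i] for i in range(r))))
    return F[n]

print(r_fibonacci(5, 3))                       # x1**3 + 2*x1*x2 + x3
print(r_fibonacci(6, 2).subs({sp.Symbol("x1"): 1, sp.Symbol("x2"): 1}))  # 8
```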
### The Generalized Binet Form

In [9], a method of deriving the Binet form of the Fibonacci numbers via spectral-theoretic methods is presented. This method can be generalized to the generic \(r\)-Fibonacci polynomials. Observe that for \(n\geq r-1\), the following matrix product identity holds:

\[\begin{pmatrix}x_{1}&x_{2}&\dots&x_{r}\\ 1&&0&0\\ &\ddots&&\vdots\\ 0&&1&0\end{pmatrix}\begin{pmatrix}\mathcal{F}_{n}^{[r]}\\ \mathcal{F}_{n-1}^{[r]}\\ \vdots\\ \mathcal{F}_{n-r+1}^{[r]}\end{pmatrix}=\begin{pmatrix}\mathcal{F}_{n+1}^{[r]}\\ \mathcal{F}_{n}^{[r]}\\ \vdots\\ \mathcal{F}_{n-r+2}^{[r]}\end{pmatrix} \tag{2.2}\]

Equation (2.2) gives the following lemma.

**Lemma 2.1**.: _Let \(M=\begin{pmatrix}x_{1}&x_{2}&\dots&x_{r}\\ 1&&0&0\\ &\ddots&&\vdots\\ 0&&1&0\end{pmatrix}\). Then_

\[M^{n-r+1}\begin{pmatrix}\ell_{r-1}\\ \ell_{r-2}\\ \vdots\\ \ell_{0}\end{pmatrix}=\begin{pmatrix}\mathcal{F}_{n}^{[r]}\\ \mathcal{F}_{n-1}^{[r]}\\ \vdots\\ \mathcal{F}_{n-r+1}^{[r]}\end{pmatrix} \tag{2.3}\]

Proof.: Follows by induction on \(n\).

The next lemma shows how \(M\) can be diagonalized.

**Lemma 2.2**.: _Suppose \(x_{1},\ldots,x_{r}\in\mathbb{C}\) are such that \(M\) has \(r\) distinct eigenvalues, let \(\{\lambda_{k}\}_{1\leq k\leq r}\) be the eigenvalues of \(M\), and let \(D\) be the diagonal matrix with entries \(\lambda_{k}\). Then \(M=SDS^{-1}\), where \(S_{i,j}=\lambda_{j}^{r-i}\) and \(S_{i,j}^{-1}=\sigma_{i,j}\) is given by_

\[\sigma_{i,j}=(-1)^{r-j}\frac{\displaystyle\sum_{\begin{subarray}{c}1\leq m_{1}<m_{2}<\cdots<m_{j-1}\leq r\\ m_{1},\ldots,m_{j-1}\neq i\end{subarray}}\lambda_{m_{1}}\lambda_{m_{2}}\cdots\lambda_{m_{j-1}}}{\displaystyle\prod_{\begin{subarray}{c}1\leq m\leq r\\ m\neq i\end{subarray}}(\lambda_{m}-\lambda_{i})}\]

Proof.: Note that

\[\begin{pmatrix}x_{1}&x_{2}&\dots&x_{r}\\ 1&&0&0\\ &\ddots&&\vdots\\ 0&&1&0\end{pmatrix}\begin{pmatrix}\lambda_{k}^{r-1}\\ \lambda_{k}^{r-2}\\ \vdots\\ 1\end{pmatrix}=\begin{pmatrix}\sum_{i=1}^{r}x_{i}\lambda_{k}^{r-i}\\ \lambda_{k}^{r-1}\\ \vdots\\ \lambda_{k}\end{pmatrix}=\begin{pmatrix}\lambda_{k}^{r}\\ \lambda_{k}^{r-1}\\ \vdots\\ \lambda_{k}\end{pmatrix}=\lambda_{k}\begin{pmatrix}\lambda_{k}^{r-1}\\ \lambda_{k}^{r-2}\\ \vdots\\ 1\end{pmatrix}.\]

The columns of the matrix \(S\) are therefore eigenvectors of \(M\). Because \(S\) is a Vandermonde matrix, the entries of its inverse are already well known to be given by \(\sigma_{i,j}\) (e.g., exercise 40 in [8]).
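A quick numeric check of Lemma 2.1 (a sketch with illustrative values): powers of the companion matrix \(M\), applied to the \(r\)-Fibonacci initial vector \((1,0,\ldots,0)^{T}\), reproduce the recursion.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n = 3, 12
xs = rng.normal(size=r)                        # random coefficients x1, ..., xr

M = np.zeros((r, r)); M[0] = xs; M[1:, :-1] = np.eye(r - 1)   # companion matrix
v = np.linalg.matrix_power(M, n - r + 1) @ np.eye(r)[0]       # (F_n, ..., F_{n-r+1})

F = [0.0] * (r - 1) + [1.0]                    # recursion, as in Definition 1.1
for _ in range(r, n + 1):
    F.append(sum(xs[i] * F[-1 - i] for i in range(r)))
print(np.allclose(v, F[n:n - r:-1]))           # True
```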
Using the above lemma we derive an explicit form for the generic \(r\)-Fibonacci polynomials.

**Theorem 2.3**.: _The explicit form of the generic \(r\)-Fibonacci polynomials is given by_

\[\mathcal{F}_{n}^{[r]}=\sum_{i=1}^{r}\left(\lambda_{i}^{n}\sum_{j=1}^{r}(\sigma_{i,j}\ell_{r-j})\right)\]

Proof.: By **Lemmas 2.1** and **2.2**, we obtain the following matrix product identity from equation (2.3):

\[\begin{pmatrix}\lambda_{1}^{r-1}&\lambda_{2}^{r-1}&\dots&\lambda_{r}^{r-1}\\ \lambda_{1}^{r-2}&\lambda_{2}^{r-2}&\dots&\lambda_{r}^{r-2}\\ \vdots&\vdots&&\vdots\\ \lambda_{1}&\lambda_{2}&\dots&\lambda_{r}\\ 1&1&\dots&1\end{pmatrix}\begin{pmatrix}\lambda_{1}^{n-r+1}&&&0\\ &\lambda_{2}^{n-r+1}&&\\ &&\ddots&\\ 0&&&\lambda_{r}^{n-r+1}\end{pmatrix}\begin{pmatrix}\sigma_{1,1}&\dots&\sigma_{1,r}\\ \vdots&&\vdots\\ \sigma_{r,1}&\dots&\sigma_{r,r}\end{pmatrix}\begin{pmatrix}\ell_{r-1}\\ \vdots\\ \ell_{0}\end{pmatrix}=\begin{pmatrix}\mathcal{F}_{n}^{[r]}\\ \mathcal{F}_{n-1}^{[r]}\\ \vdots\\ \mathcal{F}_{n-r+1}^{[r]}\end{pmatrix}\]

Multiplying the matrices together gives

\[\begin{pmatrix}\lambda_{1}^{n}&\lambda_{2}^{n}&\dots&\lambda_{r}^{n}\\ \vdots&\vdots&&\vdots\end{pmatrix}\begin{pmatrix}\sum_{j=1}^{r}\sigma_{1,j}\ell_{r-j}\\ \sum_{j=1}^{r}\sigma_{2,j}\ell_{r-j}\\ \vdots\\ \sum_{j=1}^{r}\sigma_{r,j}\ell_{r-j}\end{pmatrix}=\begin{pmatrix}\sum_{i=1}^{r}\left(\lambda_{i}^{n}\sum_{j=1}^{r}\sigma_{i,j}\ell_{r-j}\right)\\ \vdots\end{pmatrix}=\begin{pmatrix}\mathcal{F}_{n}^{[r]}\\ \vdots\end{pmatrix}\]

Comparing the entries in the above matrices gives the desired equality.

Note that known Binet-like formulas emerge as special cases of **Theorem 2.3**, so it makes sense to refer to the previous theorem as a generalized Binet form. We obtain the following result about the \(r\)-Fibonacci polynomials.

**Corollary 2.4**.: _The Binet form of the \(r\)-Fibonacci polynomial is given by_

\[F_{n}^{[r]}=\sum_{i=1}^{r}\frac{\lambda_{i}^{n}}{\prod_{\begin{subarray}{c}1\leq m\leq r\\ m\neq i\end{subarray}}(\lambda_{i}-\lambda_{m})}=\sum_{m_{1}+m_{2}+\cdots+m_{r}=n-r+1}\lambda_{1}^{m_{1}}\lambda_{2}^{m_{2}}\cdots\lambda_{r}^{m_{r}}\]

Proof.: The first equality is given by applying **Theorem 2.3**, and the second can be verified by straightforward algebraic manipulation. For \(r=2\), we obtain equation (2.1).

**Remark**.: _The generalized Binet form cannot be solved explicitly in terms of radicals for \(r\geq 5\) because the characteristic equation of \(M\) is an arbitrary monic polynomial of degree \(r\)._

### Generalization of Cassini's Identity

In the case \(r=2\), \(x_{2}=1\), Cassini's identity states

\[f_{n-1}f_{n+1}-(f_{n})^{2}=(-1)^{n}\]

Using a similar strategy as in the above results, we find a generalization of Cassini's identity. This can be derived by taking determinants on both sides of the matrix identity for the standard Fibonacci polynomials, which is given by

\[\begin{pmatrix}x&1\\ 1&0\end{pmatrix}^{n}=\begin{pmatrix}f_{n+1}&f_{n}\\ f_{n}&f_{n-1}\end{pmatrix}\]

We generalize this identity as follows.
**Theorem 2.5**.: _For \(n\geq 2r-2\),_

\[\det\begin{pmatrix}F_{n-r+1}^{[r]}&\dots&F_{n}^{[r]}\\ \vdots&\ddots&\vdots\\ F_{n-2r+2}^{[r]}&\dots&F_{n-r+1}^{[r]}\end{pmatrix}=(-1)^{n(r+1)}x_{r}^{n-2r+2}\]

Proof.: Observe that by extending equation (2.3), we get the following matrix product identity:

\[M^{n-2r+2}\begin{pmatrix}F_{r-1}^{[r]}&\dots&F_{2r-2}^{[r]}\\ \vdots&\ddots&\vdots\\ F_{0}^{[r]}&\dots&F_{r-1}^{[r]}\end{pmatrix}=\begin{pmatrix}F_{n-r+1}^{[r]}&\dots&F_{n}^{[r]}\\ \vdots&\ddots&\vdots\\ F_{n-2r+2}^{[r]}&\dots&F_{n-r+1}^{[r]}\end{pmatrix}\]

The desired equality follows from taking the determinant of both sides.

For example, in the case \(r=3\) we have the following result for \(n\geq 4\):

\[(F_{n-2}^{[3]})^{3}-2F_{n-1}^{[3]}F_{n-2}^{[3]}F_{n-3}^{[3]}+F_{n}^{[3]}(F_{n-3}^{[3]})^{2}-F_{n}^{[3]}F_{n-2}^{[3]}F_{n-4}^{[3]}+(F_{n-1}^{[3]})^{2}F_{n-4}^{[3]}=x_{3}^{n-4}\]

### Irreducibility

In [7], it is shown that, for the \(r=2\) case, the \(r\)-Fibonacci polynomials are irreducible if and only if \(n\) is prime. Utilizing the generalized Binet-like formula, we prove a stronger result for the \(r\)-Fibonacci polynomials when \(r\geq 3\).

**Theorem 2.6**.: \(F_{n}^{[r]}(x_{1},\ldots,x_{r})\) _is irreducible over \(\mathbb{C}\) for all \(n\geq r\geq 3\)._

Proof.: First we show that \(F_{n+r-3}^{[r]}\) is irreducible over \(\mathbb{C}\) whenever \(F_{n}^{[3]}\) is irreducible over \(\mathbb{C}\). Through induction on \(n\), we can see that the polynomial \(F_{n}^{[r]}(x_{1},x_{2}^{2},x_{3}^{3},\ldots,x_{r}^{r})\) is a homogeneous polynomial of degree \(n-r+1\). Suppose \(F_{n+r-3}^{[r]}\) is reducible over \(\mathbb{C}\). Then \(F_{n+r-3}^{[r]}(x_{1},x_{2}^{2},\ldots,x_{r}^{r})=g(x_{1},\ldots,x_{r})h(x_{1},\ldots,x_{r})\) where \(g,h\in\mathbb{C}[x_{1},\ldots,x_{r}]\) are non-constant, homogeneous polynomials. Thus \(g(x_{1},x_{2},x_{3},0,\ldots,0)h(x_{1},x_{2},x_{3},0,\ldots,0)=F_{n}^{[3]}(x_{1},x_{2}^{2},x_{3}^{3})\), where \(g(x_{1},x_{2},x_{3},0,\ldots,0)\) and \(h(x_{1},x_{2},x_{3},0,\ldots,0)\) cannot be constants, so \(F_{n}^{[3]}(x_{1},x_{2}^{2},x_{3}^{3})\) is therefore reducible. This implies that \(F_{n}^{[3]}\) is reducible over \(\mathbb{C}\). Thus, it is enough to show that \(F_{n}^{[3]}\) is irreducible for \(n\geq 3\). By **Corollary 2.4**, the Binet form of the \(3\)-Fibonacci polynomials is given by

\[F_{n}^{[3]}(x_{1},x_{2},x_{3})=\sum_{i+j+k=n-2}\lambda_{1}^{i}\lambda_{2}^{j}\lambda_{3}^{k} \tag{2.4}\]

where the \(\lambda_{i}\) are the solutions of the characteristic polynomial \(z^{3}-x_{1}z^{2}-x_{2}z-x_{3}=0\). If \(\lambda_{1}=s\), \(\lambda_{2}=t\), \(\lambda_{3}=u\), we can solve for the \(x_{i}\) in terms of \(s,t,u\), so long as there are \(3\) distinct eigenvalues, and equation (2.4) holds:

\[z^{3}-x_{1}z^{2}-x_{2}z-x_{3}=(z-s)(z-t)(z-u)=z^{3}-(s+t+u)z^{2}-(-st-su-tu)z-stu\]

\[\implies x_{1}=s+t+u\quad x_{2}=-st-su-tu\quad x_{3}=stu\]

Since this parameterization of \(x_{1},x_{2}\), and \(x_{3}\) satisfies no algebraic relation, the resulting polynomial in \(s,t,u\) is irreducible if and only if \(F_{n}^{[3]}\) is irreducible. The irreducibility of \(\sum_{i+j+k=n-2}s^{i}t^{j}u^{k}\) over \(\mathbb{C}\) for all \(n\geq 3\) is proved in [3], and thus \(F_{n}^{[3]}\) is irreducible for all \(n\geq 3\).

## 3. Partitions

In this section, we connect the \(r\)-Fibonacci polynomials to integer partitions and derive an associated explicit combinatorial formula for \(F_{n}^{[r]}\).
To begin, an integer partition is denoted as \((1^{a_{1}},2^{a_{2}},3^{a_{3}},\ldots)\), where \(a_{i}\) is the multiplicity of \(i\) in the multiset. Let \(P_{n}(r)\) be the set of all partitions of \(n\) with elements no greater than \(r\). Lastly, we refer to partitions \(\rho\in P_{n}(r)\) as partitions of size \(n\); \(|\rho|=n\). To demonstrate the connection between the \(r\)-Fibonacci polynomials and \(P_{n}(r)\), we define the following function.

**Definition 3.1**.: _Let \(G=\sum_{(a_{1},a_{2},\ldots,a_{r})\in I}c_{I}x_{1}^{a_{1}}x_{2}^{a_{2}}\cdots x_{r}^{a_{r}}\in\mathbb{Z}[x_{1},\ldots,x_{r}]\). Then \(\Omega\) is defined by_

\[\Omega(G)=\{(1^{a_{1}},2^{a_{2}},\ldots,r^{a_{r}})\mid(a_{1},a_{2},\ldots,a_{r})\in I\}\]

For example, \(\Omega(x_{1}^{3}+x_{1}x_{2})=\Omega(x_{1}^{3})\cup\Omega(x_{1}x_{2})=\{(1^{3}),(1,2)\}\) generates two partitions of \(3\). Note that if \(G,H\in\mathbb{Z}[x_{1},x_{2},\ldots,x_{r}]\) have positive integer coefficients,

\[\Omega(G+H)=\Omega(G)\cup\Omega(H).\]

**Theorem 3.2**.: _For \(n\geq r\), \(\Omega(F_{n}^{[r]})=P_{n-r+1}(r)\)._

Proof.: We proceed with strong induction on \(n\), with the base case given by \(n=r\):

\[\Omega(F_{r}^{[r]})=\Omega(x_{1})=\{(1)\}=P_{1}(r)\]

Assume equality holds for integers between \(r\) and \(n\). By the properties of \(\Omega\),

\[\Omega(F_{n+1}^{[r]})=\bigcup_{i=1}^{r}\Omega(x_{i}F_{n-i+1}^{[r]}) \tag{3.1}\]

Observe that by the inductive hypothesis, for any partition \(\rho\in\Omega(x_{i}F_{n-i+1}^{[r]})\), \(|\rho|=i+(n-r+2-i)=n-r+2\). Therefore all partitions in \(\Omega(F_{n+1}^{[r]})\) have size \(n-r+2\), so \(\Omega(F_{n+1}^{[r]})\subseteq P_{n-r+2}(r)\). We must now show that \(P_{n-r+2}(r)\subseteq\Omega(F_{n+1}^{[r]})\). Let \(\rho_{i}\in P_{n-r+2}(r)\). Then \(\rho_{i}=(i,\rho^{\prime})\) where \(1\leq i\leq r\) and \(\rho^{\prime}\in P_{n-r+2-i}(r)=\Omega(F_{n+1-i}^{[r]})\). Therefore, \(\rho_{i}\in\Omega(x_{i}F_{n+1-i}^{[r]})\). Thus, by equation (3.1), \(\rho_{i}\in\Omega(F_{n+1}^{[r]})\). By double inclusion, \(P_{n-r+2}(r)=\Omega(F_{n+1}^{[r]})\).

Using this characterization, we derive the following combinatorial formula for the \(r\)-Fibonacci polynomials.

**Theorem 3.3**.:

\[F_{n}^{[r]}=\sum_{\begin{subarray}{c}\alpha_{1},\ldots,\alpha_{r}\geq 0\\ \alpha_{1}+2\alpha_{2}+\cdots+r\alpha_{r}=n-r+1\end{subarray}}\binom{\alpha_{1}+\alpha_{2}+\cdots+\alpha_{r}}{\alpha_{1},\alpha_{2},\ldots,\alpha_{r}}x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\cdots x_{r}^{\alpha_{r}}\]

Proof.: By **Theorem 3.2**, \(F_{n}^{[r]}\) can be written in the form

\[F_{n}^{[r]}=\sum_{\begin{subarray}{c}\alpha_{1},\ldots,\alpha_{r}\geq 0\\ \sum_{i=1}^{r}i\alpha_{i}=n-r+1\end{subarray}}c_{n}(\alpha_{1},\ldots,\alpha_{r})x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\cdots x_{r}^{\alpha_{r}}\]

where \(c_{n}(\alpha_{1},\ldots,\alpha_{r})\in\mathbb{Z}^{+}\) denotes the corresponding coefficient. We proceed by induction. When \(0\leq n<r-1\), the formula is true since the sum index is the empty set. For \(n=r-1\), we have that \(F_{r-1}^{[r]}=1=\binom{0}{0,\ldots,0}\). Thus the formula works for the first \(r\) cases. Assume that for \(n\leq k\), \(c_{n}(\alpha_{1},\ldots,\alpha_{r})=\binom{\alpha_{1}+\cdots+\alpha_{r}}{\alpha_{1},\ldots,\alpha_{r}}\).
Then, by the recursive definition of \(F_{n}^{[r]}\),

\[c_{k+1}(\alpha_{1},\ldots,\alpha_{r})=\sum_{i=1}^{r}c_{k-i+1}(\alpha_{1},\ldots,\alpha_{i}-1,\alpha_{i+1},\ldots,\alpha_{r})=\sum_{i=1}^{r}\frac{\alpha_{i}(\alpha_{1}+\cdots+\alpha_{r}-1)!}{\alpha_{1}!\cdots\alpha_{r}!}=(\alpha_{1}+\cdots+\alpha_{r}-1)!\sum_{i=1}^{r}\frac{\alpha_{i}}{\alpha_{1}!\cdots\alpha_{r}!}=\frac{(\alpha_{1}+\cdots+\alpha_{r})!}{\alpha_{1}!\cdots\alpha_{r}!}\]

**Remark**.: _The coefficients of the monomials in \(F_{n}^{[r]}\) correspond precisely to the number of ways to rearrange the elements of the corresponding partition._

By **Theorem 3.3**, the \(r\)-Fibonacci polynomials can be represented with a more explicit sum index by using iterated sums.

**Corollary 3.4**.: _Letting \(\alpha_{1}=n-r+1-\sum_{k=2}^{r}k\alpha_{k}\), we have_

\[F_{n}^{[r]}=\sum_{\alpha_{r}=0}^{\left\lfloor\frac{n-r+1}{r}\right\rfloor}\ \sum_{\alpha_{r-1}=0}^{\left\lfloor\frac{n-r+1-r\alpha_{r}}{r-1}\right\rfloor}\cdots\sum_{\alpha_{2}=0}^{\left\lfloor\frac{n-r+1-\sum_{k=3}^{r}k\alpha_{k}}{2}\right\rfloor}\binom{\alpha_{1}+\cdots+\alpha_{r}}{\alpha_{1},\ldots,\alpha_{r}}x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\cdots x_{r}^{\alpha_{r}}.\]

Proof.: The iterated sums index over all tuples \((\alpha_{2},\ldots,\alpha_{r})\) such that \(\sum_{k=2}^{r}k\alpha_{k}\leq n-r+1\). Thus, by letting \(\alpha_{1}=(n-r+1)-\sum_{k=2}^{r}k\alpha_{k}\), we see that the summation indexes over all partitions of \(n-r+1\).

**Remark**.: _This generalizes the combinatorial sum formula in [6]._

## 4. Generating Functions

The generating function for the Fibonacci numbers \(f_{n}\) is known to be

\[\sum_{n=1}^{\infty}f_{n}x^{n}=\frac{x}{1-x-x^{2}} \tag{4.1}\]

for \(|x+x^{2}|<1\), with \(f_{0}=0\) and \(f_{1}=1\). In this section, we develop a similar generating function for the \(r\)-Fibonacci polynomials and derive properties and identities connecting the \(r\)-Fibonacci polynomials to standard combinatorial sequences.

### Generating Function of the \(r\)-Fibonacci Polynomials

**Theorem 4.1**.: _Let \(|x_{1}z|+|x_{2}z^{2}|+\cdots+|x_{r}z^{r}|<1\). Then the generating function of the \(r\)-Fibonacci polynomials converges to_

\[\sum_{n=0}^{\infty}F_{n+r-1}^{[r]}(x_{1},\ldots,x_{r})z^{n}=\frac{1}{1-x_{1}z-x_{2}z^{2}-\cdots-x_{r}z^{r}}\]

Proof.: Because \(|x_{1}z|+|x_{2}z^{2}|+\cdots+|x_{r}z^{r}|<1\), the series

\[\frac{1}{1-\sum_{i=1}^{r}x_{i}z^{i}}=\sum_{n=0}^{\infty}\left(\sum_{i=1}^{r}x_{i}z^{i}\right)^{n}\]

converges absolutely. By rearranging the sum, we find that

\[\sum_{n=0}^{\infty}\left(\sum_{i=1}^{r}x_{i}z^{i}\right)^{n}=1+(x_{1}z+x_{2}z^{2}+\cdots+x_{r}z^{r})+(x_{1}z+x_{2}z^{2}+\cdots+x_{r}z^{r})^{2}+\cdots=1+(x_{1})z+(x_{1}^{2}+x_{2})z^{2}+(x_{1}^{3}+2x_{1}x_{2}+x_{3})z^{3}+\cdots=\sum_{n=0}^{\infty}\left(\sum_{\begin{subarray}{c}\alpha_{1},\ldots,\alpha_{r}\geq 0\\ \alpha_{1}+2\alpha_{2}+\cdots+r\alpha_{r}=n\end{subarray}}\binom{\alpha_{1}+\cdots+\alpha_{r}}{\alpha_{1},\ldots,\alpha_{r}}x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\cdots x_{r}^{\alpha_{r}}\right)z^{n}=\sum_{n=0}^{\infty}F_{n+r-1}^{[r]}(x_{1},\ldots,x_{r})z^{n}\]

**Remark**.: _Equation (4.1) is a special case of **Theorem 4.1**, where \(r=2\) and \(x_{1}=x_{2}=1\)._

Using a similar argument, an analogous generating function can be written in terms of the \(r\)-Fibonacci polynomials in the infinite-variate case, where the number of variables grows with \(n\).
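Before turning to the infinite-variate case, Theorem 4.1 can be sanity-checked by comparing Taylor coefficients of the rational function with the recursion; this sketch reuses the hypothetical `r_fibonacci` helper from Section 1:

```python
import sympy as sp

z = sp.Symbol("z")
x1, x2, x3 = sp.symbols("x1:4")
gf = 1 / (1 - x1 * z - x2 * z**2 - x3 * z**3)          # r = 3 generating function
series = sp.expand(sp.series(gf, z, 0, 6).removeO())
for n in range(6):
    # Coefficient of z^n should equal F_{n+r-1}^{[r]} = F_{n+2}^{[3]}.
    assert sp.expand(series.coeff(z, n) - r_fibonacci(n + 2, 3)) == 0
print("coefficients match F_{n+2}^{[3]} for n < 6")
```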
Using a similar argument, an analogous generating function can be written in terms of the \(r\)-Fibonacci polynomials in the infinite-variate case, where we take \(r=n\). **Theorem 4.2**.: _Let \(g(z)=\sum_{k=1}^{\infty}c_{k}z^{k}\) with \(|g(z)|<1\). Then_ \[\sum_{n=1}^{\infty}F_{2n-1}^{[n]}(c_{1},...c_{n})z^{n}=\frac{g(z)}{1-g(z)}\] Proof.: The argument follows as in **Theorem 4.1**, but because there are always exactly \(n\) variables in the coefficient of \(z^{n}\) in the infinite-variate case, we set \(r=n\). \[1+\sum_{n=1}^{\infty}F_{2n-1}^{[n]}(c_{1},...c_{n})z^{n}=\frac{1}{1-g(z)}\] \[\sum_{n=1}^{\infty}F_{2n-1}^{[n]}(c_{1},...c_{n})z^{n}=\frac{g(z)}{1-g(z)}\] ### Applications of the Generating Function Using **Theorems 4.1** and **4.2**, we will derive identities and generating functions of other sequences. **Corollary 4.3**.: _If \(|x_{1}|+...+|x_{r}|<1\), then \(\lim_{n\to\infty}F_{n}^{[r]}=0\)._ Proof.: By setting \(z=1\) in **Theorem 4.1**, we know that \(\sum_{n=0}^{\infty}F_{n+r-1}^{[r]}(x_{1},...x_{r})\) converges for \(|x_{1}|+...+|x_{r}|<1\). Thus \(\lim_{n\to\infty}F_{n}^{[r]}=0\). **Theorem 4.4**.: _Let \(f_{n}=F_{n}^{[2]}(1,1)\) be the \(n\)th Fibonacci number, \(p_{n}\) be the \(n\)th Pell number, and \(-1<z<0\). Then,_ \[\sum_{n=1}^{\infty}p_{n}z^{n}=\sum_{n=1}^{\infty}F_{2n-1}^{[n]}(f_{1},f_{2},\ldots,f_{n})z^{n}\] Proof.: For \(-1<z<0\), equation (4.1) gives us \[\left|\sum_{n=0}^{\infty}f_{n}z^{n}\right|=\left|\frac{z}{1-z-z^{2}}\right|<1\] By **Theorem 4.2**, \[\frac{\sum_{n=1}^{\infty}f_{n}z^{n}}{1-\sum_{n=1}^{\infty}f_{n}z^{n}} =\frac{\frac{z}{1-z-z^{2}}}{1-\frac{z}{1-z-z^{2}}}=\frac{z}{1-2z-z^{2}}=\sum_{n=1}^{\infty}F_{n}^{[2]}(2,1)z^{n}=\sum_{n=1}^{\infty}p_{n}z^{n}\] \[=\sum_{n=1}^{\infty}F_{2n-1}^{[n]}(f_{1},f_{2},\ldots,f_{n})z^{n}\] **Remark**.: \(p_{n}\neq F_{2n-1}^{[n]}(f_{1},f_{2},\ldots,f_{n})\) _in general. The theorem is only stated for \(-1<z<0\); thus the coefficients of the two generating series are not claimed to be equal._
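The identity of **Theorem 4.4** is easy to test numerically; the following sketch (ours, checking the equality of the two generating functions at one admissible point rather than proving anything) evaluates both sides at a sample \(z\in(-1,0)\).

```python
# Theorem 4.4 at a sample z in (-1, 0): the Pell generating function
# z/(1 - 2z - z^2) should equal g/(1 - g), where g(z) = z/(1 - z - z^2)
# is the Fibonacci generating function fed into Theorem 4.2.
z = -0.3
g = z / (1 - z - z**2)
assert abs(g) < 1                      # hypothesis of Theorem 4.2 holds here
pell = z / (1 - 2*z - z**2)
assert abs(pell - g / (1 - g)) < 1e-12
print("both sides agree at z =", z, ":", pell)
```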
We can also use the generating function to show that the \(r\)-Fibonacci polynomials can be manipulated to generate preference orderings, which are ordered partitions of a set of size \(n\). The Fubini numbers, denoted \(a_{n}\), are defined to be the number of preference orderings of size \(n\). **Theorem 4.5**.: _Let \(\mathcal{O}_{n}(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})\) denote the number of preference orderings for size \(n\) with \(\alpha_{i}\) partitions of size \(i\). Then,_ \[n!F_{n+r-1}^{[r]}\left(x_{1},\frac{1}{2!}x_{2},\frac{1}{3!}x_{3},\ldots,\frac{1}{r!}x_{r}\right)=\sum_{\begin{subarray}{c}\alpha_{1},\ldots\alpha_{r}\geq 0\\ \sum_{i=1}^{r}i\alpha_{i}=n\end{subarray}}\mathcal{O}_{n}(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\ldots x_{r}^{\alpha_{r}}\] Proof.: For \(z<\ln(2)\), the exponential generating function for the Fubini numbers is given by \[\sum_{n=0}^{\infty}\frac{a_{n}}{n!}z^{n}=\frac{1}{2-e^{z}}=\frac{1}{1-\sum_{i=1}^{\infty}\frac{1}{i!}z^{i}} \tag{4.2}\] By **Theorem 4.2**, \[\frac{1}{1-\sum_{i=1}^{\infty}\frac{1}{i!}x_{i}z^{i}}=1+\sum_{n=1}^{\infty}F_{2n-1}^{[n]}\left(x_{1},\frac{1}{2!}x_{2},\frac{1}{3!}x_{3},\ldots,\frac{1}{n!}x_{n}\right)z^{n}\] Utilizing the composition and combinatorial ideas in the proof of (4.2) (see [5]), we find that adding the \(x_{i}\) coefficients distinguishes between partitions of different sizes. Thus, \[\frac{1}{1-\sum_{i=1}^{\infty}\frac{1}{i!}x_{i}z^{i}}=\sum_{n=0}^{\infty}\left(\sum_{\begin{subarray}{c}\alpha_{1},\ldots\alpha_{r}\geq 0\\ \sum_{i=1}^{r}i\alpha_{i}=n\end{subarray}}\mathcal{O}_{n}(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\ldots x_{r}^{\alpha_{r}}\right)z^{n}\] By setting \(x_{i}=0\) for \(i>r\) and comparing coefficients of the generating functions we get the desired equality: \[n!F_{n+r-1}^{[r]}\left(x_{1},\frac{1}{2!}x_{2},\frac{1}{3!}x_{3},\ldots,\frac{1}{r!}x_{r}\right)=\sum_{\begin{subarray}{c}\alpha_{1},\ldots\alpha_{r}\geq 0\\ \sum_{i=1}^{r}i\alpha_{i}=n\end{subarray}}\mathcal{O}_{n}(\alpha_{1},\alpha_{2},\ldots,\alpha_{r})x_{1}^{\alpha_{1}}x_{2}^{\alpha_{2}}\ldots x_{r}^{\alpha_{r}}\] **Corollary 4.6**.: _Let \(a_{n}^{r}\) denote the \(n\)th Fubini number restricted by \(r\): the number of preference orderings of a set of size \(n\) whose blocks have size at most \(r\). Then,_ \[a_{n}^{r}=F_{n+r-1}^{[r]}\left(1,\frac{1}{2!},\frac{1}{3!},\ldots,\frac{1}{r!}\right)n!\] Proof.: Setting \(x_{1}=x_{2}=\cdots=x_{r}=1\) in **Theorem 4.5** gives the desired equality by the definition of \(a_{n}^{r}\). **Remark**.: _Since \(a_{n}=a_{n}^{n}\),_ \[a_{n}=F_{2n-1}^{[n]}\left(1,\frac{1}{2!},\frac{1}{3!},\ldots,\frac{1}{n!}\right)n!.\] _In the following section, we will derive the following identity in terms of the complete ordinary Bell polynomials:_ \[a_{n}=\hat{B}_{n}\left(1,\frac{1}{2!},\frac{1}{3!},\ldots,\frac{1}{n!}\right)n!\] ## 5. Bell Polynomials In this section, the explicit sum formula and the generating function are used to establish an identity with the Bell polynomials, which is then used to give a proof of a combinatorial identity. ### Ordinary Bell Polynomials Firstly, recall the definition of the ordinary Bell polynomials. **Definition 5.1**.: _The **Partial Ordinary Bell Polynomial** is defined as_ \[\hat{B}_{n,k}=\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n-k+1}=k\\ j_{1}+2j_{2}+\cdots+(n-k+1)j_{n-k+1}=n\end{subarray}}\frac{k!}{j_{1}!j_{2}!\ldots j_{n-k+1}!}x_{1}^{j_{1}}x_{2}^{j_{2}}\ldots x_{n-k+1}^{j_{n-k+1}}\] **Definition 5.2**.: _The **Complete Ordinary Bell Polynomial** is defined as_ \[\hat{B}_{n}=\sum_{k=1}^{n}\hat{B}_{n,k}\] Note that the coefficients in the definition of \(\hat{B}_{n,k}\) are almost identical to those of \(F_{n}^{[r]}\) in **Theorem 3.3**. The next theorem proves how they are related. **Theorem 5.3**.: \[\hat{B}_{n}=F_{2n-1}^{[n]}(x_{1},\ldots,x_{n})\] Proof.: \[\hat{B}_{n}=\sum_{k=1}^{n}\hat{B}_{n,k}=\sum_{k=1}^{n}\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n-k+1}=k\\ j_{1}+2j_{2}+\cdots+(n-k+1)j_{n-k+1}=n\end{subarray}}\frac{k!}{j_{1}!j_{2}!\ldots j_{n-k+1}!}x_{1}^{j_{1}}x_{2}^{j_{2}}\ldots x_{n-k+1}^{j_{n-k+1}}\] \[=\sum_{j_{1}+2j_{2}+\cdots+nj_{n}=n}\frac{(j_{1}+j_{2}+\cdots+j_{n})!}{j_{1}!j_{2}!\ldots j_{n}!}x_{1}^{j_{1}}x_{2}^{j_{2}}\ldots x_{n}^{j_{n}}\] \[=F_{2n-1}^{[n]}(x_{1},\ldots,x_{n})\] **Remark**.: _By letting \(x_{i}=0\) for all \(i>r\),_ \[\hat{B}_{n}(x_{1},\ldots,x_{r},0,\dots)=F_{n+r-1}^{[r]}(x_{1},\ldots,x_{r})\] We see then that the \(r\)-Fibonacci polynomials are generalizations of the complete ordinary Bell polynomials.
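A numerical spot-check of **Theorem 5.3** (our sketch, not the paper's): \(F_{2n-1}^{[n]}\) is evaluated through the recursion (with \(F_{k}^{[n]}=0\) for \(k<n-1\) and \(F_{n-1}^{[n]}=1\)), while \(\hat{B}_{n}\) is evaluated through the collapsed sum appearing in the proof, so the two computations are independent of each other.

```python
import random
from math import factorial

def F(n, r, x):
    """F_n^{[r]}(x_1..x_r) by the recursion F_m = sum_i x_i F_{m-i}."""
    vals = [0.0]*(max(n, r-1) + 1)
    vals[r-1] = 1.0
    for m in range(r, n + 1):
        vals[m] = sum(x[i]*vals[m-1-i] for i in range(r))
    return vals[n]

def ordinary_bell(n, x):
    """Complete ordinary Bell polynomial via the collapsed sum over all
    (j_1..j_n) with j_1 + 2 j_2 + ... + n j_n = n."""
    total = 0.0
    def rec(i, rem, js):
        nonlocal total
        if i > n:
            if rem == 0:
                coef = factorial(sum(js))
                for j in js:
                    coef //= factorial(j)
                term = float(coef)
                for xi, j in zip(x, js):
                    term *= xi**j
                total += term
            return
        for j in range(rem // i + 1):
            rec(i + 1, rem - i*j, js + [j])
    rec(1, n, [])
    return total

n = 6
x = [random.uniform(0.5, 2.0) for _ in range(n)]
assert abs(ordinary_bell(n, x) - F(2*n - 1, n, x)) < 1e-8
print("B_hat_n(x) == F_{2n-1}^{[n]}(x) at a random point, n =", n)
```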
### Exponential Bell Polynomials Recall the definition of the exponential Bell polynomial. **Definition 5.4**.: _The **Partial Exponential Bell Polynomial** is defined as_ \[B_{n,k}=\sum_{\begin{subarray}{c}j_{1}+j_{2}+\cdots+j_{n-k+1}=k\\ j_{1}+2j_{2}+\cdots+(n-k+1)j_{n-k+1}=n\end{subarray}}\frac{n!}{j_{1}!j_{2}!\ldots j_{n-k+1}!}\left(\frac{x_{1}}{1!}\right)^{j_{1}}\left(\frac{x_{2}}{2!}\right)^{j_{2}}\cdots\left(\frac{x_{n-k+1}}{(n-k+1)!}\right)^{j_{n-k+1}}\] **Theorem 5.5**.: \[\sum_{k=1}^{n}k!B_{n,k}(x_{1},2x_{2},3!x_{3},\ldots,r!x_{r},0,\dots)=n!F_{n+r-1}^{[r]}\] Proof.: By taking the \(n\)th derivative of the generating function for the \(r\)-Fibonacci polynomials, we find that \[\sum_{i=0}^{\infty}\frac{(n+i)!}{i!}F_{i+n+r-1}^{[r]}z^{i}=\frac{d^{n}}{dz^{n}}\left(\frac{1}{g(z)}\right)\] where \(g(z)=1-\sum_{i=1}^{r}x_{i}z^{i}\). Recall that Faa di Bruno's formula (see [1]) tells us \[\frac{d^{n}}{dz^{n}}f(g(z))=\sum_{k=1}^{n}f^{(k)}(g(z))B_{n,k}(g^{\prime}(z),g^{\prime\prime}(z),\ldots,g^{(n-k+1)}(z))\] If we set \(f(z)=\frac{1}{z}\), so that \(f^{(k)}(z)=(-1)^{k}k!/z^{k+1}\), then we find \[\sum_{i=0}^{\infty}\frac{(n+i)!}{i!}F_{i+n+r-1}^{[r]}z^{i}=\sum_{k=1}^{n}\frac{(-1)^{k}k!}{g(z)^{k+1}}B_{n,k}(g^{\prime}(z),\dots,g^{(n-k+1)}(z))\] Setting \(z=0\), where \(g(0)=1\) and \(g^{(i)}(0)=-i!\,x_{i}\) for \(i\leq r\) (and \(0\) otherwise), and using \(B_{n,k}(-y_{1},-y_{2},\ldots)=(-1)^{k}B_{n,k}(y_{1},y_{2},\ldots)\), gives \[n!F_{n+r-1}^{[r]}=\sum_{k=1}^{n}(-1)^{k}k!B_{n,k}(-x_{1},-2x_{2},-3!x_{3},\dots,-r!x_{r},0,\dots)=\sum_{k=1}^{n}k!B_{n,k}(x_{1},2x_{2},3!x_{3},\dots,r!x_{r},0,\dots)\] Utilizing the above identity, we demonstrate a new proof of a known relation between the Fubini numbers and the Stirling numbers of the second kind. **Corollary 5.6**.: _Let \(a_{n}\) denote the Fubini numbers. Then,_ \[\sum_{k=1}^{n}k!\genfrac{\{}{\}}{0.0pt}{}{n}{k}=a_{n}\] Proof.: The remark following **Corollary 4.6** gives the equality \[n!F_{2n-1}^{[n]}\left(\frac{1}{1!},\frac{1}{2!},\frac{1}{3!},\dots,\frac{1}{n!}\right)=a_{n}\] Additionally, we have the following identity between the partial exponential Bell polynomials and Stirling numbers (see [4]): \[B_{n,k}(1,1,...1)=\genfrac{\{}{\}}{0.0pt}{}{n}{k}\] By **Theorem 5.5**, we find \[a_{n}=n!F_{2n-1}^{[n]}\left(\frac{1}{1!},\frac{1}{2!},\frac{1}{3!},\dots,\frac{1}{n!}\right)=\sum_{k=1}^{n}k!B_{n,k}(1,1,1,\dots)=\sum_{k=1}^{n}k!\genfrac{\{}{\}}{0.0pt}{}{n}{k}\] This corollary shows how the \(r\)-Fibonacci polynomials can be used as a tool to prove combinatorial identities in a new manner.
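To close, here is a small numeric check of **Corollary 5.6** (our sketch; the Fubini numbers are generated independently through the standard recurrence \(a_{n}=\sum_{k=1}^{n}\binom{n}{k}a_{n-k}\) with \(a_{0}=1\), so all three quantities are computed by different routes).

```python
from math import comb, factorial

def fubini(n):
    """Fubini numbers via a_n = sum_{k=1}^{n} C(n,k) a_{n-k}, a_0 = 1."""
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(comb(m, k)*a[m-k] for k in range(1, m + 1)))
    return a[n]

def stirling2(n, k):
    """Stirling numbers of the second kind via the usual recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k*stirling2(n-1, k) + stirling2(n-1, k-1)

def F(n, r, x):
    """F_n^{[r]}(x_1..x_r) by the recursion, as in the earlier sketch."""
    vals = [0.0]*(max(n, r-1) + 1)
    vals[r-1] = 1.0
    for m in range(r, n + 1):
        vals[m] = sum(x[i]*vals[m-1-i] for i in range(r))
    return vals[n]

for n in range(1, 8):
    lhs = sum(factorial(k)*stirling2(n, k) for k in range(1, n + 1))
    rhs = factorial(n)*F(2*n - 1, n, [1.0/factorial(i) for i in range(1, n + 1)])
    assert lhs == fubini(n) and abs(rhs - lhs) < 1e-6*lhs
print("sum_k k! S(n,k) = a_n = n! F_{2n-1}^{[n]}(1,1/2!,...,1/n!) for n = 1..7")
```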
2301.13447
Benchmarking Model Predictive Control Algorithms in Building Optimization Testing Framework (BOPTEST)
We present a data-driven modeling and control framework for physics-based building emulators. Our approach consists of: (a) Offline training of differentiable surrogate models that accelerate model evaluations, provide cost-effective gradients, and maintain good predictive accuracy for the receding horizon in Model Predictive Control (MPC), and (b) Formulating and solving nonlinear building HVAC MPC problems. We extensively evaluate the modeling and control performance using multiple surrogate models and optimization frameworks across various test cases available in the Building Optimization Testing Framework (BOPTEST). Our framework is compatible with other modeling techniques and can be customized with different control formulations, making it adaptable and future-proof for test cases currently under development for BOPTEST. This modularity provides a path towards prototyping predictive controllers in large buildings, ensuring scalability and robustness in real-world applications.
Saman Mostafavi, Chihyeon Song, Aayushman Sharma, Raman Goyal, Alejandro Brito
2023-01-31T06:55:19Z
http://arxiv.org/abs/2301.13447v2
# A Data-Driven Modeling and Control Framework for Physics-Based Building Emulators ###### Abstract We present a data-driven modeling and control framework for physics-based building emulators. Our approach comprises: (a) Offline training of differentiable surrogate models that speed up model evaluations, provide cheap gradients, and have good predictive accuracy for the receding horizon in Model Predictive Control (MPC) and (b) Formulating and solving nonlinear building HVAC MPC problems. We extensively verify the modeling and control performance using multiple surrogate models and optimization frameworks for different available test cases in the Building Optimization Testing Framework (BOPTEST). The framework is compatible with other modeling techniques and customizable with different control formulations. The modularity makes the approach future-proof for test cases currently in development for physics-based building emulators and provides a path toward prototyping predictive controllers in large buildings. For reinforcement learning (RL) approaches, the amount of training data required is very large, and high variance and reproducibility issues mar the performance (Henderson et al., 2018). At the moment, RL algorithms remain intractable for adjustable and reproducible implementations at scale. On the other hand, most of the building MPC work (Sturzenegger et al., 2015; Mostafavi et al., 2022; Oei et al., 2020; Walker et al., 2017) considers either simple low-fidelity RC-based models, bilinear models with low accuracy, Machine Learning (ML) approaches that cannot be directly used for fast MPC implementation, or directly uses Modelica-based models with hand-tuned cost functions for nonlinear optimization of energy consumption. Such modeling and control approaches require a lot of customization for high-fidelity models with complex, hybrid, and constrained systems that use external inputs, and are therefore not suited to a robust control framework. **CONTRIBUTIONS** The main contribution of this paper is the development of a modeling and control framework for building HVAC control based on identifying differentiable models that are compatible with optimization-based nonlinear optimal control methods. We address these limitations by the following two-fold approach: first, in an offline step, we identify a differentiable surrogate model for the following nonlinear mapping: \[x_{t+1}=f(x_{t},u_{t},d_{t}) \tag{1}\] where \(x\) represents the state of the model, \(u\) the control inputs, and \(d\) the external time-varying disturbances, associated with the weather and occupancy conditions. Second, we use automatic differentiation (AD) (Paszke et al., 2017) to compute gradients for solving nonlinear model predictive control (NMPC) with box constraints for states and inputs. The details of the modeling and control approaches are discussed in Section 2 and Section 3. The individual contributions of the paper are as follows: we demonstrate how to identify a _suitable_ Neural Network (NN) to capture the dynamics of the building envelope and HVAC control system. We investigate several choices of lags for states, controls, and disturbances and provide insight into best practices. We also present different MPC formulations, assisted by AD, to maintain occupants' comfort constraints while minimizing KPIs for HVAC energy consumption. We show the customizability of the framework through the ease of using two different control approaches to solve the MPC problem.
We show that the proposed approach can be used to warm-start the receding horizon replanning for the MPC problem. In the results section, we also provide a performance comparison between different approaches for modeling and solving NMPC when operating on computationally intensive hybrid system models. We also discuss potential best practices based on desired control criteria (speed, optimality, etc.). Finally, to the best of our knowledge, the NMPC control of the BOPTEST five-zone model is the first of its kind. We believe this framework is scalable for data-driven NMPC control of BOPTEST, and potentially other physics-based building emulators, that are being developed for prototyping controllers in large building HVAC systems. ## 2 Surrogate Modeling for Building Emulator Our aim is to replace the computationally expensive nonlinear numerical simulations with alternative, fast representations for model-based control. In the context of using NNs for MPC, we believe that one should include the following criteria in their surrogate modeling process: * **Computing cost:** Small computing cost for fast iterative evaluations. * **Predictive accuracy:** Good prediction accuracy for MPC's horizon. * **Differentiability:** Fast and accurate gradient information for successive linearization, nonlinear solvers, etc., for different MPC formulations. We leverage the PyTorch (Paszke et al., 2019) modeling library to meet these goals. In this study, we consider the following cases: Linear, MLP, and Long Short-Term Memory (LSTM). The MLP has fast forward computation and good expressivity to approximate complex functions (Hornik et al., 1989). On the other hand, since BOPTEST is a partially observable MDP (POMDP), it requires lag information from states, actions, and time-varying disturbances for model fitting. This can be addressed by using an LSTM, which has proven to work well for nonlinear mappings with autoregressive features (Siami-Namini et al., 2018). While fairly simple, the linear model has the advantage of the fastest model evaluations and plug-and-play viability for fast QP solvers. ### Linear The surrogate model takes as input the states \(x\), control inputs \(u\), time-varying disturbances \(d\), and their lags over past time-steps. The output of the surrogate model is the future state prediction \(\{x_{t+1}\}\), i.e.: \[x_{t+1}=f(x_{t-M_{x}:t},u_{t-M_{u}:t},d_{t-M_{d}:t}) \tag{2}\] where \(M_{x},M_{u},M_{d}\) are the state, input, and disturbance lags, respectively. Since the choices of lags are application-dependent, we discuss this further in the results section. Here, \(f\) is linearized as follows: \[x_{t+1}=\sum_{k=0}^{M_{x}}A_{k}x_{t-k}+\sum_{k=0}^{M_{u}}B_{k}u_{t-k}+\sum_{k=0}^{M_{d}}C_{k}d_{t-k} \tag{3}\] where \(A_{k}=\nabla_{x}f\in\mathbb{R}^{N_{x}\times N_{x}},B_{k}=\nabla_{u}f\in\mathbb{R}^{N_{x}\times N_{u}}\) and \(C_{k}=\nabla_{d}f\in\mathbb{R}^{N_{x}\times N_{d}}\) are learnable parameter matrices for state, control input, and disturbance, respectively. ### MLP The lagged input structure of Equation 2 also applies here. The forward computation of the MLP is written as follows: \[\begin{split} h_{0}&=[x_{t-M_{x}:t},u_{t-M_{u}:t},d_{t-M_{d}:t}]\\ h_{k+1}&=\tanh(W_{k}h_{k}+b_{k}),\qquad\quad k=\{0,...,K-1\}\\ x_{t+1}&=o_{t+1}=W_{K}h_{K}+b_{K}\end{split} \tag{4}\] where \(h_{k}\in\mathbb{R}^{l}\) is the hidden unit of layer \(k\), and \(W_{k}\) and \(b_{k}\) are the weight parameters of layer \(k\).
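To make Eq. (4) concrete, here is a minimal PyTorch sketch of a lagged-input MLP surrogate; the class name, tensor shapes, and sizes are our own illustrative choices layered on the equations above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class MLPSurrogate(nn.Module):
    """Sketch of Eq. (4): lagged states, inputs, and disturbances are
    flattened into h_0, passed through tanh hidden layers, and mapped
    to the next-state prediction x_{t+1}."""
    def __init__(self, n_x, n_u, n_d, lag_x, lag_u, lag_d,
                 hidden=256, n_layers=4):
        super().__init__()
        in_dim = n_x*(lag_x+1) + n_u*(lag_u+1) + n_d*(lag_d+1)
        layers = []
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.Tanh()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, n_x))   # output layer: x_{t+1}
        self.net = nn.Sequential(*layers)

    def forward(self, x_lags, u_lags, d_lags):
        # each argument has shape (batch, n_lags+1, n_signals)
        h0 = torch.cat([x_lags.flatten(1), u_lags.flatten(1),
                        d_lags.flatten(1)], dim=-1)
        return self.net(h0)

# illustrative sizes only; e.g. lags (1, 5, 5) as selected later in the paper
model = MLPSurrogate(n_x=6, n_u=12, n_d=3, lag_x=1, lag_u=5, lag_d=5)
x_next = model(torch.randn(8, 2, 6), torch.randn(8, 6, 12), torch.randn(8, 6, 3))
```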
### LSTM The forward computation of the LSTM is written as follows: \[\begin{split} h_{t},c_{t}&=MLP_{\text{enc}}(x_{t-M_{x}:t},u_{t-M_{u}:t-1},d_{t-M_{d}:t})\\ i_{t}&=\sigma(W_{ii}u_{t}+b_{ii}+W_{hi}h_{t}+b_{hi})\\ f_{t}&=\sigma(W_{if}u_{t}+b_{if}+W_{hf}h_{t}+b_{hf})\\ g_{t}&=\tanh(W_{ig}u_{t}+b_{ig}+W_{hg}h_{t}+b_{hg})\\ o_{t}&=\sigma(W_{io}u_{t}+b_{io}+W_{ho}h_{t}+b_{ho})\\ c_{t+1}&=f_{t}\odot c_{t}+i_{t}\odot g_{t}\\ h_{t+1}&=o_{t}\odot\tanh(c_{t+1})\\ x_{t+1}&=MLP_{\text{dec}}(h_{t+1})\end{split} \tag{5}\] where \(h_{t}\) is the hidden state, \(c_{t}\) is the cell state, and \(i_{t},f_{t},g_{t}\) and \(o_{t}\) are the input, forget, cell, and output gates, respectively. \(\sigma(\cdot)\) is the sigmoid function, \(\odot\) is the Hadamard product, and \(MLP_{\text{enc}}\) and \(MLP_{\text{dec}}\) are an MLP encoder and decoder, respectively. ## 3 Control Problem Formulation Consider the discrete-time nonlinear dynamical system: \[x_{t+1}=f(x_{t},u_{t},d_{t}), \tag{6}\] where \(x_{t}\in\mathbb{R}^{n_{x}}\) and \(u_{t}\in\mathbb{R}^{n_{u}}\) correspond to the state and control vectors at time \(t\) and \(d_{t}\in\mathbb{R}^{n_{d}}\) is the set of contextual variables/external inputs. The optimal control problem is to find the optimal control policy that minimizes the cumulative cost: \[\min_{u_{t}}\sum_{t=0}^{T}c_{t}(x_{t},u_{t},d_{t}) \tag{7}\] \[\text{Subject to}:\ x_{t+1}=f(x_{t},u_{t},d_{t}),\] (8) \[u_{t}^{l}\leq u_{t}\leq u_{t}^{u}, \tag{9}\] for given \(x_{0}\), and where \(c_{t}(\cdot)\) is the instantaneous cost function given as: \[c_{t}(\cdot)=P_{c}+P_{h}+L_{t}+\gamma P_{x}, \tag{10}\] where \(P_{c}\) and \(P_{h}\) are the total cooling and heating costs, \(L_{t}=\|\tilde{u}_{t+1}-\tilde{u}_{t}\|_{R}^{2}\) is a regularizer term, which penalizes large changes in the control inputs to avoid undesirable oscillations, and \(P_{x}=\max(x_{t}^{l}-x_{t},0)+\max(x_{t}-x_{t}^{u},0)\) enforces the occupant comfort constraints, implemented with the ReLU function and a penalty coefficient \(\gamma\). The problem also considers input box constraints with lower and upper bounds given as \([u_{t}^{l},u_{t}^{u}]\). ### Gradient Descent Method The gradient descent method is one of the most widely-used algorithms for optimizing a differentiable objective function. At each iteration, the gradient of the objective function is computed and the decision variables are updated in the direction of the computed gradient. Gradient descent algorithms have precedent across domains such as training neural networks (Schmidhuber, 2015) and solving optimal control problems (Lin et al., 2014). In this paper, we use Adam (Kingma and Ba, 2014), which has shown promising results in deep learning applications. For the input constraint (9), we use projected gradient descent, a common method for solving constrained optimization problems: after each gradient update, we project the control vector \(u_{t}\) onto the feasible region \([u_{t}^{l},u_{t}^{u}]\). Since the feasible region is a box constraint, the projected control vector is easily computed by applying a clamp function after each update of the algorithm. ### Sequential Quadratic Programming There have been numerous tools and methods developed to solve specific nonlinear optimization problems with particular structures of cost functions, equality, and inequality constraint functions. However, Sequential Quadratic Programming (SQP) remains one of the most efficient approaches to solving a general constrained nonlinear optimization problem.
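Before detailing the SQP implementation, here is a minimal PyTorch sketch of the projected-gradient solver from Section 3.1; the rollout convention, function names, and shapes are our own assumptions layered on the surrogate interface sketched earlier, not the authors' code.

```python
import torch

def solve_mpc_gdm(model, x_hist, u_hist, d_plan, u_lo, u_hi, cost_fn,
                  horizon=10, iters=100, lr=0.05, u_init=None):
    """Projected gradient descent for Eqs. (7)-(9): optimize the control
    sequence with Adam and clamp into [u_lo, u_hi] after each update.
    d_plan[t] holds the disturbance lag window for step t."""
    u = (u_init.detach().clone() if u_init is not None
         else u_lo + torch.rand(horizon, u_lo.numel())*(u_hi - u_lo))
    u.requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        xh, uh, loss = x_hist.clone(), u_hist.clone(), 0.0
        for t in range(horizon):
            uh = torch.cat([uh[1:], u[t][None]])        # append current u_t
            x_next = model(xh[None], uh[None], d_plan[t][None])[0]
            loss = loss + cost_fn(x_next, u[t])         # Eq. (10) terms
            xh = torch.cat([xh[1:], x_next[None]])      # roll the state lags
        loss.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(u_lo, u_hi)                        # projection step
    return u.detach()
```

Passing the previous solution, shifted by one step, as `u_init` gives the warm-start discussed later in the paper; swapping the Adam loop for `scipy.optimize.minimize(..., method="SLSQP", jac=...)` on the same loss and its autograd gradient would give a variant along the lines of the SQP approach described next.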
For the SQP approach, we utilize the optimization subroutine originally proposed by Kraft (1988) and as implemented in SciPy (Virtanen et al., 2020) to solve the control optimization problem described in Eqns. (7-9). The algorithm is a quasi-Newton method (using BFGS) applied to a Lagrange function consisting of the loss function and the equality and inequality constraints. In our implementation, we provide the function evaluations, which are calculated using Equation 10, and its Jacobian, computed using automatic differentiation. Instead of clamping, we pass bounds for the control inputs directly to the solver. ## 4 Results We demonstrate the effectiveness of our control framework for controlling building models in BOPTEST (Blum et al., 2021), software for simulation-based benchmarking of building HVAC control algorithms. The rest of this section details two test cases that demonstrate the results of deriving different surrogate models and discusses the subsequent control results for the control algorithms described in Section 3. ### Model Description BOPTEST emulators use Modelica (Wetter et al., 2014) to represent realistic physical dynamics. Embedded in these models are baseline control algorithms that can be overwritten using supervisory and local-loop control signals. BOPTEST uses a containerized run-time environment (RTE) which enables rapid, repeatable deployment of models. Using this feature, we stand up several instances of the models on servers and query these models to speed up data generation at scale for surrogate modeling. We also test controls on the same containers, representing _digital-twins_ of real buildings. We consider the following case studies: _BESTEST Case 900 model_ This test case is a single room with floor dimensions of 6m x 8m and a floor-to-ceiling height of 2.7m. The building is assumed to be occupied by two people from 8 am to 6 pm each day. Heating and cooling are provided to the office using an idealized four-pipe fan coil unit (FCU), presented in Figure 1. The FCU contains a fan, cooling coil, and heating coil. The fan draws room air into the HVAC unit and supplies the conditioned air back to the room. No outside air is mixed during this process. The fan has a variable speed drive serving the fan motor. The cooling coil is served by chilled water produced by a chiller and the heating coil is served by hot water produced by a gas boiler. Two different PI controllers for heating and cooling modulate the supply air temperature and fan speed to provide the cooling and heating load to the room. The schematics and control mapping are shown in Figure 1. For our supervisory MPC controller, we manipulate the supply air temperature and fan speed as control inputs to minimize the combined cooling, heating, and fan power consumption while maintaining the occupant comfort bounds. Assuming the building to be in a climate close to Denver, CO, USA, the state and input box constraints are as follows: \[21^{o}C\leq x^{T_{zone_{i},occ}}\leq 24^{o}C \tag{11}\] \[15^{o}C\leq x^{T_{zone_{i},unocc}}\leq 30^{o}C\] (12) \[0.0\leq u^{fan}\leq 1.0\] (13) \[12^{o}C\leq u^{T_{sup}}\leq 40^{o}C \tag{14}\] _Multi-zone office (ASHRAE 2006 VAVReheat)_ The test case represents the middle floor of an office building located in Chicago, IL, as described in the set of DOE Commercial Building Benchmarks for new construction (Deru et al., 2011), with weather data from TMY3 for Chicago O'Hare International Airport. The represented floor has five zones, with four perimeter zones and one core zone.
The occupied time for the HVAC system is between 6 AM and 7 PM each day. The HVAC system is a multi-zone single-duct Variable Air Volume (VAV) system with pressure-independent terminal boxes with reheat. A schematic of the system is shown in Figure 2. The cooling and heating coils are water-based, served by an air-cooled chiller and an air-to-water heat pump, respectively. A number of low-level, local-loop controllers are used to maintain the desired setpoints using the available actuators. The primary local-loop controllers are specified on the diagrams of Figure 3 as C1 to C3. C1 is responsible for maintaining the zone temperature setpoints as determined by the operating mode of the system and implements dual-maximum logic. C2 is responsible for maintaining the duct static pressure setpoint and implements a duct static pressure reset strategy. C3 is responsible for maintaining the supply air temperature setpoint as well as the minimum outside air flow rate as determined by the operating mode of the system. In this case, we assume the fan speed to be constant, and our supervisory MPC controller manipulates, at each zone, the damper position and reheat control signal to control the airflow and zone supply air temperature, respectively. In addition, the central HVAC cooling and heating units are manipulated to control the central supply air temperature. The optimization objective is to minimize the overall cooling and heating loads while maintaining the occupant comfort bounds and the central supply air temperature. The state and input box constraints are as follows: \[21^{o}C\leq x^{T_{zone_{i},occ}}\leq 24^{o}C \tag{15}\] \[15^{o}C\leq x^{T_{zone_{i},unocc}}\leq 30^{o}C\] (16) \[0.0\leq u^{dam_{i}}\leq 1.0\] (17) \[0.0\leq u^{yReaHea_{i}}\leq 1.0\] (18) \[\forall i\in\{1,2,3,4,5\}\] \[5^{o}C\leq x^{T_{sup}}\leq 20^{o}C\] (19) \[0.0\leq u^{yHea}\leq 1.0\] (20) \[0.0\leq u^{yCoo}\leq 1.0 \tag{21}\] ### System Identification We consider the three choices of models described in Section 2 for the single-zone and multi-zone cases. We describe how we sufficiently excite the system to generate data and report the training and out-of-training performance of each model. _Data generation._ For each time-step \(t=0,...,T-1\), we sample a random control input \(u_{t}\) from a uniform distribution over the feasible input space and pass the sampled control input to the BOPTEST simulation to get the next observation and disturbance. We collect the data up to time-step \(T\), and repeat this procedure \(K\) times using different initial conditions. In the BESTEST case, we choose \(K=120\), \(T=500\), and use 100 distinct trajectories as training data, 10 for validation, and 10 for test. In the multi-zone office case, we choose \(K=600\), \(T=1000\), and use 500 trajectories as the training dataset, keeping 50 for validation and 50 for test purposes. Test data, on which all results are reported, is data that the model has never been trained on. Figure 1: Control schematics of the BESTEST Case 900 model. Source: [https://ibpsa.github.io/project1-boptest/](https://ibpsa.github.io/project1-boptest/) Figure 2: Envelope, floorplan, and control schematics of the multi-zone office air simple emulator model of BOPTEST. Source: [https://ibpsa.github.io/project1-boptest/](https://ibpsa.github.io/project1-boptest/) _Hyperparameters._ The MLP consists of 4 layers with 256 nodes in each layer, and \(\tanh(\cdot)\) activation layers in-between the MLP layers.
For the LSTM model, we implement 2 layers with 256 nodes for \(MLP_{\text{enc}}\) and \(MLP_{\text{dec}}\) and choose the dimension of the hidden and cell states as 256. Mean squared error (MSE) is used for computing the training loss. For all surrogate models, we choose _Adam_ to optimize the parameters, with a learning rate of 0.001 and 1000 epochs. _Predictive performance._ Table 1 and Table 2 show the results of test performance for the single-zone and five-zone models, respectively. Losses are calculated using the average prediction error for 40 steps. For multi-step-ahead prediction, a _for_-loop is implemented in the forward propagation of the ML models. The results for the single-zone and multi-zone models demonstrate the superiority of the LSTM in prediction accuracy, although MLP performance is comparable in the five-zone case, as depicted in Figure 4. In Table 2, we compare the performance of different MLP model choices with different lag values for the state, input, and time-varying disturbances. The (5,5,5) model is the best among all choices, but the (1,5,5) model comes very close with fewer model inputs. This model depends on lags of weather data and control inputs, which we speculate is not unrelated to the lags associated with the lower-level controllers in this system. We chose (1,5,5) as a simpler, equally accurate choice. Figure 5 is a visual depiction of the predictive accuracy of the chosen MLP for surrogate modeling of the five-zone model during three distinct weather events (January, May, and August) for the core zone. Each orange trajectory is a 50-step-ahead prediction (12.5 hours) starting from the leftmost point of the trajectory. These results appear to be conclusive for deploying the model in MPC. To speed up convergence, the previously optimized control trajectory is used as the initial trajectory for warm-starting the receding horizon replanning of the MPC problem. The control results for the single-zone and multi-zone cases are reported in Table 3 and Table 4, respectively. In the single-zone case, the LSTM model performs best for control. This is expected from the superior predictive accuracy of the model. It also has the best average computation time. As for the control algorithm, the gradient-based approach finds a better local minimum for the problem. In the multi-zone case, the LSTM performs poorly (unexpectedly) and the MLP outperforms all models. Here, in contrast to the previous case, SLSQP finds a better local minimum. Next, we discuss the significance of these results. ### Discussion The modeling results indicate that it is possible to derive accurate ML models from the building emulators. It is worth mentioning that the bottleneck in this process is data generation, which is not always trivial for hybrid systems with many if-else conditions, low-level control loops and system constraints, and finely-tuned numerical solvers. On the control side, we have run extensive tests using the SLSQP and gradient-based approaches from different initial conditions. In the one-zone case, the gradient-based approach with the LSTM model shows the lowest power consumption with an acceptable discomfort level. However, in the multi-zone case, SLSQP with the MLP model reaches the lowest power consumption, even though the LSTM model shows better predictive performance. This can happen when the optimization problem in the control formulation is highly non-convex. The complexity of the surrogate model likely creates many additional local minima, which, in turn, degrades the control performance.
This, somewhat counterintuitively, implies that better predictive performance does not always guarantee better control performance. We believe that, based on this experiment, a middle ground between model complexity and predictive performance should be considered for these types of MPC problems. Alternatively, better control formulations might help to curb this issue. Since we have found little precedent in the literature, we are running more tests to find better and more definitive answers. It is also worth pointing out that the framework is working as designed, helping to frame new hypotheses based on experimentation. _Computation time._ By comparing the average computation time between the several methods, we make the following interesting observations: First, both the gradient-based approach and SLSQP show comparable computation times, though the computation time of both solvers depends on their stopping criteria. For example, after running extensive tests, we decided that 100 iterations was a good stopping criterion for the gradient-based approach. We expect this hyperparameter tuning to be problem-specific. Second, for the surrogate model, it is obvious to us that the MLP should take longer than the Linear model to run. Surprisingly, the LSTM model, which has the most complex structure among the three candidates, shows the fastest computation time. We think that this computation-time gap most likely comes from a difference in the implementation language. Each surrogate model has a _for_-loop to predict the multiple steps. Although all surrogate models are implemented in PyTorch, the linear and MLP models conduct their _for_-loops in Python, while the LSTM model uses C++. ## 5 Conclusion and Future Work We presented a modeling and control framework for controlling physics-based building emulators. We have shown that our approach is successful in reducing cooling and heating loads in the BOPTEST emulator while satisfying occupant comfort and adhering to control system constraints. Figure 4: Test MSE for different choices of surrogate models in the multi-zone test case. LSTM and MLP have comparable performance and outperform the Linear model.
\begin{table} \begin{tabular}{l l|c c c} \hline \hline **Model** & **Solver** & **Power** & **Discomfort** & **Time** \\ \hline Linear & GDM & 0.0189 & 1556 & 1.607 \\ Linear & SLSQP & 0.2551 & 1528 & 0.933 \\ MLP & GDM & 4.804 & 2.935 & 1.694 \\ MLP & SLSQP & 5.059 & 5.207 & 1.684 \\ **LSTM** & **GDM** & **4.818** & **2.081** & **0.620** \\ LSTM & SLSQP & 4.943 & 4.415 & 0.661 \\ \hline \hline \end{tabular} \end{table} Table 3: Average total power (\(kWh/m^{2}\)), thermal discomfort (\(Kh\)/zone), and computation time (\(sec\)) for the single-zone test case on the BOPTEST emulator. The approach is modular, meaning that it is compatible with various other choices of models and control algorithms.
For example, while we did not succeed in training a good LSTM model for the five-zone case, we anticipate that the right hyperparameter tuning should address this issue, and we are actively working on it. The same is true for control. For example, we tested the framework with an iLQR controller, which failed to satisfy constraints. While we did not manage to get the results we expected, we anticipate that significantly better control results are possible with iLQR, and we are currently fixing our implementation of the algorithm. This is especially important since iLQR has shown superior performance for nonlinear optimal control problems (Li and Todorov, 2007). We are also exploring other fast first-order solvers with alternative control formulations. For example, we are considering OSQP (Stellato et al., 2020), which will significantly speed up the optimization while producing high-quality solutions, or distributed ADMM (Boyd et al., 2011) for district-level problems. In addition, we are actively working with the developers of BOPTEST to control scaled-up models, including multiple coupled buildings, with the framework. The main bottleneck for scaling the current approach is the customized nature of the data generation process. In the current process, much trial and error is required to find a feasible input space that does not break the emulator in forward simulations. Recent studies (Chakrabarty et al., 2022) provide some promising insight into more robust sampling procedures. We are currently working on incorporating similar approaches into our process. Last but not least, while in this paper we focused on control as an application, we firmly believe that system design, fault diagnosis, and reliability are other applications that will benefit from the proposed modeling approach, and we are actively investigating problems in these domains. Figure 5: The set of figures shows the results of out-of-training predictive performance for the five-zone model during three distinct weather events (January, May, and August) for the core zone (top). The ambient temperature trajectories are depicted in red (bottom). The orange lines represent the 50-step-ahead predictions (12.5 hours) starting from the leftmost point of the trajectory. The full MSEs are reported in Table 2. Figure 6: Result comparison for different choices of models and control algorithms. The top figure represents the temperature. The bottom figure is the relevant weather data, and the middle figures are the corresponding control inputs. The results are divided into cold (Jan) and hot (Aug) weather events. (a) Results for control of the core zone in the multi-zone test case using SLSQP with Linear, MLP, and LSTM models. Using the MLP model, the control outperforms the LSTM- and Linear-model-based implementations. (b) MLP-based control results with the SLSQP solver slightly outperform the gradient-based approach.
2307.16869
Anomalous noise spectra in a spin-exchange-relaxation-free alkali-metal vapor
We perform spin-noise spectroscopy on an unpolarized $^{87}\mathrm{Rb}$ vapor in the spin-exchange-relaxation-free (SERF) regime. We observe noise spectral distributions that deviate strongly from Lorentzian models that accurately describe lower-density regimes. For example, at magnetic fields of $\sim 1 \mathrm{\mu T}$ and $^{87}\mathrm{Rb}$ densities $\gtrsim 1 \times 10^{14} \rm{atoms/cm^{3}}$ we observe an asymmetric spin-noise distribution in which the resonance line is depleted by about half its power, with the diverted power becoming a broad spectral component that could be mistaken for optical shot noise. The results are in good agreement with recent models accounting for correlations between the ground hyperfine states. We discuss implications for quantum sensing and absolute noise calibration in spin-squeezing and entanglement detection.
K. Mouloudakis, J. Kong, A. Sierant, E. Arkin, M. Hernández Ruiz, R. Jiménez-Martínez, M. W. Mitchell
2023-07-31T17:24:57Z
http://arxiv.org/abs/2307.16869v2
# Anomalous spin projection noise in a spin-exchange-relaxation-free alkali-metal vapor ###### Abstract We perform spin-noise spectroscopy on an unpolarized \({}^{87}\)Rb vapor in the spin-exchange-relaxation-free (SERF) regime. We observe noise spectral distributions that deviate strongly from Lorentzian models that accurately describe lower-density regimes. For example, at magnetic fields of \(\sim 1\,\mathrm{\SIUnitSymbolMicro T}\) and \({}^{87}\)Rb densities \(\gtrsim 1\times 10^{14}\,\mathrm{atoms/cm^{3}}\) we observe an asymmetric spin-noise distribution in which the resonance line is depleted by about half its power, with the diverted power becoming a broad spectral component that could be mistaken for optical shot noise. The results are in good agreement with recent models accounting for correlations between the ground hyperfine states. We discuss implications for quantum sensing and absolute noise calibration in spin-squeezing and entanglement detection. Quantum noise ultimately limits the performance of atomic sensors including optical clocks [1; 2], magnetometers [3; 4; 5], inertial sensors [6], and instruments for fundamental physics [7; 8]. In these sensors atomic quantum noise, or _spin projection noise_ (SPN), is rooted in the discreteness of the atom and the quantization of atomic observables, and is shaped by dynamical processes [9; 10]. Such noise can be accurately described using the quantum structure of spin states, spin dynamics [11; 12; 13], and the regression theorem [14]. In non- or weakly-interacting atomic media this leads to, for example, Lorentzian-shaped noise-spectral features, characteristic of harmonic oscillators under linear dissipation, as well as spin-noise powers scaling as the number of atoms. These characteristics are employed in absolute calibrations based on the atomic spin structure [15; 16]. An important class of atomic sensors operate outside this weakly-coupled regime, with relaxation dynamics significantly different than in weakly-interacting systems. Of particular interest are magnetometers operated in the spin-exchange-relaxation-free (SERF) regime [17], employed in biomagnetism detection [18], inertial sensors [19], and tests of fundamental physics [20]. In SERF, which occurs in dense alkali vapors, spin-exchange (SE) collisions and hyperfine interactions dominate the spin dynamics, leading to line narrowing of the magnetic resonances and a corresponding boost to the sensitivity [21; 22]. The line shifts and line narrowing associated with the SERF regime are well known from optically-detected magnetic resonance experiments using spin-polarized ensembles [23; 24]. The implications of the SERF physics for spin projection noise are less well explored. An experimental study found that SERF media support and preserve non-classical spin correlations, i.e., entanglement and spin squeezing [25]. In addition, a theoretical study showed that the spin-exchange interaction can sustain spin correlations for experimentally meaningful timescales [26]. It has recently been predicted [10] that SPN will show importantly non-Lorentzian spectra in the SERF regime with implications for sensing and calibration applications. Here we experimentally study the behavior of SPN of an unpolarized \({}^{87}\)Rb ensemble in the SERF regime. 
Using spin-noise spectroscopy (SNS), i.e., optical detection of thermally-driven ensembles [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38], we observe the spontaneous spin fluctuations of the vapor across the phase transition to the SERF regime. Unpolarized ensembles are insensitive to magnetic noise, therefore naturally offering a testbed for studying spin fluctuations under minimal interference from technical noise. Our characterization of SPN not only validates the recent predictions [10], but also experimentally demonstrates anomalous spin-noise behavior, including spin-noise redistribution that can affect both the fundamental sensitivity of SERF instruments and the use of spin-noise as a calibrated noise source. The experimental setup is shown in Fig. 1a. Isotopically enriched \({}^{87}\)Rb and \(0.12\,\mathrm{amg}\) of N\({}_{2}\) buffer gas are held in a cylindrical cell of \(12.7\,\mathrm{mm}\) diameter and \(30\,\mathrm{mm}\) internal length, with anti-reflection coated windows of \(5\,\mathrm{mm}\) thickness. The cell is placed at the center of a cylindrical, four-layer, mu-metal magnetic shield and solenoid and shim coils are used to produce a homogeneous DC magnetic field \(\mathbf{B}=(B,0,0)\) along the \(\mathbf{\hat{x}}\) direction. A ceramic oven, intermittent Joule heating, and a thermocouple are used to control the cell temperature. An external cavity diode laser produces a linearly polarized \(795\,\mathrm{nm}\) beam detuned \(46\,\mathrm{GHz}\) to the blue of the D\({}_{1}\) line of \({}^{87}\)Rb, monitored with a wavelength meter. The laser output, propagating along \(\mathbf{\hat{z}}\), is spatially filtered with a single-mode fiber to produce a Gaussian beam with effective area \(A_{\mathrm{eff}}\approx 1.5\,\mathrm{mm}^{2}\), defined as \(A_{\rm eff}=L[\int I(x,y,z)\,dx\,dy]^{2}/\int I^{2}(x,y,z)\,dx\,dy\,dz\), where \(I(x,y,z)\) is the intensity of the Gaussian beam and \(L\) the length of the cell. The intensity is measured with a beam profiler. The effective number of atoms probed by the laser beam is \(N_{\rm at}=nA_{\rm eff}L\), where \(n\) is the alkali number density. Both the detuning of the light and the atomic \(2.4\,\)GHz full-width at half maximum (FWHM) pressure-broadened optical linewidth are larger than the \(0.8\,\)GHz hyperfine splitting of the excited state, so tensor polarizability effects are expected to be negligible [39]. The transmitted light is detected by a balanced polarimeter comprised of a half-waveplate, a Wollaston prism and an amplified differential photodetector (PD). The PD signal is recorded by a \(24\,\)bit digitizer for later processing. The experimentally obtained noise-spectra are of the form \[S_{\rm opt}(\nu)=S_{\rm psn}+S_{1/f}(\nu)+S_{\rm el}(\nu)+S_{\rm at}(\nu), \tag{1}\] where the contribution from photon shot noise (PSN) is \(S_{\rm psn}=2G^{2}q_{e}rP\approx 0.91\times 10^{-12}\,\)V\({}^{2}\)Hz\({}^{-1}\), with \(q_{e}\approx 1.9\times 10^{-19}\) C being the electron charge, \(r\approx 0.52\) A W\({}^{-1}\) at \(795\,\)nm the PD responsivity, \(G=1\times 10^{5}\,\)V A\({}^{-1}\) the transimpedance gain of the PD and \(P\approx 550\) uW the laser power reaching the polarimeter. \(S_{1/f}=\zeta^{2}\nu^{-\beta}\), \(\beta>0\) is "1/f noise" with strength \(\zeta^{2}\), and \(S_{\rm el}(\nu)\) is the electronic noise of the PD and acquisition system, which in practice is about \(20\,\)dB below the PSN background. 
The last term in Eq.(1) is the atomic spin noise spectrum, presenting a resonance feature at the spin precession frequency. The spin-noise power of the thermal state is a readily available noise reference, and has been used in noise calibration for spin squeezing [40] and entanglement detection [25] experiments. We note that for frequencies above \(0.5\) kHz, \(S_{1/f}(\nu)\) is negligible; thus, in the analysis that follows \(S_{\rm opt}(\nu)\) is approximated as \(S_{\rm opt}(\nu)\approx S_{\rm at}(\nu)+S_{\rm psn}\). To model the atomic spectra we employ the Ornstein-Uhlenbeck approach as derived in [10] and further discussed in [41]. In this model, the spectra result from the stochastic dynamics of the hyperfine collective spin vectors \(\hat{\bf F}^{\alpha}(t)\), \(\alpha\in\{a=I+1/2,b=I-1/2\}\), governed by \[d\hat{\bf X}(t)=A\hat{\bf X}(t)dt+Qd\hat{\bf W}(t), \tag{2}\] where \(\hat{\bf X}\equiv[\hat{F}_{x}^{a},\hat{F}_{y}^{a},\hat{F}_{z}^{a},\hat{F}_{x}^{b},\hat{F}_{y}^{b},\hat{F}_{z}^{b}]^{T}\), \(A\) is the drift matrix, \(Q\) is the noise strength matrix, and \(d\hat{\bf W}\) is a length-six vector of independent Wiener increments [41]. For such processes, with real \(A\) and \(Q\), the power spectral density matrix is [14] \[S_{\hat{\bf X},\hat{\bf X}}(\omega)=-\frac{1}{2\pi}(A+i\omega\mathbb{1})^{-1}QQ^{T}(A^{T}-i\omega\mathbb{1})^{-1}, \tag{3}\] where \(\mathbb{1}\) is the \(6\times 6\) identity matrix. In equilibrium, \(QQ^{T}\) is directly related to \(A\) and to the steady-state, equal-time covariance matrix \(\mathcal{R}_{\hat{\bf X},\hat{\bf X}}(0)\) by \[QQ^{T}=A\mathcal{R}_{\hat{\bf X},\hat{\bf X}}(0)+\mathcal{R}_{\hat{\bf X},\hat{\bf X}}(0)A^{T}, \tag{4}\]
A Faraday rotation signal from such a medium has power spectral density [41] \[\begin{split} S_{\rm at}(\nu)=&\mathcal{A}\,r^{2}G ^{2}P^{2}\Big{[}g_{a}^{2}S_{\hat{F}^{\alpha}_{i},\hat{F}^{\alpha}_{i}}(\nu)+g_ {b}^{2}S_{\hat{F}^{\alpha}_{i},\hat{F}^{\alpha}_{i}}(\nu)\\ &-g_{a}g_{b}\left(S_{\hat{F}^{\alpha}_{i},\hat{F}^{\alpha}_{a}}( \nu)+S_{\hat{F}^{\alpha}_{i},\hat{F}^{\alpha}_{i}}(\nu)\right)\Big{]},\end{split} \tag{6}\] where \(\mathcal{A}\) is a unitless scale factor and \(g_{\alpha}\) is a detuning-dependent coupling proportional to the vector polarizability for the hyperfine state \(\alpha\). Cross-correlations between the two ground-state hyperfine levels allows for the \(g_{a}g_{b}\) term in Eq.(6) to partially cancel the \(g_{a}^{2}\) and \(g_{b}^{2}\) terms, thereby distorting the spectra and affecting the distribution of spin-noise power. The non-Lorentzian character of these spectra is illustrated in Fig. 1b) and 1c). It is these peculiar effects of SPN in the SERF regime that we study below. Representative spin-noise spectra, acquired as a function of transverse bias field, are shown in Fig. 2. The observed growth and narrowing of the spin-noise resonance with decreasing field are hallmarks of the SERF regime, revealing information about the way spin interactions affect the auto-correlation functions of the system in thermal equilibrium. We fit the observed spectra with \(S_{\rm opt}(\nu)=S_{\rm at}(\nu)+S_{\rm psn}\), with \(S_{\rm at}(\nu)\) from Eq.(6) and photon shot noise \(S_{\rm psn}=0.91\times 10^{-12}\,\mathrm{V}^{2}\,\mathrm{Hz}^{-1}\) from an independent measurement. The magnetic field is inferred from the current in the \(B_{x}\) coil, previously calibrated by spin-noise spectroscopy at low density [41]. A simultaneous fit to all spectra finds best-fit parameters \(R_{\rm se}=3.02\times 10^{5}\) s\({}^{-1}\), \(R_{\rm sd}=0.03\times 10^{5}\) s\({}^{-1}\), \(R=400\) s\({}^{-1}\), and \(\mathcal{A}=2.3\times 10^{3}\). These are respectively the rates of spin-exchange, spin-destruction and spin-depolarizing processes as defined in [41]. The fitted spectra are shown as black lines in Fig. 2, and agree well except at the lowest field strengths. Deviations from Eq.(6) at low field are expected due to imperfect compensation of remanent fields, the \(S_{1/f}(\nu)\) contribution, and diffusion. A complete model accounting both for spin-exchange and atomic diffusion effects is still missing from the literature, however diffusion alone has been extensively studied in [34; 42]. From the fitted value of the spin-exchange rate, the \(169\,^{\circ}\)C temperature of the vapor and the \(1.9\times 10^{-14}\,\mathrm{cm}^{2}\) SE cross-section [43], we infer an alkali number density of \(3.4\times 10^{14}\) atoms/cm\({}^{3}\). To visualize the "slowing-down" of the spin precession and the linewidth reduction, in Fig. 2 (inset) we compare the observed resonance frequency and linewidth from distorted-Lorentzian fits to individual spectra [41] against the predictions of Eq. (6) with the above fit parameters. As described in [41], the predicted values can be computed from the real and imaginary parts of the eigenvalues of the drift matrix \(A\). This extends the results of [24] to account for spin-destruction and spin depolarizing processes, for any alkali species. We now study the redistribution of spin-noise power across the transition from the SE-dominated to the SERF regime. 
The total atomic noise power in this state is given by \[\int_{0}^{\infty}S_{\rm at}(\nu)d\nu=\frac{1}{2}\mathcal{A}r^{2}G^{2}P^{2}[g_{a}^{2}\,\mathrm{var}(F^{a})+g_{b}^{2}\,\mathrm{var}(F^{b})], \tag{7}\] where \(\mathrm{var}(F^{\alpha})\), \(\alpha\in\{a,b\}\), are given by Eq.(5). Since our acquisition is limited by a 100 kHz Nyquist frequency, the experimentally obtained noise is only a portion of Eq.(7), as discussed in [41]. We stress that the noise in Eq.(7) is independent of the magnetic-resonance parameters and depends only on the number of probed atoms, the probe intensity and detuning, and the optical linewidth. Figure 2: Single-sided power spectral density (PSD) of the polarimeter signal for transverse magnetic fields ranging from 280 nT to 12 nT while the vapor cell is maintained at approximately 169\(\,^{\circ}\)C. Each spectrum shows the linear average [41] of 150 spectra, each computed on a 0.5 s acquisition with a sampling rate of 200 kSa/s. A 20 Hz (ten-bin) boxcar smoothing has also been applied [37]. Black solid lines: fit of Eq.(1) (excluding 1/f and electronic noise) to the observed spectra (see text). Inset: Left axis shows spin-noise precession frequency \(\omega_{q}\) normalized to \(\omega_{0}=g_{s}\mu_{B}B/[\hbar(2I+1)]\), versus \(\omega_{0}\) known by calibration of the coils at low density [41]. Right axis shows the spin-noise linewidth (HWHM) versus \(\omega_{0}\). Data are obtained by fitting the spectra with a distorted Lorentzian (see text). Error bars show \(\pm\) one standard deviation of the fitted parameters over 150 acquisitions. Blue (purple) solid line shows \(\mathrm{Im}[\lambda]\) (\(\mathrm{Re}[\lambda]\)) of the eigenvalues of the drift matrix \(A\), as given by Eq. 7 of [41]. The parameters are discussed in the main text. In the SERF regime, the predicted spectra are non-Lorentzian, with a significant portion of spin noise spread over the high-frequency part of the spectrum. To demonstrate this, we acquire spectra under a fixed transverse field of \(B=918\) nT and alkali number density across the transition from slow SE (\(R_{\rm se}\ll\omega_{0}\)) to rapid SE (\(R_{\rm se}\gg\omega_{0}\)), see Fig. 3, inset. We numerically integrate the observed spectra to compute \[\int_{\nu_{\rm low}}^{\nu_{\rm br}}S_{\rm at}(\nu)d\nu\bigg{/}\int_{\nu_{\rm low}}^{\nu_{\rm bw}}S_{\rm at}(\nu)d\nu, \tag{8}\] which describes the fraction of the observed power below a cut-off frequency \(\nu_{\rm br}\).
This line reshaping, if not accounted for, can produce systematic errors in calibration based on the atomic noise spectra. For example, an accurate spectral model outside the SERF regime describes a white shot-noise background plus a Lorentzian or sum of Lorentzians representing \(S_{\rm at}(\nu)\). The area of the Lorentzians indicates the atomic number, and the shot noise level the optical power. Fitting this model to SERF-regime noise spectra would produce an underestimate of the atom number and an overestimate of the optical power, due to shifting of spin-noise power into the long, high-frequency tail of \(S_{\rm at}(\nu)\). Spin-noise redistribution out of the peak at \(\omega_{q}\), while derived and observed here at zero mean polarization, can be expected to occur also in polarized ensembles, at least ones with weak polarization [35]. A variety of magnetometry strategies obtain signals due to spin precession at \(\omega_{q}\), and would thus have improved signal-to-noise ratio in the SERF regime relative to the SE-dominated regime. This fundamental sensitivity advantage is in addition to the well-known coherence-time advantage in the SERF regime [17; 21; 22]. In conclusion, we have measured and characterized the spin noise of a thermal \({}^{87}\)Rb vapor in the transition from the SE-dominated to SERF regimes. We observe anomalous noise lineshapes arising from strong coupling of the ground hyperfine spins in the SERF regime. The line reshaping notably reduces the power in the resonant peak, and produces a broadband component that imitates photon shot noise. The results validate recent theoretical models, improve the accuracy of thermal-state-based noise calibration for spin squeezing and entanglement generation, and suggest a hyperfine-correlation-induced reduction in fundamental quantum noise for optically-pumped magnetometers operating in the SERF regime. We thank G. Vasilakis, J. Kolodynski and V.G. Lucivero for useful discussions. JK and EA acknowledge support from the National Natural Science Foundation of China (NSFC) (Grants No. 12005049, No. 11935012). KM acknowledges support from Grant FJC2021-047840-I funded by MCIN/AEI/10.13039/501100011033 and by the European Union "NextGenerationEU/PRTR". MHR acknowledges support from Ayuda PRE2021-098880 financiada por MCIN/AEI/10.13039/501100011033 y por el FSE+. MHR, AS, KM and MWM acknowledge the Spanish Ministry of Science MCIN with funding from NextGenerationEU (PRTR-C17.I1) and by Generalitat de Catalunya, "Severo Ochoa" Center of Excellence CEX2019-000910-S; projects SAPONARIA (PID2021-123813NB-I00) and MARICHAS (PID2021-126059OA-I00) funded by MCIN/AEI/10.13039/501100011033/FEDER, EU; Generalitat de Catalunya through the CERCA program; Agència de Gestió d'Ajuts Universitaris i de Recerca Grant No. 2017-SGR-1354; Fundació Privada Cellex; Fundació Mir-Puig; the European Commission project OPMMEG (101099379).

Figure 3: Spin-noise spectrum (single-sided PSD) as a function of the \({}^{87}\)Rb number density for a fixed magnetic field of \(B=918\) nT. Each spectrum shows the linear average of 100 spectra. Long high-frequency tails are apparent. Inset: Resonant noise power fraction as a function of number density as calculated using Eq.(8). The cut-off frequency \(\nu_{\rm br}\) at 20 kHz is indicated by the red dashed line. Error bars show \(\pm\) one standard deviation in the numerical integration over 100 acquisitions.
2309.07204
Bounds for moments of $\ell$-torsion in class groups
Fix a number field $k$, integers $\ell, n \geq 2$, and a prime $p$. For all $r \geq 1$, we prove strong unconditional upper bounds on the $r$-th moment of $\ell$-torsion in the ideal class groups of degree $p$ extensions of $k$ and of degree $n$ $S_n$-extensions of $k$, improving upon results of Ellenberg, Pierce and Wood as well as GRH-conditional results of Frei and Widmer. For large $r$, our results are comparable with work of Heath-Brown and Pierce for imaginary quadratic extensions of $\mathbb{Q}$. When $r=1$, our results are new even for the family of all quadratic extensions of $\mathbb{Q}$, leading to an improved upper bound for the count of degree $p$ $D_p$-extensions over $\mathbb{Q}$ (where $D_p$ is the dihedral group of order $2p$).
Peter Koymans, Jesse Thorner
2023-09-13T17:59:47Z
http://arxiv.org/abs/2309.07204v1
# Bounds for moments of \(\ell\)-torsion in class groups ###### Abstract. Fix a number field \(k\), integers \(\ell,n\geq 2\), and a prime \(p\). For all \(r\geq 1\), we prove strong unconditional upper bounds on the \(r\)-th moment of \(\ell\)-torsion in the ideal class groups of degree \(p\) extensions of \(k\) and of degree \(n\) \(S_{n}\)-extensions of \(k\), improving upon results of Ellenberg, Pierce and Wood as well as GRH-conditional results of Frei and Widmer. For large \(r\), our results are comparable with work of Heath-Brown and Pierce for imaginary quadratic extensions of \(\mathbb{Q}\). When \(r=1\), our results are new even for the family of all quadratic extensions of \(\mathbb{Q}\), leading to an improved upper bound for the count of degree \(p\) \(D_{p}\)-extensions over \(\mathbb{Q}\) (where \(D_{p}\) is the dihedral group of order \(2p\)). ## 1. Introduction and statement of the main results Cohen, Lenstra, and Martinet [6, 7] gave heuristics that predict the distribution of ideal class groups \(\mathrm{Cl}_{K}\) of number fields \(K\) in certain families, including the distribution of their \(\ell\)-torsion subgroups \(\mathrm{Cl}_{K}[\ell]\) for certain primes \(\ell\). Fix an algebraic closure \(\overline{\mathbb{Q}}\). Let \(d\geq 2\) be an integer, \(G\) be a transitive subgroup of the symmetric group \(S_{d}\), \(\ell\nmid|G|\) be a "good" prime, \(K/k\) be a finite extension of number fields chosen inside \(\overline{\mathbb{Q}}\), and \(D_{K}\) be the absolute discriminant of \(K\). Let \(\widetilde{K}\) be the Galois closure of \(K\) over \(k\) inside \(\overline{\mathbb{Q}}\). Define the families \[\mathscr{F}_{k}^{d,G}:=\{K\colon[K:k]=d,\,\mathrm{Gal}(\widetilde{K}/k)\cong G\},\qquad\mathscr{F}_{k}^{d}:=\{K\colon[K:k]=d\}. \tag{1.1}\] For a subset \(\mathcal{S}\subseteq\mathscr{F}_{k}^{d,G}\) or \(\mathcal{S}\subseteq\mathscr{F}_{k}^{d}\), we define \(\mathcal{S}(Q):=\{K\in\mathcal{S}\colon D_{K}\in(Q,2Q]\}\) and \[\alpha_{\mathcal{S}}:=\liminf_{Q\to\infty}\frac{\log|\mathcal{S}(Q)|}{\log Q},\qquad\beta_{\mathcal{S}}:=\limsup_{Q\to\infty}\frac{\log|\mathcal{S}(Q)|}{\log Q}. \tag{1.2}\] It is conjectured that there exists a constant \(c_{G,d,k,\ell}>0\) such that \[\lim_{Q\to\infty}\frac{1}{|\mathscr{F}_{k}^{d,G}(Q)|}\sum_{K\in\mathscr{F}_{k}^{d,G}(Q)}|\mathrm{Cl}_{K}[\ell]|=c_{G,d,k,\ell}. \tag{1.3}\] So far, the existence of the limit (1.3) is only known when \(G=S_{2}\) and \(\ell=3\) ([10] when \(k=\mathbb{Q}\), [9] otherwise), \(G=S_{3}\) and \(\ell=2\) ([2] when \(k=\mathbb{Q}\), [4] otherwise), or \(G\subseteq S_{2^{m}}\) is a transitive permutation \(2\)-group containing a transposition and \(\ell=3\) [26]. Since we cannot establish (1.3) in full generality yet, we instead try to bound the moments \[\sum_{K\in\mathcal{S}(Q)}|\mathrm{Cl}_{K}[\ell]|^{r},\qquad\ell,r\geq 1. \tag{1.4}\] Duke [11] conjectured that if \(F/E\) is any extension of number fields and \(\varepsilon>0\), then \(|\mathrm{Cl}_{F}[\ell]|\ll_{[F:E],E,\ell,\varepsilon}D_{F}^{\varepsilon}\). This would imply that (1.4) is \(O_{d,k,\ell,r,\varepsilon}(Q^{\varepsilon}|\mathcal{S}(Q)|)\). Extending work of Gauss, Klüners and Wang [23] proved Duke's conjectured bound for \(|\mathrm{Cl}_{F}[\ell]|\) when the Galois group of the Galois closure of \(F/E\) is an \(\ell\)-group.
If \(j\in\{1,2,3,4\}\) and \((\ell_{1},\ell_{2},\ell_{3},\ell_{4})=(3,4,9,26)\), then the \(j\)-th line of (1.10) improves upon the \(j\)-th line of (1.9) and provides an on-average power-saving improvement over (1.6) when \(\ell\geq\ell_{j}\). Unlike (1.8), the second, third, and fourth lines of (1.10) do not recover the trivial bound (1.5). Since \(|\mathscr{F}_{\mathbb{Q}}^{2}(Q)|\asymp Q\), the first line recovers (1.5) when \(r\geq\ell+2\), a range that is weaker than the result in (1.8) for _imaginary_ quadratic extensions of \(\mathbb{Q}\). Let \(d\geq 2\), let \(\mathcal{S}\subseteq\mathscr{F}_{\mathbb{Q}}^{d}\), and assume \(\beta_{\mathcal{S}}>0\). Note that \(\beta_{\mathcal{S}}\) is finite [24]. Frei and Widmer [18, Theorem 1.4] proved that if \(\zeta_{\tilde{K}}(s)\) satisfies GRH for each \(K\in\mathcal{S}\), then \[\sum_{K\in\mathcal{S}(Q)}|\mathrm{Cl}_{K}[\ell]|^{r}\ll_{\beta_{\mathcal{S}}, d,\ell,r,\varepsilon}Q^{\frac{r}{2}+\beta_{S}-\min\big{\{}\beta_{\mathcal{S}}, \beta_{\mathcal{S}}\frac{r}{\ell(d-1)+2}\big{\}}+\varepsilon},\qquad\ell,r \geq 1. \tag{1.11}\] Fix an integer \(n\geq 2\) and a prime \(p\). Recall (1.1) and (1.2). In this paper, we synthesize the ideas of Heath-Brown and Pierce [20] with those of Frei and Widmer [18] to refine the process of estimating moments of \(\ell\)-torsion in ideal class groups of number fields in any subset \(\mathcal{S}\) of the families \(\mathscr{F}_{\mathbb{Q}}^{p}\) and \(\mathscr{F}_{\mathbb{Q}}^{n,S_{n}}\). In these families, our results uniformly improve upon (1.9), (1.10), and (1.11) for all \(\ell\geq 2\) and all \(r\geq 1\) without any assumptions on GRH or the sizes of \(\alpha_{\mathcal{S}}\) and \(\beta_{\mathcal{S}}\). We do not need to assume GRH because of the recent work of Lemke Oliver, Thorner, and Zaman [25] on the holomorphy and nonvanishing of Artin \(L\)-functions (see also the related work of Pierce, Turnage-Butterbaugh, and Wood [30] and Thorner and Zaman [32]). This improvement also holds for \(\mathscr{F}_{k}^{p}\) and \(\mathscr{F}_{k}^{n,S_{n}}\) when \(k\neq\mathbb{Q}\) without loss in quality. We prove the following theorem. **Theorem 1.1**.: _Fix a number field \(k\), integers \(\ell,n\geq 2\), a prime \(p\), and \(r\geq 1\). Let \(Q\geq 1\) and \(\varepsilon>0\). Let the ordered pair \((\mathscr{F}_{k},d)\) equal \((\mathscr{F}_{k}^{p},p)\) or \((\mathscr{F}_{k}^{n,S_{n}},n)\). If \(\mathcal{S}\subseteq\mathscr{F}_{k}\), then_ \[\sum_{K\in\mathcal{S}(Q)}|\mathrm{Cl}_{K}[\ell]|^{r}\ll_{\alpha_{\mathcal{S}}, d,k,\ell,r,\varepsilon}Q^{\frac{r}{2}+\varepsilon}\big{(}1+|\mathcal{S}(Q)|^{1- \frac{r}{\ell(d-1)+1}}\big{)}.\] _Remark_.: The implied constant is effectively computable. We restrict to the families \(\mathscr{F}_{k}^{p}\) and \(\mathscr{F}_{k}^{n,S_{n}}\) only so that we can apply the results in [25] (see Theorem 5.1 below). If the ordered pair \((\mathscr{F}_{k},d)\) equals \((\mathscr{F}_{k}^{p},p)\) or \((\mathscr{F}_{k}^{n,S_{n}},n)\), then Theorem 1.1 implies that \[\sum_{K\in\mathscr{F}_{k}(Q)}|\mathrm{Cl}_{K}[\ell]|^{r}\ll_{d,k,\ell,r, \varepsilon}Q^{\frac{r}{2}+\varepsilon},\qquad r\geq\ell(d-1)+1. \tag{1.12}\] This has several appealing features beyond the fact that it is unconditional. First, (1.12) recovers (1.5). Second, (1.12) is completely independent of \(|\mathscr{F}_{k}(Q)|\). This is important because of the inexactitude of the existing bounds for \(|\mathscr{F}_{k}(Q)|\). 
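For concreteness, the exponent of \(Q\) produced by Theorem 1.1 when \(|\mathcal{S}(Q)|\asymp Q^{\beta}\) can be compared with that of the GRH-conditional bound (1.11); a minimal sketch (the helper names are ours, and all \(\varepsilon\)-losses are dropped):

```python
def thm11_exponent(d, ell, r, beta):
    """Exponent of Q in Theorem 1.1 when |S(Q)| ~ Q^beta:
    Q^(r/2) * (1 + |S(Q)|^(1 - r/R)) with R = ell*(d-1) + 1."""
    R = ell * (d - 1) + 1
    return r / 2 + max(0.0, beta * (1 - r / R))

def frei_widmer_exponent(d, ell, r, beta):
    """Exponent of Q in the GRH-conditional bound (1.11)."""
    return r / 2 + beta - min(beta, beta * r / (ell * (d - 1) + 2))

# All quadratic fields over Q (d = 2, beta = 1), first moment of 3-torsion:
d, ell, r, beta = 2, 3, 1, 1
print(thm11_exponent(d, ell, r, beta))        # 1.25 = 3/2 - 1/(ell + 1)
print(frei_widmer_exponent(d, ell, r, beta))  # 1.3, the weaker GRH bound
```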
A weak form of Malle's conjecture [28] asserts that \(\alpha_{\mathscr{F}_{k}}=\beta_{\mathscr{F}_{k}}=1\) in (1.2). As of now, we know that there exists an absolute and effectively computable constant \(C>0\) such that \[\begin{cases}1&\text{if $d\leq 5$ and $k=\mathbb{Q}$},\\ \frac{1}{2}+\frac{1}{d-1}&\text{if $d\geq 6$ and $k=\mathbb{Q}$},\end{cases}\ \leq\ \alpha_{\mathscr{F}_{k}}\ \leq\ \beta_{\mathscr{F}_{k}}\ \leq\ \begin{cases}1&\text{if $d\leq 5$ and $k=\mathbb{Q}$},\\ 1.564(\log d)^{2}&\text{if $d\geq 6$ and $k=\mathbb{Q}$},\\ \exp(C\sqrt{\log d})&\text{if $k\neq\mathbb{Q}$}.\end{cases} \tag{1.13}\] (See [1, 2, 5, 10, 13, 24].) Third, if \(r=\ell(d-1)+1\), then (1.12) implies that on average over \(K\in\mathscr{F}_{k}(Q)\), we have a power-saving improvement over (1.6) when \(\alpha_{\mathscr{F}_{k}}>\frac{1}{2}+\frac{1}{2\ell(d-1)}\). Therefore, in this setting, the more accurate _lower_ bounds in (1.13) matter more than the upper bounds in (1.13). Fourth, when \(d=2\), (1.12) reduces to (1.8) with two new benefits: (i) when \(k=\mathbb{Q}\), the real quadratic extensions of \(\mathbb{Q}\) are now included, and (ii) we no longer require that \(k=\mathbb{Q}\). Therefore, Theorem 1.1 extends (1.8) to \(\mathscr{F}_{k}\). Theorem 1.1 is also new for low moments, producing new results even for quadratic extensions of \(\mathbb{Q}\). In light of (1.13), Theorem 1.1 implies that \[\sum_{K\in\mathscr{F}_{\mathbb{Q}}^{2}(Q)}|\mathrm{Cl}_{K}[\ell]|\ll_{\ell,\varepsilon}Q^{\frac{3}{2}-\frac{1}{\ell+1}+\varepsilon}. \tag{1.14}\] This improves upon the work of Frei and Widmer in the first line of (1.10) when \(r=1\), though it does not improve upon (1.7) for _imaginary_ quadratic extensions of \(\mathbb{Q}\). We briefly describe an application of (1.14). For an odd prime \(p\), let \(D_{p}\) be the dihedral group of order \(2p\), and \(D_{p}(2p)\) be the regular permutation representation of \(D_{p}\). A conjecture of Malle [28] asserts that there exist constants \(c(p)>0\) and \(c(2p)>0\) such that \[\mathscr{F}_{\mathbb{Q}}^{p,D_{p}}(Q)\sim c(p)Q^{\frac{2}{p-1}}\qquad\text{and}\qquad\mathscr{F}_{\mathbb{Q}}^{2p,D_{p}(2p)}(Q)\sim c(2p)Q^{\frac{1}{p}}\] as \(Q\to\infty\). Klüners [22] related upper bounds for \(\mathscr{F}_{\mathbb{Q}}^{p,D_{p}}(Q)\) and \(\mathscr{F}_{\mathbb{Q}}^{2p,D_{p}(2p)}(Q)\) to the behavior of \(|\mathrm{Cl}_{K}[p]|\) as \(K\in\mathscr{F}_{\mathbb{Q}}^{2}(Q)\) varies. Using this relationship, Klüners [22] proved that \(\mathscr{F}_{\mathbb{Q}}^{2p,D_{p}(2p)}(Q)\ll_{p,\varepsilon}Q^{3/(2p)+\varepsilon}\). By combining the work of Klüners with the first line of (1.9) when \(\ell=p\), Cohen and Thorne [8, Theorem 1.1] proved that \(\mathscr{F}_{\mathbb{Q}}^{p,D_{p}}(Q)\ll_{p,\varepsilon}Q^{3/(p-1)-1/(p(p-1))+\varepsilon}\). By combining the work of Klüners with the first line of (1.10) when \(r=1\) and \(\ell=p\), Frei and Widmer [18, Corollary 1.2] improved both bounds to \[\mathscr{F}_{\mathbb{Q}}^{p,D_{p}}(Q)\ll_{p,\varepsilon}Q^{\frac{3}{p-1}-\frac{2}{(p+2)(p-1)}+\varepsilon},\qquad\mathscr{F}_{\mathbb{Q}}^{2p,D_{p}(2p)}(Q)\ll_{p,\varepsilon}Q^{\frac{3}{2p}-\frac{1}{p(p+2)}+\varepsilon}. \tag{1.15}\] Using (1.14) instead of (1.10), we immediately improve upon (1.15) as follows.
**Corollary 1.2**.: _If \(p\) is an odd prime and \(\varepsilon>0\), then_ \[\mathscr{F}_{\mathbb{Q}}^{p,D_{p}}(Q)\ll_{p,\varepsilon}Q^{\frac{3}{p-1}-\frac{2}{(p+1)(p-1)}+\varepsilon},\qquad\mathscr{F}_{\mathbb{Q}}^{2p,D_{p}(2p)}(Q)\ll_{p,\varepsilon}Q^{\frac{3}{2p}-\frac{1}{p(p+1)}+\varepsilon}.\] ### Acknowledgements We thank Roger Heath-Brown and Lillian Pierce for their encouragement. PK would like to thank Carlo Pagano and Martin Widmer for fruitful discussions. PK gratefully acknowledges the support of Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zurich Foundation. JT gratefully acknowledges the support of the Simons Foundation (MP-TSM-00002484). ## 2. Notation, conventions, and overview Throughout this paper, we fix integers \(d,\ell\geq 2\). Let \(\mathscr{F}_{k}^{d}\) be as in (1.1) and \(\mathcal{S}\subseteq\mathscr{F}_{k}^{d}\). For \(K\in\mathcal{S}\), the set of places is denoted \(\Omega_{K}\). For each \(v\in\Omega_{K}\), let \(K_{v}\) be the completion of \(K\) with respect to \(v\). We normalize the absolute value \(|\cdot|_{v}\) such that it extends an absolute value of \(\mathbb{Q}\) (either archimedean or \(p\)-adic). We also define \(d_{v}:=[K_{v}:\mathbb{Q}_{v}]\). The ring of integers of \(K\) is \(\mathcal{O}_{K}\), the set of nonzero ideals of \(\mathcal{O}_{K}\) is \(\mathcal{I}_{K}\), and the set of nonzero prime ideals of \(\mathcal{O}_{K}\) is \(\mathcal{P}_{K}\). Let \(\mathrm{N}_{K/\mathbb{Q}}\) be the absolute norm of \(K\), defined on \(\mathfrak{a}\in\mathcal{I}_{K}\) by \(\mathrm{N}_{K/\mathbb{Q}}\mathfrak{a}=|\mathcal{O}_{K}/\mathfrak{a}|\). Let \(\mathbb{N}\) denote the set of positive integers. Given \(\mathfrak{P}\in\mathcal{P}_{K}\) lying over \(\mathfrak{p}\in\mathcal{P}_{k}\), we write \(e(\mathfrak{P})=e(\mathfrak{P}/\mathfrak{p})\) for the ramification index and \(f(\mathfrak{P})=f(\mathfrak{P}/\mathfrak{p})\) for the inertia degree of \(\mathfrak{P}\) over \(\mathfrak{p}\). We define \[\mathcal{P}_{K}^{(1)}=\{\mathfrak{P}\in\mathcal{P}_{K}\colon e(\mathfrak{P})=f(\mathfrak{P})=1\}, \tag{2.1}\] the set of prime ideals of \(K\) of relative degree \(1\) (over \(k\)), and the counting functions \[\pi_{K}(x):=|\{\mathfrak{P}\in\mathcal{P}_{K}\colon\mathrm{N}_{K/\mathbb{Q}}\mathfrak{P}\leq x\}|,\qquad\pi_{K}^{(1)}(x):=|\{\mathfrak{P}\in\mathcal{P}_{K}^{(1)}\colon\mathrm{N}_{K/\mathbb{Q}}\mathfrak{P}\leq x\}|. \tag{2.2}\] For complex-valued functions \(f\) and \(g\), we write \(f=O_{\nu}(g)\) or \(f\ll_{\nu}g\) to denote that there exists an effectively computable constant \(c>0\) (depending at most on \(\nu\), \(d\), \(\ell\), and \(k\)) such that in a domain \(U\subseteq\mathbb{C}\) that will be clear from context, we have that if \(z\in U\), then \(|f(z)|\leq c|g(z)|\). We write \(f\asymp_{\nu}g\) to denote that \(f\ll_{\nu}g\) and \(g\ll_{\nu}f\). ## 3. A bound for \(|\mathrm{Cl}_{K}[\ell]|\) Let \(\mathcal{S}\subseteq\mathscr{F}_{k}^{d}\) and \(K\in\mathcal{S}\). For \(\alpha\in K\), let \[H_{K}(\alpha)=\prod_{v\in\Omega_{K}}\max(1,|\alpha|_{v}^{d_{v}})\] be the multiplicative Weil height. This height depends on \(K\), whence the subscript. **Definition 3.1**.: _A lattice \(\mathcal{L}\) is a discrete subgroup of \(\mathbb{R}^{n}\). Write \(\|\cdot\|\) for the Euclidean norm on \(\mathbb{R}^{n}\). A basis \((\mathbf{b}_{1},\ldots,\mathbf{b}_{m})\) of \(\mathcal{L}\) is a Minkowski basis if for every \(i\), \(\mathbf{b}_{i}\) is a vector with minimal \(\|\cdot\|\) such that \((\mathbf{b}_{1},\ldots,\mathbf{b}_{i})\) extends to a basis.
This is equivalent to_ \[\|\mathbf{b}_{i}\|\leq\Big{\|}\sum_{j=1}^{m}a_{j}\mathbf{b}_{j}\Big{\|}\] _for every vector of integers \((a_{1},\ldots,a_{m})\in\mathbb{Z}^{m}\) such that \(\gcd(a_{i},\ldots,a_{m})=1\)._ **Lemma 3.2**.: _Let \(\mathcal{L}\subseteq\mathbb{R}^{n}\) be a lattice. Then there exists a Minkowski basis for \(\mathcal{L}\)._ Proof.: This follows from a greedy algorithm. Given \(K\in\mathcal{S}\), \(\ell\geq 2\), and \(Z>0\), we define \(S_{\ell}(K,Z)\) to equal \[\{\beta\in K\colon H_{K}(\beta)\leq Z\text{, there exist distinct }\mathfrak{P}_{1},\mathfrak{P}_{2}\in\mathcal{P}_{K}^{(1)}\text{ with }\beta\mathcal{O}_{K}=(\mathfrak{P}_{1}\mathfrak{P}_{2}^{-1})^{\ell}\}. \tag{3.1}\] Our next result, a crucial bound for \(|\mathrm{Cl}_{K}[\ell]|\) in terms of \(S_{\ell}(K,Z)\) and \(\pi_{K}^{(1)}(Z)\), gives a variant of [20, Proposition 2.1] for all extensions \(K/k\) using ideas from [18, Section 2]. **Theorem 3.3**.: _Let \(\varepsilon>0\), \(d\geq 2\), \(\ell\geq 2\), and \(Z>0\). There exists an effectively computable constant \(c_{1}=c_{1}(d,k,\ell)>0\) such that if \(K\in\mathcal{S}\) and \(\pi_{K}^{(1)}(Z)>0\), then_ \[|\mathrm{Cl}_{K}[\ell]|\ll_{\varepsilon}D_{K}^{\frac{1}{2}+\varepsilon}/\pi_{K}^{(1)}(Z)+D_{K}^{\frac{1}{2}+\varepsilon}\cdot|S_{\ell}(K,c_{1}Z^{\ell})|/\pi_{K}^{(1)}(Z)^{2}.\] Proof.: We write \(R_{K}\) for the regulator of \(K\) and define \(A:=\mathrm{Cl}_{K}/\mathrm{Cl}_{K}[\ell]\). The bound \[|\mathrm{Cl}_{K}[\ell]|\cdot|A|\cdot R_{K}=|\mathrm{Cl}_{K}|\cdot R_{K}\ll_{\varepsilon}D_{K}^{\frac{1}{2}+\varepsilon}\] follows from the bound \(\mathrm{Res}_{s=1}\zeta_{K}(s)\ll(\log D_{K})^{[K:\mathbb{Q}]-1}\) [27, (2)] and the analytic class number formula. It remains to show that there exists an effectively computable constant \(c_{1}=c_{1}(d,k,\ell)>0\) such that \[5|A|\cdot R_{K}\geq\left(\frac{1}{\pi_{K}^{(1)}(Z)}+\frac{|S_{\ell}(K,c_{1}Z^{\ell})|}{\pi_{K}^{(1)}(Z)^{2}}\right)^{-1}. \tag{3.2}\] To prove (3.2), we let \(v_{1},\ldots,v_{r}\) be the real places of \(K\) and \(v_{r+1},\ldots,v_{r+s}\) be the complex places of \(K\). Let \(d_{i}=1\) if \(1\leq i\leq r\) and \(d_{i}=2\) if \(r+1\leq i\leq r+s\). We have the classical homomorphism \[\varphi\colon K^{\times}\to\mathbb{R}^{r+s},\qquad\varphi(\alpha)=(d_{i}\log|\alpha|_{v_{i}})_{i=1}^{r+s}.\] Consider the subspace of \(\mathbb{R}^{r+s}\) given by \[V_{0}=\Big{\{}(x_{1},\ldots,x_{r+s})\in\mathbb{R}^{r+s}\colon\sum_{i=1}^{r+s}d_{i}x_{i}=0\Big{\}}. \tag{3.3}\] By Dirichlet's unit theorem, \(\mathcal{L}:=\varphi(\mathcal{O}_{K}^{\times})\) is a full rank lattice inside \(V_{0}\). By Lemma 3.2, there exists a Minkowski basis \((\mathbf{u}_{1},\ldots,\mathbf{u}_{r+s-1})\) of \(\mathcal{L}\). If we define \(\mathbf{d}:=(d_{i})_{1\leq i\leq r+s}\), then \((\mathbf{u}_{1},\ldots,\mathbf{u}_{r+s-1},\mathbf{d})\) forms a full rank lattice inside \(\mathbb{R}^{r+s}\) by (3.3). Let \[\mathcal{F}=\Big{\{}\sum_{i=1}^{r+s-1}x_{i}\mathbf{u}_{i}:0\leq x_{i}<1\Big{\}}\] be a fundamental domain of \(\mathcal{L}\). For all \(\alpha\in K^{\times}\), there exists \(g_{\alpha}\in\mathcal{O}_{K}^{\times}\) (unique up to multiplication by a root of unity), \(\mathbf{v}_{\alpha}\in\mathcal{F}\), and \(y_{\alpha}\in\mathbb{R}\) such that \(\varphi(g_{\alpha}\alpha)=\mathbf{v}_{\alpha}+y_{\alpha}\cdot\mathbf{d}\). We claim that \[\|\mathbf{u}_{i}\|\gg 1.
\tag{3.4}\] Indeed, let \(u_{i}\in\varphi^{-1}(\mathbf{u}_{i})\), so that \(u_{i}\in\mathcal{O}_{K}^{\times}\) and \(\mathbf{u}_{i}=(d_{j}\log|u_{i}|_{v_{j}})_{1\leq j\leq r+s}\); then the inequality of arithmetic and geometric means yields \[\|\mathbf{u}_{i}\|=\Big{(}\sum_{j=1}^{r+s}(d_{j}\log|u_{i}|_{v_{j}})^{2}\Big{)}^{1/2}\geq\frac{\sum_{j=1}^{r+s}d_{j}\max(0,\log|u_{i}|_{v_{j}})}{\sqrt{r+s}}\geq\frac{\sum_{j=1}^{r+s}d_{j}\max(0,\log|u_{i}|_{v_{j}})}{\sqrt{d[k:\mathbb{Q}]}}.\] Since \(u_{i}\in\mathcal{O}_{K}^{\times}\), the right hand side of the above inequality equals \((\log H_{K}(u_{i}))/\sqrt{d[k:\mathbb{Q}]}\). Therefore, the claim (3.4) follows from Northcott's theorem. We will apply Minkowski's second theorem to \(V_{0}\). Let \(\pi:V_{0}\to\mathbb{R}^{r+s-1}\) be the isomorphism that drops the last coordinate, and let \(\iota\) be the compositional inverse of \(\pi\). We endow \(V_{0}\) with the pushforward measure, via \(\iota\), of the Lebesgue measure on \(\mathbb{R}^{r+s-1}\). Since \((\mathbf{u}_{1},\ldots,\mathbf{u}_{r+s-1})\) is a Minkowski basis, Minkowski's second theorem applied to \(V_{0}\) yields \[\prod_{i=1}^{r+s-1}\|\mathbf{u}_{i}\|\ll\operatorname{Vol}(\mathcal{F}). \tag{3.5}\] Together, (3.4) and (3.5) yield \(\|\mathbf{u}_{i}\|\ll\operatorname{Vol}(\mathcal{F})\). Since the volume of \(\pi(\mathcal{F})\) equals \(R_{K}\) by definition, we obtain \(\|\mathbf{u}_{i}\|\ll R_{K}\). Since \(R_{K}\geq 0.205\) by [19], we have that \(\lceil R_{K}\rceil<5R_{K}\). Thus, there exists an effectively computable constant \(c_{2}=c_{2}(d,k,\ell)>0\) and an integer \(n\in[1,5R_{K})\) such that \(\mathcal{F}\) partitions into \(n\) cells \(\mathcal{F}_{1},\ldots,\mathcal{F}_{n}\), each with diameter at most \(c_{2}\). Fix a full set of integral representatives \(\mathfrak{b}_{1},\ldots,\mathfrak{b}_{|\mathrm{Cl}_{K}|}\) of the ideal class group \(\mathrm{Cl}_{K}\). For each \(\mathfrak{P}\in\mathcal{P}_{K}^{(1)}\), there exists a unique integer \(j_{\mathfrak{P}}\in\{1,\ldots,|\mathrm{Cl}_{K}|\}\) such that \(\mathfrak{b}_{j_{\mathfrak{P}}}\mathfrak{P}^{\ell}\) is a principal ideal. Given a prime ideal \(\mathfrak{P}\in\mathcal{P}_{K}^{(1)}\), let \(a=a_{\mathfrak{P}}\) be the image of \(\mathfrak{P}\) in \(A\) and \(i\) be the least integer such that a generator \(\alpha\) of \(\mathfrak{b}_{j_{\mathfrak{P}}}\mathfrak{P}^{\ell}\) satisfies \(\mathbf{v}_{\alpha}\in\mathcal{F}_{i}\). We define the map \[f\colon\{\mathfrak{P}\in\mathcal{P}_{K}^{(1)}\colon\mathrm{N}_{K/\mathbb{Q}}\mathfrak{P}\leq Z\}\to A\times\{1,\ldots,n\},\qquad f(\mathfrak{P})=(a,i).\] The Cauchy-Schwarz inequality yields \[\pi_{K}^{(1)}(Z)=\sum_{(a,i)}|f^{-1}(a,i)|\leq\Big{(}\sum_{(a,i)\colon\,f^{-1}(a,i)\neq\varnothing}1\Big{)}^{1/2}\Big{(}\sum_{(a,i)\colon\,f^{-1}(a,i)\neq\varnothing}|f^{-1}(a,i)|^{2}\Big{)}^{1/2},\] hence \[5|A|\cdot R_{K}\geq|A|\cdot\lceil R_{K}\rceil\geq\sum_{(a,i)\colon\,f^{-1}(a,i)\neq\varnothing}1\geq\frac{\pi_{K}^{(1)}(Z)^{2}}{\sum_{(a,i)\colon\,f^{-1}(a,i)\neq\varnothing}|f^{-1}(a,i)|^{2}}.\] It remains to produce a constant \(c_{1}=c_{1}(d,k,\ell)>0\) such that \[\sum_{(a,i)\colon\,f^{-1}(a,i)\neq\varnothing}|f^{-1}(a,i)|^{2}\leq\pi_{K}^{(1)}(Z)+|S_{\ell}(K,c_{1}Z^{\ell})|.
\tag{3.6}\] In the Cartesian product \(f^{-1}(a,i)\times f^{-1}(a,i)\), whose cardinality is \(|f^{-1}(a,i)|^{2}\), there are the diagonal elements \((\mathfrak{P},\mathfrak{P})\) and off-diagonal elements \((\mathfrak{P}_{1},\mathfrak{P}_{2})\) with \(\mathfrak{P}_{1}\neq\mathfrak{P}_{2}\). We treat these contributions differently. The diagonal elements contribute \(\pi_{K}^{(1)}(Z)\). For the off-diagonal contribution, take distinct \(\mathfrak{P}_{1},\mathfrak{P}_{2}\in\mathcal{P}_{K}^{(1)}\) lying in the same fiber \(f^{-1}(a,i)\). Since \(\mathfrak{P}_{1}\) and \(\mathfrak{P}_{2}\) have the same image in \(A\), the class of \(\mathfrak{P}_{1}\mathfrak{P}_{2}^{-1}\) is \(\ell\)-torsion, and thus \(\mathfrak{P}_{1}^{\ell}\) and \(\mathfrak{P}_{2}^{\ell}\) have the same image in \(\mathrm{Cl}_{K}\). Therefore, there exists one class \(\mathfrak{b}_{j}\) and elements \(\alpha_{1},\alpha_{2}\) such that \(\mathfrak{b}_{j}\mathfrak{P}_{1}^{\ell}=(\alpha_{1})\), \(\mathfrak{b}_{j}\mathfrak{P}_{2}^{\ell}=(\alpha_{2})\), and \(\varphi(g_{\alpha_{1}}\alpha_{1}),\varphi(g_{\alpha_{2}}\alpha_{2})\in\mathcal{F}_{i}\). By interchanging \(\alpha_{1}\) and \(\alpha_{2}\) if necessary, we may assume that \(y_{\alpha_{1}}\leq y_{\alpha_{2}}\). Since \(\mathbf{v}_{\alpha_{1}},\mathbf{v}_{\alpha_{2}}\in\mathcal{F}_{i}\), there exists a constant \(c_{3}=c_{3}(d,k)>0\) such that if \(v\mid\infty\), then \[\log|g_{\alpha_{1}}\alpha_{1}|_{v}-\log|g_{\alpha_{2}}\alpha_{2}|_{v}\leq c_{3}.\] In other words, we have that \(|\frac{g_{\alpha_{1}}\alpha_{1}}{g_{\alpha_{2}}\alpha_{2}}|_{v}\leq e^{c_{3}+y_{\alpha_{1}}-y_{\alpha_{2}}}\leq e^{c_{3}}\). Therefore, as desired, there exists a constant \(c_{1}=c_{1}(d,k,\ell)>0\) such that \(H_{K}(\frac{g_{\alpha_{1}}\alpha_{1}}{g_{\alpha_{2}}\alpha_{2}})\leq c_{1}N(\mathfrak{P}_{2})^{\ell}\leq c_{1}Z^{\ell}\). Theorem 1.1 follows from showing that \(S_{\ell}(K,Z)\) is small and \(\pi_{K}^{(1)}(Z)\) is large on average over \(K\in\mathcal{S}\). These are proved in Lemma 4.1 and Corollary 5.2, respectively. ## 4. Estimating \(S_{\ell}(K,Z)\) on average Let \(\mathcal{S}\subseteq\mathscr{F}_{k}^{d}\). For each \(K\in\mathcal{S}\), consider \(S_{\ell}(K,Z)\) from (3.1). With small modifications to the case of \(k=\mathbb{Q}\) considered in [18], the ideas in the proof of [18, Lemma 4.2] lead to \[\sum_{K\in\mathcal{S}}|S_{\ell}(K,Z)|\ll_{\varepsilon}Z^{d[k:\mathbb{Q}]-1+2/\ell+\varepsilon}. \tag{4.1}\] The main result of this section, Lemma 4.1, refines (4.1). **Lemma 4.1**.: _If \(Z\geq 2\) and \(\varepsilon>0\), then \(\sum_{K\in\mathcal{S}}|S_{\ell}(K,Z)|\ll_{\varepsilon}Z^{d-1+2/\ell+\varepsilon}\)._ ### Prerequisites For this subsection, fix a number field \(F\) and a full set \(\mathfrak{C}_{1},\dots,\mathfrak{C}_{|\mathrm{Cl}_{F}|}\in\mathcal{I}_{F}\) of integral representatives for the ideal class group \(\mathrm{Cl}_{F}\). **Lemma 4.2**.: _If \(\mathfrak{a}\in\mathcal{I}_{F}\), then there exists \(\alpha\in F^{\times}\) such that \(H_{F}(\alpha)\ll\mathrm{N}_{F/\mathbb{Q}}\mathfrak{a}\) and \(\mathfrak{a}=(\alpha)\)._ Proof.: This is a special case of [16, Proposition 4.3.12]. **Lemma 4.3**.: _There exists a constant \(c_{4}=c_{4}(F)>0\) such that for all \(\alpha\in F^{\times}\), there exist \(t\in\mathcal{O}_{F}\) and \(t^{\prime}\in\mathcal{O}_{F}-\{0\}\) such that \(\alpha=t/t^{\prime}\) and \(H_{F}(t),H_{F}(t^{\prime})\leq c_{4}H_{F}(\alpha)\).
Also, there exists \(i\in\{1,\dots,|\mathrm{Cl}_{F}|\}\) such that \(\gcd((t),(t^{\prime}))=\mathfrak{C}_{i}\)._ Proof.: Write \((\alpha)=\prod_{\mathfrak{p}\in\mathcal{P}_{F}}\mathfrak{p}^{\mathfrak{e}_{ \mathfrak{p}}}\), and define \(I=\prod_{e_{\mathfrak{p}}<0}\mathfrak{p}^{-e_{\mathfrak{p}}}\). Let \(\mathfrak{C}_{i}\) be the unique representative such that \(I^{-1}\sim\mathfrak{C}_{i}\) in \(\operatorname{Cl}_{F}\). Therefore, \(I\mathfrak{C}_{i}\) is a principal ideal of \(\mathcal{O}_{F}\). By Lemma 4.2, there exists a generator \(t^{\prime}\) of \(I\mathfrak{C}_{i}\) satisfying \(H_{F}(t^{\prime})\ll\operatorname{N}_{F/\mathbb{Q}}(I\mathfrak{C}_{i})\ll \operatorname{N}_{F/\mathbb{Q}}(I)\). Finally, we choose \(t=\alpha t^{\prime}\). Then we have \(t\in\mathcal{O}_{F}\) and \(\gcd((t),(t^{\prime}))=\mathfrak{C}_{i}\) by construction. Furthermore, since \(t\) and \(t^{\prime}\) are integral, we see that \[H_{F}(t)=\prod_{v|\infty}\max(1,|\alpha t^{\prime}|_{v}^{d_{v}} )\leq\prod_{v|\infty}\max(1,|\alpha|_{v}^{d_{v}})\prod_{v|\infty}\max(1,|t^{ \prime}|_{v}^{d_{v}})\\ =H_{F}(\alpha)\frac{\prod_{v|\infty}\max(1,|t^{\prime}|_{v}^{d_{ v}})}{\prod_{v\text{ finite}}\max(1,|\alpha|_{v}^{d_{v}})}=H_{F}(\alpha)\frac{H_{F}(t^{\prime})}{ \operatorname{N}_{F/\mathbb{Q}}(I)}\ll H_{F}(\alpha).\qed\] **Lemma 4.4**.: _If \(\varepsilon>0\), then \(|\{u\in\mathcal{O}_{F}^{\times}\colon\text{ if }v\mid\infty,\text{ then }|u|_{v}\leq X\}|\ll_{ \varepsilon}X^{\varepsilon}\)._ Proof.: This follows from the main theorem of [15]. **Lemma 4.5**.: _If \(\alpha\in F^{\times}\), \(H_{F}(\alpha)\leq X\), and \(\varepsilon>0\), then \(|\{u\in\mathcal{O}_{F}^{\times}:H_{F}(\alpha u)\leq X\}|\ll_{\varepsilon}X^{\varepsilon}\)._ Proof.: We have that \(|\alpha|_{v}^{d_{v}}\geq\frac{1}{X}\) for all \(v\mid\infty\), since otherwise \(H_{F}(\alpha)=H_{F}(\alpha^{-1})>X\). Therefore the condition \(H_{F}(\alpha u)\leq X\) forces \(|u|_{v}^{d_{v}}\leq X^{2}\) for all \(v\mid\infty\). Since \(d_{v}\geq 1\), the desired result follows from Lemma 4.4. **Lemma 4.6**.: _If \(\varepsilon>0\), then \(|\{\alpha\in\mathcal{O}_{F}:H_{F}(\alpha)\leq X\}|\ll_{\varepsilon}X^{1+\varepsilon}\)._ Proof.: This follows from work of Widmer [35, Theorem 1.1 with \(e=n=1\)]. Note that Widmer works with the absolute height, while we work with the relative height. ### The results Recall that \(K\in\mathcal{S}\), as in Section 3. We estimate the average of \(|S_{\ell}(K,Z)|\) using the following lemma. Given \(\alpha\in K\), we write \(f_{\alpha}(X)\in k[X]\) for the unique monic minimal polynomial of \(\alpha\) over \(k\). In this subsection, we prove variants of [18, Lemmata 4.1 and 4.2] for general extensions \(K/k\). **Lemma 4.7**.: _Let \(K\in\mathcal{S}\) and \(\alpha\in K\). If there exist distinct \(\mathfrak{P}_{1},\mathfrak{P}_{2}\in\mathcal{P}_{K}^{(1)}\) such that \(\alpha\mathcal{O}_{K}=(\mathfrak{P}_{1}\mathfrak{P}_{2}^{-1})^{\ell}\), then \(K=k(\alpha)\), and there exist unique \(a_{1},\dots,a_{d}\in k\) such that_ \[f_{\alpha}(X)=X^{d}+a_{1}X^{d-1}+\dots+a_{d}. \tag{4.2}\] _The elements \(a_{i}\) have the following properties._ * _If_ \(1\leq i\leq d\)_, then_ \(H_{k}(a_{i})\ll H_{K}(\alpha)\)_._ * _The fractional ideal_ \(a_{d}\mathcal{O}_{k}\) _is an_ \(\ell\)_-th power._ * _If_ \(v\in\Omega_{k}\) _is a finite place such that_ \(v(a_{i})<0\)_, then_ \(v\) _lies below_ \(\mathfrak{P}_{2}\)_._ Proof.: First, we assert that \(\alpha\) has degree \(d\) over \(k\), which immediately implies that \(K=k(\alpha)\). 
Indeed, if the claim is false, then there exists a proper subfield \(K^{\prime}\subseteq K\) such that \(\alpha\in K^{\prime}\). Writing \(\mathfrak{q}\) for the prime ideal of \(K^{\prime}\) below \(\mathfrak{P}_{1}\), we see that \(\mathfrak{q}\) divides \(\alpha\mathcal{O}_{K^{\prime}}\). But since \(e(\mathfrak{P}_{1})=f(\mathfrak{P}_{1})=1\), there exists another prime ideal \(\mathfrak{P}\) of \(K\) above \(\mathfrak{q}\), which must then divide \(\alpha\mathcal{O}_{K}\) as well. This is a contradiction; therefore, our assertion is true. Recall that \(f_{\alpha}(X)\in k[X]\) is the minimal polynomial of \(\alpha\) over \(k\). Since \(\alpha\) has degree \(d\) over \(k\), there exist unique \(a_{1},\dots,a_{d}\in k\) such that \(f_{\alpha}(X)\) is of the form (4.2). Writing \(\alpha=\alpha^{(1)},\dots,\alpha^{(d)}\in\overline{\mathbb{Q}}\) for the conjugates of \(\alpha\) over \(k\), we have \[f_{\alpha}(X)=\prod_{i=1}^{d}(X-\alpha^{(i)}).\] Vieta's formulas relating the numbers \(\alpha^{(1)},\ldots,\alpha^{(d)}\) to the coefficients \(a_{1},\ldots,a_{d}\) imply that \(H_{k}(a_{i})=H_{K}(a_{i})^{1/d}\ll H_{K}(\alpha)\). Furthermore, since \(a_{d}=\prod_{i=1}^{d}-\alpha^{(i)}=(-1)^{d}\mathrm{N}_{K/k}(\alpha)\), we see that \(a_{d}\mathcal{O}_{k}\) is the \(\ell\)-th power of an ideal. Finally, suppose that \(v(a_{i})<0\). Using Vieta's formulas once more, we deduce that there exists some place \(w\) of \(\widetilde{K}\) above \(v\) and some \(j\) such that \(w(\alpha^{(j)})<0\). Therefore there exists some place \(\widetilde{w}\) of \(\widetilde{K}\) above \(v\) such that \(\widetilde{w}(\alpha)<0\). This implies that \(\widetilde{w}\) lies above \(\mathfrak{P}_{2}\), finishing the proof. Proof of Lemma 4.1.: Write \(T\) for the set of all \(\alpha\in\overline{\mathbb{Q}}\) such that there exist distinct prime ideals \(\mathfrak{P}_{1},\mathfrak{P}_{2}\in\mathcal{P}_{K}^{(1)}\) such that \(k(\alpha)\in\mathcal{S}\) and \(\alpha\mathcal{O}_{k(\alpha)}=(\mathfrak{P}_{1}\mathfrak{P}_{2}^{-1})^{\ell}\). We define \[N_{H}(Z):=\{\alpha\in T:H_{k(\alpha)}(\alpha)\leq Z\}.\] By Lemma 4.7, the map \(N_{H}(Z)\to\bigsqcup_{K\in\mathcal{S}}S_{\ell}(K,Z)\) that sends \(\alpha\) to \((k(\alpha),\alpha)\) is surjective. If \(\alpha\in T\), then Lemma 4.7 shows that its minimal polynomial \(f_{\alpha}\) is of the shape \(f_{\alpha}(X)=X^{d}+a_{1}X^{d-1}+\cdots+a_{d}\) with \(H_{k}(a_{i})\ll H_{k(\alpha)}(\alpha)\), \(a_{d}\mathcal{O}_{k}\) an \(\ell\)-th power and \(v(a_{i})<0\) implies that \(\mathfrak{P}_{2}\) lies above \(v\). In order to estimate the number of such polynomials \(f_{\alpha}\), we use Lemma 4.3 (with \(F=k\)) to write each \(a_{i}\) as \[a_{i}=t_{i}/t_{i}^{\prime},\quad H_{k}(t_{i}),H_{k}(t_{i}^{\prime})\ll H_{k}(a _{i})\ll H_{k(\alpha)}(\alpha)\leq Z,\quad\gcd((t_{i}),(t_{i}^{\prime}))= \mathfrak{C}_{i}.\] By Lemma 4.6 (with \(F=k\)), there are at most \(\ll_{\varepsilon}Z^{d-1+\varepsilon}\) possibilities for \(t_{1},\ldots,t_{d-1}\). Furthermore, each of \(t_{1}^{\prime},\ldots,t_{d-1}^{\prime}\) divides \(\mathrm{N}_{k(\alpha)/k}(\mathfrak{P}_{2})^{\ell}\prod_{i=1}^{h}\mathfrak{C}_{i}\). Note that \(\mathrm{N}_{k(\alpha)/\mathbb{Q}}(\mathfrak{P}_{2})^{\ell}\leq H_{k(\alpha)}( \alpha)\leq Z\), thus \(\mathrm{N}_{k(\alpha)/\mathbb{Q}}(\mathfrak{P}_{2})\leq Z^{1/\ell}\). The number of ideals of \(k\) with norm up to \(Z^{1/\ell}\) is bounded by \(\ll Z^{1/\ell}\). 
Since \(t_{1}^{\prime},\ldots,t_{d-1}^{\prime}\) all divide \(\mathrm{N}_{k(\alpha)/k}(\mathfrak{P}_{2})^{\ell}\prod_{i=1}^{h}\mathfrak{C}_{i}\), Lemma 4.5 gives \(\ll_{\varepsilon}Z^{1/\ell+\varepsilon}\) possibilities for \(t_{1}^{\prime},\ldots,t_{d-1}^{\prime}\). Arguing similarly for \(a_{d}\) produces another \(\ll_{\varepsilon}Z^{1/\ell+\varepsilon}\) possibilities, completing the proof of the lemma. ## 5. Proof of Theorem 1.1 Throughout this section, we fix integers \(n,\ell\geq 2\), a prime \(p\), and a number field \(k\). Recall (1.1) and (2.2). Let the ordered pair \((\mathscr{F}_{k},d)\) equal \((\mathscr{F}_{k}^{n,S_{n}},n)\) or \((\mathscr{F}_{k}^{p},p)\), and let \(\mathcal{S}\subseteq\mathscr{F}_{k}\). We may assume that \(\alpha_{\mathcal{S}}>0\); otherwise, Theorem 1.1 is trivial. Let \(Q\geq 1\), let \(0<\varepsilon<\min\{\frac{1}{2},\alpha_{\mathcal{S}}\}\) be an arbitrarily small quantity, and define \[R=\ell(d-1)+1.\] It remains to bound \(\pi_{K}^{(1)}(Z)\) in Theorem 3.3 from below. To obtain such a lower bound on average, we use the work of Lemke Oliver, Thorner, and Zaman [25]. **Theorem 5.1**.: _Recall (2.2). There exists an effectively computable constant \(c_{5}=c_{5}(k,n)>0\) such that for all \(K\in\mathscr{F}_{k}^{n,S_{n}}(Q)\) with \(O_{\varepsilon}(Q^{\varepsilon})\) exceptions, there holds_ \[\pi_{K}(x)\geq 2c_{5}x/\log x,\qquad x\geq(\log D_{K})^{42(n!)^{2}/\varepsilon}.\] _Also, there exists an effectively computable constant \(c_{6}=c_{6}(k,p)>0\) such that if \(G\) is a transitive subgroup of \(S_{p}\), then for all \(K\in\mathscr{F}_{k}^{p,G}(Q)\) with \(O_{\varepsilon}(Q^{\varepsilon})\) exceptions, there holds_ \[\pi_{K}(x)\geq 2c_{6}x/\log x,\qquad x\geq(\log D_{K})^{42(p!)^{2}/\varepsilon}.\] Proof.: This is established during the proof of [25, Theorem 2.4]. **Corollary 5.2**.: _There exists an effectively computable constant \(c_{7}=c_{7}(d,k)>0\) and a subset \(\mathscr{E}_{k}(Q)\subseteq\mathscr{F}_{k}(Q)\) satisfying \(|\mathscr{E}_{k}(Q)|\ll_{\varepsilon}Q^{\varepsilon}\) such that if \(K\in\mathcal{S}(Q)-\mathscr{E}_{k}(Q)\), then_ \[\pi_{K}^{(1)}(x)\geq c_{7}x/\log x,\qquad x\geq(\log D_{K})^{42(d!)^{2}/\varepsilon}.\] Proof.: First, let \((\mathscr{F}_{k},d)=(\mathscr{F}_{k}^{n,S_{n}},n)\). If \(\mathfrak{P}\in\mathcal{P}_{K}-\mathcal{P}_{K}^{(1)}\), then the relative degree (over \(k\)) of \(\mathfrak{P}\) is at least \(2\). Consequently, the absolute degree (over \(\mathbb{Q}\)) of \(\mathfrak{P}\) is at least \(2\). There are \(O(\sqrt{x})\) such prime ideals \(\mathfrak{P}\) such that \(\mathrm{N}_{K/\mathbb{Q}}\mathfrak{P}\leq x\). The corollary now follows from Theorem 5.1. If \((\mathscr{F}_{k},d)=(\mathscr{F}_{k}^{p},p)\), then we apply this reasoning to \(\mathscr{F}_{k}^{p,G}\) for each transitive subgroup of \(S_{p}\). Since there are \(O_{p}(1)\) such subgroups, the corollary follows. In a manner inspired by [20, Section 6], we use Corollary 5.2 to bound the frequency with which \(|\mathrm{Cl}_{K}[\ell]|\) can be large. Define \(A_{\ell}(Q;H):=\{K\in\mathcal{S}(Q)\colon|\mathrm{Cl}_{K}[\ell]|\geq H\}\). **Proposition 5.3**.: _If \(H,Q\geq 1\), then \(|A_{\ell}(Q;H)|\ll_{\varepsilon}\min\{|\mathcal{S}(Q)|,Q^{R/2+\varepsilon}/H^{R}\}\)._ Proof.: The bound \(|A_{\ell}(Q;H)|\leq|\mathcal{S}(Q)|\) is trivial. For the remaining part, it follows from (1.5) that \(|A_{\ell}(Q;H)|=0\) unless \(H\ll Q^{1/2}(\log Q)^{d[k:\mathbb{Q}]-1}\).
We begin with the estimate \[|A_{\ell}(Q;H)|\leq\frac{1}{H}\sum_{K\in A_{\ell}(Q;H)}|\mathrm{Cl}_{K}[\ell]|.\] Corollary 5.2 and (1.5) imply that \[\frac{1}{H}\sum_{K\in A_{\ell}(Q;H)\cap\mathscr{E}_{k}(Q)}|\mathrm{Cl}_{K}[\ell]|\ll\frac{Q^{\frac{1}{2}}(\log Q)^{d[k:\mathbb{Q}]-1}}{H}.\] For the other \(K\), it suffices to assume that \(Q\) is sufficiently large in terms of \(d\), \(k\), \(\ell\), and \(\varepsilon\). In particular, there exists a constant \(c_{8}=c_{8}(d,k,\ell,\varepsilon)\geq 1\) such that if \(Q\geq c_{8}\), then \[Z:=Q^{\frac{1}{2}+\frac{\varepsilon}{R}}/H>(\log Q)^{42(d!)^{2}/\varepsilon}.\] We now apply Theorem 3.3, Lemma 4.1, and Corollary 5.2 with this choice of \(Z\): \[\frac{1}{H}\sum_{K\in A_{\ell}(Q;H)-\mathscr{E}_{k}(Q)}|\mathrm{Cl}_{K}[\ell]| \ll_{\varepsilon}\frac{1}{H}\sum_{K\in A_{\ell}(Q;H)-\mathscr{E}_{k}(Q)}\Big{(}\frac{D_{K}^{\frac{1}{2}+\frac{\varepsilon}{4R}}}{\pi_{K}^{(1)}(Z)}+\frac{D_{K}^{\frac{1}{2}+\frac{\varepsilon}{4R}}|S_{\ell}(K,c_{1}Z^{\ell})|}{\pi_{K}^{(1)}(Z)^{2}}\Big{)}\] \[\ll_{\varepsilon}\frac{Q^{\frac{1}{2}+\frac{\varepsilon}{4R}}(\log Z)^{2}}{H}\sum_{K\in A_{\ell}(Q;H)-\mathscr{E}_{k}(Q)}\Big{(}\frac{1}{Z}+\frac{|S_{\ell}(K,c_{1}Z^{\ell})|}{Z^{2}}\Big{)}\] \[\ll_{\varepsilon}\frac{Q^{\frac{1}{2}+\frac{\varepsilon}{2R}}}{H}\Big{(}\frac{|A_{\ell}(Q;H)|}{Z}+Z^{R-1}\Big{)}.\] We conclude that there exists a constant \(c_{9}=c_{9}(d,k,\ell,\varepsilon)\geq 1\) such that \[|A_{\ell}(Q;H)|\leq c_{9}\frac{Q^{\frac{1}{2}+\frac{\varepsilon}{2R}}}{H}\Big{(}\frac{|A_{\ell}(Q;H)|}{Z}+Z^{R-1}\Big{)}.\] We may assume that \(c_{8}\geq(c_{9}+1)^{2R/\varepsilon}\). Now, by our choice of \(Z\), the proposition follows. Proof of Theorem 1.1.: Define \(B_{\ell}(Q;H):=\{K\in\mathcal{S}(Q)\colon H\leq|\mathrm{Cl}_{K}[\ell]|<eH\}\). By (1.5) and Proposition 5.3, there exists a constant \(c_{10}=c_{10}(d,k)>0\) such that if \(J=\log(c_{10}Q^{\frac{1}{2}}(\log Q)^{d[k:\mathbb{Q}]-1})\), then \[\sum_{K\in\mathcal{S}(Q)}|\mathrm{Cl}_{K}[\ell]|^{r}\leq\sum_{0\leq j\leq J}|B_{\ell}(Q;e^{j})|\cdot(e^{j+1})^{r}\ll_{r,\varepsilon}\sum_{0\leq j\leq J}\min\Big{(}\frac{Q^{\frac{R}{2}+\frac{\varepsilon}{2r}}}{e^{jR}},|\mathcal{S}(Q)|\Big{)}e^{jr}. \tag{5.1}\] Define \[j_{0}=\frac{1}{R}\log\Big{(}\frac{Q^{\frac{R}{2}+\frac{\varepsilon}{2r}}}{|\mathcal{S}(Q)|}\Big{)}.\] By (1.13) and our range of \(\varepsilon\), there exists a constant \(c_{11}=c_{11}(\alpha_{\mathcal{S}},d,k,\ell,\varepsilon)>0\) such that if \(Q\geq c_{11}\), then \(1<j_{0}<J\). Therefore, (5.1) is \[\ll_{r,\varepsilon}|\mathcal{S}(Q)|\sum_{0\leq j\leq j_{0}}e^{jr}+Q^{\frac{R}{2}+\frac{\varepsilon}{2r}}\sum_{j_{0}<j\leq J}e^{j(r-R)}\ll_{r,\varepsilon}|\mathcal{S}(Q)|e^{j_{0}r}+Q^{\frac{R}{2}+\frac{\varepsilon}{2r}}\sum_{j_{0}<j\leq J}e^{j(r-R)}.\] Theorem 1.1 now follows from the bounds \(|\mathcal{S}(Q)|e^{j_{0}r}\ll_{r,\varepsilon}|\mathcal{S}(Q)|^{1-\frac{r}{R}}Q^{\frac{r}{2}+\frac{\varepsilon}{2R}}\) and \[Q^{\frac{R}{2}+\frac{\varepsilon}{2r}}\sum_{j_{0}<j\leq J}e^{j(r-R)}\ll_{r,\varepsilon}\begin{cases}|\mathcal{S}(Q)|^{1-\frac{r}{R}}Q^{\frac{r}{2}+\varepsilon}&\text{if }r<R,\\ Q^{\frac{r}{2}+\varepsilon}&\text{if }r\geq R.\end{cases}\qed\]
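The dyadic decomposition (5.1) driving this proof can also be mimicked numerically; a minimal sketch, assuming \(|\mathcal{S}(Q)|\asymp Q^{\beta}\) and dropping \(\varepsilon\)-losses and absolute constants:

```python
import math

def dyadic_moment_bound(Q, d, ell, r, beta):
    """Log_Q of sum_j min(Q^(R/2) / e^(jR), Q^beta) * e^(jr), the dyadic
    sum mirroring (5.1); beta plays the role of log_Q |S(Q)|."""
    R = ell * (d - 1) + 1
    S = Q ** beta
    J = int(0.5 * math.log(Q)) + 1        # |Cl_K[ell]| << Q^(1/2 + eps)
    total = sum(min(Q ** (R / 2) / math.exp(j * R), S) * math.exp(j * r)
                for j in range(J + 1))
    return math.log(total) / math.log(Q)

# d = 2, ell = 3, r = 1, beta = 1: Theorem 1.1 predicts exponent 5/4,
# matching (1.14); the crossover shell sits at e^(j0) = Q^(1/4).
print(f"{dyadic_moment_bound(1e30, 2, 3, 1, 1):.2f}")   # ~1.25
```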
2306.17692
Parallel flows as a key component to interpret Super-X divertor experiments
The Super-X Divertor (SXD) is an alternative divertor configuration leveraging total flux expansion at the Outer Strike Point (OSP). While the extended 2-Point Model (2PM) predicts facilitated detachment access and control in the SXD configuration, these attractive features are not always retrieved experimentally. These discrepancies are at least partially explained by the effect of parallel flows which, when self-consistently included in the 2PM, reveal the role of total flux expansion on the pressure balance and weaken the total flux expansion effect on detachment access and control, compared to the original predictions. This new model can partially explain the discrepancies between the 2PM and experiments performed on tokamak à configuration variable (TCV), in ohmic L-mode scenarios, which are particularly apparent when scanning the OSP major radius Rt. In core density ramps in lower Single-Null (SN) configuration, the impact of Rt on the CIII emission front movement in the divertor outer leg - used as a proxy for the plasma temperature in the divertor - is substantially weaker than 2PM predictions. Furthermore, in OSP radial sweeps in lower and upper SN configurations, in ohmic L-mode scenarios with a constant core density, the peak parallel particle flux density at the OSP is almost independent of Rt, while the 2PM predicts a linear dependence. Finally, analytical and numerical modeling of parallel flows in the divertor is presented. It is shown that an increase in total flux expansion can favour supersonic flows at the OSP. Parallel flows are also shown to be relevant by analysing SOLPS-ITER simulations of TCV.
M. Carpita, O. Février, H. Reimerdes, C. Theiler, B. P. Duval, C. Colandrea, G. Durr-Legoupil-Nicoud, D. Galassi, S. Gorno, E. Huett, J. Loizu, L. Martinelli, A. Perek, L. Simons, G. Sun, E. Tonello, C. Wüthrich, the TCV team
2023-06-30T14:19:39Z
http://arxiv.org/abs/2306.17692v3
# Reduction in benefits of total flux expansion on divertor detachment due to parallel flows ###### Abstract The Super-X divertor (SXD) is an alternative divertor configuration leveraging total flux expansion at the outer strike point (OSP). According to the _extended 2-point model_ (2PM), the key attractive features of the SXD are facilitated detachment access and control, but this is not always retrieved experimentally. However, parallel flows are not consistently included in the 2PM. In this work, the 2PM is refined to overcome this limitation: the role of total flux expansion on the pressure balance is made explicit, by including the effect of parallel flows. Consequently, the effect of total flux expansion on detachment access and control is weakened, compared to predictions of the 2PM. This new model partially explains discrepancies between the 2PM and experiments performed on TCV, in ohmic L-mode scenarios, which are particularly apparent when scanning the OSP major radius Rt. In core density ramps in lower single-null (SN) configuration, the impact of Rt on the CIII emission front movement in the divertor outer leg - used as a proxy for the plasma temperature - is substantially weaker than 2PM predictions. Furthermore, in OSP radial sweeps in lower and upper SN configurations, in ohmic L-mode scenarios with a constant core density, the peak parallel particle flux density at the OSP is almost independent of Rt, while the 2PM predicts a linear dependence. Finally, analytical and numerical modelling of parallel flows in the divertor is presented. It is shown that an increase in total flux expansion can favour supersonic flows at the OSP. Parallel flows are also shown to be relevant by analysing SOLPS-ITER [17] simulations of TCV.

Footnote \(*\): See author list of Reimerdes et al. 2022, Nucl. Fusion 62 042018

_Keywords_: power exhaust, divertor, detachment, total flux expansion, Mach number, parallel flows ## 1 Introduction Power exhaust is a key challenge for the realisation of a magnetic confinement fusion reactor, such as a tokamak, as identified by the European roadmap for fusion energy [1]. In a future power plant, large power losses from the confined plasma must be exhausted in a very narrow scrape-off-layer (SOL) region. The peak power density at the target, if unmitigated, is predicted to greatly exceed material limits [2]. Moreover, avoiding excessive long-term erosion of the reactor vessel components requires a sufficiently low plasma target temperature [3]. Diverted plasma configurations are employed for power exhaust, with the ability to support a large plasma temperature gradient between the confined plasma and the divertor targets. At sufficiently low electron temperature \(T_{e}\), radiation by hydrogen and low-Z impurities becomes more efficient (\(T_{e}\lesssim 10\ eV\)), and the cross-sections for charge exchange (\(T_{i}\lesssim 5\ eV\)) and volumetric recombination (\(T_{e}\lesssim 1\ eV\)) increase, redistributing the exhausted power more isotropically and transferring some of the plasma momentum and energy to neutrals [4, 5]. This allows the divertor to enter the detached regime, greatly reducing the power and particle fluxes to the targets. The lower-single-null (LSN) is currently the reference configuration for most operating tokamaks and is the chosen configuration for ITER [6].
Nonetheless, the extrapolation of this configuration to future reactors, with higher power and particle fluxes, cannot be safely assumed, in particular with respect to the integration of a detached divertor with a reactor-relevant core plasma. Alternative divertor configurations (ADCs) are, therefore, studied as potential solutions to this problem. ADCs' foreseen benefits include easier access to detachment, higher total radiated power capabilities, and better control over the location of the radiation front [7]. However, ADCs must be judged in the light of increased machine complexity, hence their assessment, through experiments and modeling, is crucial [8]. Among ADCs, one considered concept for future reactors is the Super-X divertor (SXD) [9]. Its main feature is an increase of the outer strike point (OSP) major radius \(R_{t}\), which comes with an increased total flux expansion. The increase of \(R_{t}\) increases the cross-sectional area of a flux tube \(A_{\perp,t}\) (as the total magnetic field \(B_{tot}\) is proportional to the inverse of the major radius \(R^{-1}\)) and, as a result, decreases the parallel power densities at the target, \(q_{\parallel,t}\). For a constant grazing angle at the outer target, an increase in \(R_{t}\) corresponds exactly to a linear increase of the target wetted area. The power density at the OSP then scales as \(R_{t}^{-1}\), which has been demonstrated experimentally [7]. According to the _extended 2-point model_ (2PM) [10, 11], the key attractive features of the SXD are facilitated detachment access and control. However, in some cases, these predictions were neither retrieved experimentally [7, 12, 13] nor numerically [14]. In specific cases, it was argued that this disagreement with analytical predictions was explained by several possible effects, _e.g._ target geometry [12], neutral compression [15, 16] and / or the divertor being in a sheath-limited regime [14]. However, a general understanding of the discrepancy has still not been obtained. In this paper, the role of \(R_{t}\) is discussed, both in terms of target conditions and for detachment access and control. Section 2 presents the 2PM, its predictions with respect to total flux expansion effects on detachment access and control and its modification to make the effect of parallel flows on the total pressure balance explicit, leading to predictions of weaker total flux expansion effects compared to the original ones. Section 3 presents SXD experiments on TCV tokamak [13] to investigate the role of \(R_{t}\). Finally, in section 4, the analytical and numerical modelling of parallel flows in the divertor is presented, showing that an increase in total flux expansion can favour supersonic flows at the OSP and that parallel flows are relevant by analysing SOLPS-ITER [17] simulations of TCV. A summary and conclusions are presented in section 5. ## 2 2PM extension accounting for parallel flows The 2PM [10, 18] is a reduced model which relates target quantities (electron temperature \(T_{e,t}\), electron density \(n_{e,t}\), parallel particle flux density \(\Gamma_{t}\)) with upstream control parameters for the SOL plasma, _e.g._ the total plasma pressure \(p_{tot,u}\) and the parallel power density \(q_{\parallel,u}\) at the upstream location. These quantities pertain to one individual flux tube in the SOL and are linked together by momentum and power balances. The upstream location, labeled by \(u\), is somewhat arbitrary, and can refer to the X-point location, the outer mid-plane (OMP), etc. 
It is usually taken as a stagnant location, _i.e._ where \(v_{\parallel}=0\). In the following, when needed, this location will be specified. In the interest of completeness, in the 2PM, the parallel power density \(q_{\parallel}\) is defined as the total parallel plasma power density, _i.e._ in the simplest form (taking \(n_{e}=n_{i}=n\) for densities and \(T_{e}=T_{i}=T\) for temperatures) \[q_{\parallel}=(5nT+\frac{1}{2}m_{i}nv_{\parallel}^{2})v_{\parallel}+q_{\parallel}^{heat,cond} \tag{1}\] with \(q_{\parallel}^{heat,cond}\) the total conducted heat flux density, \(T\) the plasma temperature, \(n\) the plasma density and \(v_{\parallel}\) the parallel plasma velocity. ### 2PM predictions for target quantities and their dependence on \(R_{t}\) The most general 2PM expressions for target quantities are reported by Stangeby in (15)-(17) of [10]. These are equivalent to expressions obtained by Kotov and Reiter in [19] that were derived from the steady-state version of the equations solved by the 2D multi-species plasma fluid code B2. These expressions are reported here, assuming the following simplifying hypotheses: (S-I) only hydrogenic ion species (_i.e._\(n=n_{e}=n_{i}\)) and no net current (_i.e._\(v_{\parallel}=v_{e,\parallel}=v_{i,\parallel}\)); (S-II) thermal equilibration is achieved along the flux tube (_i.e._\(T=T_{e}=T_{i}\)); (S-III) the plasma flow at the target is sonic (_i.e._\(M_{t}=1\), where \(M=v_{\parallel}/c_{s}\) is the Mach number and \(c_{s}=\sqrt{(T_{e}+T_{i})/m_{i}}=\sqrt{2T/m_{i}}\) the sound speed, with the subscript \(t\) representing the target in what follows). Hypothesis (S-III) and its link to the total flux expansion effects are discussed in section 4.1. These assumptions, introduced for simplicity, can be easily relaxed and do not limit the following discussion. However, an additional assumption _required_ in the derivation of the following 2PM expressions is: (A-I) target quantities are evaluated at the sheath entrance (_i.e._\(q_{\parallel,t}=q_{\parallel,se}=\gamma n_{t}T_{t}c_{s,t}\)). Further details are provided in appendix A. The expressions are \[T_{t}^{2PM}=\frac{8m_{i}}{\gamma^{2}}\cdot\frac{q_{\parallel,u}^{2}}{p_{tot,u}^{2}}\cdot\frac{(1-f_{cooling})^{2}}{(1-f_{mom-loss})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{2} \tag{2}\] \[n_{t}^{2PM}=\frac{\gamma^{2}}{32m_{i}}\cdot\frac{p_{tot,u}^{3}}{q_{\parallel,u}^{2}}\cdot\frac{(1-f_{mom-loss})^{3}}{(1-f_{cooling})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{-2} \tag{3}\] \[\Gamma_{t}^{2PM}=\frac{\gamma}{8m_{i}}\cdot\frac{p_{tot,u}^{2}}{q_{\parallel,u}}\cdot\frac{(1-f_{mom-loss})^{2}}{(1-f_{cooling})}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{-1} \tag{4}\] where \(m_{i}\) is the ion mass, \(\gamma\approx 8.5\) the sheath heat transmission coefficient [18] and \(R_{u/t}\) are the upstream and target major radii respectively. The power and momentum loss factors, \(f_{cooling}\) and \(f_{mom-loss}\), are \[\frac{q_{\parallel,t}}{q_{\parallel,u}}\cdot\frac{R_{t}}{R_{u}} \equiv 1-f_{cooling} \tag{5}\] \[\frac{p_{tot,t}}{p_{tot,u}} \equiv 1-f_{mom-loss} \tag{6}\] and the total plasma pressure is \[p_{tot}=2nT+m_{i}nv_{\parallel}^{2}=2(1+M^{2})nT \tag{7}\] The ratio \((R_{u}/R_{t})\) in (2)-(4) explicitly relates target quantities to total flux expansion. Both experiments and simulations [7, 12, 14] were done to test such specific explicit dependencies of target quantities on \(R_{t}\), showing several discrepancies.
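For orientation, the scalings (2)-(4) are straightforward to evaluate numerically; a minimal sketch, in which the upstream values are illustrative placeholders rather than TCV measurements:

```python
def two_point_model(q_par_u, p_tot_u, R_u_over_R_t,
                    f_cooling=0.0, f_mom_loss=0.0,
                    m_i=2 * 1.673e-27, gamma=8.5):
    """Target quantities from the extended 2PM, Eqs. (2)-(4).

    q_par_u : upstream parallel power density [W m^-2]
    p_tot_u : upstream total plasma pressure [Pa]
    m_i     : ion mass [kg] (deuterium by default)
    Returns (T_t [eV], n_t [m^-3], Gamma_t [m^-2 s^-1]).
    """
    fc, fm, x = 1.0 - f_cooling, 1.0 - f_mom_loss, R_u_over_R_t
    T_t = (8 * m_i / gamma**2) * (q_par_u / p_tot_u)**2 * (fc / fm)**2 * x**2
    n_t = (gamma**2 / (32 * m_i)) * (p_tot_u**3 / q_par_u**2) * (fm**3 / fc**2) / x**2
    Gamma_t = (gamma / (8 * m_i)) * (p_tot_u**2 / q_par_u) * (fm**2 / fc) / x
    return T_t / 1.602e-19, n_t, Gamma_t

# Doubling R_t at fixed upstream conditions: T_t drops 4x, n_t rises 4x,
# and Gamma_t rises 2x, per the (R_u/R_t) exponents in (2)-(4).
for x in (1.0, 0.5):
    T, n, G = two_point_model(q_par_u=1.5e7, p_tot_u=160.0, R_u_over_R_t=x)
    print(f"R_u/R_t = {x}: T_t = {T:.1f} eV, n_t = {n:.2e} m^-3, "
          f"Gamma_t = {G:.2e} m^-2 s^-1")
```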
### Explicit dependence of \(f_{mom-loss}\) on \(R_{t}\) and the effective Mach number \(M_{eff}\) The loss factors \(f_{cooling}\) and \(f_{mom-loss}\) are lumped parameters accounting for a variety of complex physical processes [12, 14, 15, 16]. These processes can be separated into two main groups: (1) volumetric sources and cross-field transport effects; (2) geometrical effects, related to flux tube cross-sections. This work focuses mainly on the latter, as they can be explicitly linked to total flux expansion effects, as shown in the following. While \(f_{cooling}\) relates only to processes pertaining to group (1), \(f_{mom-loss}\) also accounts for geometrical effects. To show this, the total power and parallel momentum steady-state balances in a flux tube element are taken \[\frac{1}{A_{\perp}}\partial_{s}(A_{\perp}q_{\parallel})=S_{pwr} \tag{8}\] \[\frac{1}{A_{\perp}}\partial_{s}(A_{\perp}m_{i}nv_{\parallel}^{2})=-\partial_{s}(2nT)+S_{mom} \tag{9}\] where \(s\) is a length coordinate along the flux tube and \(S_{pwr/mom}\) are effective sources (or sinks) within the flux tube, respectively for power and momentum, related to processes pertaining to group (1). As in a flux tube \(A_{\perp}\propto B_{tot}^{-1}\propto R\), rearranging (8)-(9) gives \[\frac{1}{q_{\parallel}}\partial_{s}(q_{\parallel})=\frac{S_{pwr}}{q_{\parallel}}-\frac{1}{R}\partial_{s}(R) \tag{10}\] \[\frac{1}{p_{tot}}\partial_{s}(p_{tot})=\frac{S_{mom}}{p_{tot}}-\frac{\kappa}{R}\partial_{s}(R) \tag{11}\] where \(\kappa=m_{i}nv_{\parallel}^{2}/p_{tot}=M^{2}/(1+M^{2})\) is the local ratio of dynamic and total pressure in the flux tube. Integrating (10)-(11) from upstream to target, rearranging and using (5)-(6) gives \[\frac{q_{\parallel,t}}{q_{\parallel,u}}\cdot\frac{R_{t}}{R_{u}}=\exp\left(\int_{u}^{t}\frac{S_{pwr}}{q_{\parallel}}ds\right)\equiv 1-f_{cooling} \tag{12}\] \[\frac{p_{tot,t}}{p_{tot,u}}=\exp\left(\int_{u}^{t}\left[\frac{S_{mom}}{p_{tot}}-\frac{\kappa}{R}\partial_{s}(R)\right]ds\right)\equiv 1-f_{mom-loss} \tag{13}\] It thus becomes apparent that \(f_{mom-loss}\) includes geometrical effects, whereas \(f_{cooling}\) does not. In the literature, the influence of geometrical effects on \(f_{mom-loss}\) was recognized, but was not investigated in detail, as it was considered negligible or avoided for simplicity [10, 14]. To explicitly highlight the effect of total flux expansion on the total pressure variation, it is useful to rewrite (13) in a form similar to (12). A constant \(\kappa_{eff}\) is introduced, which satisfies \[\int_{u}^{t}\frac{\kappa}{R}\partial_{s}(R)ds=\kappa_{eff}\int_{u}^{t}\frac{1}{R}\partial_{s}(R)ds \tag{15}\]
Further insights on \(\kappa_{eff}\) and \(M_{eff}\), and their physical interpretation, are provided in appendix B.

### Consequence on target quantities scaling

The result obtained in (16) is now considered together with (2)-(4). For the sake of clarity, the following notation is introduced

\[1-f_{cooling}\equiv(1-f_{cooling}^{S}) \tag{18}\]

\[1-f_{mom-loss}\equiv(1-f_{mom-loss}^{S})\cdot\left(\frac{R_{u}}{R_{t}}\right)^{\frac{M_{eff}^{2}}{1+M_{eff}^{2}}} \tag{19}\]

so the newly defined factors \(f_{cooling}^{S}\) and \(f_{mom-loss}^{S}\) account for the same physics, _i.e._ volumetric sources and cross-field effects only. With this new definition of the loss factors, (2)-(4) then become

\[T_{t}^{mod}=\frac{8m_{i}}{\gamma^{2}}\cdot\frac{q_{\parallel,u}^{2}}{p_{tot,u}^{2}}\cdot\frac{(1-f_{cooling}^{S})^{2}}{(1-f_{mom-loss}^{S})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{2-\frac{2M_{eff}^{2}}{1+M_{eff}^{2}}} \tag{20}\]

\[n_{t}^{mod}=\frac{\gamma^{2}}{32m_{i}}\cdot\frac{p_{tot,u}^{3}}{q_{\parallel,u}^{2}}\cdot\frac{(1-f_{mom-loss}^{S})^{3}}{(1-f_{cooling}^{S})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{-2+\frac{3M_{eff}^{2}}{1+M_{eff}^{2}}} \tag{21}\]

\[\Gamma_{t}^{mod}=\frac{\gamma}{8m_{i}}\cdot\frac{p_{tot,u}^{2}}{q_{\parallel,u}}\cdot\frac{(1-f_{mom-loss}^{S})^{2}}{(1-f_{cooling}^{S})}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{-1+\frac{2M_{eff}^{2}}{1+M_{eff}^{2}}} \tag{22}\]

The dependence of target quantities on \(R_{u}/R_{t}\) now varies with \(M_{eff}\), figure 1, and weakens with increasing \(M_{eff}\). The qualitative dependence of \(\Gamma_{t}^{mod}\) and \(n_{t}^{mod}\) on \(1/R_{t}\) even reverses for \(M_{eff}\geq 1\) and \(M_{eff}\geq\sqrt{2}\), respectively. When \(M_{eff}=0\), the dependence of target quantities on \(R_{u}/R_{t}\) recovers the original one in (2)-(4).

Figure 1: Exponents of \(R_{u}/R_{t}\) for target temperature \(T_{t}^{mod}\) (red), target density \(n_{t}^{mod}\) (blue) and parallel particle flux density \(\Gamma_{t}^{mod}\) (green), plotted against the effective Mach number \(M_{eff}\).

### Consequence on detachment window

It has been predicted that the Super-X configuration allows for an increased detachment front stability and control [8, 11, 20], _e.g._ a larger interval of control parameters over which a detached regime can be achieved in the divertor while maintaining a tolerable impact on the core performance. This is a consequence of the negative parallel power density gradient that total flux expansion establishes towards the target, which provides a restoring response to an upstream movement of the detachment front, _i.e._ it opposes radiation cooling. In terms of the operational window for detachment, Lipschultz _et al._ (see (30) of [11]) provided an analytical estimate for the dependence of the detachment window on \(B_{tot}\propto R_{t}^{-1}\)

\[\frac{\zeta_{x}}{\zeta_{t}}=\left[\frac{R_{t}}{R_{x}}\right]^{\beta} \tag{23}\]

where \(\zeta_{x,t}\) are the values of a control parameter \(\zeta=[p_{u},f_{I},P_{SOL}]\) that correspond to the detachment front being at the X-point or at the target, respectively. The three control parameters considered in this work are the upstream static pressure \(p_{u}=2n_{u}T_{u}\) (instead of \(n_{u}\), as used in [11] - see appendix C), the impurity fraction \(f_{I}\) and the power entering the SOL in the flux tube of interest \(P_{SOL}\). \(R_{x,t}\) are the X-point and the target major radii, respectively. \(\beta=[1,2,-1]\) is a specific exponent related to the considered control parameter.
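The exponents of \(R_{u}/R_{t}\) in (20)-(22), shown in figure 1, and their reversal thresholds can be tabulated with a few lines (an illustrative sketch only):

```python
import numpy as np

# Sketch: R_u/R_t exponents of the modified target expressions (20)-(22)
# as functions of M_eff, including the reversal thresholds.
M_eff = np.linspace(0.0, 2.0, 9)
k = M_eff**2 / (1.0 + M_eff**2)            # kappa_eff
exp_T = 2.0 - 2.0 * k                      # T_t^mod exponent, always >= 0
exp_n = -2.0 + 3.0 * k                     # n_t^mod exponent, sign change at M_eff = sqrt(2)
exp_G = -1.0 + 2.0 * k                     # Gamma_t^mod exponent, sign change at M_eff = 1
for m, a, b, c in zip(M_eff, exp_T, exp_n, exp_G):
    print(f"M_eff={m:4.2f}: T_t -> {a:+5.2f}, n_t -> {b:+5.2f}, Gamma_t -> {c:+5.2f}")
```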
The derivation of (23) uses a momentum balance equivalent to the one in the 2PM and does not account explicitly for any \(p_{u}\) variation from upstream to target, _i.e._ flux expansion effects and/or total pressure redistribution between dynamic and static contributions. When these are taken into account, the dependence of the detachment window on \(B_{tot}\propto R_{t}^{-1}\) becomes

\[\frac{\zeta_{x}}{\zeta_{t}}=\left[\left(\frac{R_{t}}{R_{x}}\right)^{1-\frac{M_{eff}^{2}}{1+M_{eff}^{2}}}\cdot\frac{1+M_{x}^{2}}{1+M_{t}^{2}}\right]^{\beta} \tag{24}\]

where the first factor in (24) accounts for the total flux expansion and the second factor accounts for the total pressure redistribution. Further details on the derivation of (24) are provided in appendix C. The inclusion of total flux expansion and redistribution effects on total pressure reveals that the static pressure \(p\) can exhibit a gradient towards the target. In particular, \(p\) is proportional to the radiated power in the detachment front, as shown in (C.8). Consequently, a negative gradient of the static pressure, as opposed to parallel power density, provides a positive feedback for the upstream movement of the detachment front and, hence, weakens the total flux expansion dependence of the detachment window.

### Summary of the effects of parallel flows on total flux expansion

The impact of accounting for total flux expansion effects on momentum balance was shown and the following important points were highlighted:

* The total pressure variation along a flux tube, see (16), can be linked explicitly to total flux expansion via \(M_{eff}\), a lumped parameter characterising flows in the flux tube of interest.
* For negligible \(M_{eff}\), this variation and its related effects are negligible.
* Increasing \(M_{eff}\) generally weakens the dependence on \(R_{t}\) of target quantities, see (20)-(22), and of the detachment window, see (24), compared to predictions by the 2PM. In the case of "effective supersonic" flows (\(M_{eff}\geq 1\)), some dependencies also qualitatively reverse, starting with the particle flux.
* \(M_{eff}\) depends on both the flow patterns in the flux tube and the geometrical design of the leg, in particular on the change of relative flux expansion along field lines, _i.e._ \(R^{-1}\partial_{s}(R)\), see (15) and (17). Two different divertor geometries, characterized by the same flow patterns and total flux expansion, can exhibit different behaviour with respect to their sensitivity to \(R_{t}\). In appendix B this point is discussed in detail.

## 3 SXD experiments in TCV and comparison with 2PM predictions

Experiments to investigate the SXD configuration are carried out in the _Tokamak à Configuration Variable_ (TCV) at EPFL [21, 13], testing the 2PM predictions presented in section 2.1 regarding total flux expansion effects on detachment access and control. TCV is a medium-sized tokamak (\(R_{0}\sim 0.88\) m, \(B_{0}<1.45\) T, \(a\sim 0.25\) m) with a highly elongated open vessel structure and a set of 16 independently-powered poloidal field coils, allowing for unique shaping capabilities that can generate many divertor configurations. The almost complete coverage of the vessel surfaces with graphite tiles allows for flexible placement of particle and power loads.

### Key diagnostics and experimental approach

Different plasma geometries, characterized by varying OSP major radius \(R_{t}\), are employed in this study, figure 2.
A set of polycrystalline graphite tiles, characterized by a longer structure on the low-field side compared to the high-field side (SILO baffles), is also employed in some experiments. They are designed to increase divertor closure, whilst maintaining good compatibility with alternative divertor configurations [22, 23]. D\({}_{2}\) fuelling can be performed through alternative valves either on the floor, the inner wall or the ceiling of the vessel, figure 2(b), allowing, among other things, a test of the possible impact of the fuelling location on the results. The flow rates are feedback controlled and can be adjusted according to the line-averaged density \(\langle n_{e}\rangle\) measurements by a far-infrared (FIR) interferometer, along a vertical chord, figure 2(a). Density and temperature in the core and across the separatrix are also measured by Thomson scattering [24], figure 2(b). Thomson scattering measurements also allow computation of \(\langle n_{e}\rangle\) in the core. Wall-embedded Langmuir probes (LP) [25] cover a large part of the poloidal perimeter of the vessel, figure 2(a). These were operated with a triangular voltage sweep (from \(-120\) to \(80\) V at \(\sim 330\) Hz and \(\sim 990\) Hz frequencies), in order to obtain temperature measurements as well as the particle flux. Details on their analyses are provided in [26]. The orange lines in figure 2(b) show the lines of sight of a divertor spectroscopy system (DSS) [27]. Line radiation and their distributions are also obtained from a system of filtered optical cameras, MANTIS [28], that provides 2D poloidal map inversions of the emissivity for selected radiating spectral lines. This work focuses, in particular, on the CIII (465.8 nm) line emission to obtain emissivity profiles. In previous TCV studies, the CIII radiation front along a divertor leg (determined as the location where the emissivity profile along the outer leg drops by 50% with respect to the peak) was shown to provide a convenient estimation of the detachment status of the divertor. Due to a strong dependency on the local electron temperature, the CIII radiation front is a reliable proxy to identify the low temperature region along the outer leg [29, 23]. A system of 64 gold foil bolometers, later replaced by a new system of 120 channels [30], is used to obtain radiation emissivity maps across a TCV poloidal section, by tomographically inverting their line-integrated chord intensities. Finally, LIUQE [31] is used to reconstruct the magnetic equilibrium across the discharges.

Two different scenarios are explored in this work, both with a plasma current \(I_{p}\sim 250\) kA and the ion \(\nabla B\) drift directed away from the X-point into the core, to avoid accessing H-mode [7]. The first employs ohmically-heated L-mode core density ramps \(\langle n_{e}\rangle\simeq[4.0\to 10.0]\cdot 10^{19}\) m\({}^{-3}\) (corresponding to \(f_{g}\simeq[0.20\to 0.55]\), \(f_{g}\) being the Greenwald fraction). The density ramp is performed separately for two LSN configurations with small and large \(R_{t}\), respectively. SILO baffles are employed to increase divertor closure, which is expected to improve the match between the 2PM predictions and experimental results, according to SOLPS-ITER simulations of TCV [15]. Fuelling is performed from either the floor, inner wall (IW) or ceiling valves.
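As an aside, the CIII front definition used in these analyses (the location where the emissivity drops to 50% of its peak, on the X-point side) reduces to a simple one-dimensional search. The sketch below illustrates it on a synthetic emissivity profile; it is not the MANTIS analysis code, and the Gaussian profile is an arbitrary placeholder.

```python
import numpy as np

# Sketch: locate the CIII radiation front along the outer leg, defined as the
# point where the emissivity drops to 50% of its peak on the X-point side.
# The poloidal coordinate runs from the X-point (0%) to the target (100%);
# the Gaussian profile below is synthetic, not MANTIS data.
pol = np.linspace(0.0, 100.0, 400)                 # % poloidal distance from X-point
emis = np.exp(-((pol - 80.0) / 12.0) ** 2)         # synthetic emissivity, peaked near target

i_peak = np.argmax(emis)
half = 0.5 * emis[i_peak]
# Walk from the peak towards the X-point until the emissivity falls below 50%.
i_front = i_peak
while i_front > 0 and emis[i_front] >= half:
    i_front -= 1
print(f"CIII front at ~{pol[i_front]:.1f}% of the poloidal distance from the X-point")
```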
The second scenario employs ohmically-heated L-mode OSP target radius \(R_{t}\) scans at constant density \(\langle n_{e}\rangle\simeq 5.5\cdot 10^{19}\) m\({}^{-3}\) (\(f_{g}\simeq 0.31\)). This scenario is repeated in both Lower-Single-Null (LSN) and Upper-Single-Null (USN) configurations, with or without SILO baffles, and with floor-only fuelling.

### Density ramps at constant \(R_{t}\)

Two values of \(R_{t}\) are investigated during core density ramps: \(R_{t}\simeq 0.62\) m (small \(R_{t}\)) and \(R_{t}\simeq 1.03\) m (large \(R_{t}\)). When ramping the core density, the temperature in the divertor gradually reduces. Using the CIII front as a proxy for the low temperature region in the divertor, the 2PM prediction on temperature is tested in these experiments. The discharges have a similar time evolution of \(\langle n_{e}\rangle\) and a similar input ohmic power \(P_{OHM}\) dependence on \(\langle n_{e}\rangle\), figures 3(a) and 3(b). The power to the SOL, \(P_{SOL}\), is defined as the difference between \(P_{OHM}\) and the power radiated from the core, computed from bolometry, excluding a 5 cm circular region centered around the X-point, figure 4. The \(P_{SOL}\) dependence on \(\langle n_{e}\rangle\) shows significant differences in cases with inner wall fuelling (up to 25%), figure 3(c). Tomographic reconstruction of the emissivities for this fuelling location, figure 4, suggests that this difference can be ascribed to increased radiation inside the confined plasma region at higher \(\langle n_{e}\rangle\). Thomson scattering measurements (not shown) also show that the density and temperature in the core and near the separatrix remain comparable in all cases. Relevant SOL geometry quantities are reported in table 1.

Figure 2: Examples of baffled geometries used in the experimental work (large and small \(R_{t}\)). (a) The red dots indicate the position of wall-embedded Langmuir probes, while the cyan line indicates the FIR chord used for the feedback control of fuelling. (b) The black rectangles indicate the poloidal location of fuelling valves, the orange lines indicate the lines of sight of the DSS and the green dots indicate Thomson scattering measurement locations (intercepts between the laser and spectrometer lines of sight).

The CIII front movement at the outer leg against \(\langle n_{e}\rangle\), taken from inversions of MANTIS measurements, is analysed to compare small and large \(R_{t}\) configurations, figure 5(a). Similar results are obtained by the DSS (not shown).

\begin{table} \begin{tabular}{|c|c|c||c|c|c|} \hline _Shot_ & \(R_{t}\) & _Fuel._ & \((1/B_{tot})^{OSP}\) (\(T^{-1}\)) & \(L_{\parallel}^{OSP}\) (\(m\)) & \(f_{x,pol}^{OSP}\) \\ \hline 70202 & Small & IW & 0.50 & 14.2 & 2.79 \\ \hline 70201 & Large & IW & 0.80 & 14.7 & 2.36 \\ \hline 63935 & Small & Floor & 0.50 & 13.6 & 2.82 \\ \hline 63917 & Large & Floor & 0.82 & 14.3 & 2.38 \\ \hline 63925 & Small & Ceiling & 0.50 & 13.8 & 2.83 \\ \hline 63934 & Large & Ceiling & 0.85 & 12.4 & 2.57 \\ \hline \end{tabular} \end{table} Table 1: Density ramps at constant \(R_{t}\) - SOL geometry quantities at the OSP: inverse of the total magnetic field \((1/B_{tot}^{OSP})\propto R_{t}^{OSP}\), parallel connection length \(L_{\parallel}^{OSP}\) (measured from the OMP, 5 mm from the separatrix) and poloidal flux expansion \(f_{x,pol}^{OSP}\) (measured at 5 mm from the separatrix).
However, variations in \(P_{SOL}\) and \(L_{\parallel}^{OSP}\) can influence the CIII front behaviour between compared cases, as the front position is strictly related to the temperature in the divertor leg.

Figure 3: Density ramps at constant \(R_{t}\) - (a) line-averaged density \(\langle n_{e}\rangle\) variation in time; (b) ohmic power \(P_{OHM}\) variation against \(\langle n_{e}\rangle\); (c) power to the SOL \(P_{SOL}\) variation against \(\langle n_{e}\rangle\).

Figure 4: Density ramps at constant \(R_{t}\), inner wall (IW) fuelling cases - Emissivity maps (W/m\({}^{3}\)) at \(\langle n_{e}\rangle=6.75\cdot 10^{19}\) m\({}^{-3}\). The colormap is saturated at \(2.1\cdot 10^{6}\) W/m\({}^{3}\), to better highlight features of the emissivity maps away from the X-point. The red circle defines the 5 cm radial area centered around the X-point, excluded from the core radiation computation.

According to the 2PM, the OSP target temperature \(T_{t}\) (see (2), when changing the upstream control parameter from total pressure \(p_{tot,u}\) to density \(n_{u}\) [10, 18]) is proportional to

\[T_{t}^{2PM}\propto\frac{1}{R_{t}^{2}}\cdot\frac{q_{\parallel,u}^{10/7}}{n_{u}^{2}L_{\parallel}^{4/7}} \tag{25}\]

and taking

\[n_{u}\propto\langle n_{e}\rangle\qquad q_{\parallel,u}\propto\frac{P_{SOL}}{\lambda_{sol,u}2\pi R_{u}B_{pol,u}/B_{tot,u}} \tag{26}\]

one can write

\[T_{t}^{2PM}\propto\frac{1}{R_{t}^{2}}\cdot\frac{P_{SOL}^{10/7}}{\langle n_{e}\rangle^{2}(L_{\parallel}^{OSP})^{4/7}} \tag{27}\]

Note that this reasoning does not account for differences in other quantities between compared cases, such as: I) the geometrical location and features of the upstream location (_e.g._ the scrape-off layer width \(\lambda_{sol,u}\)); II) in-out power sharing; III) the conducted-to-total power density ratio \(f_{cond}\); IV) the ratio \(n_{u}/\langle n_{e}\rangle\). From (27), the parameter

\[C\equiv\frac{\langle n_{e}\rangle(L_{\parallel}^{OSP}/L_{\parallel}^{ref})^{2/7}}{(P_{SOL}/P_{SOL}^{ref})^{5/7}} \tag{28}\]

can be defined as a _corrected_ density. Plotting the CIII front movement against \(C\) allows one to consistently account for \(P_{SOL}\) and \(L_{\parallel}^{OSP}\) variations between compared cases, according to the 2PM. Here, \(L_{\parallel}^{ref}=10\) m and \(P_{SOL}^{ref}=2.5\cdot 10^{5}\) W are considered. This is done in figure 5(b). From (27), the large \(R_{t}\) configuration should see lower target temperatures for the same value of \(C\). The CIII front movement from the target should thus happen at lower \(C\) values for the higher \(R_{t}\) cases. Given a specific front position obtained at values \(C_{(small\ R_{t})}\) in the small \(R_{t}\) cases, the expected reduced values \(C_{(large\ R_{t})}^{expected}\) in the corresponding large \(R_{t}\) cases can be computed as

\[C_{(large\ R_{t})}^{expected}=C_{(small\ R_{t})}\cdot\frac{R_{t}^{(small)}}{R_{t}^{(large)}} \tag{29}\]

This is, however, not retrieved in the results shown in figure 5(b). For all the different fuelling cases, the variation in CIII front position with different \(R_{t}\) is much weaker than predicted by the 2PM.
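The rescaling (28) and the expected shift (29) are straightforward to evaluate; a sketch with placeholder inputs (the numbers below are illustrative, not the measured TCV values) reads:

```python
# Sketch: corrected density C, eq. (28), and the 2PM-expected shift (29) of the
# large-R_t curve relative to the small-R_t one. Inputs are placeholders.
L_ref, P_ref = 10.0, 2.5e5                 # reference L_par [m] and P_SOL [W]

def corrected_density(n_e, L_par, P_sol):
    """C = <n_e> (L/L_ref)^(2/7) / (P_SOL/P_ref)^(5/7), eq. (28)."""
    return n_e * (L_par / L_ref) ** (2.0 / 7.0) / (P_sol / P_ref) ** (5.0 / 7.0)

C_small = corrected_density(n_e=7.0e19, L_par=13.6, P_sol=2.4e5)   # small-R_t case
R_small, R_large = 0.62, 1.03
C_large_expected = C_small * R_small / R_large                     # eq. (29)
print(f"C(small R_t) = {C_small:.3e},  2PM-expected C(large R_t) = {C_large_expected:.3e}")
```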
### \(R_{t}\) scans at constant density

Here the opposite scenario is investigated by poloidally sweeping the OSP at constant core density, scanning a range of \(R_{t}\) values (\(R_{t}\simeq[0.7\leftrightarrow 1.05]\) m), with both outward and inward sweeps. When \(R_{t}\) is modified, the target particle flux is also expected to vary according to the 2PM (see (4)). This prediction is tested in these experiments, using target LP measurements. During the strike-point sweeps, \(\langle n_{e}\rangle\), \(P_{OHM}\) and \(P_{SOL}\) are kept approximately constant, figure 6, with observed variations of up to \(10-20\%\). For all cases, \(\langle n_{e}\rangle\) is always below \(\sim 6.0\cdot 10^{19}\) m\({}^{-3}\). At this density, for these experimental conditions, the CIII front in the outer leg remains close to the target and an attached state is maintained, as shown by the density ramps in section 3.2, figure 5(a). Thomson scattering measurements (not shown) show that the density and the temperature in the core and near the separatrix remain comparable across the strike-point sweeps. Figure 7 plots the plasma geometry and \(L_{\parallel}^{OSP}\) against \(R_{t}\).

Figure 5: Density ramps at constant \(R_{t}\) - CIII front position analyses from MANTIS along the outer leg - the CIII front position is defined in terms of relative (%) poloidal distance from the X-point, where 100% is the target position. The expected shifts of the large \(R_{t}\) curves with respect to the small \(R_{t}\) cases are also plotted, computed according to (29).

Figure 6: \(R_{t}\) sweeps at approximately constant line-averaged density \(\langle n_{e}\rangle\) - \(\langle n_{e}\rangle\) (a), ohmic power \(P_{OHM}\) (b), power to the SOL \(P_{SOL}\) (c) and OSP major radius \(R_{t}\) (d) variations in time. In between \(\sim\) 1.05 and 1.35 \(s\), the OSP is localised on a vessel segment for which complete LP coverage cannot be achieved; this interval is therefore not of interest for the analyses and is not reported here. For the USN case (black curve), only the outward sweep is available due to an early disruption.

Figure 7: \(R_{t}\) sweeps at approximately constant line-averaged density \(\langle n_{e}\rangle\) - (a) separatrix geometries for the unbaffled cases, showing the minimum and maximum \(R_{t}\) achieved; (b) parallel connection length \(L_{\parallel}^{OSP}\) (taken at the outboard midplane, 5 \(mm\) from the separatrix) variation against \(R_{t}\).

Figure 8(a) shows the variation of the peak parallel particle flux density at the OSP, \(\Gamma_{t}\), against \(R_{t}\). \(\Gamma_{t}\) is taken from LP measurements. However, variations in \(\langle n_{e}\rangle\), \(P_{SOL}\) and \(L_{\parallel}^{OSP}\) can influence the measured \(\Gamma_{t}\). According to the 2PM, the OSP peak parallel particle flux density \(\Gamma_{t}\) (see (4), when changing the upstream control parameter from total pressure \(p_{tot,u}\) to density \(n_{u}\) [10, 18]) is proportional to

\[\Gamma_{t}^{2PM}\propto R_{t}\cdot\frac{n_{u}^{2}L_{\parallel}^{4/7}}{q_{\parallel,u}^{3/7}}\propto R_{t}\cdot\frac{\langle n_{e}\rangle^{2}(L_{\parallel}^{OSP})^{4/7}}{P_{SOL}^{3/7}} \tag{30}\]

Here, the same approximations (see (26)) employed in section 3.2 are used. From (30), the variable

\[F\equiv\frac{\Gamma_{t}\ (P_{SOL}/P_{SOL}^{ref})^{3/7}}{(\langle n_{e}\rangle/\langle n_{e}\rangle^{ref})^{2}(L_{\parallel}^{OSP}/L_{\parallel}^{ref})^{4/7}}\propto R_{t} \tag{31}\]

can be defined as a _corrected_ parallel particle flux density.
Plotting \(F\) against \(R_{t}\) consistently accounts for \(\langle n_{e}\rangle\), \(P_{SOL}\) and \(L_{\parallel}^{OSP}\) variations between compared cases, according to the 2PM. Here, \(\langle n_{e}\rangle^{ref}=5.5\cdot 10^{19}\) m\({}^{-3}\), \(L_{\parallel}^{ref}=10\) m and \(P_{SOL}^{ref}=2.5\cdot 10^{5}\) W are considered. From (31), \(F\) is expected to increase linearly with \(R_{t}\), which is, however, not observed in the experiments, figure 8(b). For all the different cases, the variation of \(F\) with \(R_{t}\) is much weaker than predicted by the 2PM. Significant discrepancies from the 2PM predictions, consistent with this result, are also observed for the integrated particle flux (not shown).

## 4 Modelling of parallel flows in the divertor

SXD experiments in TCV, sections 3.2-3.3, showed much weaker total flux expansion effects than predicted by the 2PM. Parallel flows can potentially explain part of this discrepancy, sections 2.3-2.4. As a direct, reliable measurement of parallel flows was not available in the experiments, analytical and numerical modelling are presented in this section to assess whether this effect can be significant in the experimental conditions.

### Mach number evolution and possibility for supersonic flows

The impact of parallel flows on total flux expansion effects increases with higher values of the Mach number \(M\) in the divertor, by definition of \(M_{eff}\), (15)-(17). The evolution equation for \(M\) along a SOL flux tube is presented here, obtained by combining particle and momentum balances. The simple case of a single hydrogenic ion species (\(n_{e}=n_{i}=n\)) is considered

\[(1-M^{2})\partial_{s}(M)=\frac{1+M^{2}}{nc_{s}}S_{par}+\frac{M(1+M^{2})}{c_{s}}\partial_{s}(c_{s})+A_{\perp}M\partial_{s}(\frac{1}{A_{\perp}})-\frac{M}{m_{i}nc_{s}^{2}}S_{mom} \tag{32}\]

where \(s\) is a length coordinate along the flux tube, increasing from upstream to target (\(s=s_{t}\)). \(S_{par,mom}\) are effective sources/sinks in the flux tube, respectively for particles and momentum, related to volumetric sources and cross-field effects, see (8)-(9). \(c_{s}=\sqrt{(T_{e}+T_{i})/m_{i}}\) is the local sound speed. The derivation of (32) is shown in appendix D. It is important to note that the solution of (32) must satisfy the Bohm condition [32], _i.e._ \(M\geq 1\), at the target, as the target corresponds to the sheath entrance in this fluid model. Qualitatively, (32) shows that:

* four main drivers are responsible for the \(M\) variation along a field line: particle and momentum effective sources/sinks (both volumetric sources and cross-field effects), sound speed \(c_{s}\) variation and total flux expansion.
* the effect of these drivers is reversed when \(M\) is lower or higher than 1, _i.e._ depending on whether the plasma flow is subsonic or supersonic.
* a necessary (but not sufficient) condition for a supersonic transition is a change of sign of the right-hand side of (32).

Moreover, the constraint provided by the Bohm condition at the target allows one to extract a sufficient (but not necessary) condition for the development of supersonic flows at the target. Taking a region \([s_{t}-\Delta s,s_{t}]\) before the target: _if, in this region, the right-hand side of (32) is negative, then the flow must be supersonic_. This case is interesting for the SXD configuration.
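A minimal numerical illustration of the purely geometric driver in (32): with \(S_{par}=S_{mom}=0\) and constant \(c_{s}\), the equation reduces to \((1-M^{2})\,\partial_{s}M=-M\,\partial_{s}\ln R\) (using \(A_{\perp}\propto R\)), which accelerates a subsonic flow through a convergent section exactly as in a de Laval nozzle. The sketch below is ours, with an invented \(R(s)\) profile; the integration is stopped just below \(M=1\), where the equation degenerates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: subsonic branch of (32) with only the flux-expansion driver kept,
# (1 - M^2) dM/ds = -M dlnR/ds, in a convergent-divergent tube (A_perp ~ R).
# The R(s) profile (through its log-derivative) is an invented placeholder.
def dlnR_ds(s):
    # Convergent for s < 5 (R decreasing), divergent afterwards.
    return -0.8 * np.exp(-((s - 5.0) / 2.0) ** 2) * np.sign(5.0 - s)

def rhs(s, M):
    return -M * dlnR_ds(s) / (1.0 - M**2)

# Stop just below the sonic point, where (32) becomes singular.
sonic = lambda s, M: M[0] - 0.999
sonic.terminal = True
sol = solve_ivp(rhs, [0.0, 10.0], [0.2], events=sonic, max_step=0.01)
print(f"M = {sol.y[0, -1]:.3f} reached at s = {sol.t[-1]:.2f} (throat at s = 5)")
```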
Considering the ideal case where \(S_{par,mom}\) and \(\partial_{s}(c_{s})\) are negligible across the region \([s_{t}-\Delta s,s_{t}]\), the right-hand side of (32) is then negative in the same region in the outer leg, due to total flux expansion, and supersonic flows would arise. The idea that the convergent-divergent magnetic structure of a flux tube, such as in the outer leg of a SXD configuration, can favour supersonic flows at the target has already been addressed before [33, 34, 35]. The possibility for supersonic flows to arise at the OSP of the SXD configuration has already been demonstrated numerically [36, 37]. In consequence, \(M_{eff}\) and parallel flow effects on total flux expansion are suggested to be potentially significant for the SXD configuration.

Figure 8: \(R_{t}\) sweeps at approximately constant line-averaged density \(\langle n_{e}\rangle\) - peak parallel particle flux density \(\Gamma_{t}\) (a) and variable \(F\) (b) against \(R_{t}\). For shots in the LSN configuration (cyan and brown lines), two lines are reported representing the two sweeps performed (outward and inward).

Moreover, when the other drivers are considered, for low target temperature (_i.e._ \(T_{t}\lesssim 5~{}eV\)) as required in detached conditions, in front of the target:

* \(S_{par}\) is negative: at low temperatures the ionisation front moves away from the target and the only effective particle sources/sinks will be radial transport4 and recombination, both of which make \(S_{par}\) negative.

Footnote 4: Here and in the following, radial particle and momentum transport are considered negative contributions to \(S_{par,mom}\). This is generally true for the hottest channels in the common flux region of the SOL.

* \(\partial_{s}(c_{s})\) is negative.
* \(S_{mom}\) is negative due to charge exchange, recombination and radial transport (thus \(-S_{mom}\) will be positive).

Therefore, 3 out of 4 terms on the right-hand side of (32) are negative in the outer leg of a detached SXD configuration. This type of analysis can also be applied to other divertor configurations, even with negligible total flux expansion, and supersonic flows can arise for similar target conditions [38, 39, 40, 41].

### SOLPS-ITER modelling of SXD experiments in TCV

A SOLPS-ITER simulation of TCV is used to study the patterns of parallel flows and \(M_{eff}\) in the divertor region. SOLPS-ITER (B2.5-Eirene) is a transport code that couples the B2.5 multi-fluid solver with the kinetic Monte Carlo neutral code Eirene [17, 42]. SOLPS-ITER is one of the most established SOL plasma simulators and it has been used for the design of the ITER divertor [43, 6]. The simulation discussed in this work was already presented in [44], where details of the simulation setup are reported. The simulation features a baffled LSN geometry, figure 9, with parameters typical of an ohmically-heated L-mode plasma in TCV, such as the experiments presented in section 3. Drift effects are not included in this work, so radial transport is purely anomalous and incorporated by an artificial cross-field diffusion. The fuelling rate is varied to allow the analysis of different divertor conditions. At the targets, a Dirichlet boundary condition satisfying the marginal Bohm criterion [32] is applied, _i.e._ the parallel ion velocity at the sheath entrance is forced to match the plasma sound velocity (accounting for carbon impurities resulting from wall sputtering).
This means that a Mach number \(M=1\) at the target is imposed, excluding, _a priori_, supersonic flows at the target (see section 4.1). This implies that the following evaluation of \(M_{eff}\) is conservative: \(M_{eff}\) could potentially have higher values in reality. To compute \(M_{eff}\), the common flux region of the outer leg is considered in the simulation, taking as the upstream location the divertor entrance, figure 9. This is also a conservative choice: the value of \(M_{eff}\) usually has a minimum for a choice of upstream location which is close to the X-point (see appendix B). For each flux tube in the analysed domain, \(M_{eff}\) is evaluated according to (15)-(17). Its value varies both with the radial position of the flux tube, figure 10(b), and with divertor conditions, figure 10(c), as higher values are achieved for lower target temperatures. For the intermediate and higher fuelling rates, where divertor conditions are similar to the experiments presented in section 3, \(M_{eff}\geq 0.5\) for all the flux tubes. In consequence, this SOLPS-ITER simulation suggests that \(M_{eff}\) and parallel flow effects on total flux expansion are significant in these conditions, even with the conservative choices in the present analyses.

Figure 9: B2.5 and Eirene meshes for the SOLPS simulation. The pink arrow indicates the location of the fuelling. The green shaded area indicates the domain considered for the analyses of the outer leg.

## 5 Conclusions

In this paper, the role of total flux expansion on the total pressure balance, neglected in the 2PM, is made explicit by including the effect of parallel flows. This effect is quantified by a newly defined lumped parameter, the effective Mach number \(M_{eff}\), characterising each flux tube. Its introduction allows one to decouple geometrical effects from cross-field and source/sink effects in the momentum loss factor \(f_{mom-loss}\). In consequence, the 2PM target quantity expressions can be rewritten, and their dependence on total flux expansion, through the ratio \(R_{u}/R_{t}\), is now modified and varies with \(M_{eff}\). For increasing \(M_{eff}\), total flux expansion effects on target quantities are reduced and eventually qualitatively reversed, starting with the particle flux. The same modifications are applied to the detachment model by Lipschultz _et al._, showing how the dependence of the detachment window on total flux expansion weakens for increasing \(M_{eff}\). Physically, this is ascribed to the fact that a negative static pressure gradient is established towards the target due to total flux expansion.

Experiments on the SXD configuration are carried out in the TCV tokamak, testing the 2PM predictions. These are all ohmically-heated L-mode discharges, in SN configuration, with \(I_{p}\sim 250\) kA and the ion \(\nabla B\) drift directed away from the X-point. In core density ramps, featuring a baffled geometry and different fuelling locations, the CIII front movement in the outer leg, used as a proxy for the plasma temperature, shows variations with the outer strike point radius \(R_{t}\) much weaker than the 2PM predictions, especially when variations in \(P_{SOL}\) and \(L_{\parallel}\) are taken into account. In OSP sweeps, at approximately constant core density, the peak particle flux density at the OSP remains rather independent of \(R_{t}\) variations, while a linear increase was predicted by the 2PM.
To understand if parallel flow effects can be significant in the experiments presented in this work, in the absence of experimental parallel flow measurements, both analytical and numerical modelling are employed. It is shown that supersonic flows, and therefore larger values of \(M_{eff}\), are favoured in a SXD configuration due to the convergent-divergent magnetic structure of flux tubes in the outer leg. Moreover, the analyses of a SOLPS-ITER simulation of a baffled LSN geometry in TCV, with parameters typical of an ohmically-heated L-mode plasma, show that \(M_{eff}\geq 0.5\) in the outer leg, for divertor conditions similar to the ones present in the experiments, even with conservative choices in its evaluation. The modelling then suggests that parallel flows are, at least partially, causing the discrepancy between the 2PM predictions and the experiments.

Figure 10: (a) Mach number \(M=v_{\parallel}/c_{s}\) map in the divertor region, for the intermediate fuelling rate case, where \(v_{\parallel}\) is the parallel velocity of the main plasma species \(D^{+}\) and \(c_{s}\) is the plasma sound speed, accounting for C impurities resulting from wall sputtering. (b) Effective Mach number \(M_{eff}\) for different flux tubes, in the case of intermediate fuelling rate, mapped against their radial distance from the separatrix at the OMP \(dr_{sep}^{OMP}\). The flux tube with the highest target temperature is indicated by the red vertical line. (c) Effective Mach number \(M_{eff}\) for the flux tube with the highest target temperature (\(\simeq\) Min \(M_{eff}\)), against Max \(T_{e}^{OT}\).

## Acknowledgements

This work has been carried out within the framework of the EUROfusion Consortium, via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion) and funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission, or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them.

## Appendix A Derivation of 2PM expressions for target quantities

In this appendix the expressions (2)-(4) are derived. To simplify the final expressions with respect to those reported in [10], it is assumed:

* (S-I) only hydrogenic ion species (_i.e._ \(n=n_{e}=n_{i}\)) and no net current (_i.e._ \(v_{\parallel}=v_{e,\parallel}=v_{i,\parallel}\)).
* (S-II) thermal equilibration is achieved in the flux tube (_i.e._ \(T=T_{e}=T_{i}\)).

An additional general assumption is needed:

* (A-I) the target corresponds to the sheath entrance (_i.e._ \(q_{\parallel,t}=\gamma n_{t}T_{t}M_{t}\sqrt{2T_{t}/m_{i}}\), where \(M_{t}=v_{\parallel,t}/c_{s,t}=v_{\parallel,t}/\sqrt{2T_{t}/m_{i}}\) is the Mach number at the target and \(\gamma\) is the sheath heat transmission coefficient). Note that, by the Bohm condition at the sheath entrance, \(M_{t}\geq 1\) must hold.

Introducing the standard definitions of the power and momentum loss factors (5)-(6) and using the above assumptions

\[(1-f_{cooling})q_{\parallel,u}R_{u}=\gamma n_{t}T_{t}M_{t}\sqrt{\frac{2T_{t}}{m_{i}}}R_{t} \tag{A.1}\]

\[(1-f_{mom-loss})p_{tot,u}=2(1+M_{t}^{2})n_{t}T_{t} \tag{A.2}\]

where \(p_{tot,t}=p_{tot,t}^{e}+p_{tot,t}^{i}=2n_{t}T_{t}+m_{i}n_{t}v_{\parallel,t}^{2}=2n_{t}T_{t}(1+M_{t}^{2})\).
The factor \(n_{t}T_{t}\) is isolated in (A.2) and substituted into (A.1), before isolating \(T_{t}\) to obtain

\[T_{t}=\frac{2m_{i}(1+M_{t}^{2})^{2}}{\gamma^{2}M_{t}^{2}}\cdot\frac{q_{\parallel,u}^{2}}{p_{tot,u}^{2}}\cdot\frac{(1-f_{cooling})^{2}}{(1-f_{mom-loss})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{2} \tag{A.3}\]

\(n_{t}\) is then obtained from (A.2) and (A.3)

\[n_{t}=\frac{\gamma^{2}M_{t}^{2}}{4m_{i}(1+M_{t}^{2})^{3}}\cdot\frac{p_{tot,u}^{3}}{q_{\parallel,u}^{2}}\cdot\frac{(1-f_{mom-loss})^{3}}{(1-f_{cooling})^{2}}\cdot\left(\frac{R_{t}}{R_{u}}\right)^{2} \tag{A.4}\]

Finally, \(\Gamma_{t}\) is obtained as \(n_{t}v_{\parallel,t}=M_{t}n_{t}\sqrt{2T_{t}/m_{i}}\)

\[\Gamma_{t}=\frac{\gamma M_{t}^{2}}{2m_{i}(1+M_{t}^{2})^{2}}\cdot\frac{p_{tot,u}^{2}}{q_{\parallel,u}}\cdot\frac{(1-f_{mom-loss})^{2}}{(1-f_{cooling})}\cdot\left(\frac{R_{t}}{R_{u}}\right) \tag{A.5}\]

Note that \(\gamma=\gamma(M_{t})\simeq 7.5+M_{t}^{2}\) [18, 19], therefore \(\gamma_{0}=\gamma(M_{t}=1)\simeq 8.5\). The target quantities can then be rewritten, by grouping the terms directly depending on \(M_{t}\) into factors of unitary value when \(M_{t}=1\)

\[T_{t}=\left(\frac{8.5(1+M_{t}^{2})}{2(7.5+M_{t}^{2})M_{t}}\right)^{2}\cdot\frac{8m_{i}}{\gamma_{0}^{2}}\cdot\frac{q_{\parallel,u}^{2}}{p_{tot,u}^{2}}\cdot\frac{(1-f_{cooling})^{2}}{(1-f_{mom-loss})^{2}}\cdot\left(\frac{R_{u}}{R_{t}}\right)^{2} \tag{A.6}\]

\[n_{t}=\left(\frac{8(7.5+M_{t}^{2})^{2}M_{t}^{2}}{8.5^{2}(1+M_{t}^{2})^{3}}\right)\cdot\frac{\gamma_{0}^{2}}{32m_{i}}\cdot\frac{p_{tot,u}^{3}}{q_{\parallel,u}^{2}}\cdot\frac{(1-f_{mom-loss})^{3}}{(1-f_{cooling})^{2}}\cdot\left(\frac{R_{t}}{R_{u}}\right)^{2} \tag{A.7}\]

\[\Gamma_{t}=\left(\frac{4(7.5+M_{t}^{2})M_{t}^{2}}{8.5(1+M_{t}^{2})^{2}}\right)\cdot\frac{\gamma_{0}}{8m_{i}}\cdot\frac{p_{tot,u}^{2}}{q_{\parallel,u}}\cdot\frac{(1-f_{mom-loss})^{2}}{(1-f_{cooling})}\cdot\left(\frac{R_{t}}{R_{u}}\right) \tag{A.8}\]

These expressions recover (2)-(4) when \(M_{t}=1\), _i.e._ under hypothesis (S-III) of section 2.1.

## Appendix B Further comments and insights on total flux expansion effects on momentum balance and on the effective Mach number \(M_{eff}\)

_The synergy between parallel flows and total flux expansion on total pressure balance_

A short insight on the physical intuition behind the synergy between parallel flows and total flux expansion is provided here, highlighting the difference with the power balance. Starting from (11) it is possible to notice that, contrary to the power balance expression (10), the total flux expansion effect \(-R^{-1}\partial_{s}(R)\) on the local total pressure variation is weighted by \(\kappa=m_{i}nv_{\parallel}^{2}/p_{tot}=M^{2}/(1+M^{2})\). In other words, the local flux expansion effect is re-scaled according to the local parallel flow conditions, in terms of \(M\). In particular, for \(M\ll 1\), total flux expansion effects can be neglected. The physical intuition is that the only component of the total pressure which is subject to the effect of the locally varying cross-section of the flux tube is the dynamic pressure \(m_{i}nv_{\parallel}^{2}\). This is because, in this work, the static pressure (\(p=2nT\)) is considered isotropic5, while the dynamic pressure is anisotropic, with a preferential direction along the flux tube. Mathematically, this is reflected in (9) by the fact that the dynamic pressure enters the balance via the divergence operator while the static pressure enters via the gradient operator.
Footnote 5: This assumption may be questionable in some conditions, and the anisotropy of pressure, especially for ions, in the parallel and radial directions might play a direct role on total flux expansion effects [37].

_Mathematical definition of \(M_{eff}\), counter-intuitive values and its dependence on the upstream location_

From (15)-(17), \(M_{eff}\) (or \(\kappa_{eff}\)) can be defined as: _the value of \(M\) (or \(\kappa\)) which, when constant from upstream to target, would provide the same total pressure variation \(p_{tot,t}/p_{tot,u}\) due to total flux expansion_. Despite \(\kappa=M^{2}/(1+M^{2})\in[0,1)\), from (15) it is clear that \(\kappa_{eff}\) can in principle take on any real value, due to the averaging process against \(R^{-1}\partial_{s}(R)\). This is reflected in \(M_{eff}\rightarrow+\infty\) for \(\kappa_{eff}\to 1^{-}\), or in \(M_{eff}\) assuming imaginary values for \(\kappa_{eff}\notin[0,1]\) (see (17)). Despite being counter-intuitive, this does not pose a direct problem to the mathematical formulation: \(M_{eff}\) always enters the expressions presented in this work as \(M_{eff}^{2}/(1+M_{eff}^{2})=\kappa_{eff}\in\mathbb{R}\). The remaining problem is when \(\kappa_{eff}\rightarrow\pm\infty\), which can happen for \(R_{u}\to R_{t}\). In this case, in the expressions presented in this work, the indeterminate form \((R_{u}/R_{t})^{\kappa_{eff}}\) would appear. This is a consequence of forcing the geometrical term in the total pressure variation (\(1-f_{mom-loss}\)) to take the form of a power of \((R_{u}/R_{t})\) (see (13)-(16)). However, this was necessary to maintain a simple form compatible with the algebraic expressions of the 2PM.

Here, a pathological example is provided to discuss the meaning of infinite or imaginary values of \(M_{eff}\), which could be difficult to understand in terms of the \(M_{eff}\) definition provided above. This also shows how the \(M_{eff}\) value depends on the upstream location. Consider a LSN geometry and focus on computing \(M_{eff}\) for a flux tube in the outer divertor leg, varying the upstream location from the OSP to the OMP. A parallel length coordinate \(s\) is defined, increasing from \(s=s_{OMP}\) at the OMP to \(s=s_{t}\) at the OSP. Assume that:

* the Mach number is unitary between the X-point and the OSP and null elsewhere, that is \(M=1\cdot\chi[s_{x},s_{t}]\), where \(\chi[s_{1},s_{2}]\) equals 1 between \(s_{1}\) and \(s_{2}\) and 0 elsewhere;
* \(R_{x}<R_{t}<R_{OMP}\), where \(R\) is the major radius.

Figure 11 shows a graphical visualisation of this example. Despite the \(R\) variation, the total pressure \(p_{tot}\) does not vary due to total flux expansion where \(M=0\), _i.e._ between the OMP and the X-point. \(p_{tot}\) then gradually decreases, due to total flux expansion, between the X-point and the OSP, as \(M=1\) and \(R\) increases (see (11)).
In this simple case, \(\kappa_{eff}\) can be computed analytically for varying upstream location \(s_{u}\in[s_{OMP},s_{t}]\) by using (15)

\[\kappa_{eff}=0.5\cdot\frac{\ln\left(R_{t}/R_{x}\right)}{\ln\left(R_{t}/R_{u}\right)}\quad\text{for }s_{u}\in[s_{OMP},s_{x})\]

\[\kappa_{eff}=0.5\quad\text{for }s_{u}\in[s_{x},s_{t}]\]

\(M_{eff}\) can then be computed by (17), together with the geometrical factor in the total pressure variation \(p_{tot,t}/p_{tot,u}\) (see (16))

\[\left(\frac{p_{tot,t}}{p_{tot,u}}\right)_{geom}=\left(\frac{R_{u}}{R_{t}}\right)^{0.5\cdot\ln(R_{t}/R_{x})/\ln(R_{t}/R_{u})}=e^{-0.5\cdot\ln(R_{t}/R_{x})}=\left(\frac{R_{x}}{R_{t}}\right)^{0.5}\quad\text{for }s_{u}\in[s_{OMP},s_{x})\]

\[\left(\frac{p_{tot,t}}{p_{tot,u}}\right)_{geom}=\left(\frac{R_{u}}{R_{t}}\right)^{0.5}\quad\text{for }s_{u}\in[s_{x},s_{t}]\]

The results are represented in figure 14. For a choice of upstream location between the OSP and the X-point, where \(M=1\), \(M_{eff}\) and \(\kappa_{eff}\) are constant. The total pressure variation, due to total flux expansion, is reflected in the variation of \(\left(p_{tot,t}/p_{tot,u}\right)_{geom}\). When the upstream location is shifted beyond the X-point and towards the OMP, as the total pressure no longer varies due to total flux expansion, \(\left(p_{tot,t}/p_{tot,u}\right)_{geom}\) is constant. However, as \((R_{u}/R_{t})\) keeps varying in this region (in this example, increasing towards the OMP), \(M_{eff}\) also varies to accommodate this change. When \((R_{u}/R_{t})\) increases above a given threshold (where \(\kappa_{eff}=1\)), a real \(M_{eff}\) can no longer accommodate this variation and imaginary values are obtained. This is understandable in terms of the definition provided above: taking for example the OMP as the upstream location, for which \(R_{u}/R_{t}>1\), there exists no constant value of \(M\in\mathbb{R}\) which would result in a total pressure decrease towards the target, as in this example.

Similar results can be obtained in more realistic cases, such as the SOLPS-ITER simulation analysed in section 4.2. Also in this case, flux tubes feature a convergent-divergent magnetic structure between the OMP and the OSP, and a monotonically increasing \(M\) towards the OSP, figures 9 and 10(a). These conditions tend to push the minimum of \(M_{eff}\) close to the poloidal location where \(R\) is minimum, which is often the X-point location for the standard geometry of outer legs in diverted configurations, figure 14. This justifies why the choice of the divertor entrance, as the upstream location to evaluate \(M_{eff}\) in section 4.2, was termed conservative.

_Dependence of \(M_{eff}\) on the divertor leg geometry_

\(M_{eff}\) is derived, through \(\kappa_{eff}\), from a weighted average of \(\kappa=M^{2}/(1+M^{2})\) along the flux tube, where the weighting factor is the local relative variation of the flux tube area \(R^{-1}\partial_{s}(R)\) (see (15) and (17)). This implies that, for a given \(M\) distribution from upstream to target, the local flux expansion distribution along the leg influences the value of \(M_{eff}\) and, therefore, the magnitude by which total flux expansion effects are reflected on the total pressure variation, target quantities and detachment window. In other words, the divertor leg geometry influences the sensitivity to total flux expansion effects. Here, a couple of pathological examples are provided to better highlight this point.
Two cases, with the same total flux expansion \(R_{t}/R_{u}\), are considered, in which the local flux expansion is constant and concentrated only: (case A) in the region where \(M=0\); (case B) in the region where \(M=1\). Consider a SOL flux tube and a field-aligned length coordinate \(s=[0,L]\), where \(s=0\) corresponds to the upstream position (with major radius \(R_{u}\)) and \(s=L\) corresponds to the target position (with major radius \(R_{t}\)). Assume the following profile for the Mach number along the flux tube

\[M=1\cdot\chi[L-\Delta,L]\]

and for the local relative flux expansion

\[\text{(case A)}\ \ \frac{1}{R}\partial_{s}(R)=\ln\frac{R_{t}}{R_{u}}\cdot\frac{\chi[0,L-\Delta]}{(L-\Delta)}\]

\[\text{(case B)}\ \ \frac{1}{R}\partial_{s}(R)=\ln\frac{R_{t}}{R_{u}}\cdot\frac{\chi[L-\Delta,L]}{\Delta}\]

where \(\chi[s_{1},s_{2}]\) is a function which equals 1 in between \(s_{1}\) and \(s_{2}\) and 0 elsewhere, and \(\Delta\in(0,L)\). In practice, it is imposed that \(M\) increases instantaneously from 0 to 1 in the portion \([L-\Delta,L]\) of the flux tube in front of the target. Notice that in both cases the total flux expansion \(R_{t}/R_{u}\) is the same. Figure 15 shows a graphical visualisation of these examples, for the outer leg of a single-null configuration (taking the X-point as upstream). Computing now \(\kappa_{eff}\) by (15) and \(M_{eff}\) by (17), it is found

\[\text{(case A)}\ \ \ \kappa_{eff}=0\ \ \ \rightarrow\ \ M_{eff}=0\]

\[\text{(case B)}\ \ \ \kappa_{eff}=0.5\ \ \rightarrow\ \ M_{eff}=1\]

The drastic change in \(M_{eff}\) depending on the geometry of the flux tube, for the same total flux expansion and flow pattern, is then clear.

## Appendix C Derivation of detachment window expression

The derivation of (24) is presented. This is similar to the original derivation reported in [11]. In addition, the same hypothesis of thermal equilibration in the flux tube (_i.e._ \(T=T_{e}=T_{i}\)) is adopted, as in appendix A. Therefore, the plasma static pressure is \(p=2nT\). Consider the total steady-state energy balance in a flux tube. Assume (a) cross-field transport effects are negligible. Assume also that (b) the ratio \(f_{cond}\) of conducted to total parallel power density is constant, and (c) Spitzer's formulation for the parallel heat conductivity can be used: \(\kappa_{\parallel}=\kappa_{0}T^{5/2}\). The power balance is then

\[H=-\frac{1}{f_{cond}}B\partial_{s}\left(\frac{\kappa_{\parallel}}{B}\partial_{s}(T)\right) \tag{C.1}\]

where \(s\) is the length coordinate along a field line from the target to upstream, here considered as corresponding to the X-point (\(s:\ [0,s_{x}]\)). It is assumed that (d) \(H=-n^{2}f_{I}Q(T)\), which means the local effective power sources/sinks can be approximated with their radiation-related component only. Here \(n\) is the plasma density, \(f_{I}\) is the impurity fraction (\(f_{I}=n_{I}/n\)) and \(Q(T)\) is a radiation efficiency function. The radiation efficiency \(Q(T)\) is assumed (e) to be a function which peaks sharply in a range \([T_{c},T_{h}]\) (with \(T_{c}<T_{h}\)) and is null outside of it. The following change of variable is introduced

\[dz=\frac{B_{x}}{B}ds \tag{C.2}\]

Practically, \(z=\int_{0}^{z}dz^{\prime}=\int_{0}^{s(z)}\frac{B_{x}}{B}ds^{\prime}\) will be the volume (\(ds/B\propto dV\)) of the flux tube contained from the target (\(s,z=0\)) up to the point of interest, normalized by a reference perpendicular area (\(\propto 1/B_{x}\)), where the upstream/X-point is taken as this reference.
Defining

\[\kappa=\kappa_{\parallel}\frac{B_{x}^{2}}{B^{2}} \tag{C.3}\]

(C.1) becomes

\[\partial_{z}q=H \tag{C.4}\]

with

\[q=-\frac{1}{f_{cond}}\kappa\partial_{z}T \tag{C.5}\]

\(q=(1/f_{cond})q_{\parallel,cond}B_{x}/B\) is then the total parallel power \(Q_{\parallel}\propto(1/f_{cond})q_{\parallel,cond}/B\) normalized by the same reference perpendicular area \(\propto 1/B_{x}\). Taking (C.4), multiplying both sides by \(q\) and integrating from \(z(T_{c})\) to \(z(T_{h})\) (note that \(z(T_{h})>z(T_{c})\) in the chosen coordinate system) gives

\[[q^{2}]_{z(T_{c})}^{z(T_{h})}=-\int_{T_{c}}^{T_{h}}\frac{2}{f_{cond}}\kappa(T^{\prime})H(T^{\prime})dT^{\prime} \tag{C.6}\]

Using assumptions (b)-(e), the square root of the integral on the right-hand side of this equation becomes

\[\Delta q_{rad}\equiv\sqrt{\frac{2\kappa_{0}}{f_{cond}}\int_{T_{c}}^{T_{h}}\frac{B_{x}^{2}}{B^{2}}T^{5/2}n^{2}f_{I}Q(T)dT} \tag{C.7}\]

Assume (f) the radiation region (_i.e._ the region in between \(z(T_{c})\) and \(z(T_{h})\)) is so narrow that \(B\) and \(f_{I}\) variations are negligible in it. Assuming also that (g) volumetric processes and cross-field transport effects on the momentum balance and the total pressure redistribution are negligible in this region, \(p^{2}=4n^{2}T^{2}=p_{tot}^{2}/(1+M^{2})^{2}\) can be taken out of the integral, as its variation will then be linked just to total flux expansion effects (hence, \(B\) variation), negligible by assumption (f). Therefore

\[\Delta q_{rad}=\frac{B_{x}}{B_{z(T_{h})}}p_{z(T_{h})}\sqrt{\frac{\kappa_{0}}{2f_{cond}}f_{I}\mathcal{F}} \tag{C.8}\]

with \(\mathcal{F}=\int_{T_{c}}^{T_{h}}\sqrt{T}Q(T)dT\). The pressure at the detachment front entrance \(p_{z(T_{h})}\) is linked with the pressure upstream/at the X-point \(p_{u}\) using (16), substituting \(B\propto R^{-1}\). It is assumed that (h) volumetric processes and cross-field transport effects on the momentum balance are negligible in the region between the X-point and the detachment front entrance. It then holds

\[\frac{p_{tot,z(T_{h})}}{p_{tot,x}}=\frac{1+M_{z(T_{h})}^{2}}{1+M_{x}^{2}}\frac{p_{z(T_{h})}}{p_{u}}=\left(\frac{B_{z(T_{h})}}{B_{x}}\right)^{\kappa_{eff}^{x\to z(T_{h})}} \tag{C.9}\]

(C.8) then becomes

\[\Delta q_{rad}=\frac{B_{x}}{B_{z(T_{h})}}\frac{1+M_{x}^{2}}{1+M_{z(T_{h})}^{2}}\cdot\left(\frac{B_{z(T_{h})}}{B_{x}}\right)^{\kappa_{eff}^{x\to z(T_{h})}}p_{u}\sqrt{\frac{\kappa_{0}}{2f_{cond}}f_{I}\mathcal{F}}=\frac{1+M_{x}^{2}}{1+M_{z(T_{h})}^{2}}\left(\frac{B_{x}}{B_{z(T_{h})}}\right)^{1-\kappa_{eff}^{x\to z(T_{h})}}\cdot p_{u}\sqrt{\frac{\kappa_{0}}{2f_{cond}}f_{I}\mathcal{F}} \tag{C.10}\]

Finally, to obtain a model for the operational window for the different control parameters, it is assumed that (i) the power leaving the cold side of the detachment front is negligible. By (C.6), this implies \(q_{z(T_{h})}=-\Delta q_{rad}\). The power entering the hot side of the detachment front must then match the power entering upstream/at the X-point, thanks to assumptions (d)-(e), and the latter can be expressed as \(q_{x}=-P_{SOL}\), by definition of \(q\). Now one can equate \(q_{z(T_{h})}\) and \(q_{x}\) and solve in terms of the control parameters \(\zeta=[p_{u},f_{I},P_{SOL}]\). The front position \(z(T_{h})\) is then set to be at the X-point first and then at the target to find the corresponding values \(\zeta_{x,t}\) (leaving the other parameters constant).
Dividing these two values, the detachment window is obtained

\[\frac{\zeta_{x}}{\zeta_{t}}=\left(\left(\frac{B_{tot,x}}{B_{tot,t}}\right)^{1-\kappa_{eff}}\frac{1+M_{x}^{2}}{1+M_{t}^{2}}\right)^{\beta} \tag{C.11}\]

with \(\beta=[1,2,-1]\).

## Appendix D Derivation of Mach number evolution equation

Consider the steady-state ion particle balance and plasma momentum balance along a flux tube

\[B\partial_{s}\left(\frac{nv_{s}}{B}\right)=S_{par} \tag{D.1}\]

\[B\partial_{s}\left(\frac{m_{i}nv_{s}^{2}}{B}\right)=-\partial_{s}(nT^{*})+S_{mom} \tag{D.2}\]

where \(s\) is a length reference coordinate along the flux tube and \(S_{par,mom}\) includes contributions from volumetric sources and cross-field transport effects. A single hydrogenic ion species and quasi-neutrality (\(n_{e}=n_{i}=n\)) are considered. For the sake of simplicity in the notation, \(T^{*}=T_{e}+T_{i}\) is introduced. Start by rewriting the pressure term in (D.2)

\[B\partial_{s}\left(\frac{m_{i}nv_{s}^{2}}{B}\right)=-B\partial_{s}\left(\frac{nT^{*}}{B}\right)-\frac{nT^{*}}{B}\partial_{s}(B)+S_{mom} \tag{D.3}\]

In both (D.1) and (D.3), isolate \(\partial_{s}(n)\)

\[\partial_{s}(n)=\frac{S_{par}}{v_{s}}-\frac{nB}{v_{s}}\partial_{s}\left(\frac{v_{s}}{B}\right) \tag{D.4}\]

\[m_{i}v_{s}^{2}\partial_{s}(n)+nB\partial_{s}\left(\frac{m_{i}v_{s}^{2}}{B}\right)=-T^{*}\partial_{s}(n)-nB\partial_{s}\left(\frac{T^{*}}{B}\right)-\frac{nT^{*}}{B}\partial_{s}(B)+S_{mom} \tag{D.5}\]

Reordering and inserting (D.4) into (D.5)

\[-\frac{nB}{v_{s}}(m_{i}v_{s}^{2}+T^{*})\partial_{s}\left(\frac{v_{s}}{B}\right)+(m_{i}v_{s}^{2}+T^{*})\frac{S_{par}}{v_{s}}=-nB\partial_{s}\left(\frac{m_{i}v_{s}^{2}+T^{*}}{B}\right)-\frac{nT^{*}}{B}\partial_{s}(B)+S_{mom} \tag{D.6}\]

Introducing \(c_{s}=\sqrt{T^{*}/m_{i}}\) and reordering

\[-B(v_{s}^{2}+c_{s}^{2})\partial_{s}\left(\frac{v_{s}}{B}\right)+v_{s}B\partial_{s}\left(\frac{v_{s}^{2}+c_{s}^{2}}{B}\right)=-(v_{s}^{2}+c_{s}^{2})\frac{S_{par}}{n}-v_{s}\frac{c_{s}^{2}}{B}\partial_{s}(B)+\frac{v_{s}S_{mom}}{m_{i}n} \tag{D.7}\]

The left-hand side of this equation is equivalent to

\[-(c_{s}^{2}-v_{s}^{2})\partial_{s}(v_{s})+2v_{s}c_{s}\partial_{s}(c_{s}) \tag{D.8}\]

Exploiting this and introducing \(M=v_{s}/c_{s}\), one obtains

\[\frac{1-M^{2}}{c_{s}}\partial_{s}(v_{s})=2\frac{M}{c_{s}}\partial_{s}(c_{s})+(1+M^{2})\frac{S_{par}}{nc_{s}}+\frac{M}{B}\partial_{s}(B)-\frac{MS_{mom}}{m_{i}nc_{s}^{2}} \tag{D.9}\]

Moreover, \(v_{s}=Mc_{s}\) gives

\[\partial_{s}(v_{s})=\partial_{s}(Mc_{s})=M\partial_{s}(c_{s})+c_{s}\partial_{s}(M) \tag{D.10}\]

Exploiting this in (D.9) and using \(B\propto(A_{\perp})^{-1}\), it is finally possible to retrieve (32)

\[(1-M^{2})\partial_{s}(M)=\frac{1+M^{2}}{nc_{s}}S_{par}+\frac{M(1+M^{2})}{c_{s}}\partial_{s}(c_{s})+A_{\perp}M\partial_{s}(\frac{1}{A_{\perp}})-\frac{M}{m_{i}nc_{s}^{2}}S_{mom} \tag{D.11}\]
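As a closing consistency check (not part of the original derivation), the substitution of (D.10) into (D.9) can be verified symbolically; the sketch below uses SymPy, with the \(\partial_{s}\)-derivatives represented by plain symbols:

```python
import sympy as sp

# Sketch: symbolic check that (D.9) + (D.10) give (D.11), i.e. eq. (32).
# Note (M/B) dB corresponds to A_perp M d(1/A_perp)/ds, since B ~ 1/A_perp.
M, cs, n, mi, B = sp.symbols("M c_s n m_i B", positive=True)
dM, dcs, dB, Spar, Smom = sp.symbols("dM dc_s dB S_par S_mom")

dvs = M * dcs + cs * dM                      # (D.10): d(v_s)/ds with v_s = M c_s
lhs_D9 = (1 - M**2) / cs * dvs               # left-hand side of (D.9)
rhs_D9 = (2 * M / cs) * dcs + (1 + M**2) * Spar / (n * cs) \
         + (M / B) * dB - M * Smom / (mi * n * cs**2)

# Solve (D.9) for (1 - M^2) dM and compare with the right-hand side of (D.11).
dM_expr = sp.solve(sp.Eq(lhs_D9, rhs_D9), dM)[0] * (1 - M**2)
rhs_D11 = (1 + M**2) * Spar / (n * cs) + M * (1 + M**2) / cs * dcs \
          + (M / B) * dB - M * Smom / (mi * n * cs**2)
print(sp.simplify(dM_expr - rhs_D11) == 0)   # expected: True
```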
2309.08137
Small scale creation for 2D free boundary Euler equations with surface tension
In this paper, we study the 2D free boundary incompressible Euler equations with surface tension, where the fluid domain is periodic in $x_1$, and has finite depth. We construct initial data with a flat free boundary and arbitrarily small velocity, such that the gradient of vorticity grows at least double-exponentially for all times during the lifespan of the associated solution. This work generalizes the celebrated result by Kiselev--{\v{S}}ver{\'a}k to the free boundary setting. The free boundary introduces some major challenges in the proof due to the deformation of the fluid domain and the fact that the velocity field cannot be reconstructed from the vorticity using the Biot-Savart law. We overcome these issues by deriving uniform-in-time control on the free boundary and obtaining pointwise estimates on an approximate Biot-Savart law.
Zhongtian Hu, Chenyun Luo, Yao Yao
2023-09-15T04:07:50Z
http://arxiv.org/abs/2309.08137v2
# Small scale creation for 2D free boundary Euler equations ###### Abstract In this paper, we study the 2D free boundary incompressible Euler equations with surface tension, where the fluid domain is periodic in \(x_{1}\), and has finite depth. We construct initial data with a flat free boundary and arbitrarily small velocity, such that the gradient of vorticity grows at least double-exponentially for all times during the lifespan of the associated solution. This work generalizes the celebrated result by Kiselev-Sverak [15] to the free boundary setting. The free boundary introduces some major challenges in the proof due to the deformation of the fluid domain and the fact that the velocity field cannot be reconstructed from the vorticity using the Biot-Savart law. We overcome these issues by deriving uniform-in-time control on the free boundary and obtaining pointwise estimates on an approximate Biot-Savart law. ## 1 Introduction The 2D incompressible free boundary Euler equations describe the motion of a fluid in two dimensions with a free boundary separating the moving fluid region \(\mathcal{D}_{t}\) and the vacuum region. In the fluid region, the fluid velocity \(u(t,x)\) and the pressure \(p(t,x)\) satisfy the incompressible Euler equations: \[\begin{cases}\partial_{t}u+u\cdot\nabla u+\nabla p=0,&\text{ in }\mathcal{D}_{t},\\ \nabla\cdot u=0,&\text{ in }\mathcal{D}_{t}.\end{cases} \tag{1.1}\] We consider the setting where the whole spatial domain is \(\mathbb{T}\times\mathbb{R}_{+}\), where \(\mathbb{T}=[-1,1)\) has periodic boundary condition. Assume the fluid domain \(\mathcal{D}_{t}\) consists of an upper moving boundary \(\Gamma_{t}\) and a fixed flat bottom \(\Gamma_{b}=\mathbb{T}\times\{x_{2}=0\}\). Here the free boundary \(\Gamma_{t}\) evolves according to the fluid velocity \(u(t,x)\), namely, its normal velocity \(V\) is given by \[V=u\cdot\mathcal{N}\quad\text{ on }\Gamma_{t}, \tag{1.2}\] where \(\mathcal{N}\) is the outward unit normal to \(\Gamma_{t}\). Throughout this paper, we assume the presence of surface tension, i.e., the pressure on the free boundary obeys \[p=\sigma\mathcal{H}\quad\text{ on }\Gamma_{t}, \tag{1.3}\] where \(\sigma>0\) is the surface tension constant, and \(\mathcal{H}\) is the mean curvature of the free boundary. On the fixed boundary, we impose the no-flow boundary condition \[u\cdot n=0\quad\text{on }\Gamma_{b}, \tag{1.4}\] where \(n=(0,-1)\) is the outward unit normal to \(\Gamma_{b}\). For simplicity, let the initial free boundary be a straight line \(\Gamma_{0}=\mathbb{T}\times\{x_{2}=2\}\), so the initial fluid domain is \[\mathcal{D}_{0}=\mathbb{T}\times(0,2). \tag{1.5}\] The system (1.1)-(1.4) is also referred to as the 2D capillary water wave system. This system has been under very active investigation for the past two decades. The local well-posedness for the free-boundary Euler equations with surface tension is well-known, which can be found in [1, 5, 6, 9, 10, 16, 17, 18, 19]. Unlike the Euler equations in a fixed domain, the local well-posedness for free-boundary Euler equations does not come directly from the a priori estimate since the linearized equations lose certain symmetry on the moving boundary. This issue is resolved by introducing carefully designed approximate equations that are asymptotically consistent with the a priori estimate. In addition, for certain large initial data, it is known that the solution to the water wave system with or without surface tension can form a splash singularity in finite time; see [2, 3, 4, 7]. 
Beyond local well-posedness, a natural question is whether solutions with small initial data stay small for all times. For _irrotational_ \(u_{0}\) (i.e. \(\nabla\times u_{0}=0\)) in a domain with infinite depth, a positive answer was given independently by Ifrim-Tataru [13] (for either periodic or asymptotically flat free boundary) and Ionescu-Pusateri [14] (for asymptotically flat free boundary), where they showed that small initial data leads to a global-in-time small solution. The key strategy there is to reduce the system (1.1)-(1.3) to a new system of equations defined on the moving boundary \(\Gamma_{t}\), owing to the fact that \(u\) is both divergence- and curl-free. See also Deng-Ionescu-Pausader-Pusateri [8] for global-in-time irrotational solutions of the gravity-capillary water-wave system in 3D. However, for _rotational_ \(u_{0}\) it is unknown whether solutions with small initial data always remain small for all times. The goal of this work is to give a negative answer to this question in the finite-depth case - namely, we construct smooth initial data with a flat free boundary and arbitrarily small velocity, where \(\|\nabla\omega(t)\|_{L^{\infty}}\) grows at least double-exponentially for all times during the lifespan of the solution. For 2D Euler equations in a disk, such double-exponential growth of \(\|\nabla\omega(t)\|_{L^{\infty}}\) was constructed in a celebrated result by Kiselev-Sverak [15]. Similar ideas were applied to the torus \(\mathbb{T}^{2}\) by Zlatos [21] to obtain exponential growth of the vorticity gradient, and applied to smooth domains with an axis of symmetry by Xu [20]. In this paper, we aim to extend the construction of [15] to the free boundary setting. Our main result is as follows, which is stated for the \(\sigma=1\) case for simplicity: **Theorem 1.1**.: _Consider the 2D free boundary Euler equations (1.1)-(1.4) with \(\sigma=1\), whose initial domain \(\mathcal{D}_{0}\) is given by (1.5). There exists a smooth velocity field \(v_{0}\in C^{\infty}(\mathcal{D}_{0})\) and universal constants \(\varepsilon_{0},c_{1},c_{2}>0\), such that for any \(\varepsilon\in(0,\varepsilon_{0})\), the solution\({}^{1}\) to (1.1)-(1.4) with initial velocity \(u_{0}:=\varepsilon v_{0}\) satisfies the following for its vorticity \(\omega:=\partial_{1}u_{2}-\partial_{2}u_{1}\):_ Footnote 1: Here and throughout, a _solution_ always means an \(H^{s}\)-solution, for some fixed \(s\geq 4\). Since our initial data is smooth, the local existence of such a solution is guaranteed by [6, 16]. \[\|\nabla\omega(t,\cdot)\|_{L^{\infty}(\mathcal{D}_{t})}\geq\varepsilon\exp(c_{1}\exp(c_{2}\varepsilon t))\quad\text{for all }t\in[0,T), \tag{1.6}\] _where \(T\) is the lifespan of the solution._ **Remark 1.2**.: _(1) In other words, we have constructed smooth small initial data of size \(\varepsilon\ll 1\), such that \(\|u(t)\|_{W^{2,\infty}}\) grows to order one by time \(O(\varepsilon^{-1}\ln\ln\varepsilon^{-1})\), unless a singularity occurs before this time. This result in 2D can be readily extended to the periodic 3D setting, by setting \(u_{0}\) independent of the \(x_{3}\) variable. Thus small initial data will not lead to a small solution for all times, which is a sharp contrast to the irrotational case with infinite depth [8, 13, 14]._ _(2) Theorem 1.1 can be easily generalized to all \(\sigma>0\) (with \(\varepsilon_{0},c_{1},c_{2}\) depending on \(\sigma\) now).
A simple scaling argument shows that if \((u(t,\cdot),\mathcal{D}_{t})\) is a solution to (1.1)-(1.4) with \(\sigma=1\), then \((\sqrt{\sigma}u(\sqrt{\sigma}t,\cdot),\mathcal{D}_{\sqrt{\sigma}t})\) solves (1.1)-(1.4) for a given \(\sigma>0\)._ To prove Theorem 1.1, a natural starting point is to enforce the same symmetry as [15], with \(u_{01}\) odd-in-\(x_{1}\) and \(u_{02}\) even-in-\(x_{1}\) respectively. One can easily check that such symmetry holds for all times (so the vorticity remains odd-in-\(x_{1}\) for all times). The proof is standard and we include it in Lemma 2.1 for the sake of completeness. In addition, if we set \(\omega_{0}=1\) at most points of the right half of \(\mathcal{D}_{0}\) (except a set of small measure, since \(\omega_{0}\) has to smoothly transition to \(0\) at \(x_{1}=0\) and \(x_{1}=1\) due to its oddness), one can check that such a property also holds for \(\mathcal{D}_{t}\), since in the free boundary setting the vorticity is also preserved along the characteristics. However, despite these similarities, one faces two major challenges in adapting the proof of [15] to the free boundary setting: The first issue is the deformation of the domain. Since \(\omega_{0}\not\equiv 0\) and \(\mathcal{D}_{t}\) is evolving in time, it may deform a lot from the initial domain \(\mathcal{D}_{0}\). In general, the free boundary \(\Gamma_{t}\) might get very close to the origin, and the nonlinear coupling between \(\Gamma_{t}\)'s evolution and the velocity field in the bulk of the fluid could destroy the small-scale creation mechanism near the origin in [15]. That being said, we show that this can never happen at any time for small initial data. This is because the free boundary Euler equation with surface tension is known to have a conserved energy \(E(t)=K(t)+\sigma L(t)\), where \(K(t)\) is the kinetic energy and \(L(t)\) is the length of the free boundary. (The energy conservation was shown in [16, 17], and we derive it in Proposition 2.4 for the sake of completeness.) Using this conserved energy, we make the simple but important observation that a flat initial free boundary and a small initial kinetic energy guarantee that the free boundary always stays close to \(\Gamma_{0}\), thus can never get close to the origin - see Proposition 3.1 for a precise statement, and see Figure 1 for an illustration. Figure 1: Illustrations of our initial data \(u_{0},\Gamma_{0}\) and its evolution after time \(t\). Here the red and blue colors represent positive and negative vorticity respectively. As we show in Proposition 3.1, for small initial velocity, the free boundary \(\Gamma_{t}\) will be confined within \(\frac{3}{2}<x_{2}<\frac{5}{2}\) for all time during the lifespan of a solution. This allows us to estimate \(u(t,\cdot)\) near the point \((0,0)\). A more serious problem is the lack of a Biot-Savart law in the free boundary setting. Recall that for a _fixed domain_ \(D\), given the vorticity \(\omega(t,\cdot)\) in \(D\) at any moment, the velocity field \(u(t,\cdot)\) is uniquely determined by the Biot-Savart law \(u=\nabla^{\perp}\varphi\), where the stream function \(\varphi\) solves the elliptic equation \(\Delta\varphi=\omega\) in \(D\) with \(\varphi=0\) on \(\partial D\). This Biot-Savart law was crucial in [15] to derive pointwise estimates of \(u(\cdot,t)\).
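For concreteness, in the disk setting of [15] this stream function has a completely explicit kernel, obtained from the Dirichlet Green's function of the unit disk \(D\) by the method of images (a standard formula, recalled here only for orientation): \[\varphi(x)=\int_{D}G(x,y)\,\omega(y)\,dy,\qquad G(x,y)=\frac{1}{2\pi}\Big(\ln|x-y|-\ln\big(|y|\,|x-y^{*}|\big)\Big),\qquad y^{*}=\frac{y}{|y|^{2}},\] and \(u=\nabla^{\perp}\varphi\). It is this explicit kernel that underlies the pointwise velocity estimates in the fixed-domain case.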
In contrast, in the _free boundary_ setting, even with \(\omega(t,\cdot)\) and \(\mathcal{D}_{t}\) given at some time \(t\), it is not sufficient to uniquely determine \(u(t,\cdot)\) - one also needs to know the normal velocity of the free boundary to determine \(u(t,\cdot)\) in the fluid domain. To overcome this challenge, we show that \(u(t,\cdot)\) can still be somewhat determined by \(\omega(t,\cdot)\) by an _approximate Biot-Savart law_ in Section 3.2, which contains an error term that remains regular and small for all times near the origin. This allows us to obtain a pointwise estimate of \(u\) similar to the key lemma in [15, Lemma 3.1], leading to the double-exponential growth of \(\|\nabla\omega(t)\|_{L^{\infty}(\mathcal{D}_{t})}\). #### Notations * Let \(B_{r}\) be the open disk centered at the origin with radius \(r\). In Section 3, we define \(\Omega:=\mathbb{T}\times[0,1]\). We also define \(\Omega^{+}\) and \(\mathcal{D}_{t}^{+}\) as the right half of \(\Omega\) and \(\mathcal{D}_{t}\), i.e. \(\Omega^{+}:=[0,1]\times[0,1]\) and \(\mathcal{D}_{t}^{+}:=\mathcal{D}_{t}\cap\{x_{1}\in[0,1]\}\). * We denote by \(C\) universal constants whose values may change from line to line. Any constants with subscripts, such as \(C_{i}\), stay fixed once they are chosen. #### Acknowledgements CL is supported by the Hong Kong RGC grant No. CUHK-24304621 and CUHK-14302922. YY is partially supported by the NUS startup grant, MOE Tier 1 grant A-0008491-00-00, and the Asian Young Scientist Fellowship. ZH thanks the hospitality of the Chinese University of Hong Kong. The authors thank Tarek Elgindi for suggesting this problem, and Alexander Kiselev for helpful discussions. ## 2 Preliminary results In this section, we collect a few preliminary results on the free boundary Euler equations with surface tension. In Section 2.1, we first demonstrate a symmetry to which the 2D free boundary Euler equations conform. Such symmetry corresponds to the odd-in-\(x_{1}\) symmetry of vorticity in the fixed boundary case [15], and is crucial to our construction. In Section 2.2, we show the conservation of vorticity and an energy balance involving the bulk kinetic energy as well as the length of the free boundary. ### Symmetry in 2D free boundary Euler equations To begin with, we discuss some symmetry properties of the 2D free boundary Euler equations. For the 2D Euler equation in fixed domains, the conservation of odd-in-\(x_{1}\) symmetry in vorticity is crucial in the proof of small scale formations, as seen in [15, 20, 21]. Below we show that a similar symmetry is also preserved for free boundary Euler equations; the difference is that we state the symmetry assumptions in terms of the velocity rather than the vorticity, since for the free boundary Euler equation one cannot uniquely determine the velocity using the vorticity at a given moment due to the kinematic boundary condition (1.2). **Lemma 2.1**.: _Let \((u_{0},\mathcal{D}_{0})\) be the initial data of (1.1)-(1.4), where \(\mathcal{D}_{0}\) is given by (1.5) and \(u_{0}=(u_{01},u_{02})\) satisfies_ \[u_{01}(-x_{1},x_{2})=-u_{01}(x_{1},x_{2}),\quad u_{02}(-x_{1},x_{2})=u_{02}(x_{1},x_{2}).
\tag{2.1}\] _Then for all time during the lifespan of a solution, the solution \((u,\mathcal{D}_{t})\) satisfies the same symmetry, i.e._ \[-u_{1}(t,-x_{1},x_{2})=u_{1}(t,x_{1},x_{2}),\quad u_{2}(t,-x_{1},x_{2})=u_{2}(t,x_{1},x_{2}), \tag{2.2}\] _and the moving fluid domain \(\mathcal{D}_{t}\) remains even in \(x_{1}\), i.e.,_ \[\mathcal{D}_{t}=\widetilde{\mathcal{D}}_{t}:=\{(-x_{1},x_{2}):(x_{1},x_{2})\in\mathcal{D}_{t}\}. \tag{2.3}\] **Remark 2.2**.: _As a direct consequence of (2.2), we know the vorticity \(\omega(t,x)=\nabla^{\perp}\cdot u(t,x)\) stays odd in \(x_{1}\) for all time during the lifespan of a solution._ Proof.: First, setting \[v(t,x_{1},x_{2})=(v_{1}(t,x_{1},x_{2}),v_{2}(t,x_{1},x_{2})) =(-u_{1}(t,-x_{1},x_{2}),u_{2}(t,-x_{1},x_{2})), \tag{2.4}\] \[q(t,x_{1},x_{2}) =p(t,-x_{1},x_{2}), \tag{2.5}\] it suffices to show that \((v,q,\widetilde{\mathcal{D}}_{t})\) also verifies the system (1.1)-(1.4), due to the uniqueness of solutions. Fixing \((x_{1},x_{2})\in\widetilde{\mathcal{D}}_{t}\), a direct computation shows that \[(\partial_{t}v_{1}+v\cdot\nabla v_{1}+\partial_{1}q)|_{(t,x_{1},x_{2})} =-(\partial_{t}u_{1}+u\cdot\nabla u_{1}+\partial_{1}p)|_{(t,-x_{1},x_{2})},\] \[(\partial_{t}v_{2}+v\cdot\nabla v_{2}+\partial_{2}q)|_{(t,x_{1},x_{2})} =(\partial_{t}u_{2}+u\cdot\nabla u_{2}+\partial_{2}p)|_{(t,-x_{1},x_{2})},\] which implies that \(\partial_{t}v+v\cdot\nabla v+\nabla q=0\) in \(\widetilde{\mathcal{D}}_{t}\). Similarly, we have \[\nabla\cdot v|_{(t,x_{1},x_{2})}=\nabla\cdot u|_{(t,-x_{1},x_{2})}=0.\] Second, we need to check the boundary conditions. Since \(\partial\widetilde{\mathcal{D}}_{t}=\widetilde{\Gamma}_{t}\cup\widetilde{\Gamma}_{b}\), where \[\widetilde{\Gamma}_{t}=\{(x_{1},x_{2}):(-x_{1},x_{2})\in\Gamma_{t}\}, \tag{2.6}\] and \(\widetilde{\Gamma}_{b}=\Gamma_{b}\), it is straightforward to check that \(v\cdot n=0\) on \(\widetilde{\Gamma}_{b}\). Furthermore, denoting by \(\widetilde{\mathcal{N}}=(\widetilde{\mathcal{N}}_{1},\widetilde{\mathcal{N}}_{2})\) the outward unit normal to \(\widetilde{\Gamma}_{t}\), we infer from (2.6) that \[\widetilde{\mathcal{N}}_{1}(t,x_{1},x_{2})=-\mathcal{N}_{1}(t,-x_{1},x_{2}),\quad\widetilde{\mathcal{N}}_{2}(t,x_{1},x_{2})=\mathcal{N}_{2}(t,-x_{1},x_{2}). \tag{2.7}\] This yields \(v\cdot\widetilde{\mathcal{N}}|_{(t,x_{1},x_{2})}=u\cdot\mathcal{N}|_{(t,-x_{1},x_{2})}\). Finally, we define \(\widetilde{\mathcal{H}}\) to be the mean curvature of \(\widetilde{\Gamma}_{t}\). By definition, \(\mathcal{H}=\overline{\nabla}\cdot\mathcal{N}\), where \(\overline{\nabla}\) is the spatial derivative tangent to \(\Gamma_{t}\), whose components read \[\overline{\nabla}_{j}=\nabla_{j}-\mathcal{N}_{j}(\mathcal{N}\cdot\nabla),\quad j=1,2.\] This implies \(\widetilde{\mathcal{H}}=\overline{\overline{\nabla}}\cdot\widetilde{\mathcal{N}}\), where \(\overline{\overline{\nabla}}_{j}=\nabla_{j}-\widetilde{\mathcal{N}}_{j}(\widetilde{\mathcal{N}}\cdot\nabla)\). Then, in light of (2.7), a direct computation yields that for any \((x_{1},x_{2})\in\widetilde{\Gamma}_{t}\), \[\widetilde{\mathcal{H}}(t,x_{1},x_{2})=\overline{\overline{\nabla}}\cdot\widetilde{\mathcal{N}}|_{(t,x_{1},x_{2})}=\overline{\nabla}\cdot\mathcal{N}|_{(t,-x_{1},x_{2})}=\mathcal{H}(t,-x_{1},x_{2}). \tag{2.8}\] Thanks to (2.5) and (2.8), we have \[q=\sigma\widetilde{\mathcal{H}},\quad\text{ on }\widetilde{\Gamma}_{t}.\] This concludes the proof.
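To make Remark 2.2 explicit, differentiate the identities (2.2): for \((x_{1},x_{2})\in\mathcal{D}_{t}\), \[\omega(t,-x_{1},x_{2})=\big(\partial_{1}u_{2}-\partial_{2}u_{1}\big)(t,-x_{1},x_{2})=-\partial_{1}u_{2}(t,x_{1},x_{2})+\partial_{2}u_{1}(t,x_{1},x_{2})=-\omega(t,x_{1},x_{2}),\] so the vorticity stays odd in \(x_{1}\); this is the symmetry used repeatedly in Sections 3 and 4.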
### Conservation of vorticity and a conserved \(L^{2}\)-energy In this subsection, we establish two conserved quantities for the free-boundary Euler equations (1.1), one on the vorticity side and one on the velocity side. The first result below shows that, exactly as for the classical fixed-boundary Euler equations, any \(L^{p}\) norm of the vorticity is conserved. **Proposition 2.3**.: _For any \(1\leq p\leq\infty\), we have \(\|\omega(t,\cdot)\|_{L^{p}(\mathcal{D}_{t})}=\|\omega_{0}\|_{L^{p}(\mathcal{D}_{0})}\) for all times during the lifespan of the solution._ Proof.: By applying the operator \(\nabla^{\perp}\cdot\) to the velocity equation in (1.1) and using the divergence-free property of \(u\), \(\omega\) satisfies the following transport equation \[\partial_{t}\omega+u\cdot\nabla\omega=0\quad\text{ in }\mathcal{D}_{t}.\] Thus \(\omega\) is constant along the particle trajectories, and since the flow map generated by the divergence-free field \(u\) preserves volume, every \(L^{p}\) norm of \(\omega\) is conserved. The following proposition shows that free boundary Euler equations with surface tension have a conserved \(L^{2}\)-energy. It plays a pivotal role in quantifying the constraining effect of the surface tension on the behavior of the free boundary, as we will see in Section 3.1. **Proposition 2.4**.: _Let_ \[E(t):=K(t)+\sigma L(t), \tag{2.9}\] _where_ \[K(t):=\frac{1}{2}\int_{\mathcal{D}_{t}}|u(t,x)|^{2}\,dx\quad\text{ and }\quad L(t):=\int_{\Gamma_{t}}dS_{t}\] _are the kinetic energy of the fluid and the length of \(\Gamma_{t}\) respectively. Then_ \[E(t)=E(0) \tag{2.10}\] _for all times during the lifespan of the solution._ Proof.: We will verify the identity (2.10) by direct computation. We start from \[\frac{d}{dt}K(t)=\int_{\mathcal{D}_{t}}(\partial_{t}u+u\cdot\nabla u)\cdot u\,dx=-\int_{\mathcal{D}_{t}}\nabla p\cdot u\,dx,\] and then apply the divergence theorem and the boundary conditions to obtain \[-\int_{\mathcal{D}_{t}}\nabla p\cdot u\,dx=-\int_{\Gamma_{t}}p(u\cdot\mathcal{N})\,dS_{t}-\int_{\Gamma_{b}}p\underbrace{(u\cdot n)}_{=0}\,dx_{1}+\int_{\mathcal{D}_{t}}p\underbrace{(\nabla\cdot u)}_{=0}\,dx=-\int_{\Gamma_{t}}p(u\cdot\mathcal{N})\,dS_{t}.\] Now, invoking the boundary conditions \(p=\sigma\mathcal{H}\), and \(u\cdot\mathcal{N}=V\) on \(\Gamma_{t}\), we have \[-\int_{\Gamma_{t}}p(u\cdot\mathcal{N})\,dS_{t}=-\int_{\Gamma_{t}}\sigma\mathcal{H}V\,dS_{t}. \tag{2.11}\] On the other hand, since \(\frac{d}{dt}\int_{\Gamma_{t}}dS_{t}=\int_{\Gamma_{t}}\mathcal{H}(u\cdot\mathcal{N})\,dS_{t}\) (whose proof can be found in [11, Chapter 4]), we have \[\frac{d}{dt}L(t)=\int_{\Gamma_{t}}\mathcal{H}V\,dS_{t}.\] Combining this with (2.11), we arrive at \[\frac{d}{dt}E(t)=\frac{d}{dt}(K(t)+\sigma L(t))=0,\] finishing the proof. ## 3 Uniform-in-time estimates for the free boundary problem In this section, we obtain some uniform-in-time estimates of the free boundary and the velocity field, which are at the heart of the proof of the main theorem of the paper. ### Uniform-in-time control of the free boundary and kinetic energy The following result shows that if the initial free boundary is flat and the initial kinetic energy is sufficiently small, the free boundary \(\Gamma_{t}\) stays constrained in a small neighborhood around the initial profile \(\Gamma_{0}\) for all times, and the kinetic energy at time \(t\) always stays below the initial kinetic energy. **Proposition 3.1**.: _Let \(\sigma>0\).
Consider the solution to the system (1.1)-(1.4) with initial fluid domain \(\mathcal{D}_{0}\) given by (1.5), where the initial velocity \(u_{0}\) is smooth and has a small kinetic energy \(K(0)\leq\frac{\sigma}{20}\). Then we have_ \[\Gamma_{t}\subset\mathbb{T}\times\left(\frac{3}{2},\frac{5}{2}\right)\quad\text{for all }t\in[0,T) \tag{3.1}\] _and_ \[K(t)\leq K(0)\quad\text{for all }t\in[0,T), \tag{3.2}\] _where \(T>0\) is the lifespan of the solution._ Proof.: In light of (2.10) in Proposition 2.4, we obtain \[K(t)+\sigma L(t)=K(0)+\sigma L(0) \tag{3.3}\] for all times during the lifespan of the solution. Since \(K(t)\geq 0\), we have \[L(t)=L(0)+\frac{K(0)-K(t)}{\sigma}\leq L(0)+\frac{K(0)}{\sigma}\leq 2.05, \tag{3.4}\] where the last inequality follows from the assumptions (1.5) (so \(L(0)=2\)) and \(K(0)\leq\frac{\sigma}{20}\). Also, we deduce from the incompressibility that \(\mathcal{D}_{t}\) has the same area as \(\mathcal{D}_{0}\), so during the lifespan of the solution, \(\Gamma_{t}\) must intersect \(\Gamma_{0}=\mathbb{T}\times\{2\}\) at least once. In addition, \(\Gamma_{t}\) is a closed curve in \(\mathbb{T}\times\mathbb{R}_{+}\), and its projection onto the \(x_{1}\) axis is the whole set \(\mathbb{T}\). For any closed curve satisfying the properties above, if it intersects either \(\mathbb{T}\times\{\frac{3}{2}\}\) or \(\mathbb{T}\times\{\frac{5}{2}\}\), an elementary computation shows that it must have length at least \(2\sqrt{1+(\frac{1}{2})^{2}}=\sqrt{5}\approx 2.236\). Since the length of \(\Gamma_{t}\) stays below \(2.05\) for all times due to (3.4), we conclude that \(\Gamma_{t}\) must be contained in \(\mathbb{T}\times\left(\frac{3}{2},\frac{5}{2}\right)\) for all times, which proves (3.1). To show (3.2), note that (3.3) gives \(K(t)=K(0)+\sigma(L(0)-L(t))\), where \(L(0)=2\). During the lifespan of the solution, the projection of \(\Gamma_{t}\) onto the \(x_{1}\) axis is the whole set \(\mathbb{T}=[-1,1)\), thus \(\Gamma_{t}\) has length at least \(2\). This yields \(L(t)\geq 2=L(0)\), thus \(K(t)\leq K(0)\). ### Error estimates of an approximate Biot-Savart law As we have described in the introduction, a major issue in obtaining pointwise velocity estimates in the free boundary setting is the lack of a Biot-Savart law, namely, to determine \(u(t,\cdot)\), it is not sufficient to know \(\omega(t,\cdot)\) and \(\mathcal{D}_{t}\). To overcome this challenge, we introduce an "approximate Biot-Savart law" which only uses the information of \(\omega(t,\cdot)\) in the set \(\Omega:=\mathbb{T}\times[0,1]\), which leads to an approximate velocity field \(U(t,\cdot)\) in \(\Omega\). We will then use the uniform-in-time estimates in Proposition 3.1 to obtain a precise estimate on the error between the actual velocity \(u\) and the approximate velocity field \(U\) - it turns out the error is quite regular and small near the origin. Recall the notation \(\Omega:=\mathbb{T}\times[0,1]\), and that \(B_{r}\) denotes the open disk centered at the origin with radius \(r\). We emphasize that as long as the initial kinetic energy is small, we have \(\Omega\subset\mathcal{D}_{t}\) for all times during the lifespan of the solution due to Proposition 3.1.
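The elementary length computation quoted in the proof of Proposition 3.1 can be spelled out as follows. Suppose a closed curve whose \(x_{1}\)-projection is all of \(\mathbb{T}\) passes through a point at height \(x_{2}=2\) and a point at height \(x_{2}=\frac{3}{2}\) (the case of height \(\frac{5}{2}\) is identical). The two arcs joining these points have horizontal extents \(h_{1},h_{2}\) summing to at least \(2\) (one full period), and each arc has vertical extent at least \(\frac{1}{2}\), so by convexity the total length is at least \[\min_{h_{1}+h_{2}\geq 2}\left(\sqrt{h_{1}^{2}+\tfrac{1}{4}}+\sqrt{h_{2}^{2}+\tfrac{1}{4}}\right)=2\sqrt{1+\tfrac{1}{4}}=\sqrt{5}\approx 2.236>2.05.\]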
For any \(t\geq 0\) during the lifespan of the solution, we define an _approximate velocity field_ \(U(t,\cdot):\Omega\to\mathbb{R}^{2}\) as \[U(t,\cdot):=\nabla^{\perp}\Phi(t,\cdot)\quad\text{ in }\Omega, \tag{3.5}\] where \(\Phi(t,\cdot)\) solves the following elliptic equation at the fixed time \(t\): \[\begin{cases}\Delta\Phi(t,\cdot)=\omega(t,\cdot)&\text{ in }\Omega\\ \Phi(t,\cdot)=0&\text{ on }\partial\Omega,\end{cases} \tag{3.6}\] where \(\omega(t,\cdot)=\nabla^{\perp}\cdot u(t,\cdot)\) is the vorticity of the solution \(u(t,\cdot)\). Note that \(U(t,\cdot)\) is uniquely determined by \(\omega(t,\cdot)|_{\Omega}\) using the usual Biot-Savart law for the 2D Euler equation in the fixed domain \(\Omega\), hence the name "approximate Biot-Savart law". To estimate the error between \(U\) and the actual velocity field \(u(t,\cdot)\) restricted to \(\Omega\) (note that \(u|_{\Omega}\) is well defined since \(\Omega\subset\mathcal{D}_{t}\) for all times by Proposition 3.1), we define the error \(e(t,\cdot):\Omega\to\mathbb{R}^{2}\) as \[e(t,\cdot):=u(t,\cdot)|_{\Omega}-U(t,\cdot). \tag{3.7}\] The following proposition plays a key role in our proof of small scale creation. It says that the error \(e\) is very regular in \(B_{1/2}\cap\Omega\), and \(\nabla e\) is pointwise bounded above by \(C\sqrt{K(0)}\) for all times. (In fact, the same estimate holds for any higher derivative of \(e\), at the expense of having a larger \(C\) - but controlling the first derivative of \(e\) is sufficient for us.) **Proposition 3.2**.: _Let \(\sigma>0\). Consider the solution to the system (1.1)-(1.4) with initial fluid domain \(\mathcal{D}_{0}\) given by (1.5), where the initial velocity \(u_{0}\) is smooth and has small kinetic energy \(K(0)\leq\frac{\sigma}{20}\). Then there exists a universal constant \(C\) such that for all times during the lifespan of the solution,_ \[\|\nabla e(t,\cdot)\|_{L^{\infty}(B_{1/2}\cap\Omega)}\leq C\sqrt{K(0)} \tag{3.8}\] _and_ \[e_{2}(t,x_{1},0)=0\quad\text{ for all }x_{1}\in\mathbb{T}. \tag{3.9}\] Proof.: Since \(u(t,\cdot)\) is divergence-free in \(\Omega\) and satisfies \(u\cdot n=0\) on \(\Gamma_{b}\) (so the flux of \(u\) across any horizontal circle in \(\Omega\) vanishes), it admits a single-valued stream function in \(\Omega\), and we can write \[u(t,\cdot)|_{\Omega}=\nabla^{\perp}\big(\Phi(t,\cdot)+F(t,\cdot)\big)\quad\text{ in }\Omega, \tag{3.10}\] where \(F(t,\cdot)\) satisfies \(\Delta F=\omega-\Delta\Phi=0\) in \(\Omega\) and is normalized so that \(F\equiv 0\) on \(\Gamma_{b}\). In particular \(e=u|_{\Omega}-U=\nabla^{\perp}F\), and (3.9) follows since \(e_{2}=\partial_{1}F=0\) on \(\Gamma_{b}\). Moreover, since \(\Phi=0\) on \(\partial\Omega\) and \(\Delta F=0\) in \(\Omega\), an integration by parts gives \(\int_{\Omega}\nabla\Phi\cdot\nabla F\,dx=0\), hence \[\int_{\Omega}|\nabla\Phi(t,x)|^{2}+|\nabla F(t,x)|^{2}dx=\|u(t,\cdot)\|^{2}_{L^{2}(\Omega)}\leq 2K(t)\leq 2K(0),\] from which one obtains \(\|\nabla F(t,\cdot)\|^{2}_{L^{2}(\Omega)}\leq 2K(0)\). Now, since \(F\equiv 0\) on \(\Gamma_{b}\), we infer from the Poincare inequality that \[\|F(t,\cdot)\|^{2}_{H^{1}(\Omega)}\leq CK(0) \tag{3.11}\] for some universal constant \(C\). To show the \(C^{2}\) bound for \(F\), recall that \(F\) is harmonic in \(\Omega=\mathbb{T}\times[0,1]\) and satisfies the boundary condition \(F=0\) on \(\Gamma_{b}\). Let us oddly extend \(F\) to \(\tilde{\Omega}:=\mathbb{T}\times[-1,1]\): \[\tilde{F}(x):=\begin{cases}F(x_{1},x_{2}),&x_{2}\geq 0,\\ -F(x_{1},-x_{2}),&x_{2}<0\end{cases}\quad\text{ for }x=(x_{1},x_{2})\in\tilde{\Omega}.\] By the Schwarz Reflection Principle for real harmonic functions, \(\tilde{F}:\tilde{\Omega}\to\mathbb{R}\) is harmonic in \(\tilde{\Omega}\), thus it is also harmonic in the unit disk \(B_{1}\subset\tilde{\Omega}\), and obeys the bound \[\|\tilde{F}\|_{H^{1}(B_{1})}^{2}\leq\|\tilde{F}\|_{H^{1}(\tilde{\Omega})}^{2}=2\|F\|_{H^{1}(\Omega)}^{2}\leq 2CK(0)\] by (3.11). Applying the standard Calderon-Zygmund estimate (see [12, Chapter 2]) and Sobolev embedding, we conclude that \[\|\tilde{F}\|_{C^{2}(B_{1/2})}\lesssim\|\tilde{F}\|_{H^{4}(B_{1/2})}\lesssim\|\tilde{F}\|_{H^{1}(B_{1})}\leq C\sqrt{K(0)},\] which finishes the proof of (3.8) after recalling \(\nabla e=\nabla(\nabla^{\perp}\tilde{F})\) in \(B_{1/2}\cap\Omega\). Note that Proposition 3.2 only uses the smallness of the initial kinetic energy, and we have not used the symmetry of the initial velocity yet.
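In summary, the mechanism behind this regularity is that the error field solves a homogeneous div-curl system in \(\Omega\): \[\nabla\cdot e=0,\qquad\nabla^{\perp}\cdot e=\omega-\Delta\Phi=0\quad\text{ in }\Omega,\] so each component of \(e\) is harmonic in \(\Omega\); all the possible irregularity of \(u\) near the origin is carried by the explicit part \(U\), while \(e\) obeys interior estimates at unit scale.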
Under the additional symmetry assumptions of Lemma 2.1, we arrive at the following: **Proposition 3.3**.: _Let the initial data of the system (1.1)-(1.4) satisfy the assumptions of both Proposition 3.2 and Lemma 2.1. Then \(e_{1}(t,\cdot)\) is odd in \(x_{1}\) and \(e_{2}(t,\cdot)\) is even in \(x_{1}\) for all times during the lifespan of the solution. Moreover, there exists a universal constant \(C\) such that for any \(x\in B_{1/2}\cap\Omega\), we have_ \[|e_{j}(t,x)|\leq C\sqrt{K(0)}|x_{j}|,\quad j=1,2. \tag{3.12}\] Proof.: Using Lemma 2.1, we know \(u_{1}(t,x)\) is odd in \(x_{1}\) and \(u_{2}(t,x)\) is even in \(x_{1}\) for all times in \(\mathcal{D}_{t}\), thus \(\omega(t,x)\) is odd in \(x_{1}\) in \(\mathcal{D}_{t}\). Since \(\Omega\subset\mathcal{D}_{t}\) by Proposition 3.1, this immediately implies that \(\Phi(t,x)\) in (3.6) is also odd in \(x_{1}\) in \(\Omega\), due to the uniqueness of solutions of (3.6). Using \(U(t,x)=\nabla^{\perp}\Phi(t,x)\), \(U_{1}(t,x)\) is odd in \(x_{1}\) and \(U_{2}(t,x)\) is even in \(x_{1}\) for all times. Recalling \(e=u-U\), we know \(e_{1}(t,x)\) and \(e_{2}(t,x)\) must satisfy the asserted symmetries in \(\Omega\). To show the estimate (3.12), let us fix any \(x\in B_{1/2}\cap\Omega\). First using the fact that \(e_{1}(t,0,x_{2})=0\) for \(x_{2}\in[0,1]\) due to symmetry, we have \[|e_{1}(t,x)|=|e_{1}(t,x_{1},x_{2})-e_{1}(t,0,x_{2})|\leq\|\partial_{1}e_{1}\|_{L^{\infty}(B_{1/2}\cap\Omega)}|x_{1}|\leq C\sqrt{K(0)}|x_{1}|,\] where we used (3.8) in the last inequality. On the other hand, since \(e_{2}(t,x_{1},0)=0\) for \(x_{1}\in\mathbb{T}\) by (3.9), a similar argument to the above yields (3.12) with \(j=2\). ### Estimating \(u\) using an integral of \(\omega\) With the error estimate above, we are finally ready to state and prove a pointwise velocity estimate that parallels the lemmas in [15, Lemma 3.1] and [21, Lemma 2.1]. In the following, let \(\Omega_{+}:=\mathbb{T}_{+}\times[0,1]\). **Proposition 3.4**.: _Let the initial data of the system (1.1)-(1.4) satisfy the assumptions of both Proposition 3.2 and Lemma 2.1. Then for any \(x\in B_{1/2}\cap\Omega_{+}\), the following holds for all time during the lifespan of the solution \([0,T)\):_ \[u_{j}(t,x)=(-1)^{j}\frac{4}{\pi}\left(\int_{Q(2x)}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy+B_{j}(t,x)\right)x_{j},\quad j=1,2, \tag{3.13}\] _where \(Q(x):=[x_{1},1]\times[x_{2},1]\), and \(B_{1}\) and \(B_{2}\) satisfy_ \[\begin{split}|B_{1}(t,x)|\leq C_{0}\left(\|\omega_{0}\|_{L^{\infty}(\Omega)}\left(1+\log\left(1+\frac{x_{2}}{x_{1}}\right)\right)+\sqrt{K(0)}\right),\\ |B_{2}(t,x)|\leq C_{0}\left(\|\omega_{0}\|_{L^{\infty}(\Omega)}\left(1+\log\left(1+\frac{x_{1}}{x_{2}}\right)\right)+\sqrt{K(0)}\right)\end{split} \tag{3.14}\] _for some universal constant \(C_{0}\)._ Proof.: Recall that (3.7) gives \[u(t,x)=U(t,x)+e(t,x)\quad\text{ for }x\in B_{1/2}\cap\Omega_{+},t\in[0,T).\] In Proposition 3.3, we have already obtained an estimate for the error term \(e(t,x)\), namely \[|e_{j}(t,x)|\leq C\sqrt{K(0)}x_{j},\quad\text{ for }j=1,2.\] (Note that \(x_{1},x_{2}\geq 0\) since \(x\in B_{1/2}\cap\Omega_{+}\).) Therefore to show (3.13)-(3.14), it suffices to prove that \[U_{j}(t,x)=(-1)^{j}\frac{4}{\pi}\left(\int_{Q(2x)}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy+\tilde{B}_{j}(t,x)\right)x_{j},\quad j=1,2, \tag{3.15}\] where \(\tilde{B}_{1},\tilde{B}_{2}\) satisfy (3.14) without the terms \(\sqrt{K(0)}\) on the right hand side.
To show this, let \(\tilde{\omega}\) be the odd-in-\(x_{2}\) extension of \(\omega\) from \(\Omega=\mathbb{T}\times[0,1]\) to \(\mathbb{T}^{2}\), i.e. \[\tilde{\omega}(t,x_{1},x_{2}):=\begin{cases}\omega(t,x_{1},x_{2}-2n_{2}),&x_{2}\in(2n_{2},1+2n_{2})\\ -\omega(t,x_{1},-(x_{2}-2n_{2})),&x_{2}\in(-1+2n_{2},2n_{2})\end{cases}\quad\text{ for all }n_{2}\in\mathbb{Z}.\] Since \(\tilde{\omega}\) is odd in \(x_{1}\) (by Lemma 2.1) and odd in \(x_{2}\) (by definition of \(\tilde{\omega}\)), there exists a unique odd-odd solution \(\Psi(t,\cdot)\) to the equation \[\Delta\Psi(t,\cdot)=\tilde{\omega}(t,\cdot)\quad\text{ in }\mathbb{T}^{2}, \tag{3.16}\] and \(\Psi\in C^{1,\alpha}(\mathbb{T}^{2})\) for any \(\alpha\in(0,1)\). Note that \(\Psi=0\) on both \(\mathbb{T}\times\{x_{2}=0\}\) and \(\mathbb{T}\times\{x_{2}=1\}\) since \(\tilde{\omega}\) is odd about both lines. This implies \(\Psi=0=\Phi\) on \(\partial\Omega\). Combining this with \(\Delta\Psi=\omega=\Delta\Phi\) in \(\Omega\) leads to \(\Psi=\Phi\) in \(\Omega\), therefore \(U=\nabla^{\perp}\Phi=\nabla^{\perp}\Psi\). Note that for any \(x\in\Omega\), we can express \(\Psi(t,x)\) using the Newtonian potential as \[\Psi(t,x)=\frac{1}{2\pi}\sum_{n\in\mathbb{Z}^{2}}\int_{[-1,1]^{2}}\ln|x-y-2n|\,\tilde{\omega}(t,y)dy\] (note that the sum converges since \(\tilde{\omega}\) has mean zero in \([-1,1]^{2}\)), which leads to the following representation of \(U\): \[U(t,x)=\nabla^{\perp}\Psi(t,x)=\frac{1}{2\pi}\sum_{n\in\mathbb{Z}^{2}}\int_{[-1,1]^{2}}\frac{(x_{2}-y_{2}-2n_{2},-x_{1}+y_{1}+2n_{1})}{|x-y-2n|^{2}}\tilde{\omega}(t,y)dy.\] This is exactly the Biot-Savart law for the 2D Euler equation in \(\mathbb{T}^{2}\), therefore we can directly use the estimate in [21, Lemma 2.1] to obtain (3.15), where \(\tilde{B}_{1}\) and \(\tilde{B}_{2}\) satisfy \[|\tilde{B}_{1}(t,x)|\leq C\|\omega_{0}\|_{L^{\infty}(\Omega)}\left(1+\min\left\{\log\left(1+\frac{x_{2}}{x_{1}}\right),x_{2}\frac{\|\nabla\omega(t,\cdot)\|_{L^{\infty}([0,2x_{2}]^{2})}}{\|\omega_{0}\|_{L^{\infty}(\Omega)}}\right\}\right), \tag{3.17}\] \[|\tilde{B}_{2}(t,x)|\leq C\|\omega_{0}\|_{L^{\infty}(\Omega)}\left(1+\min\left\{\log\left(1+\frac{x_{1}}{x_{2}}\right),x_{1}\frac{\|\nabla\omega(t,\cdot)\|_{L^{\infty}([0,2x_{1}]^{2})}}{\|\omega_{0}\|_{L^{\infty}(\Omega)}}\right\}\right) \tag{3.18}\] for some universal constant \(C\). This finishes the proof: note that we can simply drop the second argument in the minimum to arrive at \(|\tilde{B}_{j}(t,x)|\leq C\|\omega_{0}\|_{L^{\infty}}(1+\log(1+\frac{x_{3-j}}{x_{j}}))\) for \(j=1,2\). ## 4 Proof of the main theorem Once Proposition 3.4 is established, the rest of the proof is largely parallel to the proof of [15, Theorem 1.1]. However, the situation is slightly more delicate here due to the presence of \(\varepsilon\): recall that our initial velocity \(u_{0}=\varepsilon v_{0}\) depends on \(\varepsilon\), since we want to show that double-exponential growth can happen for arbitrarily small \(\varepsilon\ll 1\). In our proof, we need to construct a \(v_{0}\) that is independent of \(\varepsilon\), and we need to carefully justify that the double-exponential growth phenomenon happens for any small \(\varepsilon\), and quantify the growth rate (which depends on \(\varepsilon\)). Proof of Theorem 1.1.: Recall that the initial velocity is set as \(u_{0}=\varepsilon v_{0}\) with \(\varepsilon\) sufficiently small, where \(v_{0}\in C^{\infty}(\mathcal{D}_{0})\) is a fixed velocity field independent of \(\varepsilon\).
We define \(v_{0}\) as \(v_{0}:=\nabla^{\perp}\phi\), where \(\phi\) solves \[\begin{cases}\Delta\phi=f&\text{in }\mathcal{D}_{0},\\ \phi=0&\text{on }\partial\mathcal{D}_{0}.\end{cases}\] Here \(f\in C^{\infty}(\mathcal{D}_{0})\) is odd in \(x_{1}\), and satisfies \(0\leq f\leq 1\) in \(\mathcal{D}_{0}^{+}\) and \(f(x_{1},x_{2})=1\) for \(x_{1}\in[\kappa^{10},1-\delta]\), where \(\kappa\) and \(\delta\) are small universal constants satisfying \(0<\kappa<\delta<\frac{1}{2}\), and they will be fixed momentarily. Since \(\|f\|_{L^{\infty}(\mathcal{D}_{0})}=1\) regardless of \(\kappa\) and \(\delta\), a standard elliptic estimate gives \(\|v_{0}\|_{L^{2}(\mathcal{D}_{0})}\leq\|\phi\|_{H^{1}(\mathcal{D}_{0})}\leq C\) for some universal constant \(C\), which implies \[K(0)=\frac{1}{2}\|u_{0}\|_{L^{2}(\mathcal{D}_{0})}^{2}=\frac{\varepsilon^{2}}{2}\|v_{0}\|_{L^{2}(\mathcal{D}_{0})}^{2}\leq C_{1}\varepsilon^{2} \tag{4.1}\] for some universal \(C_{1}\). Therefore, setting \(\varepsilon_{0}:=(20C_{1})^{-1/2}\), we have \(K(0)\leq\frac{1}{20}\) for all \(\varepsilon\in(0,\varepsilon_{0})\). Thus for all \(\varepsilon\in(0,\varepsilon_{0})\), the initial velocity \(u_{0}\) constructed as above satisfies the small kinetic energy assumption in Propositions 3.1 and 3.2 (recall that we set \(\sigma=1\) in the assumption of Theorem 1.1). As a result, Proposition 3.1 implies \(\Omega\subset\mathcal{D}_{t}\) for all \(t\in[0,T)\). Due to the odd-in-\(x_{1}\) symmetry of \(f\), \(u_{0}\) also satisfies the symmetry assumption in Lemma 2.1. Since the initial vorticity is \(\omega_{0}=\varepsilon f\) in \(\mathcal{D}_{0}\), the set \(\{x\in\mathcal{D}_{0}^{+}:\omega_{0}(x)\neq\varepsilon\}\) has area less than \(2\delta\). Using the incompressibility of the flow and the conservation of \(\omega\) along the flow map, for any time \(t\in[0,T)\), the set \(\{x\in{\cal D}_{t}^{+}:\omega(t,x)\neq\varepsilon\}\) has area less than \(2\delta\). This fact allows us to obtain a lower bound of the integral in (3.13) using a similar argument to [15, Eq. (3.15)]: for any \(t\in[0,T)\) and \(x\in B_{\delta}\cap\Omega_{+}\), \[\int_{Q(2x)}\frac{y_{1}y_{2}}{|y|^{4}}\omega(y)dy\geq\frac{1}{4}\int_{2\delta}^{1}\int_{\pi/6}^{\pi/3}\frac{\omega(r,\theta)}{r}d\theta dr\geq\frac{\varepsilon}{4}\int_{4\sqrt{\delta}}^{1}\int_{\pi/6}^{\pi/3}\frac{1}{r}d\theta dr=\frac{\varepsilon\pi}{48}(\log\delta^{-1}-2\log 4), \tag{4.2}\] where the first inequality uses the definition of \(Q(2x)\) and the fact that \(\omega\geq 0\) in \(Q(2x)\), and the second inequality uses \(|\{x\in{\cal D}_{t}^{+}:\omega(t,x)\neq\varepsilon\}|<2\delta\): in the polar integral of \(\omega/r\), if we remove a set with area \(2\delta\) closest to the origin from the integral domain \(\{r\in(2\delta,1),\theta\in(\frac{\pi}{6},\frac{\pi}{3})\}\) to reflect the worst-case scenario that minimizes the integral, the remaining set would have inner radius less than \(4\sqrt{\delta}\). Also, applying (4.1) together with the fact that \(\|\omega(t,\cdot)\|_{L^{\infty}}=\|\omega_{0}\|_{L^{\infty}}=\varepsilon\), we can control the terms \(B_{1}\) and \(B_{2}\) in (3.14) as \[|B_{1}(t,x)| \leq C_{0}\varepsilon(2+\sqrt{C_{1}})=:C_{2}\varepsilon\quad\text{ in }B_{\delta}\cap\Omega_{+}\cap\{0\leq x_{2}\leq x_{1}\} \tag{4.3}\] \[|B_{2}(t,x)| \leq C_{0}\varepsilon(2+\sqrt{C_{1}})=:C_{2}\varepsilon\quad\text{ on }B_{\delta}\cap\Omega_{+}\cap\{x_{2}=x_{1}\}.
\tag{4.4}\] From now on, we fix \(\delta\in(0,\frac{1}{2})\) as a small universal constant such that \[\frac{4}{\pi}\left(\frac{\pi}{48}\left(\log\delta^{-1}-2\log 4\right)-C_{2}\right)>1. \tag{4.5}\] With such definition, combining the estimates (3.13) and (4.2)-(4.4), we have \[-u_{1}(t,x) \geq\varepsilon x_{1}\quad\text{ in }B_{\delta}\cap\Omega_{+}\cap\{0\leq x_{2}\leq x_{1}\} \tag{4.6}\] \[u_{2}(t,x) \geq\varepsilon x_{2}\quad\text{ on }B_{\delta}\cap\Omega_{+}\cap\{x_{2}=x_{1}\}. \tag{4.7}\] In particular, (4.6) implies that the flow map starting from \((\delta,0)\) (denote it by \(\eta(t,\delta,0)\)) satisfies \[\eta_{1}(t,\delta,0)\leq\delta e^{-\varepsilon t}, \tag{4.8}\] where we used the fact that \(\eta(t,\delta,0)\) stays on the bottom boundary \(\Gamma_{b}\) for all times. Since \(\omega(t,\eta(t,\delta,0))=\omega_{0}(\delta,0)=\varepsilon\), we know \(\|\nabla\omega(t)\|_{L^{\infty}}\) at least increases exponentially for all times during the lifespan of a solution: \[\|\nabla\omega(t,\cdot)\|_{L^{\infty}({\cal D}_{t})}\geq\frac{|\omega(t,\eta(t,\delta,0))|}{|\eta_{1}(t,\delta,0)|}\geq\frac{\varepsilon}{\delta e^{-\varepsilon t}}=\varepsilon\delta^{-1}e^{\varepsilon t}.\] To upgrade the exponential growth to double-exponential growth, we follow the same argument as [15], except that we have to keep track of the dependence on \(\varepsilon\) in the growth rate. For any \(t>0\) and \(x_{1}\in(0,1)\), we define the following two velocities (which are well defined since \(\Omega\subset{\cal D}_{t}\) for all \(t\in[0,T)\)): \[\underline{u}_{1}(t,x_{1}):=\min_{(x_{1},x_{2})\in\Omega_{+},x_{2}<x_{1}}u_{1}(t,x_{1},x_{2}),\quad\bar{u}_{1}(t,x_{1}):=\max_{(x_{1},x_{2})\in\Omega_{+},x_{2}<x_{1}}u_{1}(t,x_{1},x_{2}), \tag{4.9}\] where \(\underline{u}_{1}\) and \(\bar{u}_{1}\) are locally Lipschitz in \(x_{1}\) during the lifespan of the solution. We then define the functions \(a(t)\), \(b(t)\) via the ODEs \[a^{\prime}(t) =\bar{u}_{1}(t,a(t)),\quad a(0)=\kappa^{10}, \tag{4.10}\] \[b^{\prime}(t) =\underline{u}_{1}(t,b(t)),\quad b(0)=\kappa. \tag{4.11}\] We also define the following trapezoidal region: for \(0<x_{1}^{\prime}<x_{1}^{\prime\prime}<1\), let \[\mathcal{O}(x_{1}^{\prime},x_{1}^{\prime\prime}):=\{(x_{1},x_{2})\in\Omega^{+}\ :\ x_{1}^{\prime}<x_{1}<x_{1}^{\prime\prime},\ 0\leq x_{2}\leq x_{1}\}.\] And we set \[\mathcal{O}_{t}:=\mathcal{O}(a(t),b(t)).\] The choice of our initial data gives \(\omega_{0}\equiv\varepsilon\) in \(\mathcal{O}_{0}\). We can argue in the same way as [15, page 1215] that \(\omega(t,\cdot)\equiv\varepsilon\) in \(\mathcal{O}_{t}\): due to the definitions (4.9)-(4.11), we only need to show \(u\cdot(-1,1)>0\) along the diagonal of \(\mathcal{O}_{t}\). This is true since \(u_{1}<0\) and \(u_{2}>0\) on the diagonal \(B_{\delta}\cap\Omega^{+}\cap\{x_{1}=x_{2}\}\), which follows from (4.6)-(4.7). Using (4.6), we have \[b(t)\leq\kappa e^{-\varepsilon t}.
\tag{4.12}\] To obtain a faster decay for \(a(t)\), note that \(\log a(t)\) satisfies the differential inequality \[\frac{d}{dt}\log a(t) =\frac{\bar{u}_{1}(t,a(t))}{a(t)}\leq-\frac{4}{\pi}\left(\int_{Q(2a(t),2a(t))}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy-C_{2}\varepsilon\right)\] \[\leq-\frac{4}{\pi}\left(\int_{Q(2a(t),0)}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy-(C_{2}+C_{3})\varepsilon\right),\] where the first inequality follows from (3.13) and (4.3), and the second inequality follows from \(\omega\leq\varepsilon\) and the fact that for any \(a<\frac{1}{2}\), the integral in the rectangle \(\int_{[2a,1]\times[0,2a]}\frac{y_{1}y_{2}}{|y|^{4}}dy\) is bounded by a universal constant \(C_{3}\). On the other hand, using (3.13) and (4.3), \(\log b(t)\) satisfies the differential inequality in the opposite direction: \[\frac{d}{dt}\log b(t)=\frac{\underline{u}_{1}(t,b(t))}{b(t)}\geq-\frac{4}{\pi}\left(\int_{Q(2b(t),0)}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy+C_{2}\varepsilon\right).\] Subtracting them yields the following (where we use that \(\mathcal{O}(2a(t),b(t))\subset Q(2a(t),0)\backslash Q(2b(t),0)\)): \[\frac{d}{dt}\log\frac{b(t)}{a(t)}\geq\frac{4}{\pi}\left(\int_{\mathcal{O}(2a(t),b(t))}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy-(2C_{2}+C_{3})\varepsilon\right). \tag{4.13}\] Using \(\omega(t,\cdot)\equiv\varepsilon\) in \(\mathcal{O}(2a(t),b(t))\subset\mathcal{O}_{t}\), we can bound the integral in (4.13) from below as \[\int_{\mathcal{O}(2a(t),b(t))}\frac{y_{1}y_{2}}{|y|^{4}}\omega(t,y)dy\geq\varepsilon\int_{0}^{\pi/4}\int_{2a(t)/\cos\theta}^{b(t)/\cos\theta}\frac{\sin 2\theta}{2r}drd\theta=\frac{\varepsilon}{4}\left(\log\frac{b(t)}{a(t)}-\log 2\right),\] and plugging it into (4.13) gives \[\frac{d}{dt}\log\frac{b(t)}{a(t)}\geq\varepsilon\left(\frac{1}{\pi}\log\frac{b(t)}{a(t)}-C_{4}\right),\] where \(C_{4}:=\frac{4}{\pi}(\frac{\log 2}{4}+2C_{2}+C_{3})\) is a universal constant. Solving this differential inequality gives \[\log\frac{b(t)}{a(t)}\geq\exp\left(\frac{\varepsilon t}{\pi}\right)\left(\log\frac{b(0)}{a(0)}-\pi C_{4}\right). \tag{4.14}\] Since \(\log\frac{b(0)}{a(0)}=9\log\kappa^{-1}\), we can choose \(\kappa\in(0,\delta)\) to be a sufficiently small universal constant such that \(\log\frac{b(0)}{a(0)}-\pi C_{4}>2\). Note that such choice of \(\kappa\) guarantees that \(2a(t)<b(t)\) for any \(t\in[0,T)\). Hence, (4.14) implies the following (where we use \(b(t)\leq 1\) for all \(t\) due to (4.12)): \[a(t)^{-1}\geq\exp\left(2\exp\left(\frac{\varepsilon t}{\pi}\right)\right)b(t)^{-1}\geq\exp\left(2\exp\left(\frac{\varepsilon t}{\pi}\right)\right).\] Finally, using \(\omega\equiv\varepsilon\) in \(\mathcal{O}_{t}\), we have \(\omega(t,a(t),0)=\varepsilon\), thus combining it with \(\omega(t,0,0)=0\) gives \[\|\nabla\omega(t,\cdot)\|_{L^{\infty}(\mathcal{D}_{t})}\geq\frac{\varepsilon}{a(t)}\geq\varepsilon\exp\left(2\exp\left(\frac{\varepsilon t}{\pi}\right)\right)\] for all times during the lifespan of the solution. ## 5 Discussions Finally, we discuss some generalizations of Theorem 1.1, and state some open questions. 1. **Adding gravity to the system**. When a gravity force \(-ge_{2}\) is added to the first equation of (1.1), where \(g>0\) and \(e_{2}=(0,1)^{T}\), the system becomes the 2D _gravity-capillary_ water wave system. Our proof can be easily adapted to this case for \(g>0\) and \(\sigma>0\).
This is because the gravity-capillary water wave system enjoys a similar conserved energy \(E(t)=K(t)+gP(t)+\sigma L(t)\), where \(P(t)=\int_{\mathcal{D}_{t}}x_{2}dx\) is the potential energy. It is simple to check that \(P(t)\geq P(0)\) for all \(t\), since among all sets with the same area as \(\mathcal{D}_{0}\), the set \(\mathcal{D}_{0}\) itself given by (1.5) has the lowest potential energy (a one-line verification is sketched after this list). As a result, the uniform-in-time estimates in Proposition 3.1 still hold. One can also check that adding gravity still preserves the symmetry in Lemma 2.1. The rest of the proof can be carried out without any changes, and we leave the details to interested readers. 2. **Removing surface tension**. It seems challenging to obtain growth results without surface tension. When \(\sigma=0\), the uniform-in-time estimate (3.1) on the free boundary fails, thus the free boundary could potentially get very close to the origin. This difficulty persists even with an additional gravity term - for gravity water waves without surface tension, if the initial kinetic energy is small, using the conserved energy \(K(t)+gP(t)=K(0)+gP(0)\) one can prove that the free boundary stays close to \(\Gamma_{0}\) in the \(L^{2}\) distance for all times; however, their \(L^{\infty}\) difference can still be large. 3. **Different domains**. A natural question is whether the growth result holds for different domains. When the bottom boundary is a graph \(\{x_{2}=g(x_{1}):x_{1}\in\mathbb{T}\}\) where \(g\) is smooth and even-in-\(x_{1}\), we expect the proof would still hold after some modifications, where the estimate of the Biot-Savart law in domains with an axis of symmetry by Xu [20] could be useful. However, adapting the proof to the infinite-depth case (where there is no bottom boundary) requires substantial new ideas. We also point out that our proof crucially relies on the periodic-in-\(x_{1}\) setting, and it is an interesting open question to prove similar results for the \(x_{1}\in\mathbb{R}\) case for finite-energy smooth initial data.
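The minimality of the potential energy claimed in item 1 can be verified in one line. For any measurable set \(S\subset\mathbb{T}\times\mathbb{R}_{+}\) with \(|S|=|\mathcal{D}_{0}|\), the sets \(S\setminus\mathcal{D}_{0}\) and \(\mathcal{D}_{0}\setminus S\) have equal measure, while \(x_{2}\geq 2\) on the former and \(x_{2}\leq 2\) on the latter, so \[\int_{S}x_{2}\,dx-\int_{\mathcal{D}_{0}}x_{2}\,dx=\int_{S\setminus\mathcal{D}_{0}}x_{2}\,dx-\int_{\mathcal{D}_{0}\setminus S}x_{2}\,dx\geq 2|S\setminus\mathcal{D}_{0}|-2|\mathcal{D}_{0}\setminus S|=0.\] Applying this with \(S=\mathcal{D}_{t}\) (which has the same area as \(\mathcal{D}_{0}\) by incompressibility) gives \(P(t)\geq P(0)\).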
2309.17017
A new approach to string theory
In the present paper we consider quantum theories obtained by quantization of classical theories with first-class constraints assuming that these constraints form a Lie algebra. We show that in this case, one can construct physical quantities of a new type. We apply this construction to string theory. We find that scattering amplitudes in critical bosonic closed string theory can be expressed in terms of physical quantities of the new type. Our techniques can be applied also to superstring and heterotic string.
Albert Schwarz
2023-09-29T07:04:13Z
http://arxiv.org/abs/2309.17017v2
# A new approach to string theory. ###### Abstract In the present paper we consider quantum theories obtained by quantization of classical theories with first-class constraints assuming that these constraints form a Lie algebra. We show that in this case, one can construct physical quantities of a new type. We apply this construction to string theory. We find that scattering amplitudes in critical bosonic closed string theory can be expressed in terms of physical quantities of the new type. Our techniques can be applied also to superstring and heterotic string. **Keywords** Operator formalism; conformal field theory; BRST formalism ## 1 Introduction In BRST formalism we can construct physical quantities by taking correlation functions of BRST-closed operators in a physical (BRST-closed) state. (These correlation functions can be considered as polylinear functions on BRST-cohomology.) In the present paper, we consider quantum theories obtained by quantization of classical theories with first-class constraints assuming that these constraints form a Lie algebra. We show that in this case, one can construct physical quantities of a new type (Section 2). We apply this construction to string theory (Sections 5 and 6). We find that scattering amplitudes in critical bosonic closed string theory can be expressed in terms of physical quantities of the type described in Section 2. Our techniques can be applied also to superstring and heterotic string; this will be shown in a separate paper. Our results about scattering amplitudes in string theory are based on a comparison with the expression of these amplitudes in operator formalism [1], [2]. The operator formalism is closely related to Segal's definition of conformal field theory [3]. We recall this definition (or, more precisely, the modification of this definition that is used in operator formalism) and the main ideas of operator formalism (Sections 3 and 4). In the Appendix we sketch a new, simple approach to operator formalism. One of the main takeaways from our results: knowing the one-string space of states in BRST formalism, one can calculate physical quantities describing interacting strings. Neither multi-string states nor worldsheets with non-trivial topology, which are necessary in other approaches, are fundamental in our approach; we show that they are in some sense hidden in the one-string space. The present paper is a byproduct of my attempts to formulate string theory in algebraic and geometric approaches to quantum theory (see [4] and references therein). The results of this paper show the way to solve this problem: it is sufficient to work in the one-string space. ## 2 General considerations For every supermanifold \(M\) we can construct a supermanifold \(\Pi TM\) reversing the parity in fibers of the tangent bundle \(TM\). If \((x^{1},...,x^{m})\) are coordinates in \(M\) then the coordinates in \(\Pi TM\) are \((x^{1},...,x^{m},\xi^{1},...,\xi^{m})\) where the parity of \(\xi^{k}\) is opposite to the parity of \(x^{k}.\) Polynomial functions on \(\Pi TM\) are identified with differential forms on \(M\), more general functions with pseudodifferential forms. The formula \(Q=\xi^{k}\frac{\partial}{\partial x^{k}}\) specifies an odd vector field on \(\Pi TM\); the anticommutator of this vector field with itself vanishes. We can say that \(Q\) specifies a structure of \(Q\)-manifold on \(\Pi TM\); in other words, \(Q\) is a homological vector field. It defines an odd derivation \(d\) of the algebra of functions on \(\Pi TM\).
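For an ordinary (purely even) manifold \(M\), the fact that this derivation squares to zero is immediate in coordinates: for any function \(f(x,\xi)\) on \(\Pi TM\), \[d(df)=\xi^{k}\frac{\partial}{\partial x^{k}}\left(\xi^{l}\frac{\partial f}{\partial x^{l}}\right)=\xi^{k}\xi^{l}\frac{\partial^{2}f}{\partial x^{k}\partial x^{l}}=0,\] since \(\xi^{k}\xi^{l}\) is antisymmetric in \(k,l\) while the mixed partial derivatives are symmetric.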
The operator \(d\) obeys \(d^{2}=0\) and can be identified with the de Rham differential. There exists an invariant definition of \(\Pi TM\) that shows that the construction of \(\Pi TM\) is functorial. In other words, a map \(M\to N\) induces a map \(\Pi TM\to\Pi TN\); the induced map agrees with the de Rham differential. Namely, \(\Pi TM\) can be identified with the space of maps of the \((0,1)\)-dimensional superspace \(\mathbb{R}^{0,1}\) into \(M\). Every vector field on the space \(\mathbb{R}^{0,1}\) induces a vector field on the space of maps. The Lie superalgebra of vector fields on \(\mathbb{R}^{0,1}\) is \((1,1)\)-dimensional; an odd vector field on \(\mathbb{R}^{0,1}\) induces a homological vector field \(Q\) on the space of maps, an even vector field induces a grading on the algebra of functions on this space. This remark allows us to say that \(\Pi T\mathfrak{g}\), where \(\mathfrak{g}\) is a Lie superalgebra, is equipped with the structure of a differential Lie superalgebra. We denote this Lie superalgebra by \(\mathfrak{g}^{\prime}\) and the differential in it by \(Q\). Sometimes it is convenient to consider a semi-direct product \(\mathfrak{g}^{\prime\prime}\) of the Lie superalgebra \(\mathfrak{g}^{\prime}\) and the Lie superalgebra of vector fields on \(\mathbb{R}^{0,1}.\) Similarly, if \(G\) is a supergroup then \(G^{\prime}=\Pi TG\) is also a supergroup (the multiplication in \(G\) induces a multiplication in the space of maps \(\mathbb{R}^{0,1}\to G\)). The Lie superalgebra of \(G^{\prime}\) can be identified with \(\mathfrak{g}^{\prime}\) where \(\mathfrak{g}\) stands for the Lie superalgebra of \(G.\) The homological vector field on \(G^{\prime}\) induces the differential \(Q\) on \(\mathfrak{g}^{\prime}.\) If \(\mathfrak{g}\) is a Lie algebra with generators \(T_{k}\) and commutation relations \([T_{k},T_{l}]=f^{r}_{kl}T_{r}\) the Lie superalgebra \(\mathfrak{g}^{\prime}\) has even generators \(T_{k},\) odd generators \(b_{k}\) and commutation relations \([T_{k},T_{l}]=f^{r}_{kl}T_{r},\)\([T_{k},b_{l}]=f^{r}_{kl}b_{r},\)\([b_{k},b_{l}]_{+}=0.\) The generators \(T_{k}\) are \(Q\)-exact (acting by \(Q\) on \(b_{k}\) we obtain \(T_{k}\)). Every element of \(G^{\prime}\) can be represented in the form \(g\exp(\mu^{k}b_{k})\) where \(\mu^{k}\) are odd parameters, \(g\in G,\) and \(\exp\) stands for the exponential map of the Lie superalgebra into the corresponding supergroup. Let us consider now a classical system that after quantization can be described by a Hilbert space \(\mathcal{E}\). If a new classical system is obtained from this system by means of constraints obeying a Lie algebra \(\mathfrak{g}\) of a group \(G\) then the quantized system can be described in BRST-formalism by the space \(\mathcal{E}^{\prime}\) obtained by adding ghosts to \(\mathcal{E}.\) (To obtain \(\mathcal{E}^{\prime}\) we take the tensor product of \(\mathcal{E}\) by the representation space of canonical anticommutation relations \([\hat{c}^{k},\hat{b}_{l}]_{+}=\delta^{k}_{l},[\hat{c}^{k},\hat{c}^{r}]_{+}=0,[\hat{b}_{l},\hat{b}_{r}]_{+}=0.\)) The constraints induce operators \(T_{k}\) in \(\mathcal{E}\); the BRST-operator \(\hat{Q}\) has the form \(\hat{Q}=T_{k}\hat{c}^{k}+\frac{1}{2}f^{r}_{kl}\hat{c}^{k}\hat{c}^{l}\hat{b}_{r}\) where \(f^{r}_{kl}\) are structure constants of the algebra \(\mathfrak{g}\) and \(\hat{c}^{k},\hat{b}_{l}\) are ghosts obeying canonical anticommutation relations (in the case of an infinite number of degrees of freedom we should use normal ordering; this can lead to anomalies).
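As a consistency check (a direct computation with the canonical anticommutation relations, using the identity \([\hat{c}^{k}\hat{c}^{l}\hat{b}_{r},\hat{b}_{m}]_{+}=(\delta^{k}_{m}\hat{c}^{l}-\delta^{l}_{m}\hat{c}^{k})\hat{b}_{r}\) and the antisymmetry \(f^{r}_{kl}=-f^{r}_{lk}\)): \[[\hat{Q},\hat{b}_{m}]_{+}=T_{k}[\hat{c}^{k},\hat{b}_{m}]_{+}+\frac{1}{2}f^{r}_{kl}\big(\delta^{k}_{m}\hat{c}^{l}-\delta^{l}_{m}\hat{c}^{k}\big)\hat{b}_{r}=T_{m}+f^{r}_{ml}\hat{c}^{l}\hat{b}_{r}.\] This is precisely the operator \(\hat{T}_{m}\) appearing below.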
The operators \(\hat{T}_{k}=T_{k}+f^{r}_{kl}\hat{c}^{l}\hat{b}_{r}\) are BRST-trivial in \(\mathcal{E}^{\prime}\); this follows from the relation \(\hat{T}_{k}=[\hat{Q},\hat{b}_{k}]_{+}.\) Together with the operators \(\hat{b}_{k}\) they specify a representation \(\psi\) of the Lie superalgebra \(\mathfrak{g}^{\prime}\), i.e. a homomorphism of \(\mathfrak{g}^{\prime}\) into the space \(\mathcal{L}\) of linear operators acting in \(\mathcal{E}^{\prime}\); this homomorphism agrees with the differentials (in this statement the space \(\mathcal{L}\) is considered as a Lie superalgebra). _We assume that the representation \(\psi\) is integrable (= can be exponentiated)_, i.e. it can be obtained from a representation \(\Psi\) of the group \(G^{\prime}.\) (Recall that \(\mathfrak{g}\) is the Lie algebra of the group \(G.\)) The representation \(\Psi\) induces a map \(\Psi^{*}\) of \(\mathcal{L}^{*}\) (of the superspace of linear functionals on \(\mathcal{L}\)) into the space of functions on \(G^{\prime}\) (the space of (pseudo)differential forms on \(G\)). This map agrees with the differentials; this means, in particular, that it transforms a \(Q\)-closed element \(\sigma\in\mathcal{L}^{*}\) into a closed (in general inhomogeneous) form \(\Psi^{*}(\sigma)\) on \(G\). (The BRST-operator acts on \(\mathcal{L}\) as an (anti)commutator with \(\hat{Q}\); this action induces a BRST-operator on \(\mathcal{L}^{*}\).) Integrating \(\Psi^{*}(\sigma)\) over a cycle in \(G\) we obtain a physical quantity (the integral does not change if we add to \(\sigma\) a \(Q\)-exact term, hence it depends only on the BRST cohomology class of \(\sigma\)). If \(K\) is a subgroup of \(G\) and the form \(\Psi^{*}(\sigma)\) descends to \(G/K\) we can integrate the form on \(G/K\) over a cycle in \(G/K\). (Here \(G/K\) stands for the space of right cosets \(=\) the space of orbits of the left action of \(K\) on \(G\).) This construction leads to a more general class of physical quantities. Let us consider a special case when a \(Q\)-closed element \(\sigma\in{\cal L}^{*}\) is specified by the formula \[\sigma(A)=\langle\rho|A|\chi\rangle \tag{1}\] where \(A\in{\cal L}\), \(\langle\rho|\in({\cal E}^{\prime})^{*}\) and \(|\chi\rangle\in{\cal E}^{\prime}\) are \(Q\)-closed. Taking \(A\) as \(\Psi(g^{\prime})\) where \(g^{\prime}\in G^{\prime}\) we obtain a \(Q\)-closed function \[(\Psi^{*}\sigma)(g^{\prime})=\langle\rho|\Psi(g^{\prime})|\chi\rangle \tag{2}\] on \(G^{\prime}\) (a non-homogeneous closed form on \(G\)). Representing \(g^{\prime}\in G^{\prime}\) as \(g\exp(\mu^{k}b_{k})\) where \(g\in G\) we obtain \[(\Psi^{*}\sigma)(g\exp(\mu^{k}b_{k}))=\langle\rho|\Psi(g\exp(\mu^{k}b_{k}))|\chi\rangle=\langle\rho|\Psi(g)\exp(\mu^{k}\hat{b}_{k})|\chi\rangle \tag{3}\] (we use the fact that \(G\) is embedded into \(G^{\prime}\), hence \(\Psi\) is defined on \(G\)). _The function (2) descends to \(G^{\prime}/K^{\prime}=(G/K)^{\prime}\) (equivalently the corresponding closed (pseudo)differential form descends to \(G/K\)) if \(\langle\rho|\) is a \(K^{\prime}\)-invariant element of \(({\cal E}^{\prime})^{*}\) (the relation \(\langle\rho|\Psi(k^{\prime})=\langle\rho|\) for \(k^{\prime}\in K^{\prime}\) implies that \((\Psi^{*}\sigma)(k^{\prime}g^{\prime})=(\Psi^{*}\sigma)(g^{\prime})\))._ Homogeneous components of the form (3) are closed forms that can be represented as \[\langle\rho|\Psi(g)B|\chi\rangle \tag{4}\] where \(B\) is a homogeneous polynomial with respect to \(\hat{b}_{k}\).
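Expanding the exponential in (3) in the odd parameters (the expansion terminates since the \(\mu^{k}\) are nilpotent) makes the decomposition into the pieces (4) explicit: \[(\Psi^{*}\sigma)(g\exp(\mu^{k}b_{k}))=\sum_{p\geq 0}\frac{1}{p!}\,\mu^{k_{p}}\cdots\mu^{k_{1}}\,\langle\rho|\Psi(g)\hat{b}_{k_{1}}\cdots\hat{b}_{k_{p}}|\chi\rangle,\] so the coefficients \(\langle\rho|\Psi(g)\hat{b}_{k_{1}}\cdots\hat{b}_{k_{p}}|\chi\rangle\) are the components of the degree-\(p\) homogeneous piece of the corresponding closed form on \(G\).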
Notice that our constructions can be applied to the case when \({\cal E}\) and \({\cal E}^{\prime}\) are replaced by their \(n\)-th tensor powers; then the groups \(G\) and \(G^{\prime}\) should be replaced by direct products of \(n\) copies of these groups. One can consider a more general situation when we have two subgroups of the group \(G\), denoted by \(K\) and \(H\), the element \(|\chi\rangle\) is an \(H^{\prime}\)-invariant element of \({\cal E}^{\prime}\) (i.e. \(\Psi(h^{\prime})|\chi\rangle=|\chi\rangle\) for all \(h^{\prime}\in H^{\prime}\)) and the element \(\langle\rho|\) is a \(K^{\prime}\)-invariant element of the dual space. Then _the function (2) descends to \(H^{\prime}\backslash G^{\prime}/K^{\prime}\) (to the space of double cosets)_. Our consideration can be generalized to the case when \({\mathfrak{g}}\) is a Lie algebra of a semigroup \(G\). In this case one should assume that the representation \(\psi\) is semi-integrable, i.e. it can be obtained from a representation of a semigroup \(G^{\prime}\) having Lie algebra \({\mathfrak{g}}^{\prime}\). Another important generalization: it is sufficient to assume that \(\langle\rho|\) is \(\mathbb{h}^{\prime}\)-invariant (i.e. \(\langle\rho|\psi(\mathbb{h}^{\prime})=0\)). Here \(\mathbb{h}\) is a Lie subalgebra of the Lie algebra \(\mathfrak{g}\). If \(\mathbb{h}\) is the Lie algebra of a connected subgroup \(K\) of the semigroup \(G\), this assumption is equivalent to \(K^{\prime}\)-invariance of \(\langle\rho|\); we come back to the situation considered above. However, in the situation of the next sections the Lie algebra \(\mathbb{h}\) cannot be considered as a Lie algebra of some group. It is easy to check that \(\mathbb{h}^{\prime}\)_-invariance of \(\langle\rho|\) implies that the function (2) descends to \(G^{\prime}/\mathbb{h}^{\prime}\) (equivalently the corresponding form descends to \(G/\mathbb{h}\))._ To define the space of cosets \(G/\mathbb{h}\) we consider the left action of the Lie algebra \(\mathbb{h}\) on the semigroup \(G.\) This action specifies a foliation of \(G;\) one can define \(G/\mathbb{h}\) as the space of leaves of the foliation. Alternatively, \(G/\mathbb{h}\) can be defined as a connected manifold \(M\) where the semigroup \(G\) acts transitively with Lie stabilizer \(\mathbb{h}.\) (We say that the action of \(G\) on \(M\) is transitive if it induces a surjective map \(\tau_{m}\) of the Lie algebra \(\mathfrak{g}\) to the tangent space of \(M\) at any point \(m\in M.\) The Lie stabilizer at the point \(m\) is defined as the kernel of \(\tau_{m};\) we assume that there exists a point with Lie stabilizer \(\mathbb{h}.\)) More generally, _if \(|\chi\rangle\) is \(\tilde{\mathbb{h}}^{\prime}\)-invariant and \(\langle\rho|\) is \(\mathbb{h}^{\prime}\)-invariant then the function (2) descends to a function on the space of double cosets \(\tilde{\mathbb{h}}^{\prime}\backslash G^{\prime}/\mathbb{h}^{\prime}\) (to a (pseudo)differential form on the space of double cosets \(\tilde{\mathbb{h}}\backslash G/\mathbb{h}\))._ In this statement \(\mathbb{h}\) and \(\tilde{\mathbb{h}}\) are two (in general different) Lie subalgebras of \(\mathfrak{g}\). For an appropriate choice of \(\mathfrak{g}\), \(\mathbb{h}\) and \(\tilde{\mathbb{h}}\), this statement can be used to obtain an expression for string amplitudes (Section 6). ## 3 CFT, TCFT, SCFT, TSFT Let us start with a reminder of some basic constructions that are used in two-dimensional conformal field theory (CFT) and in the operator formalism of string theory.
Recall that two Riemannian manifolds are conformally equivalent (specify the same conformal manifold) if there exists a diffeomorphism between these manifolds preserving the Riemannian metric up to multiplication by a function. A two-dimensional oriented conformal manifold can be identified with a complex manifold of complex dimension 1. Maps preserving conformal structure are either holomorphic or antiholomorphic maps of complex manifolds. We consider the moduli space of complex curves (= one-dimensional compact connected complex manifolds) of genus \(\mathrm{g}\) with boundary consisting of \(n\) parametrized circles. (We assume that these circles are ordered.) This moduli space, denoted by \(\mathcal{P}(\mathrm{g},n)\), can be regarded as an infinite-dimensional complex manifold. Equivalently one can define \(\mathcal{P}(\mathrm{g},n)\) as the moduli space of complex curves of genus \({\rm g}\) with \(n\) embedded standard discs. It is easy to construct a natural map \(\phi_{m,n}:{\cal P}({\rm g},n)\times{\cal P}({\rm g}^{\prime},n^{\prime})\to{\cal P}({\rm g}+{\rm g}^{\prime},n+n^{\prime}-2)\) identifying the last circle in the first factor with the first circle in the second factor. Similarly one can construct a map \(\phi_{n}:{\cal P}({\rm g},n)\to{\cal P}({\rm g}+1,n-2)\) identifying the last two circles. In particular, the map \({\cal P}(0,2)\times{\cal P}(0,2)\to{\cal P}(0,2)\) specifies a structure of semigroup on \({\cal P}(0,2)\). This semigroup was introduced independently by Neretin, Kontsevich, and Segal; we call it the semigroup of annuli and denote it by \({\cal A}\). The map \({\cal P}(0,2)\times{\cal P}({\rm g},n)\to{\cal P}({\rm g},n)\) specifies an action of \({\cal A}\) on \({\cal P}({\rm g},n)\). Notice that the Lie algebra of \({\cal A}\) can be identified with diff (with the complexification of the Lie algebra of vector fields on a circle); in other words, this is a complex Lie algebra with generators \(l_{n}\) obeying \([l_{m},l_{n}]=(m-n)l_{m+n}\). In Segal's approach, a CFT having central charge \(c=0\) specifies a map \(\sigma_{{\rm g},n}:{\cal P}({\rm g},n)\to{\cal H}^{n}\) where \({\cal H}\) is a vector space equipped with a bilinear inner product \({\cal H}\otimes{\cal H}\to\mathbb{C}.\) Using this inner product one can construct maps \(\tilde{\phi}_{m,n}:{\cal H}^{m}\otimes{\cal H}^{n}\to{\cal H}^{m+n-2}\) and \(\tilde{\phi}_{n}:{\cal H}^{n}\to{\cal H}^{n-2}.\) Segal's axioms are compatibility conditions for the maps \(\sigma_{{\rm g},n}\), \(\phi_{m,n},\phi_{n},\tilde{\phi}_{m,n},\tilde{\phi}_{n}\). The action of the semigroup \({\cal A}\) on \({\cal P}({\rm g},1)\) and the complex conjugate action generate an action of \({\cal A}\times{\cal A}\) and of the corresponding Lie algebra diff\(\times\)diff on \({\cal H}\). A CFT having central charge \(c\neq 0\) specifies a map sending a point of \({\cal P}({\rm g},n)\) into a point of \({\cal H}^{n}\) defined up to multiplication by a number. In this case, we have a projective representation of diff\(\times\)diff in \({\cal H}\), i.e. a representation of the central extension of this algebra in \({\cal H}\). The central extension of diff is called the Virasoro algebra; we denote it by Vir. Let us consider a CFT with central charge \(c.\) The Lie algebra Vir \(\times\) Vir acts on its space of states \({\cal H}\).
In other words, we have operators \(L_{m},\tilde{L}_{n}\) obeying \([L_{m},L_{n}]=(m-n)L_{m+n}+\frac{c}{12}(m^{3}-m)\delta_{m+n}\), \([\tilde{L}_{m},\tilde{L}_{n}]=(m-n)\tilde{L}_{m+n}+\frac{c}{12}(m^{3}-m)\delta_{m+n}\), \([L_{m},\tilde{L}_{n}]=0\). There exist many important analogs of these constructions. In particular, one can consider spaces \({\cal P}^{\prime}({\rm g},n)=\Pi T{\cal P}({\rm g},n)\) instead of \({\cal P}({\rm g},n)\). It is obvious that analogs of the maps \(\phi_{m,n}\) and \(\phi_{n}\) exist for these spaces. It follows that \({\cal A}^{\prime}={\cal P}^{\prime}(0,2)\) is a semigroup acting on \({\cal P}^{\prime}({\rm g},n)\). Let us fix a \(\mathbb{Z}_{2}\)-graded vector space \({\cal H}\) equipped with an inner product and a parity-reversing differential \(q\) respecting this product. Then topological conformal field theory (TCFT) is specified by maps \({\cal P}^{\prime}({\rm g},n)\to{\cal H}^{n}\). We impose compatibility conditions of these maps with analogs of the maps \(\phi_{m,n},\phi_{n},\tilde{\phi}_{m,n},\tilde{\phi}_{n}\), as well as compatibility conditions with the differential \(q\) and the homological vector field on \({\cal P}^{\prime}({\rm g},n).\) It follows that the semigroup \({\cal A}^{\prime}\times{\cal A}^{\prime}\) and its Lie algebra \({\rm diff}^{\prime}\times{\rm diff}^{\prime}\) act in \({\cal H}\). Replacing in the definition of CFT conformal manifolds with superconformal manifolds we obtain a definition of superconformal field theory (SCFT). One can define also topological superconformal field theory (TSFT); the modification that leads from SCFT to TSFT is very similar to the modification leading from CFT to TCFT. ## 4 Subalgebras, stabilizers, invariants The Lie algebra diff consists of complex vector fields on a circle. A very general way to construct Lie subalgebras of diff is based on the consideration of an embedding of the circle into a complex manifold \(M\). Then complex vector fields on the circle that can be holomorphically extended to \(M\) constitute a Lie subalgebra of diff. We can get a smaller Lie subalgebra assuming that the extended vector field vanishes on some subset of \(M.\) A more concrete realization of this construction can be obtained if we take as \(M\) a one-dimensional connected complex manifold (a complex curve) with \(n\) parametrized boundary components (\(n\) circles \(B_{1},..,B_{n}\)), \(p\) punctures (\(p\) deleted points \(x_{1},...,x_{p}\)) and \(m\) marked points \(u_{1},...,u_{m}.\) (Equivalently one can consider a complex curve \(\underline{M}\) with \(n\) embedded disks, \(p\) punctures and \(m\) marked points; then we take as \(M\) the curve \(\underline{M}\) with deleted disks.) The moduli space of objects of this kind will be denoted by \({\cal P}(n,p,m)\), its connected components (labeled by the genus \({\rm g}\) of \(\underline{M}\)) will be denoted by \({\cal P}({\rm g},n,p,m)\). (If \(p=0,m=0\) we obtain the space \({\cal P}({\rm g},n)\) considered in the preceding section.) The direct product of \(n\) copies of the semigroup \({\cal A}\) (hence also the direct product of \(n\) copies of the Lie algebra diff) acts on these moduli spaces. Let us fix one of the boundary components (say the first one) and consider the action of the corresponding semigroup \({\cal A}\) on \({\cal P}({\rm g},n,p,m)\).
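As a quick aside before describing the stabilizers: the bracket \([l_{m},l_{n}]=(m-n)l_{m+n}\) of diff quoted in the previous section admits a one-screen symbolic check. The realization \(l_{n}=-z^{n+1}\,d/dz\) and the use of sympy are assumptions of this illustration, not part of the text.

```python
# Symbolic sanity check of the Witt bracket [l_m, l_n] = (m - n) l_{m+n}
# in the standard realization l_n = -z^{n+1} d/dz.
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)

def l(n, expr):
    """Apply the generator l_n = -z^{n+1} d/dz to expr."""
    return -z**(n + 1) * sp.diff(expr, z)

for m in range(-2, 3):
    for n in range(-2, 3):
        bracket = l(m, l(n, f)) - l(n, l(m, f))
        assert sp.simplify(bracket - (m - n) * l(m + n, f)) == 0
print("[l_m, l_n] = (m - n) l_{m+n} holds for -2 <= m, n <= 2")
```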
The Lie stabilizer \(\mathfrak{h}_{M}\subset\)diff at the point \(M\in{\cal P}({\rm g},n,p,m)\) can be described as the Lie algebra of complex vector fields on the boundary component \(S=B_{1}\) that have a meromorphic extension to \(M\) with zeros at the marked points and singularities only at the punctures. Taking the product of \(n\) copies of the semigroup \({\cal A}\) corresponding to all boundary components and considering the Lie stabilizer \(\mathfrak{h}_{M}\subset\) diff \(\times...\times\)diff we obtain \[{\cal P}({\rm g},n,p,m)=({\cal A}\times...\times{\cal A})/\mathfrak{h}_{M}.\] (We used the fact that \({\cal A}\times...\times{\cal A}\) acts transitively on \({\cal P}({\rm g},n,p,m).\)) The Lie stabilizer \(\mathfrak{h}_{M}\) at the point \(M\in{\cal P}({\rm g},n,p,m)\) consists of vector fields on the boundary of \(M\) that have a meromorphic extension to \(M\) with zeros at the marked points and singularities only at the punctures. Let us consider now CFT with central charge \(c=0\) in Segal's approach. In this approach, we assign a vector \(\phi_{M}\in{\cal H}^{n}\) to every point \(M\in{\cal P}({\rm g},n).\) Here \({\cal H}\) stands for a linear space equipped with a non-degenerate inner product. The semigroup \({\cal A}^{n},\) hence its Lie algebra diff\({}^{n}\), acts on \({\cal H}^{n}\). If \(f\in\)diff is a complex vector field on a circle then the corresponding operator acting on the \(i\)-th factor of \({\cal H}^{n}\) is denoted \(L^{(i)}(f).\) The Virasoro generators acting on the \(i\)-th factor (operators corresponding to vector fields \(z^{k+1}\frac{d}{dz}\)) are denoted by \(L^{(i)}_{k}.\) The Lie stabilizer \({\mathfrak{h}}_{M}\subset{\rm diff}^{n}\) consists of complex vector fields on the boundary that can be holomorphically extended to \(M.\) It is easy to check that \(\phi_{M}\) is \({\mathfrak{h}}_{M}\)-invariant. More generally, let us take \(M\in{\cal P}({\rm g},n,p=0,m).\) Fixing holomorphic coordinates at marked points (=holomorphic disks with centers at these points) we obtain a point \(\tilde{M}\in{\cal P}({\rm g},n+m)\) and a vector \(\phi_{\tilde{M}}\in{\cal H}^{n+m}.\) If \(\chi=\chi_{1}\otimes...\otimes\chi_{m}\in{\cal H}^{m}\) we can define \(\psi(\chi)\in{\cal H}^{n}\) as the inner product of \(\phi_{\tilde{M}}\) and \(\chi\). (We use the inner product in \({\cal H}\) to calculate the pairing of the last \(m\) factors in \({\cal H}^{n+m}\) with \(\chi.\)) If \(L^{(i)}_{k}\chi_{i}=0\) for \(k\geq 0\) then \(\psi(\chi)\) does not depend on the choice of coordinate systems at the marked points; it is \({\mathfrak{h}}_{M}\)-invariant. Here \({\mathfrak{h}}_{M}\) stands for the Lie algebra of complex vector fields on the boundary of \(M\) that can be extended to holomorphic vector fields on \(M\) vanishing at the marked points. (It can be characterized also as the Lie stabilizer of \({\cal A}^{n}\) at the point \(M\in{\cal P}({\rm g},n,p=0,m)\).) Let us formulate a similar statement in the case when we work with TCFT instead of CFT. In this case we have maps \({\cal P}^{\prime}({\rm g},n)\to{\cal H}^{n}\) where \({\cal P}^{\prime}({\rm g},n)=\Pi T{\cal P}({\rm g},n)\) and \({\cal H}\) is equipped with a differential \(q\). The algebra diff\({}^{\prime}\) is represented in \({\cal H}\) by operators \(L(f),b(f)\) where \(f\in\)diff. They obey \([L(f),L(g)]=L([f,g])\), \([L(f),b(g)]=b([f,g])\), \([b(f),b(g)]_{+}=0\), \(L(f)=[q,b(f)]_{+}\).
This action induces an action of diff\({}^{\prime}\)\({}^{n}\) on \({\cal H}^{n}\); the operators acting on the \(i\)-th factor are denoted by \(L^{(i)}(f),b^{(i)}(f)\) or by \(L^{(i)}_{k},b^{(i)}_{k}\) if \(f=z^{k+1}\frac{d}{dz}.\) Let us consider \(M\in{\cal P}({\rm g},n,p=0,m)\) and a vector \(\kappa=\kappa_{1}\otimes...\otimes\kappa_{m}\in{\cal H}^{m}\) obeying \(q\kappa_{i}=0\) and \[L^{(i)}_{k}\kappa_{i}=0,b^{(i)}_{k}\kappa_{i}=0 \tag{5}\] for \(k\geq 0\). Then slightly modifying the above construction we can define a vector \(\tau(\kappa)\in{\cal H}^{n}\). This vector is \({\mathfrak{h}}^{\prime}_{M}\)-invariant where \({\mathfrak{h}}_{M}\) is the Lie stabilizer of \({\cal A}^{n}\) at the point \(M\in{\cal P}({\rm g},n,p=0,m).\) (Considering \(M\) as a point of \({\cal P}^{\prime}({\rm g},n,p=0,m)\) we can say that \({\mathfrak{h}}^{\prime}_{M}\) is the Lie stabilizer of \({\cal A}^{\prime n}\) at this point.) Using the inner product in \({\cal H}\) we can define the bra-state \(\langle\tau(\kappa)|.\) This state is also \({\mathfrak{h}}^{\prime}_{M}\)-invariant. ## 5 String theory Let us consider a classical CFT that gives a CFT having central charge \(c\) after quantization. To obtain the corresponding string theory we impose constraints \(L_{n}=0,\tilde{L}_{n}=0\) where \(L_{n},\tilde{L}_{n}\) are classical analogs of Virasoro generators. Using the general construction of Section 2 we see that one can get the space of states of string theory (more precisely, the one-string space in the BRST formalism) by adding ghosts. In other words, we should take the tensor product of the Hilbert space \({\cal E}\) of the CFT with the space of ghosts \({\cal E}_{gh}\) that can be considered as a space of states of a CFT with central charge \(c_{gh}=-26.\) (The space of ghosts is a tensor product of spaces of states of the \(bc\)-system and the \(\tilde{b}\tilde{c}\)-system.) We obtain the space \({\cal E}^{\prime}={\cal E}\otimes{\cal E}_{gh}\). _Let us consider the critical closed bosonic string. This means that we assume that \(c=26\)_. Then the space \({\cal E}^{\prime}\) is a space of states of a CFT with zero central charge. Generators of the Virasoro algebra of this CFT will be denoted by \(\hat{L}_{n},\tilde{\hat{L}}_{m}.\) We need the following relations between the operators \(\hat{L}_{n},\tilde{\hat{L}}_{m},b_{n},\tilde{b}_{n},Q\) acting in this space: \[[\hat{L}_{m},\hat{L}_{n}]=(m-n)\hat{L}_{m+n}\] \[[\tilde{\hat{L}}_{m},\tilde{\hat{L}}_{n}]=(m-n)\tilde{\hat{L}}_{m+n}\] \[[\hat{L}_{m},b_{n}]=(m-n)b_{m+n},[b_{m},b_{n}]_{+}=0 \tag{6}\] \[[\tilde{\hat{L}}_{m},\tilde{b}_{n}]=(m-n)\tilde{b}_{m+n},[\tilde{b}_{m},\tilde{b}_{n}]_{+}=0\] \[\hat{L}_{n}=[Q,b_{n}],\tilde{\hat{L}}_{n}=[Q,\tilde{b}_{n}],[Q,Q]_{+}=0\] These relations indicate that by adding ghosts to a CFT with critical central charge \(c=26\) we obtain a TCFT (topological CFT) on the space \({\cal E}^{\prime}\). Our results can be extended to any TCFT; the assumption that the TCFT is obtained from a CFT by adding ghosts is irrelevant. The Lie superalgebra \({\rm diff}^{\prime}\) is represented in \({\cal E}^{\prime}\) by linear operators \(L({\bf v}),b({\bf v})\) obeying \[[L({\bf v}),L({\bf v}^{\prime})]=L([{\bf v},{\bf v}^{\prime}]),[L({\bf v}),b({\bf v}^{\prime})]=b([{\bf v},{\bf v}^{\prime}]),[b({\bf v}),b({\bf v}^{\prime})]_{+}=0.\] Operators \(\tilde{L}({\bf v}),\tilde{b}({\bf v})\) obey similar relations; they give a second representation of \({\rm diff}^{\prime}\), commuting with the first one.
(Here \({\bf v},{\bf v}^{\prime}\) are complex-valued vector fields on the circle: \({\bf v},{\bf v}^{\prime}\in{\rm diff}\).) The first four lines of (6) describe the representation of generators of \({\rm diff}^{\prime}\times{\rm diff}^{\prime}\) in \({\cal E}^{\prime}.\) This representation can be extended to a representation of \({\rm diff}^{\prime\prime}\times{\rm diff}^{\prime\prime}.\) (Recall that one can obtain \({\rm diff}^{\prime\prime}\) by adding a nilpotent generator and the ghost number to the generators \(L_{n},b_{n}\) of \({\rm diff}^{\prime}.\)) Let us consider the diagonal part of the Lie algebra \({\rm diff}^{\prime}\times{\rm diff}^{\prime}\) (the Lie subalgebra generated by the operators \(L_{n}+\tilde{L}_{n},b_{n}+\tilde{b}_{n}\)). _We assume that the action of the diagonal part of the Lie algebra \(\mathrm{diff}^{\prime}\times\mathrm{diff}^{\prime}\) in \(\mathcal{E}^{\prime}\) can be integrated and gives an action of \(\mathcal{A}^{\prime}\) on \(\mathcal{E}^{\prime}\) (a homomorphism \(\Psi\) of the semigroup \(\mathcal{A}^{\prime}\) into the space \(\mathcal{L}\) of linear operators in \(\mathcal{E}^{\prime}\))._ This is a standard assumption that lies at the basis of Segal's definition of CFT (see Section 3). We can apply the general considerations of Section 2 taking \(G=\mathcal{A}\). Let us consider the case when a form on \(G=\mathcal{A}\) (a function on \(G^{\prime}=\mathcal{A}^{\prime}\)) is specified by (3). The semigroup \(\mathcal{A}\) is homotopy equivalent to \(S^{1}\); therefore an integral of the closed form (3) over any cycle of dimension \(>1\) vanishes. To get non-trivial physical quantities we construct the form (3) in such a way that it descends to \(G/\mathfrak{h}\) where \(\mathfrak{h}\) is an appropriate Lie subalgebra of the Lie algebra diff of the semigroup \(G=\mathcal{A}\) (or, more generally, a Lie subalgebra of the Lie algebra \(\mathrm{diff}^{n}\) of the semigroup \(G=\mathcal{A}^{n}\)). Examples of subalgebras \(\mathfrak{h}\) and corresponding quotient spaces were constructed in Section 4. ## 6 String amplitudes We start the construction of string amplitudes by fixing a one-dimensional compact complex manifold \(P_{0}\in\mathcal{P}(\mathrm{g},1)\) (a complex curve of genus \(\mathrm{g}\) with parametrized boundary diffeomorphic to a circle \(S^{1}\)). Let us denote by \(\mathfrak{h}\) the Lie algebra consisting of vector fields on the boundary that can be extended to holomorphic vector fields on \(P_{0}.\) The semigroup \(\mathcal{A}\) acts on the moduli space \(\mathcal{P}(\mathrm{g},1)\), hence we can consider the corresponding action of its Lie algebra diff on this space. The Lie algebra \(\mathfrak{h}\) can be characterized as the Lie stabilizer of this action at \(P_{0}.\) The action of \(\mathcal{A}\) on \(\mathcal{P}(\mathrm{g},1)\) is transitive, hence \(\mathcal{P}(\mathrm{g},1)\) can be identified with \(\mathcal{A}/\mathfrak{h}\). The Lie stabilizer \(\mathfrak{h}_{P}\) of diff at the point \(P\in\mathcal{P}(\mathrm{g},1)\) is a Lie subalgebra of diff consisting of vector fields that can be holomorphically extended from the boundary to \(P.\) This construction can be generalized to the case when \(P_{0}\in\mathcal{P}(\mathrm{g},n)\) (i.e. it has a boundary consisting of \(n\) parametrized circles; we assume that the orientation of the boundary circles agrees with the orientation of \(P_{0}\)).
The group \(\mathcal{A}^{n}\) and its Lie algebra \(\mathrm{diff}^{n}\) (the direct sum of \(n\) copies of the Lie algebra diff) act on \(\mathcal{P}(\mathrm{g},n)\). The Lie algebra \(\mathfrak{h}_{P}\) can be defined as the Lie stabilizer of this action at \(P;\) if \(P=P_{0}\) we use the notation \(\mathfrak{h}_{P}=\mathfrak{h}.\) The Lie algebra \(\mathfrak{h}_{P}\) consists of complex vector fields on the boundary that can be holomorphically extended from the boundary to \(P.\) The action of \(\mathcal{A}^{n}\) on \(\mathcal{P}(\mathrm{g},n)\) is transitive, hence \(\mathcal{P}(\mathrm{g},n)\) can be identified with \(\mathcal{A}^{n}/\mathfrak{h}.\) All these statements are particular cases of statements formulated in Section 4. Notice that these objects appear in the operator formalism of string theory. The main object of the operator formalism is an element of \({\cal E}^{\prime}\) depending on \(P\in{\cal P}({\rm g},1)\) (more generally, we have a map \({\cal P}({\rm g},n)\rightarrow{\cal E}^{\prime n},\) where \({\cal E}^{\prime n}\) stands for the tensor product of \(n\) copies of \({\cal E}^{\prime}\)). In the notation of [1] this map sends \(P\) into \(\phi_{P}.\) It is well known that \(\phi_{P}\) _is \(\mathfrak{h}^{\prime}_{P}\)-invariant_ (see formula (5.1) of [1] or formula (7.33) of [5]). Notice that \(\phi_{P}\) appears also in Segal's approach to CFT; \(\mathfrak{h}^{\prime}_{P}\)-invariance of \(\phi_{P}\) follows immediately from this approach (see Section 4). In what follows we apply the considerations of Section 2 to the case when \(G={\cal A}^{n},P\in{\cal P}({\rm g},n)\), and \(\Psi_{n}\) denotes the map of \({\cal A}^{\prime n}\) into the space of linear operators in \({\cal E}^{\prime n}.\) (We have a representation \(\psi_{n}\) of the Lie algebra \(({\rm diff}^{\prime}\oplus{\rm diff}^{\prime})^{n}\) in this space. This representation is a homomorphism \(\psi_{n}\) of the Lie algebra \(({\rm diff}^{\prime}\oplus{\rm diff}^{\prime})^{n}\) into the space of linear operators in \({\cal E}^{\prime n}\) considered as a Lie algebra. It is obtained as the tensor product of \(n\) copies of the homomorphism \(\psi\) of \({\rm diff}^{\prime}\oplus{\rm diff}^{\prime}\) into the space of linear operators in \({\cal E}^{\prime}\). On the diagonal part of \(({\rm diff}^{\prime}\oplus{\rm diff}^{\prime})^{n}\) the homomorphism \(\psi_{n}\) is specified by operators \(L^{(k)}({\bf v})+\widetilde{L}^{(k)}({\bf v}),b^{(k)}({\bf v})+\tilde{b}^{(k)}({\bf v})\) where \(k=1,...,n.\) The representation \(\psi_{n}\) can be integrated to give a representation \(\Psi_{n}\) of the diagonal part of \({\cal A}^{\prime n}\times{\cal A}^{\prime n};\) later we use the notations \({\cal A}^{n}\) and \({\cal A}^{\prime n}\) for the diagonal parts.) One can verify that \(P=gP_{0}\) where \(g\in{\cal A}^{n}\) implies \[\phi_{P}=(\Psi_{n}(g))(\phi_{P_{0}}) \tag{7}\] The CFT with space of states \({\cal E}^{\prime}\) has central charge \(c=0.\) The map \({\cal P}({\rm g},n)\rightarrow{\cal E}^{\prime n}\) of the operator formalism is Segal's map \(\sigma_{{\rm g},n}:{\cal P}({\rm g},n)\rightarrow{\cal H}^{n}\) in the case \({\cal H}={\cal E}^{\prime}.\) The formula (7) immediately follows from Segal's axioms. Let us consider the form (3) obtained from (1) where \(\langle\rho|\) is \(\mathfrak{h}^{\prime}\)-invariant.
As an \(\mathfrak{h}^{\prime}\)-invariant element \(\langle\rho|\) we take the bra-state corresponding to \(\phi_{P_{0}}\) where \(P_{0}\in{\cal P}({\rm g},n).\) Then the expression (3) looks as follows \[(\Psi_{n}^{*}\sigma)(g\exp(\mu_{k}^{r}(b_{r}^{(k)}+\tilde{b}_{r}^{(k)})))=\] \[\langle\rho|\Psi_{n}(g)\Psi_{n}(\exp(\mu_{k}^{r}(b_{r}^{(k)}+\tilde{b}_{r}^{(k)})))|\chi\rangle= \tag{8}\] \[\langle\phi_{P}|\exp(\mu_{k}^{r}(b_{r}^{(k)}+\tilde{b}_{r}^{(k)}))|\chi\rangle\] (We used (7).) The expression (8) can be considered as an inhomogeneous closed differential form on \(G={\cal A}^{n};\) it descends to \(G/\mathfrak{h}={\cal P}({\rm g},n)\) because \(\phi_{P}\) is \(\mathfrak{h}^{\prime}_{P}\)-invariant. Homogeneous components of the form (8) are closed forms on \(G/\mathfrak{h}={\cal P}({\rm g},n)\) that can be represented as \[\langle\phi_{P}B|\chi\rangle \tag{9}\] where \(B\) is a homogeneous polynomial with respect to \(b_{r}^{(k)}+\tilde{b}_{r}^{(k)}\). The expression (9) coincides with formulas of the operator formalism. Let us show that the differential form (9) descends to some quotients of \(G\); integrating over cycles in the quotients we obtain string amplitudes. It follows from the considerations above that this expression descends to a closed form on \({\cal P}({\rm g},n)\). Moreover, by imposing some conditions on \(\chi\) one can prove that it descends further to a closed form \(\omega_{B}\) on \(\hat{\cal P}({\rm g},n)={\cal P}({\rm g},n)/(S^{1})^{n}\). (The action of the group \((S^{1})^{n}\) on \({\cal P}({\rm g},n)\) is defined in terms of rotations of the boundary circles.) Namely, we should assume that \(\chi\in{{\cal E^{\prime}}^{n}}\) can be represented as a tensor product \(\chi=\chi^{(1)}\otimes...\otimes\chi^{(n)}\) where \((L_{0}^{(k)}-\tilde{L}_{0}^{(k)})\chi^{(k)}=0\), \((b_{0}^{(k)}-\tilde{b}_{0}^{(k)})\chi^{(k)}=0.\) This condition means that we can apply the statement at the very end of Section 2 taking the Lie algebra of the group \((S^{1})^{n}\) as \(\mathfrak{k}\). There exists a natural map \(\hat{\cal P}({\rm g},n)\to{\cal M}({\rm g},n)\) where \({\cal M}({\rm g},n)\) is the moduli space of complex curves (one-dimensional compact complex manifolds) of genus \({\rm g}\) with \(n\) marked points. This map is a homotopy equivalence, hence it induces an isomorphism of homology groups. This allows us to integrate the forms \(\omega_{B}\) over homology classes of \({\cal M}({\rm g},n).\) (Of course we can get a non-zero answer only if the dimension of the form is equal to the dimension of the homology class. Notice that equivalently we can integrate the original non-homogeneous form; the answer depends only on the homogeneous component of degree equal to the dimension of the integration cycle.) We obtain a formal expression for string amplitudes by integrating \(\omega_{B}\) over the fundamental homology cycle of \({\cal M}({\rm g},n)\) (one should take \(B\) having degree equal to the dimension of \({\cal M}({\rm g},n)\)). This is a formal divergent expression; the physical explanation of the divergence is the presence of the tachyon in the spectrum of the bosonic string. From the mathematical viewpoint, the problem lies in the non-compactness of \({\cal M}({\rm g},n)\) (the fundamental homology class is a locally finite cycle; to guarantee convergence we should integrate over a finite cycle or work with the Deligne-Mumford compactification). However, integrals of the forms \(\omega_{B}\) over genuine homology classes of \({\cal M}({\rm g},n)\) exist. (Notice that these forms were used in string field theory [5].)
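A hedged bookkeeping remark (a standard dimension count that the text leaves implicit): a homogeneous component of (9) can pair with the fundamental class only if the number of antighost factors matches the dimension of the moduli space, \[\deg B=\dim_{\mathbb{R}}{\cal M}({\rm g},n)=6{\rm g}-6+2n\qquad(2{\rm g}-2+n>0),\] so for the closed bosonic string one inserts \(6{\rm g}-6+2n\) factors \(b_{r}^{(k)}+\tilde{b}_{r}^{(k)}\), recovering the familiar counting of \(b\)-ghost insertions.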
## 7 Conclusions and modifications In the present paper, we have shown that starting with the one-string space of states in the BRST formalism one can get an expression for string amplitudes: one should integrate (9) over some cycles in appropriate quotients of \(G={\cal A}^{n}\). The above constructions can be modified in various ways. Our considerations were based on the statement at the end of Section 2; we assumed that \(G={\cal A}^{n}\), the Lie subalgebra \(\mathfrak{h}\) is a Lie stabilizer of \(G\) at a point of \({\cal P}({\rm g},n)\) and the Lie subalgebra \(\mathfrak{k}\) is the Lie algebra of \((S^{1})^{n}.\) One can take other subalgebras \(\mathfrak{h},\mathfrak{k}\); in particular, one can take one or both of these subalgebras as Lie stabilizers of \(G\) at points of \({\cal P}({\rm g},n,p,m)\). (For example, in the situation described at the end of Section 4 we can take \(\mathfrak{h}=\mathfrak{h}_{M}\) and \(\langle\rho|=\langle\tau(\kappa)|.\)) One can hope to get closed forms with integrals related to interesting physical quantities (for example, to inclusive cross-sections or to mass renormalization [6]). One more way to get new quantities is based on the remarks at the end of the Appendix where it is shown that one can construct an analog of the operator formalism in terms of \(L\)-functionals. Other modifications allow us to consider scattering in the superstring and the heterotic string. They are based on the consideration of superconformal manifolds and a supersymmetric analog of the semigroup \({\cal A}.\) Notice that in the present paper, we tacitly assumed that we consider left-right symmetric conformal field theories; of course, considering the heterotic string and other theories with independent left and right sectors we should drop this assumption. (In these cases it is useful to apply the ideas of [7].) More details will be given in the follow-up paper entitled "A new approach to superstring". **Acknowledgments** I am deeply indebted to M. Movshev and A. Rosly for very useful discussions. **Appendix** Let us start with some general remarks about quantization of symplectic vector spaces. In appropriate coordinates we can write the symplectic form on such a space either as \(\omega=\sum dp_{k}dq^{k}\) (real Darboux coordinates \(p_{k},q^{k}\)) or as \(\omega=\sum da_{k}^{*}da_{k}\) (complex Darboux coordinates \(a_{k}^{*},a_{k}\)). (Notice that our considerations can be applied also in the case when the number of indices is infinite or, more generally, in the case when \(k\) takes values in some measure space; in the latter case one should replace sums by integrals.) In real Darboux coordinates we can represent a quantum state as a vector (or, more precisely, as a ray) in the Hilbert space of square integrable functions of \(q^{k}\) (coordinate representation) or of \(p_{k}\) (momentum representation); these representations are related by the Fourier transform. In the complex Darboux representation we represent a state as a vector in the Fock space \({\cal F}\) (in a representation of the canonical commutation relations \[[\hat{a}_{k},\hat{a}_{l}^{*}]=\delta_{k,l},[\hat{a}_{k},\hat{a}_{l}]=[\hat{a}_{k}^{*},\hat{a}_{l}^{*}]=0 \tag{10}\] where there exists a cyclic vector \(\theta\) obeying \(\hat{a}_{k}\theta=0\)).
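The relations (10) and the cyclic vector \(\theta\) are easy to exercise numerically for one mode; the truncation of the Fock space and the use of numpy below are assumptions of this sketch (the commutator necessarily fails in the highest truncated level).

```python
# One-mode illustration of the relations (10) on a truncated Fock space.
import numpy as np

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
astar = a.T                                  # creation operator a^*

comm = a @ astar - astar @ a
assert np.allclose(comm[:N-1, :N-1], np.eye(N - 1))  # [a, a^*] = 1 below the cutoff

theta = np.zeros(N)
theta[0] = 1.0                               # cyclic vector with a theta = 0
assert np.allclose(a @ theta, 0.0)
print("CCR and the vacuum condition hold on the truncated Fock space")
```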
Notice that the choice of Darboux coordinates is not unique; different Darboux coordinates are related by linear canonical transformations: \[\tilde{p}_{k}=A^{l}_{k}p_{l}+B_{kl}q^{l},\tilde{q}^{k}=C^{kl}p_{l}+D^{k}_{l}q^{l}\] in the real case, \[\tilde{a}_{k}=\Phi^{l}_{k}a_{l}+\Psi^{l}_{k}a^{*}_{l},\tilde{a}^{*}_{k}=\overline{\Phi}^{l}_{k}a^{*}_{l}+\overline{\Psi}^{l}_{k}a_{l}\] in the complex case. (Recall that by definition canonical transformations preserve Poisson brackets in classical mechanics and commutation relations after quantization.) Let us concentrate our attention on the complex case. One says that the canonical transformation \[\hat{\tilde{a}}_{k}=\Phi^{l}_{k}\hat{a}_{l}+\Psi^{l}_{k}\hat{a}^{*}_{l},\hat{\tilde{a}}^{*}_{k}=\overline{\Phi}^{l}_{k}\hat{a}^{*}_{l}+\overline{\Psi}^{l}_{k}\hat{a}_{l}\] is proper if there exists a unitary operator \(U\) obeying \[\hat{\tilde{a}}_{k}=U\hat{a}_{k}U^{-1},\hat{\tilde{a}}^{*}_{k}=U\hat{a}^{*}_{k}U^{-1}.\] In the case of a finite number of degrees of freedom all canonical transformations are proper, hence Hilbert spaces constructed by means of different Darboux coordinates can be identified (up to a constant factor because \(U\) is defined up to such a factor). It is easy to check that a canonical transformation is proper iff there exists a vector \(\tilde{\theta}\) in the Fock space \({\cal F}\) obeying \(\hat{\tilde{a}}_{k}\tilde{\theta}=0\) (see [8] for more details). The vector \(\tilde{\theta}\) corresponds to a Lagrangian subspace \(W\) in the complexification of the symplectic vector space \(V\): the subspace \(W\) is defined by the equations \[\Phi^{l}_{k}a_{l}+\Psi^{l}_{k}a^{*}_{l}=0.\] Conversely, a Lagrangian subspace \(W\) in the complexification of \(V\) specifies a vector \(\theta_{W}\) in \({\cal F}\); this vector is defined by the equations \[\hat{w}_{k}\theta_{W}=0 \tag{11}\] where \(w_{k}\) stands for a basis of \(W.\) Notice that (11) does not always have a solution, but if the solution exists it is defined up to a constant factor. The solution is not necessarily normalizable (if \(W\) is real \(\theta_{W}\) is always non-normalizable). In general Lagrangian submanifolds correspond to vectors in Hilbert spaces (in the framework of the semiclassical approximation). This correspondence is ambiguous, but for linear symplectic spaces and linear Lagrangian submanifolds (the case we consider) the quantization is a well-defined procedure. The same construction works if the canonical commutation relations (10) are replaced by the canonical anticommutation relations \[[\hat{a}_{k},\hat{a}_{l}^{*}]_{+}=\delta_{k,l},[\hat{a}_{k},\hat{a}_{l}]_{+}=[\hat{a}^{*}_{k},\hat{a}^{*}_{l}]_{+}=0 \tag{12}\] and the bosonic Fock space by the fermionic Fock space. The coordinates in the analog of the symplectic vector space are regarded as odd (anticommuting) variables. Let us consider now an oriented compact manifold \(M\) with boundary represented as a disjoint union of two parts: an outgoing part \(\partial M_{+}\) with orientation agreeing with the orientation of \(M\) and an incoming part \(\partial M_{-}\) with the opposite orientation. Let us fix an action functional \(S\) on fields defined on \(M.\) Then the variation \(\delta S\) of the functional \(S\) can be written in the form \[\delta S=\int_{M}EM+\alpha_{+}-\alpha_{-} \tag{13}\] The first summand contains integration over the whole manifold; it vanishes if the fields obey the equations of motion. The second and third summands contain integration over the outgoing boundary (\(\alpha_{+}\)) and the incoming boundary (\(\alpha_{-}\)).
We can consider all summands in (13) as one-forms on the space of fields. Let us restrict (13) to the space \({\cal E}\) of fields satisfying the equations of motion \(EM=0\). Then the first summand disappears and the difference \(\alpha_{+}-\alpha_{-}\) is equal to the exact form \(\delta S.\) This means that the two-forms \(\delta\alpha_{+}\) and \(\delta\alpha_{-}\) coincide on \({\cal E}\). (We use the notation \(\delta\) for the de Rham differential on infinite-dimensional spaces.) We obtain a closed two-form on \({\cal E}\); if this form is non-degenerate we can consider \({\cal E}\) as a symplectic manifold; in general \({\cal E}\) is a presymplectic manifold. Let us consider in more detail the case when \(M\) is a two-dimensional manifold. Then the boundary of \(M\) consists of disjoint circles. Applying the above construction to an annular neighborhood of a circle (considering the space of solutions of the equations of motion on the annulus) we obtain a presymplectic manifold; let us assume that this manifold is symplectic. We identify it with the phase space and denote it by \({\cal P}\). Let us assume that the boundary of \(M\) consists of \(n\) outgoing circles (the incoming boundary is empty). Restricting the solutions of the equations of motion on \(M\) to the annular neighborhoods of the boundary circles we obtain a map of the space \({\cal E}\) of solutions on \(M\) into the \(n\)-th power of the phase space \({\cal P}\). It follows from the consideration above that the image of this map is a Lagrangian submanifold of \({\cal P}^{n}\). If the action functional \(S\) is quadratic the equations of motion are linear and we can apply the constructions at the beginning of the Appendix to quantize \({\cal P}\) and this Lagrangian submanifold. We obtain a Hilbert space \({\cal H}\) and a vector (more precisely a ray) in \({\cal H}^{n}\). If the action functional \(S\) is conformally invariant we can consider \(M\) as an element of \({\cal P}({\rm g},n)\). We obtain the map \(\sigma_{{\rm g},n}:{\cal P}({\rm g},n)\to{\cal H}^{n}\) of Segal's approach to CFT. (In general this map is defined up to a factor; this corresponds to a CFT with a non-vanishing central charge.) All our considerations can be applied to the case when the action functional is defined on commuting and anticommuting fields; then we should work with symplectic superspaces and their Lagrangian submanifolds. This remark allows us to apply the above techniques to the bosonic string in flat 26-dimensional Minkowski space (in the BRST formalism all equations of motion are linear). In this case, we recover formulas of the operator formalism of bosonic string theory [1]. Let us apply the same techniques in the formalism of \(L\)-functionals (see for example [4]). In this formalism, we assign to every vector \(\Phi\) in the representation space of CCR (10) or CAR (12) a functional \[L_{\Phi}(\alpha^{*},\alpha)=\langle e^{-\alpha\hat{a}^{*}}e^{\alpha^{*}\hat{a}}\Phi,\Phi\rangle\] or, more generally, to every density matrix \(K\) in this space a functional \[L_{K}(\alpha^{*},\alpha)={\rm tr}\,e^{-\alpha\hat{a}^{*}}e^{\alpha^{*}\hat{a}}K.\] Here \(e^{-\alpha\hat{a}^{*}}=e^{-\alpha^{k}\hat{a}^{*}_{k}}\), where \(\alpha^{k}\) are commuting parameters in the case of CCR and anticommuting parameters in the case of CAR. A nonlinear \(L\)-functional \(L(\alpha^{*},\alpha)\) corresponds to a positive linear functional on the Weyl algebra (a \({}^{*}\)-algebra with generators obeying CCR) or the Clifford algebra (where CCR are replaced with CAR).
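The definition of \(L_{\Phi}\) can be checked numerically for one mode. The sketch below (truncated Fock space, scipy, a coherent test state, and the convention that the inner product is antilinear in its first slot are all assumptions of this illustration) verifies that the vacuum gives \(L\equiv 1\) and a normalized coherent state \(|z\rangle\) gives the pure phase \(\exp(\alpha^{*}z-\alpha\bar{z})\).

```python
# One-mode numerical check of the L-functional L_Phi(alpha*, alpha).
import numpy as np
from scipy.linalg import expm

N = 40
a = np.diag(np.sqrt(np.arange(1, N)), k=1).astype(complex)
astar = a.conj().T

def L(Phi, alpha):
    """<exp(-alpha a*) exp(alpha* a) Phi, Phi> with np.vdot conventions."""
    M = expm(-alpha * astar) @ expm(np.conj(alpha) * a)
    return np.vdot(Phi, M @ Phi)

alpha = 0.3 + 0.2j
vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
assert np.isclose(L(vac, alpha), 1.0)             # vacuum: L = 1

z = 0.5 - 0.4j
coh = expm(z * astar - np.conj(z) * a) @ vac       # normalized coherent state
expected = np.exp(np.conj(alpha) * z - alpha * np.conj(z))
assert np.isclose(L(coh, alpha), expected)
print("L_vacuum = 1 and L_coherent = exp(alpha* z - alpha z-bar) verified")
```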
For every element \(B\) of a \({}^{*}\)-algebra \(\mathcal{A}\) one can define two operators acting on the space of linear functionals on \(\mathcal{A}\); one of them (denoted by the same symbol \(B\)) transforms a linear functional \(\omega(A)\) into the linear functional \(\omega(AB)\), the second one (denoted by the symbol \(\tilde{B}\)) transforms this functional into the linear functional \(\omega(B^{*}A)\). If the functional \(\omega(A)\) corresponds to a vector \(\Phi\) (i.e. \(\omega(A)=\langle\Phi,A\Phi\rangle\)) and \(B\Phi=0\) then \(B\omega=0\) and \(\tilde{B}\omega=0\). This remark allows us to write down the equations for the functionals \(\omega\) corresponding to vectors \(\Phi\) that appear in the operator formalism. Representing linear functionals on the Weyl or Clifford algebra as functionals \(L(\alpha^{*},\alpha)\) we can calculate the operators on these functionals corresponding to the generators \(\hat{a}_{k},\hat{a}^{*}_{k}\) (see [4]). Using this remark we obtain equations for the functionals \(L(\alpha^{*},\alpha)\) appearing in the operator formalism.
2309.05413
Learning noise-induced transitions by multi-scaling reservoir computing
Noise is usually regarded as adversarial when extracting the effective dynamics from time series, such that conventional data-driven approaches usually aim at learning the dynamics by mitigating the noisy effect. However, noise can have a functional role of driving transitions between stable states underlying many natural and engineered stochastic dynamics. To capture such stochastic transitions from data, we find that a machine-learning model, reservoir computing (a type of recurrent neural network), can learn noise-induced transitions. We develop a concise training protocol for tuning hyperparameters, with a focus on a pivotal hyperparameter controlling the time scale of the reservoir dynamics. The trained model generates accurate statistics of transition time and the number of transitions. The approach is applicable to a wide class of systems, including a bistable system under a double-well potential, with either white noise or colored noise. It is also aware of the asymmetry of the double-well potential, the rotational dynamics caused by non-detailed balance, and transitions in multi-stable systems. For the experimental data of protein folding, it learns the transition time between folded states, providing a possibility of predicting transition statistics from a small dataset. The results demonstrate the capability of machine-learning methods in capturing noise-induced phenomena.
Zequn Lin, Zhaofan Lu, Zengru Di, Ying Tang
2023-09-11T12:26:36Z
http://arxiv.org/abs/2309.05413v1
# Learning noise-induced transitions by multi-scaling reservoir computing ###### Abstract Noise is usually regarded as adversarial when extracting the effective dynamics from time series, such that conventional data-driven approaches usually aim at learning the dynamics by mitigating the noisy effect. However, noise can have a functional role of driving transitions between stable states underlying many natural and engineered stochastic dynamics. To capture such stochastic transitions from data, we find that a machine-learning model, reservoir computing (a type of recurrent neural network), can learn noise-induced transitions. We develop a concise training protocol for tuning hyperparameters, with a focus on a pivotal hyperparameter controlling the time scale of the reservoir dynamics. The trained model generates accurate statistics of transition time and the number of transitions. The approach is applicable to a wide class of systems, including a bistable system under a double-well potential, with either white noise or colored noise. It is also aware of the asymmetry of the double-well potential, the rotational dynamics caused by non-detailed balance, and transitions in multi-stable systems. For the experimental data of protein folding, it learns the transition time between folded states, providing a possibility of predicting transition statistics from a small dataset. The results demonstrate the capability of machine-learning methods in capturing noise-induced phenomena. ## I Introduction Noise-induced transitions are ubiquitous in nature and occur in diverse systems with multi-stable states [1]. Examples include switches between different voltage and current states in circuits [2], noisy genetic switches [3], noise-induced biological homochirality of early life self-replicators [4], protein conformational transitions [5; 6], and chemical reactions [7] with the multi-stable probability distribution [8]. Learning noise-induced transitions is vital for understanding critical phenomena of these systems. In many scenarios, only time series are available, without the mathematical equations known a priori. To effectively learn and predict noise-induced transitions from time series, there is also the challenge of discerning dynamics with both slow and fast time scales: fast relaxation around distinct stable states and slow transitions between them, where the fast time-scale signals are often referred to as noise [9; 10]. Consequently, it remains elusive to learn stochastic transitions from time series in general. Recently, many efforts have been made to learn the dynamics from data by machine-learning methods [11; 12; 13; 14; 15]. One type of approach uses automatic differentiation for identifying nonlinear dynamics, denoising time-series data, and parameterizing the noisy probability distribution from data [16]. Due to the non-convexity of the optimization problem, the method may struggle to robustly handle large function libraries for the regression. Another type of approach employs physics-informed neural networks for data-driven solutions and discoveries of partial differential equations [17; 18], or Koopman eigenfunctions from data [19]. However, the method requires an extensive quantity of data to train the deep neural network, alongside precise adjustment and refinement of the network. Despite the broad application of the aforementioned methods, to our knowledge, they have not been utilized in studying noise-induced transitions.
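For concreteness, the kind of time series discussed above can be generated with a few lines of code: an overdamped particle in the double-well potential \(V(x)=x^{4}/4-x^{2}/2\) driven by white noise, integrated by the Euler-Maruyama scheme. The potential, noise strength, and step size below are illustrative assumptions, not the paper's exact setup.

```python
# Generate a toy time series with noise-induced transitions between the
# two wells of V(x) = x^4/4 - x^2/2, using Euler-Maruyama integration.
import numpy as np

rng = np.random.default_rng(0)
dt, steps, sigma = 2e-3, 500_000, 0.45
noise = rng.standard_normal(steps - 1) * sigma * np.sqrt(dt)

x = np.empty(steps)
x[0] = -1.0                                  # start in the left well
for t in range(steps - 1):
    x[t + 1] = x[t] + (x[t] - x[t]**3) * dt + noise[t]   # drift = -V'(x)

# Count transitions between the wells at x = -1 and x = +1,
# ignoring samples inside the barrier region |x| <= 0.5.
signs = np.sign(x[np.abs(x) > 0.5])
transitions = int(np.count_nonzero(np.diff(signs)))
print(f"observed {transitions} well-to-well transitions")
```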
To investigate whether machine-learning methods can capture and predict noise-induced transitions, we start with one machine-learning architecture, reservoir computing (RC) [20; 21]. The training of a reservoir computer is a simple linear regression, which is less computationally expensive than training a neural network by backpropagation. The input layer of the reservoir transforms time series into the space of the reservoir network, while the output layer transforms the variables of the reservoir back to time series. The output layer is trained to minimize the difference between the input and the output by tuning the hyperparameters. Reservoir computing is particularly effective for learning dynamical systems [22; 23], including chaotic systems [24; 25; 26; 27]. A recent study started to apply reservoir computing to stochastic resonance [28]; however, the functional role of noise in shifting dynamics between stable states has not been investigated. There is one previous attempt at employing reservoir computing for noise-induced transitions [29]. Nevertheless, it relies on an impractical assumption of knowing the equation for
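As a concrete illustration of this architecture, here is a minimal leaky echo-state network whose leak rate sets the time scale of the reservoir dynamics (plausibly the pivotal hyperparameter highlighted in the abstract); the network sizes, leak rate, and ridge parameter are illustrative assumptions rather than the paper's tuned protocol.

```python
# Minimal leaky echo-state network: linear readout trained by ridge
# regression for one-step-ahead prediction of a scalar time series.
import numpy as np

rng = np.random.default_rng(1)
n_res, leak, ridge = 300, 0.1, 1e-6          # leak sets the reservoir time scale

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def run(u):
    """Drive the leaky reservoir with the input sequence u; return all states."""
    r = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        r = (1 - leak) * r + leak * np.tanh(W @ r + W_in * u_t)
        states[t] = r
    return states

u = np.sin(np.linspace(0.0, 60.0, 3000))     # toy training signal
S = run(u[:-1])                               # reservoir states
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ u[1:])
print("one-step prediction error:", abs(S[-1] @ W_out - u[-1]))
```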
2309.13517
Lifting Theorems Meet Information Complexity: Known and New Lower Bounds of Set-disjointness
Set-disjointness problems are one of the most fundamental problems in communication complexity and have been extensively studied in past decades. Given its importance, many lower bound techniques were introduced to prove communication lower bounds of set-disjointness. Combining ideas from information complexity and query-to-communication lifting theorems, we introduce a density increment argument to prove communication lower bounds for set-disjointness: We give a simple proof showing that a large rectangle cannot be $0$-monochromatic for multi-party unique-disjointness. We interpret the direct-sum argument as a density increment process and give an alternative proof of randomized communication lower bounds for multi-party unique-disjointness. Avoiding full simulations in lifting theorems, we simplify and improve communication lower bounds for sparse unique-disjointness. Potential applications to be unified and improved by our density increment argument are also discussed.
Guangxu Yang, Jiapeng Zhang
2023-09-24T00:51:23Z
http://arxiv.org/abs/2309.13517v1
# Lifting Theorems Meet Information Complexity: Known and New Lower Bounds of Set-disjointness ###### Abstract Set-disjointness problems are one of the most fundamental problems in communication complexity and have been extensively studied in past decades. Given its importance, many lower bound techniques were introduced to prove communication lower bounds of set-disjointness. Combining ideas from information complexity and query-to-communication lifting theorems, we introduce a density increment argument to prove communication lower bounds for set-disjointness: * We give a simple proof showing that a large rectangle cannot be 0-monochromatic for multi-party unique-disjointness. * We interpret the direct-sum argument as a density increment process and give an alternative proof of randomized communication lower bounds for multi-party unique-disjointness. * Avoiding full simulations in lifting theorems, we simplify and improve communication lower bounds for sparse unique-disjointness. Potential applications to be unified and improved by our density increment argument are also discussed. ## 1 Introduction _Set-disjointness_ is one of the most important problems in communication complexity. Since the formulation of the communication model [20], many researchers have made great efforts to understand the communication complexity, both upper and lower bounds, of set-disjointness problems in various communication models. Lower bounds for set-disjointness also have rich applications in many areas, including streaming lower bounds [AMS99, BYJKS04, KPW21], proof complexity [GP18], game theory [GNOR15, GS17], property testing [BBM11], data structure lower bounds [MNSW95], extension complexity [BM13, GW16], and more. Given the importance of this problem, many techniques were invented simply to understand communication lower bounds of set-disjointness. Some remarkable methods include the rank method [Gri85, HW07, RY20], the discrepancy method [RY15], the corruption bound [Raz92], the smooth rectangle bound [JK10, HJ13], and information complexity [CSWY01, BYJKS04, Gro09, Jay09]. Among all of these methods, the information complexity framework seems to provide the best results so far. We refer interested readers to [CP10] for a good survey on these results. In this paper, we continue the study of set-disjointness. Inspired by _simulation methods_ in _query-to-communication lifting theorems_ [RM97, GPW18, LMM\({}^{+}\)22, YZ22], we present a proof of lower bounds of set-disjointness based on _density increment arguments_ (sometimes also called the structure-vs-pseudorandomness approach). Based on this method, we give several new lower bounds for set-disjointness in different communication models. Our proof can be considered as a combination of simulation methods and information complexity. Compared with previous techniques, our proof is simpler and more general. It addresses some drawbacks of both simulation methods and information complexity methods. More details will be discussed in Section 1.2. ### Our results The main contribution of this work is _"explicit proofs"_ of communication lower bounds, together with some new _unique-disjointness_ lower bounds. We call it explicit because our proof framework has several advantages compared to existing techniques: * It has fewer restrictions on communication models. * It allows us to use communication lower bound techniques in a non-black-box way. * It provides a method to analyze distributions with correlations between different coordinates. In Section 1.3, we discuss three potential applications of these advantages. Each of them corresponds to an advantage here. Our proof builds on a combination of simulation techniques from lifting theorems and information complexity. Specifically, we abstract the core idea from _Raz-McKenzie simulation_ [RM97] and revise it as a density increment argument. To explain more connections and comparisons with previous techniques, we present three lower bounds for the unique-disjointness problem.
We first study the _multi-party communication model_ (\(k\)-UDISJ). In this setting, there are \(k\) parties, where each party \(j\) holds a set \(x_{j}\in\{0,1\}^{n}\) (we use a binary string to represent a set). It is promised that either all sets are pairwise disjoint, or they share a unique common element. Formally, we define * \(D_{0}:=\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{n})^{k}:\forall i,x_{1}(i)+\cdots+x_{k}(i)\leq 1\}\). * \(D_{*}:=\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{n})^{k}:\exists\ell,x_{1}(\ell)=\cdots=x_{k}(\ell)=1\text{ and }\forall i\neq\ell,x_{1}(i)+\cdots+x_{k}(i)\leq 1\}\). We use \(D_{0}\) to refer to the \(\mathsf{no}\) instances and \(D_{*}\) to refer to the \(\mathsf{yes}\) instances. In this setting, we prove a structure lemma that any \(0\)-large rectangle must intersect \(D_{*}\). **Theorem 1.1**.: _Let \(R\subseteq(\{0,1\}^{n})^{k}\) be a rectangle such that \(|R\cap D_{0}|\geq 2^{-n/k}\cdot|D_{0}|\), then \(R\cap D_{*}\neq\emptyset\)._ We note that Theorem 1.1 implies (and is stronger than) an \(\Omega(n/k)\) deterministic communication lower bound of \(k\)-UDISJ. For any protocol with \(o(n/k)\) communication bits, we can always find a rectangle \(R\) in the partition such that \(|R\cap D_{0}|\geq 2^{-n/k}\cdot|D_{0}|\); Theorem 1.1 then tells us that \(R\) is not disjoint from \(D_{*}\). Our proof is a two-page elementary (and self-contained) proof. Furthermore, we do not even need notions like entropy or rank. This proof also reveals the main idea of query-to-communication lifting theorems. We will discuss more details in Section 1.2. Our second contribution is a new proof of randomized communication lower bounds of \(k\)-UDISJ. This problem has been extensively studied for many years. Building on a series of great papers [1, 1, 2], the optimal tight randomized communication lower bound \(\Omega(n/k)\) was finally obtained by [14, 15] through the information complexity framework. In this paper, we reprove this theorem via the density increment argument. **Theorem 1.2**.: _For any \(k\geq 2\), the randomized communication complexity of \(k\)-UDISJ is \(\Omega(n/k)\)._ We first note that Theorem 1.2 does not imply Theorem 1.1 because Theorem 1.1 indicates that every large rectangle (containing many no instances) cannot be monochromatic. However, Theorem 1.2 only proves randomized communication lower bounds. Our proof of Theorem 1.2 is a mix of information complexity and query-to-communication simulations. Roughly speaking, in the information complexity framework, we analyze the information cost for each coordinate and then apply a direct-sum argument to merge them. In our density increment argument, we merge these costs by borrowing the projection operation from query-to-communication simulations. Hence, our density increment argument can be interpreted as an alternative direct-sum argument. Several papers [13, 1, 15] pointed out research directions in connecting information complexity and lifting theorems, and our proof has a great potential to unify information complexity and lifting theorems in this direction. Our last result is a tight deterministic lower bound for (two-party) _sparse unique-disjointness_ (S-UDISJ) for a large range of sparsity parameters. This problem, with sparsity parameter \(s\), can be described as follows: Alice holds a set \(A\) and Bob holds a set \(B\) with \(|A|,|B|\leq s\). It is promised that either \(A\cap B=\emptyset\) or \(|A\cap B|=1\), where Alice and Bob need to distinguish the two cases with deterministic communication.
Two extreme choices of \(s\) correspond to two important problems in communication complexity. If \(s=n\), this problem becomes the standard unique-disjointness problem (i.e., \(k\)-UDISJ with \(k=2\)). When \(s=1\), the problem is essentially the _EQUALITY problem_. For S-UDISJ, we prove the following theorem. **Theorem 1.3**.: _Let \(\epsilon>0\) be any small constant. For any \(s\leq n^{1/2-\epsilon}\), the deterministic communication complexity of \(s\)-sparse unique-disjointness is \(\Omega(s\cdot\log(n/s))\)._ Prior to our work, Kushilevitz and Weinreb [16] proved the same lower bound for a smaller range of \(s\leq\frac{\log n}{\log\log n}\). Then Loff and Mukhopadhyay [17] improved this range to \(s\leq n^{1/101}\). Our Theorem 1.3 further pushes this range to \(\approx n^{1/2}\). Our proof of Theorem 1.3 is built on [17] with several differences; the main difference is that we no longer fully simulate the communication tree by a decision tree. Instead, we aim to find a long path in the communication tree. The approach was suggested by Yang and Zhang [YZ22]. We believe it is possible to further improve the range to all \(s\geq 1\) and discuss more details in Section 5. A similar task is to prove a deterministic lower bound for sparse set-disjointness without the unique requirement. In this setting, Hastad and Wigderson pointed out [10] that the same \(\Omega(s\cdot\log(n/s))\) bound can be proved via the rank method in [11]. However, in the unique setting, [13] showed that the rank method cannot achieve such tight bounds. We emphasize that Theorem 1.3 is a lower bound only for _deterministic_ communication complexity. Allowing public randomness and a constant error, there exists a protocol that costs only \(O(s)\) bits [10]. Therefore, this \(\log(n/s)\) factor is a _separation_ between randomized communication and deterministic communication. This also implies that all lower bound techniques that simultaneously imply randomized communication lower bounds, including information complexity approaches, cannot reprove our bounds. Furthermore, Braverman [1] gave a zero-error protocol for \(1\)-sparse unique-disjointness with constant information cost, which can be extended to all \(s\geq 1\). **Lemma 1.4**.: _For any \(s>0\), there is a zero-error protocol for \(s\)-sparse unique-disjointness with information cost \(O(s)\)._ Overall, Theorem 1.3 demonstrates that the density increment argument has fewer restrictions on communication models and is able to circumvent such barriers of rank methods and information complexity. ### Our techniques Here we give an overview of our proof technique and discuss connections to lifting theorems and information complexity. We focus on Theorem 1.1. We recall that the no instances are \[D_{0}:=\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{n})^{k}:\forall i,x_{1}(i)+\cdots+x_{k}(i)\leq 1\}.\] Our main idea of Theorem 1.1 is a density increment argument. In this argument, we first define the density of a rectangle \(R\) on \(D_{0}\) by \[E(R):=\frac{|R\cap D_{0}|}{|D_{0}|}=\frac{|R\cap D_{0}|}{(k+1)^{n}}.\] It is clear that \(E(R)\leq 1\) because \(R\cap D_{0}\subseteq D_{0}\). Now Theorem 1.1 is equivalent to saying that any rectangle \(R\) with density \(E(R)\geq 2^{-o(n/k)}\) cannot be monochromatic. Let \(R\) be any _monochromatic rectangle_ containing only no instances. We will perform a projection operation to convert \(R\) into another _monochromatic rectangle_ \(R^{\prime}\subseteq(\{0,1\}^{n-1})^{k}\) with a larger density \(E(R^{\prime})\geq E(R)\cdot(1+1/k)\).
Now, since \(R^{\prime}\) is still monochromatic, we repeat this projection for \(n\) rounds, where each round increases the density by a factor of \((1+1/k)\). Let \(R^{*}\) be the rectangle after \(n\) projections; we then have \[E(R^{*})\geq E(R)\cdot(1+1/k)^{n}.\] Combining \(E(R^{*})\leq 1\) and \(E(R)\geq 2^{-o(n/k)}\), this gives a contradiction. Now we briefly explain our projection process. Let \(R=X_{1}\times\cdots\times X_{k}\subseteq(\{0,1\}^{n})^{k}\) be a monochromatic rectangle. For each party \(j\in[k]\), the projection of \(R\) on \(j\) is a rectangle \(\Pi_{j}(R)=X^{\prime}_{1}\times\cdots\times X^{\prime}_{k}\subseteq(\{0,1\}^{n-1})^{k}\) defined by: * For each party \(j^{\prime}\neq j\), \(X^{\prime}_{j^{\prime}}:=\{x^{\prime}\in\{0,1\}^{n-1}:(x^{\prime},0)\in X_{j^{\prime}}\}\). * For the party \(j\), \(X^{\prime}_{j}:=\{x^{\prime}\in\{0,1\}^{n-1}:\text{ either }(x^{\prime},0)\in X_{j}\text{ or }(x^{\prime},1)\in X_{j}\}\). It is not hard to see that \(\Pi_{j}(R)\) (for any \(j\)) preserves the monochromatic property of \(R\). On the other hand, we show that there exists a party \(j\in[k]\) such that \(\Pi_{j}(R)\) has a larger density compared to \(R\). In fact, this density increment captures the communication cost of party \(j\). We give a full proof of Theorem 1.1 in Section 3. We suggest readers begin with Section 3, and then proceed to Section 4 and Section 5. **Connections to lifting theorems.** Query-to-communication lifting theorems are a generic method to lift the hardness of one-party functions (decision tree lower bounds) to two-party lower bounds in the communication model. This recent breakthrough gives many applications in diverse areas [12, 1, ...]. **Different communication models.** As an active research area, many techniques were invented to prove communication complexity lower bounds in past decades. However, many of these techniques are specific to only one communication model. For example, the rank method mainly applies to deterministic communication; information complexity is usually used for randomized communication. We believe our density increment arguments provide more flexibility, having less dependency on the communication models. For example, in Theorem 1.1, we study a lower bound similar to (but not exactly) corruption bounds; in Theorem 1.2, we prove randomized communication lower bounds; in Theorem 1.3, we show deterministic lower bounds (it is a separation from randomized communication). Overall, we demonstrate that (at least for the set-disjointness problems) the density increment arguments combine both advantages of lifting theorems and information complexity. Till now, some communication models are still not fully understood. For example, the \((\exists-1)\)-game is an interesting communication model with applications in extension complexity [11]. However, to our best knowledge, we still do not have a generic way to prove extension complexity lower bounds. Another example is the number-on-forehead model. It would be interesting to see if density increment arguments give new applications in various communication models. **Streaming lower bounds.** The connections between communication complexity and streaming lower bounds were explored by a seminal work of Alon, Matias and Szegedy [1]. [1] proved a streaming lower bound for _frequency moment estimation_ based on a reduction from unique-disjointness lower bounds. After that, many subsequent works made a lot of efforts to improve the lower bounds [1, 2, 3, 4, 5, 6] for this problem.
As [5] also pointed out, any improved lower bound for frequency moment estimation can automatically be applied to improve lower bounds for many other streaming problems. However, the optimal bound for this fundamental problem¹ is still not clear². To the best of our knowledge, all current lower bounds rely on (black-box) reductions from \(k\)-UDISJ lower bounds. As we discussed, a \(\Theta(n/k)\) bound for randomized communication of \(k\)-UDISJ is already tight, and the black-box reduction seems a dead end for achieving tight bounds for frequency moment estimation.

Footnote 1: We focus on the random order streaming model.

Footnote 2: [1] claimed a tight bound; however, [5] pointed out that there is a flaw in [1].

To resolve this barrier, we believe an important step is to open the black box. Put differently, we should extend communication complexity lower-bound techniques to streaming models. Since our proof for \(k\)-UDISJ places fewer restrictions on the model, it is reasonable to try this argument in streaming settings. Concretely, could we prove a tight lower bound for frequency moment estimation by the density increment argument?

Coordinate-wise correlated hard distributions. Many proofs of randomized communication complexity lower bounds start with Yao's minimax theorem and design a hard distribution. In some important applications, the hard distribution has a strong correlation between input coordinates. A good example is _Tseitin problems_, the lower bounds of which can be converted into lower bounds in proof complexity [1], extension complexity [11], and monotone computation [12]. However, the hard distribution of Tseitin has a complicated coordinate-wise correlation, which makes the information complexity argument difficult to use. Known lower bounds for randomized communication [11, 12] all lose a \(\log n\) factor (including one based on a black-box reduction from two-party unique-disjointness [11]). Again, it seems this loss cannot be avoided in black-box reductions, and it would be very interesting to see if our density increment arguments are able to break this barrier.

Acknowledgements. The authors thank Shachar Lovett and Xinyu Mao for helpful discussions. We are grateful to Kewen Wu for reading early versions of this paper and providing useful suggestions.

## 2 Preliminaries

For an integer \(n\geq 0\), we use \([n]\) to denote the set \(\{1,2,\ldots,n\}\). Throughout, \(\log(\cdot)\) is the logarithm with base \(2\). For a finite domain \(X\), we use \(x\sim X\) to denote a random variable \(x\) uniformly distributed over \(X\).

**Definition 2.1** (Entropy).: _Let \(D\) be a random variable on \(X\). The entropy of \(D\) is defined by_

\[\mathcal{H}(D):=\sum_{x}\Pr[D=x]\cdot\log(1/\Pr[D=x]).\]

_Let \(A\) and \(B\) be two random variables on \(X\) and \(Y\) respectively. The conditional entropy of \(A\) given \(B\) is defined by_

\[\mathcal{H}(A|B)=\sum_{y\in Y}\Pr[B=y]\cdot\sum_{x\in X}\Pr[A=x|B=y]\cdot\log(1/\Pr[A=x|B=y]).\]

**Definition 2.2** (Mutual information).: _Let \(A\) and \(B\) be two (possibly correlated) random variables on \(X\) and \(Y\) respectively.
The mutual information of \(A\) and \(B\) is defined by_

\[\mathcal{I}\left(A:B\right)=\mathcal{H}(A)-\mathcal{H}(A|B)=\mathcal{H}(B)-\mathcal{H}(B|A).\]

_Let \(C\) be a random variable on \(Z\). The conditional mutual information of \(A\) and \(B\) given \(C\) is defined by_

\[\mathcal{I}\left(A:B|C\right)=\mathcal{H}(A|C)-\mathcal{H}(A|B,C)=\mathcal{H}(B|C)-\mathcal{H}(B|A,C).\]

We use several basic properties of entropy and mutual information.

**Fact 2.3**.: _Let \(A\) and \(B\) be two (possibly correlated) random variables on \(X\) and \(Y\) respectively._

1. _Conditional entropy inequality:_ \(\mathcal{H}(B|A)\leq\mathcal{H}(B)\)_._
2. _Chain rule:_ \(\mathcal{H}(A,B)=\mathcal{H}(A)+\mathcal{H}(B|A)=\mathcal{H}(B)+\mathcal{H}(A|B)\)_._
3. _Nonnegativity:_ \(\mathcal{I}\left(A:B\right)\geq 0\)_._
4. \(\mathcal{I}\left(A:B\right)\leq\max\{\mathcal{H}(A),\mathcal{H}(B)\}\)_._

## 3 Deterministic lower bound for multi-party unique-disjointness

In this section, we give a simple proof of Theorem 1.1. Our proof is based on a density increment argument. We first formally define the problem. In our writing, we use binary strings to represent sets. We associate any set \(A\subseteq[n]\) with a corresponding string \(x\in\{0,1\}^{n}\) by setting \(x(i)=1\) iff \(i\in A\).

**Definition 3.1** (\(k\)-UDISJ, deterministic version).: _For each \(k\geq 2\) and \(n\geq 1\), we define \(D_{0}\) (no instances) and \(D_{*}\) (yes instances) as follows:_

* \(D_{0}\coloneqq\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{n})^{k}:\forall i,x_{1}(i)+\cdots+x_{k}(i)\leq 1\}\)_._
* \(D_{*}\coloneqq\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{n})^{k}:\exists\ell,x_{1}(\ell)=\cdots=x_{k}(\ell)=1\text{ and }\forall i\neq\ell,x_{1}(i)+\cdots+x_{k}(i)\leq 1\}\)_._

_A \(k\)-party deterministic communication protocol \(C\) solves \(k\)-UDISJ if,_

* _for all_ \((x_{1},\ldots,x_{k})\in D_{0}\)_,_ \(C(x_{1},\ldots,x_{k})=0\)_,_
* _for all_ \((x_{1},\ldots,x_{k})\in D_{*}\)_,_ \(C(x_{1},\ldots,x_{k})=1\)_._

Since the projection may fix some coordinates, we also define the projected instances. For a set \(I\subseteq[n]\), we define \(D_{0}^{I}\) (no instances on \(I\)) as

\[D_{0}^{I}\coloneqq\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{I})^{k}:\forall i\in I,x_{1}(i)+\cdots+x_{k}(i)\leq 1\},\]

and define \(D_{*}^{I}\) (yes instances on \(I\)) as

\[D_{*}^{I}\coloneqq\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{I})^{k}:\exists i\in I,x_{1}(i)=\cdots=x_{k}(i)=1\text{ and }\forall i^{\prime}\neq i,x_{1}(i^{\prime})+\cdots+x_{k}(i^{\prime})\leq 1\}.\]

We also partition the yes instances by \(D_{*}^{I}=\bigcup_{i\in I}D_{i}^{I}\), where

\[D_{i}^{I}\coloneqq\{(x_{1},\ldots,x_{k})\in(\{0,1\}^{I})^{k}:x_{1}(i)=\cdots=x_{k}(i)=1\text{ and }\forall i^{\prime}\neq i,x_{1}(i^{\prime})+\cdots+x_{k}(i^{\prime})\leq 1\}.\]

Now we define the density function.

**Definition 3.2** (Density function).: _For each \(I\subseteq[n]\) and \(R=X_{1}\times\cdots\times X_{k}\subseteq(\{0,1\}^{I})^{k}\), we define its density function as_

\[E^{I}(R)\coloneqq\log\left(\frac{|R\cap D_{0}^{I}|}{(k+1)^{|I|}}\right).\]

Note that \(E^{I}(R)\leq 0\) for any rectangle \(R\) because \(|D_{0}^{I}|=(k+1)^{|I|}\). We simplify the notation to \(E(R)\) if \(I\) is clear from the context. A crucial step in our argument is the projection operation.

**Definition 3.3** (Projection).: _Let \(R=X_{1}\times\cdots\times X_{k}\subseteq(\{0,1\}^{I})^{k}\) be a rectangle.
For an \(i\in I\) and \(j\in[k]\), the projection of \(R\) on \((i,j)\) is a rectangle \(\Pi_{i,j}(R)=X_{1}^{\prime}\times\cdots\times X_{k}^{\prime}\subseteq(\{0,1\}^{I\setminus\{i\}})^{k}\) defined by:_

* _for each_ \(j^{\prime}\neq j\)_,_ \(X_{j^{\prime}}^{\prime}\coloneqq\{x^{\prime}\in\{0,1\}^{I\setminus\{i\}}:(x^{\prime},0)\in X_{j^{\prime}}\}\)_,_
* _for_ \(j\)_,_ \(X_{j}^{\prime}\coloneqq\{x^{\prime}\in\{0,1\}^{I\setminus\{i\}}:\text{ either }(x^{\prime},0)\in X_{j}\text{ or }(x^{\prime},1)\in X_{j}\}\)_._

_Here \((x^{\prime},0)\) is the string in \(\{0,1\}^{I}\) obtained by extending \(x^{\prime}\in\{0,1\}^{I\setminus\{i\}}\) with \(x(i)=0\)._

The projection operation has two useful properties. The first one is that projection preserves the monochromatic property of the rectangle.

**Fact 3.4**.: _Let \(R\) be a rectangle such that \(R\cap D_{*}^{I}=\emptyset\). Then for every \(i\in I\) and \(j\in[k]\), we have_

\[\Pi_{i,j}(R)\cap D_{*}^{I\setminus\{i\}}=\emptyset.\]

The proof of Fact 3.4 follows from the definition and we omit it here. The next property is phrased as the following projection lemma.

**Lemma 3.5** (Projection lemma).: _Let \(R=X_{1}\times\cdots\times X_{k}\subseteq(\{0,1\}^{I})^{k}\) be a rectangle. If there is a coordinate \(i\in I\) such that \(R\cap D_{i}^{I}=\emptyset\), then there is some \(j\in[k]\) such that_

\[E^{I\setminus\{i\}}(\Pi_{i,j}(R))\geq E^{I}(R)+1/k.\]

Given Lemma 3.5 and Fact 3.4, Theorem 1.1 becomes straightforward. We can simply repeat the projection \(n\) times for \(i\in[n]\), where each time we use Lemma 3.5 to choose a good \(j\) for the projection and increase the density function by \(1/k\). Now we prove Lemma 3.5.

Proof of Lemma 3.5.: Let \(R\) be a rectangle such that \(R\cap D_{i}^{I}=\emptyset\). Let \(I^{\prime}:=I\setminus\{i\}\) and

\[L:=\{x^{\prime}\in D_{0}^{I^{\prime}}:\exists x\in R\cap D_{0}^{I},\,x|_{I^{\prime}}=x^{\prime}\}.\]

Here \(x|_{I^{\prime}}\) is the restriction of \(x\) on \(I\setminus\{i\}\). Note that for all \(j\in[k]\), \(\Pi_{i,j}(R)\cap D_{0}^{I^{\prime}}=\Pi_{i,j}(R)\cap L\), and our goal is to show that there is a \(j\) such that \(|\Pi_{i,j}(R)\cap L|\) is large. For every \(x^{\prime}\in L\), define the extension set of \(x^{\prime}\) as \(\operatorname{ext}(x^{\prime}):=\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}=x^{\prime}\}\). Crucially, for every \(x^{\prime}\in L\), we have

\[|\operatorname{ext}(x^{\prime})|\leq k. \tag{1}\]

Note that, without the condition \(R\cap D_{i}^{I}=\emptyset\), it can only be bounded by \(k+1\). Inequality (1) is proved by contradiction: suppose there is an \(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{k})\in L\) such that \(|\operatorname{ext}(x^{\prime})|=k+1\). Then we must have \((x^{\prime}_{1},1)\in X_{1},\ldots,(x^{\prime}_{k},1)\in X_{k}\), contradicting \(R\cap D_{i}^{I}=\emptyset\).

We now continue our proof. Partition \(L\) into two parts:

\[A:=\{x^{\prime}\in L:|\operatorname{ext}(x^{\prime})|\geq 2\}\quad\text{and}\quad B:=\{x^{\prime}\in L:|\operatorname{ext}(x^{\prime})|=1\}.\]

First observe that for any \(x^{\prime}=(x^{\prime}_{1},\ldots,x^{\prime}_{k})\in A\), we have \((x^{\prime}_{j},0)\in X_{j}\) for every \(j\in[k]\) as \(R\) is a rectangle. This implies \(x^{\prime}\in\Pi_{i,j}(R)\) for all \(j\in[k]\).
Hence

\[|A|=|A\cap\Pi_{i,j}(R)|,\quad\forall j\in[k].\]

Applying (1) with \(x^{\prime}\in A\), we have

\[k\cdot|A\cap\Pi_{i,j}(R)|=k\cdot|A|\geq|\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}\in A\}|,\quad\forall j\in[k].\]

For \(x^{\prime}\in B\), since \(|\operatorname{ext}(x^{\prime})|=1\), we have

\[|\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}\in B\}|=|B|.\]

On the other hand, for every \(x^{\prime}\in L\), there always exists some \(j\in[k]\) such that \(x^{\prime}\in\Pi_{i,j}(R)\). By an averaging argument, there is at least one \(j\in[k]\) such that

\[k\cdot|B\cap\Pi_{i,j}(R)|\geq|B|=|\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}\in B\}|.\]

As a result, for this fixed \(j\) we have

\[k\cdot|L\cap\Pi_{i,j}(R)| =k\cdot|B\cap\Pi_{i,j}(R)|+k\cdot|A\cap\Pi_{i,j}(R)|\]
\[\geq|\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}\in A\}|+|\{x\in R\cap D_{0}^{I}:x|_{I^{\prime}}\in B\}|\]
\[=|R\cap D_{0}^{I}|.\]

By the definition of the density function, we have

\[E^{I^{\prime}}(\Pi_{i,j}(R)) =\log\left(\frac{|\Pi_{i,j}(R)\cap D_{0}^{I^{\prime}}|}{(k+1)^{|I^{\prime}|}}\right)=\log\left(\frac{(k+1)\cdot|\Pi_{i,j}(R)\cap L|}{(k+1)^{|I|}}\right)\]
\[\geq\log\left(\frac{(k+1)\cdot|R\cap D_{0}^{I}|}{k\cdot(k+1)^{|I|}}\right)=E^{I}(R)+\log(1+1/k)\]
\[\geq E^{I}(R)+1/k. \text{(since $\log(1+1/k)\geq 1/k$ for all $k\geq 1$)}\]

## 4 Randomized lower bound for multi-party unique-disjointness

We focus on randomized communication lower bounds in this section. By Yao's minimax theorem, this is equivalent to identifying a distribution \(\mathcal{P}\) that is hard on average for any deterministic communication protocol. We use the notation \(D_{0}\), \(D_{*}\), \(D_{0}^{I}\), \(D_{*}^{I}\) as in the previous section (Definition 3.1). Our hard distribution \(\mathcal{P}\) is supported on \(D_{0}\cup D_{*}\).

**Definition 4.1**.: _For any \(n,k\geq 1\), we define the hard distribution \(\mathcal{P}\) on \((\{0,1\}^{n})^{k}\) as follows._

1. _For every_ \(i\in[n]\)_, uniformly and independently sample_ \(\mathcal{W}_{i}\sim[k]\) _and_ \(\mathcal{A}_{i}\sim\{0,1\}\)_._
2. _For every_ \(i\in[n]\)_,_ \(j\in[k]\)_, if_ \(\mathcal{W}_{i}=j\) _and_ \(\mathcal{A}_{i}=1\)_, then set_ \(x_{j}(i)=1\)_; otherwise set_ \(x_{j}(i)=0\)_._
3. _Sample_ \(\mathcal{B}\sim\{0,1\}\) _and_ \(\ell\sim[n]\) _uniformly. If_ \(\mathcal{B}=1\)_, then update_ \(x_{j}(\ell)=1\) _for all_ \(j\in[k]\)_._
4. _Output_ \(x=(x_{1},\ldots,x_{k})\)_._

_Given this hard distribution \(\mathcal{P}\), we also define the distribution \(\mathcal{Q}:=(\mathcal{P}\mid\mathcal{B}=0)\)._

Now we give some explanation of the random variables in this sampling process.

* The bit \(\mathcal{B}\) determines whether to output yes instances or no instances. In particular, for \(\mathcal{B}=1\), we output a yes instance. Hence we update \(x_{j}(\ell)=1\) for all \(j\in[k]\), where \(\ell\) is uniformly sampled.
* For every \(i\), the variable \(\mathcal{W}_{i}\in[k]\) captures which party \((\mathcal{W}_{i})\) may have the \(i\)-th element.
* For every \(i\), \(\mathcal{A}_{i}\) determines whether \(\mathcal{W}_{i}\) has the \(i\)-th element or not.

It is well-known that a deterministic protocol \(C\) with communication complexity \(c\) partitions the input domain into at most \(2^{c}\) rectangles, where each rectangle corresponds to a leaf in the communication tree. We then define the following random variable \(\mathcal{R}\), which is the rectangle of a random leaf induced by the input distribution \(\mathcal{Q}\).
**Definition 4.2**.: _For a fixed deterministic protocol \(C\), we define a distribution \(\mathcal{R}_{C}\) on leaf rectangles (of \(C\)) as follows._

1. _Randomly sample_ \(x\sim\mathcal{Q}\)_._
2. _Output the rectangle_ \(R\) _of_ \(C\) _containing_ \(x\)_._

We emphasize that \(\mathcal{R}\) is defined by \(\mathcal{Q}\), which is _not_ \(\mathcal{P}\). Hence for a protocol \(C\) with a small error on \(\mathcal{P}\), the random rectangle \(R\sim\mathcal{R}\) should be biased towards \(D_{0}\) with high probability. If \(C\) is clear from the context, we will simply write \(\mathcal{R}\) for \(\mathcal{R}_{C}\). For any rectangle \(R\), we also use \(\mathcal{Q}_{R}\) to denote the distribution \((\mathcal{Q}\mid\mathcal{R}=R)\). Let \(\mathcal{W}=(\mathcal{W}_{1},\ldots,\mathcal{W}_{n})\), and we use \((\mathcal{Q},\mathcal{W})\) to denote the joint distribution of \(\mathcal{Q}\) and \(\mathcal{W}\). We are now ready to state our theorem.

**Theorem 4.3**.: _Let \(0<\epsilon<0.0001\) be a constant. For any deterministic protocol \(C\) with error \(\epsilon\) under \(\mathcal{P}\), i.e.,_

\[\Pr_{x\sim\mathcal{P}}\left[C(x)=k\text{-}\mathrm{UDISJ}(x)\right]\geq 1-\epsilon,\]

_we have_

\[\mathcal{I}\left(\mathcal{Q},\mathcal{W}:\mathcal{R}_{C}\right)=\Omega(n/k).\]

We note that Theorem 4.3 implies Theorem 1.2 because the communication complexity of \(C\) is lower bounded by \(\mathcal{H}(\mathcal{R})\), and \(\mathcal{H}(\mathcal{R})\) is an upper bound on \(\mathcal{I}\left((\mathcal{Q},\mathcal{W}):\mathcal{R}\right)\) (Fact 2.3). A similar lower bound \(\mathcal{I}(\mathcal{Q}:\mathcal{R}|\mathcal{W})\) was previously obtained by the information complexity framework [12]. We reprove it by a density increment argument. In what follows, we fix the protocol \(C\). We first give a high-level view of our proof.

Sketch of the proof. We first reinterpret the proof of the deterministic lower bound (Section 3) from an entropy perspective. Then we generalize it to randomized communication lower bounds. Let \(C\) be a deterministic communication protocol for \(k\)-UDISJ. Every leaf of \(C\) is a monochromatic rectangle. Let \(R\) be any \(0\)-monochromatic rectangle (i.e., \(R\cap D_{*}=\emptyset\)) of \(C\). Then for every input \(x^{*}\in R\cap D_{0}\) and \(i\in[n]\),

\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{P}}\left[x_{1}(i)=\cdots=x_{k}(i)=1\mid x\in R\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]=0 \tag{2}\]

since \(R\) is \(0\)-monochromatic. Furthermore, since \(R\) is a rectangle, there is a party \(j\) such that

\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{P}}\left[x_{j}(i)=1\mid x\in R\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]=0.\]

Recall that \(\mathcal{Q}=(\mathcal{P}\mid\mathcal{B}=0)\) samples no instances. Thus we also have

\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{Q}}\left[x_{j}(i)=1\mid x\in R\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]=0.\]

By the definition of \(\mathcal{R}\), this is equivalent to the following. (Footnote 3: In what follows, we replace the notation \(x\in R\) with \(\mathcal{R}=R\) when \(x\sim\mathcal{Q}\).)
\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{Q}}\left[x_{j}(i)=1\mid\mathcal{R}=R\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]=0.\]

If we use the entropy language and recall the definition of \(\mathcal{W}=(\mathcal{W}_{1},\ldots,\mathcal{W}_{n})\), it is equivalent to

\[\mathcal{H}\left(x_{j}(i)\mid\mathcal{R}=R,\mathcal{W}_{i}=j\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right)=0.\]

In contrast, if we do not condition on \(R\), we have that

\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{Q}}\left[x_{j}(i)=1\mid\mathcal{W}_{i}=j\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]=1/2,\]

which can be written as

\[\mathcal{H}\left(x_{j}(i)\mid\mathcal{W}_{i}=j\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right)=1.\]

This gap captures the mutual information of \(\mathcal{R}\) and \((\mathcal{Q},\mathcal{W})\) on the \(i\)-th coordinate. For different choices of \(x^{*}\in R\cap D_{0}\), we may have a different \(j\) witnessing the mutual information. But on average, we have

\[\mathbb{E}_{j}\left[\mathcal{H}\left(x_{j}(i)\mid\mathcal{R}=R,\mathcal{W}_{i}=j\text{ and }x(1),\ldots,x(i-1),x(i+1),\ldots,x(n)\right)\right]\leq 1-1/k.\]

In particular, there exists a \(j\in[k]\) such that

\[\mathcal{H}\left(x_{j}(i)\mid\mathcal{R}=R,\mathcal{W}_{i}=j,x(1),\ldots,x(i-1),x(i+1),\ldots,x(n)\right)\leq 1-1/k.\]

Now we explain how we can view the projection as a decoupling process for this mutual information. We can decompose the projection into two steps:

1. Fix \(\mathcal{W}_{i}=j\), i.e., update \((\tilde{\mathcal{Q}},\tilde{\mathcal{W}})\leftarrow(\mathcal{Q},\mathcal{W}\mid\mathcal{W}_{i}=j)\).
2. Update the density function as
\[\mathcal{H}\left(\tilde{\mathcal{Q}}_{[n]\setminus\{i\}},\tilde{\mathcal{W}}_{[n]\setminus\{i\}}\mid\mathcal{R}=R\right)-\mathcal{H}\left(\tilde{\mathcal{Q}}_{[n]\setminus\{i\}},\tilde{\mathcal{W}}_{[n]\setminus\{i\}}\right),\]
or equivalently
\[\mathcal{H}\left(\mathcal{Q}_{[n]\setminus\{i\}},\mathcal{W}_{[n]\setminus\{i\}}\mid\mathcal{R}=R,\mathcal{W}_{i}=j\right)-\mathcal{H}\left(\mathcal{Q}_{[n]\setminus\{i\}},\mathcal{W}_{[n]\setminus\{i\}}\mid\mathcal{W}_{i}=j\right),\]
where \(\mathcal{Q}_{[n]\setminus\{i\}}\) (resp., \(\tilde{\mathcal{Q}}_{[n]\setminus\{i\}}\)) is the marginal distribution of \(\mathcal{Q}\) (resp., \(\tilde{\mathcal{Q}}\)) on \([n]\setminus\{i\}\).

In the first step, we pick the party \(j\) that carries the mutual information. In the second step, we decouple the mutual information by simply removing it from the density function. The projection lemma (Lemma 3.5) captures how this decoupling step increases the density function. Another crucial fact is that, for any \(0\)-monochromatic rectangle \(R\), the distribution \((\mathcal{Q}_{[n]\setminus\{i\}}\mid\mathcal{R}=R,\mathcal{W}_{i}=j)\) is also supported on \(D_{0}^{[n]\setminus\{i\}}\) (see Fact 3.4), which guarantees that we can keep increasing the density by projections on different coordinates.

Now we generalize this to the randomized communication setting, where the rectangle \(R\) is not necessarily monochromatic. By the correctness of the protocol, most rectangles \(R\) are biased towards either yes instances or no instances.
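The quantities above are all computed under the hard distribution. To make its sampling process concrete, the following is a minimal sampler for \(\mathcal{P}\) of Definition 4.1; it is our own illustration, and the function name and parameters are assumptions rather than the paper's code.

```
import random

def sample_P(n, k, rng=random):
    # Steps 1-2 of Definition 4.1: coordinate i is given to party W_i
    # only when A_i = 1; otherwise nobody holds it (parties are 0-based here).
    x = [[0] * n for _ in range(k)]
    W = [rng.randrange(k) for _ in range(n)]
    A = [rng.randrange(2) for _ in range(n)]
    for i in range(n):
        if A[i] == 1:
            x[W[i]][i] = 1
    # Step 3: with probability 1/2 (B = 1), plant a unique intersection
    # at a uniformly random coordinate l, turning x into a yes instance.
    B = rng.randrange(2)
    if B == 1:
        l = rng.randrange(n)
        for j in range(k):
            x[j][l] = 1
    # Conditioning on B = 0 recovers the no-instance distribution Q.
    return x, W, B

xs, W, B = sample_P(n=8, k=3)
print(B, xs)
```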
For a rectangle \(R\) biased towards no instances, we expect that an inequality similar to (2) holds: for most \(R\sim\mathcal{R}\), most no instances \(x^{*}\sim(\mathcal{Q}\mid\mathcal{R}=R)\), and most \(i\sim[n]\),

\[\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{P}}\left[x_{1}(i)=\cdots=x_{k}(i)=1\mid x\in R\text{ and }\forall i^{\prime}\neq i,x(i^{\prime})=x^{*}(i^{\prime})\right]\leq\delta,\]

where \(\delta\) is a small constant depending on the error rate \(\epsilon\) of the protocol. On the other hand, we also need to argue that projections can be repeated. This part is slightly more complicated than the deterministic case, where we can simply fix \(\mathcal{W}_{i}=j\) for some \(j\in[k]\). In the randomized case, we cannot fix it because we have to preserve the bias. This is addressed by:

* the bias lemma (Lemma 4.11), a randomized variant of Fact 3.4;
* the projection lemma (Lemma 4.10), a randomized variant of Lemma 3.5.

### Key definitions and lemmas

Now we introduce the key definitions and lemmas (the bias lemma and the projection lemma) needed for the randomized communication lower bound.

**Definition 4.4** (\(\rho\)-restriction).: _For \(J\subset[n]\) and \(w_{J}\in[k]^{J}\), we call \(\rho=(J,w_{J})\) a restriction, and denote by \((\mathcal{Q},\mathcal{W}|_{\rho})\) the distribution \((\mathcal{Q},\mathcal{W}\mid\forall i\in J,\mathcal{W}_{i}=w_{i})\)._

A restriction corresponds to projections in the deterministic case: for \(\rho=(J,w_{J})\), each \(i\in J\) corresponds to the projection \(\Pi_{i,w_{i}}\). Now we define our new density function.

**Definition 4.5** (Density function).: _Let \(R\) be a rectangle and \(I\subseteq[n]\) a set. For a restriction \(\rho=(I^{c},w_{I^{c}})\) with \(I^{c}=[n]\setminus I\), its density is defined by_

\[E^{I}(R,\rho):=\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\rho,\mathcal{R}=R)-\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\rho).\]

_The average density is defined by_

\[E^{I}:=\operatorname*{\mathbb{E}}_{(\rho,R)\sim(\mathcal{W}_{I^{c}},\mathcal{R})}\left[E^{I}(R,\rho)\right]=\operatorname*{\mathbb{E}}_{\rho}\left[-\mathcal{I}(\mathcal{Q}_{I},\mathcal{W}_{I}:\mathcal{R}\mid\rho)\right]=-\mathcal{I}(\mathcal{Q}_{I},\mathcal{W}_{I}:\mathcal{R}\mid\mathcal{W}_{I^{c}}).\]

_In particular, \(E^{[n]}=-\mathcal{I}(\mathcal{Q},\mathcal{W}:\mathcal{R})\)._

The main difference between the deterministic setting and the randomized setting is that, in the deterministic case, we consider \(E^{I}(R,\rho)\) for some fixed \(R\) and \(\rho\). In the randomized case, however, we have to consider \(E^{I}\), which does not fix \(R\) and \(\rho\), because the projection lemma (Lemma 4.10) and the bias lemma (Lemma 4.11) do not hold for fixed \(R\) and \(\rho\). We also note that \(-E^{I}(R,\rho)\) might be negative for some \(R\) and \(\rho\), but \(-E^{I}\) is always nonnegative because it is a (conditional) mutual information.

As we mentioned before, in the randomized setting the leaves are no longer monochromatic but biased. We now define a notion of bias to capture this (a randomized version of Equation (2)).

**Definition 4.6**.: _Let \(R\) be a rectangle and \(I\subseteq[n]\). Let \(\rho=(I^{c},w_{I^{c}})\) be a restriction.
For any \(i\in I\) and input \(x^{*}\in D_{0}^{I\setminus\{i\}}\), the bias of \(x^{*}\) on the coordinate \(i\) under \((R,\rho)\) is defined by_

\[\gamma_{i,\rho,R}^{I}(x^{*})\coloneqq\Pr_{x=(x_{1},\ldots,x_{k})\sim\mathcal{P}}\left[x_{1}(i)=\cdots=x_{k}(i)=1\ \middle|\ \rho,x\in R,x\notin\bigcup_{\ell\in I^{c}}D_{\ell}\text{ and }\forall i^{\prime}\in I\setminus\{i\},x(i^{\prime})=x^{*}(i^{\prime})\right],\]

_where \(D_{\ell}\subseteq D_{*}\) is the set of yes instances whose intersection is witnessed by coordinate \(\ell\), i.e., \(D_{\ell}\) is the support of \((\mathcal{P}\mid\mathcal{B}=1,\ell)\). (Footnote 4: Though \(R\) is the leaf conditioned on an input from \(\mathcal{Q}=(\mathcal{P}\mid\mathcal{B}=0)\), it is still possible that \(\mathcal{P}(D_{\ell}\cap R)>0\) since the protocol is allowed to err. That is why \(x\notin\bigcup_{\ell\in I^{c}}D_{\ell}\) is not implied by \(\mathcal{R}=R\).) Then we define the average bias of a rectangle \(R\) on \(i\) as_

\[\gamma_{i,R}^{I}:=\operatorname*{\mathbb{E}}_{(x^{*},\rho)\sim(\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}_{I^{c}}\mid\mathcal{R}=R)}\left[\gamma_{i,\rho,R}^{I}(x^{*})\right].\]

_The overall bias on \(i\) is defined by_

\[\gamma_{i}^{I}:=\operatorname*{\mathbb{E}}_{R\sim\mathcal{R}}\left[\gamma_{i,R}^{I}\right].\]

Finally, we define the projection for randomized communication. Recall that in the deterministic case there are two steps in the projection. In the randomized case, since we average over \(\rho\), we can remove the first step. Then the projection can be defined as follows.

**Definition 4.7** (Projection).: _Let \(I\subseteq[n]\) be the set of unrestricted coordinates. For any \(i\in I\), the projection on \(i\) is to update the density function from \(E^{I}\) to \(E^{I\setminus\{i\}}\)._

**Remark 4.8**.: _We may use different projections for different communication problems. For example, the BPP lifting theorem [1] used a very different projection because it studies low-discrepancy gadgets. We define the projection in this way because we are working with AND gadgets. Given this flexibility, we believe the density increment arguments may provide new applications beyond the information complexity framework._

Now we introduce three key lemmas in our proof.

**Lemma 4.9**.: _Let \(\epsilon\in(0,0.0001)\) be a constant. Let \(C\) be a deterministic protocol with error \(\epsilon\) under the distribution \(\mathcal{P}\). There is a constant \(\delta\in(0,0.02)\) (depending only on \(\epsilon\)) and a set of coordinates \(J\subseteq[n]\) with \(|J|=\Omega(n)\) such that \(\gamma_{i}^{[n]}\leq\delta\) holds for each \(i\in J\)._

Since \(C\) is a protocol with a small error under \(\mathcal{P}\) and \(\mathcal{R}\) is sampled according to \(\mathcal{Q}\) (no instances), we have that, for a random \(R\sim\mathcal{R}\), it is very likely that \(R\) is biased towards no instances. Then Lemma 4.9 can be proved by an averaging argument. This generalizes the deterministic case, where \(\gamma_{i}^{[n]}=0\) for all \(i\in[n]\). The proof of Lemma 4.9 is deferred to Section A as part of the proof of Lemma 4.11.

**Lemma 4.10** (Projection lemma).: _Let \(\delta\in(0,0.02)\) be a constant. For any \(I\subseteq[n]\) and \(i\in I\), if \(\gamma_{i}^{I}\leq\delta\), the projection on \(i\) increases the density function by \(\Omega(1/k)\), i.e.,_

\[E^{I\setminus\{i\}}\geq E^{I}+\Omega(1/k).\]

The projection lemma shows that the density function increases if we do a projection on a biased coordinate. We prove it in Section 4.2.
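Since the density function of Definition 4.5 is a conditional mutual information up to sign, the lemmas above are ultimately statements about the Shannon quantities of Section 2. For readers who want to experiment numerically, here is a purely illustrative sketch (our own, not the paper's code) of those base-2 quantities on an explicit joint distribution:

```
from collections import defaultdict
from math import log2

def entropy(p):
    # H(D) = sum_x Pr[D = x] * log(1 / Pr[D = x]), base 2 as in the paper.
    return sum(-q * log2(q) for q in p.values() if q > 0)

def marginal(joint, axis):
    m = defaultdict(float)
    for key, q in joint.items():
        m[key[axis]] += q
    return m

def mutual_information(joint):
    # I(A : B) = H(A) + H(B) - H(A, B), an equivalent form of Definition 2.2
    # via the chain rule of Fact 2.3.
    return entropy(marginal(joint, 0)) + entropy(marginal(joint, 1)) - entropy(joint)

# Two correlated bits that agree with probability 3/4.
joint = {(0, 0): 3 / 8, (1, 1): 3 / 8, (0, 1): 1 / 8, (1, 0): 1 / 8}
print(mutual_information(joint))  # ~0.1887 bits
```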
Our last lemma shows that the bias is preserved during the projections, as a counterpart to Fact 3.4 in the deterministic case.

**Lemma 4.11** (Bias lemma).: _Let \(\delta>0\) be the constant and \(J\subseteq[n]\) be the set from Lemma 4.9. For any \(I\subseteq[n]\) and distinct \(i,i^{\prime}\in I\cap J\), we have that_

\[\gamma_{i^{\prime}}^{I\setminus\{i\}}\leq\delta.\]

This lemma can be proved by a convexity inequality, and its proof is deferred to Section A. Now we summarize these three lemmas and complete the proof of Theorem 4.3.

* Lemma 4.9 shows that, if \(C\) is a communication protocol with a small error under \(\mathcal{P}\), then \(\gamma_{i}^{[n]}\) is very small for many coordinates \(i\).
* The projection lemma (Lemma 4.10) converts the bias on \(i\) into the density increment of the projection on the coordinate \(i\).
* The bias lemma (Lemma 4.11) proves that a projection on a coordinate \(i\) preserves the bias on other coordinates \(i^{\prime}\), which shows the projection lemma can be applied many times.

Proof of Theorem 4.3.: Assume \(C\) has error \(\epsilon\in(0,0.0001)\) under the distribution \(\mathcal{P}\). By Lemma 4.9, there is a constant \(\delta\in(0,0.02)\) and a set of coordinates \(J\subseteq[n]\) with \(|J|=\Omega(n)\) such that \(\gamma_{i}^{[n]}\leq\delta\) for every \(i\in J\). Let \(I=[n]\setminus J\). Then, iteratively applying Lemma 4.10 and Lemma 4.11 on coordinates in \(J\), we have

\[E^{I}\geq E^{[n]}+\Omega(|J|/k)=E^{[n]}+\Omega(n/k).\]

By the definition of the density function, we know \(E^{[n]}=-\mathcal{I}(\mathcal{Q},\mathcal{W}:\mathcal{R})\). Since \(-E^{I}\) is always non-negative, we have

\[\mathcal{I}(\mathcal{Q},\mathcal{W}:\mathcal{R})=-E^{[n]}\geq-E^{I}+\Omega(n/k)\geq\Omega(n/k).\]

### Proof of the projection lemma

Now we prove Lemma 4.10. Recall that

\[E^{I}=\operatorname*{\mathbb{E}}_{(\rho,R)\sim(\mathcal{W}_{I^{c}},\mathcal{R})}\left[\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\rho,\mathcal{R}=R)-\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\rho)\right]\]

and

\[\gamma_{i}^{I}=\operatorname*{\mathbb{E}}_{R\sim\mathcal{R}}\left[\gamma_{i,R}^{I}\right].\]

We aim to show that if \(\gamma_{i}^{I}\leq\delta\) for some \(\delta\in(0,0.02)\), then we have that

\[E^{I\setminus\{i\}}\geq E^{I}+\Omega(1/k).\]

In our proof, we borrow a useful lemma from [10] and [11]. In [10, 11], this lemma was used to analyze information cost.

**Lemma 4.12** ([10, Theorem 3.16]).: _Let \(\delta<0.02\) be a constant and \(I\subseteq[n]\). Fix a deterministic protocol \(C\). If \(\gamma_{i}^{I}\leq\delta\), then_

\[\mathcal{H}(\mathcal{Q}_{i}|\mathcal{W}_{i})-\mathcal{H}(\mathcal{Q}_{i}|\mathcal{R},\mathcal{Q}_{I\setminus\{i\}},\mathcal{W})=\Omega(1/k).\]

Though Lemma 4.12 is not exactly the same as in [10, 11], the proof is similar and we omit it here. We will include the proof in our full version.

Proof of Lemma 4.10.: Recall

\[E^{I}=\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\mathcal{W}_{I^{c}},\mathcal{R})-\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\mathcal{W}_{I^{c}}).\]

Since \(\mathcal{W}_{I^{c}}\) is independent of \((\mathcal{Q}_{I},\mathcal{W}_{I})\), we have

\[E^{I}=\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\mathcal{W}_{I^{c}},\mathcal{R})-\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}).\]

Similarly, \(\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I})-\mathcal{H}(\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}_{I\setminus\{i\}})=\mathcal{H}(\mathcal{Q}_{i},\mathcal{W}_{i})\) since \((\mathcal{Q}_{i},\mathcal{W}_{i})\) and \((\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}_{I\setminus\{i\}})\) are independent.
Hence,

\[E^{I\setminus\{i\}}-E^{I}=\mathcal{H}(\mathcal{Q}_{i},\mathcal{W}_{i})-\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}\mid\mathcal{W}_{I^{c}},\mathcal{R})+\mathcal{H}(\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}_{I\setminus\{i\}}\mid\mathcal{W}_{I^{c}},\mathcal{W}_{i},\mathcal{R}).\]

Applying the chain rule of entropy to \(\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}|\mathcal{R},\mathcal{W}_{I^{c}})\), i.e.,

\[\mathcal{H}(\mathcal{Q}_{I},\mathcal{W}_{I}|\mathcal{R},\mathcal{W}_{I^{c}})=\mathcal{H}(\mathcal{W}_{i}|\mathcal{R},\mathcal{W}_{I^{c}})+\mathcal{H}(\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}_{I\setminus\{i\}}|\mathcal{R},\mathcal{W}_{i},\mathcal{W}_{I^{c}})+\mathcal{H}(\mathcal{Q}_{i}|\mathcal{R},\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}),\]

we have

\[E^{I\setminus\{i\}}-E^{I}=\mathcal{H}(\mathcal{Q}_{i},\mathcal{W}_{i})-\mathcal{H}(\mathcal{W}_{i}|\mathcal{R},\mathcal{W}_{I^{c}})-\mathcal{H}(\mathcal{Q}_{i}|\mathcal{R},\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}).\]

By the chain rule \(\mathcal{H}(\mathcal{Q}_{i},\mathcal{W}_{i})=\mathcal{H}(\mathcal{W}_{i})+\mathcal{H}(\mathcal{Q}_{i}|\mathcal{W}_{i})\) and the fact that \(\mathcal{H}(\mathcal{W}_{i})\geq\mathcal{H}(\mathcal{W}_{i}|\mathcal{R},\mathcal{W}_{I^{c}})\), we conclude that

\[E^{I\setminus\{i\}}-E^{I}\geq\mathcal{H}(\mathcal{Q}_{i}|\mathcal{W}_{i})-\mathcal{H}(\mathcal{Q}_{i}|\mathcal{R},\mathcal{Q}_{I\setminus\{i\}},\mathcal{W}).\]

Finally, by Lemma 4.12 and the fact that \(\gamma_{i}^{I}\leq\delta<0.02\), we have

\[E^{I\setminus\{i\}}-E^{I}\geq\Omega(1/k).\]

## 5 Deterministic lower bounds for sparse unique-disjointness

In this section, we discuss the sparse unique-disjointness problem.

**Definition 5.1**.: _For each \(s\geq 2\) and \(n\geq 1\), the \(s\)-UDISJ problem is defined as follows:_

* _No instances:_ \(D_{0}^{(s)}:=\{(x,y):|x|,|y|\leq s\text{ and }\forall i,x(i)+y(i)\leq 1\}\)_._
* _Yes instances:_ \(D_{*}^{(s)}:=\{(x,y):|x|,|y|\leq s\text{ and }\exists\ell,x(\ell)=y(\ell)=1\text{ and }\forall i\neq\ell,x(i)+y(i)\leq 1\}\)_._

_Here \(|x|\) is the Hamming weight of \(x\)._

Theorem 1.3 aims to show that any deterministic communication protocol for \(s\)-UDISJ requires \(\Omega(s\cdot\log(n/s))\) communication bits. To prove this theorem, we consider the following unique-equality problem [11, 10].

**Definition 5.2**.: _Let \(s\geq 2\) and \(n\geq 1\) be integers. Let \(B\) be a set with \((n/s)\) elements. The \(s\)-UEQUAL problem is defined as follows:_

* _No instances:_ \(B_{0}^{(s)}:=\{(x,y)\in B^{s}\times B^{s}:\forall i\in[s],x_{i}\neq y_{i}\}\)_._
* _Yes instances:_ \(B_{*}^{(s)}:=\{(x,y)\in B^{s}\times B^{s}:\exists\ell,x_{\ell}=y_{\ell}\text{ and }\forall i\in[s]\setminus\{\ell\},x_{i}\neq y_{i}\}\)_._

There is a simple reduction from \(s\)-UEQUAL to \(s\)-UDISJ [11]. Hence it is sufficient to prove a communication lower bound for \(s\)-UEQUAL. In Theorem 1.3, we focus on the regime \(s\leq n^{1/2-\epsilon}\) for any small constant \(\epsilon>0\). Now our goal is to prove that the communication complexity of \(s\)-UEQUAL is \(\Omega(s\cdot\log(n/s))=\Omega(s\cdot\log n)\). We borrow the square idea from [10] but revise and simplify it, as we do not need to fully simulate the protocol. See Section 5.2 for discussions.

**Definition 5.3** (Square).: _Let \(R=X\times Y\subseteq B^{s}\times B^{s}\) be a rectangle. A square in \(R\) consists of a set \(I\subseteq[s]\), a set \(S\subseteq B^{I}\), and, for every \(i\in[s]\setminus I\), a set \(A_{i}\).
We denote the family of these \(A_{i}\)'s as \(\mathcal{A}\)._

_Given \((I,S,\mathcal{A})\), we say it is a square in \(R=X\times Y\) if, for every \(z\in S\), there exist some \(x\in X\) and \(y\in Y\) such that:_

* \(x|_{I}=z\) _and, for all_ \(i\in[s]\setminus I\)_,_ \(x_{i}\in A_{i}\)_;_
* \(y|_{I}=z\) _and, for all_ \(i\in[s]\setminus I\)_,_ \(y_{i}\in B\setminus A_{i}\)_._

As in previous sections, we use the set \(I\) to denote unrestricted coordinates and use \([s]\setminus I\) to denote fixed coordinates. We remark that the definition above enforces \(x_{i}\neq y_{i}\) (as \(x_{i}\in A_{i},y_{i}\in B\setminus A_{i}\)) for all \(i\in[s]\setminus I\). Hence, the fixed coordinates do not reveal any information about whether it is a yes instance or a no instance. Similar to the Raz-McKenzie simulation, we also have a notion of thickness in the proof.

**Definition 5.4** (Thickness).: _A set \(S\subseteq B^{I}\) is \(r\)-thick if it is not empty and, for every \(i\in I\) and \(x\in S\), we have that_

\[|\{x^{\prime}\in S:\forall j\neq i,x_{j}=x^{\prime}_{j}\}|\geq r.\]

_We say that a square \((I,S,\mathcal{A})\) is \(r\)-thick if the set \(S\) is \(r\)-thick._

In our proof, we always choose \(r=10\cdot\log n\), and sometimes abbreviate \(r\)-thick as thick. The following thickness-to-full-range lemma is a standard fact in query-to-communication simulations.

**Lemma 5.5**.: _Let \(S\subseteq B^{I}\) be a thick set. Then for every \(z\in\{0,1\}^{I}\), there is a pair \(x,y\in S\) such that_

\[\forall i\in I,\ z_{i}=1\quad\text{iff}\quad x_{i}=y_{i}.\]

The proof of this lemma will be included in the full version. As a byproduct of this lemma, we have the following corollary.

**Corollary 5.6**.: _Let \(R\) be a rectangle containing a square \((I,S,\mathcal{A})\) such that \(I\neq\emptyset\) and \(S\) is thick. Then \(R\) is not monochromatic._

**Definition 5.7** (Average degree).: _Let \(S\subseteq B^{I}\). For each \(i\in I\), we define the set \(S_{-i}\subseteq B^{I\setminus\{i\}}\) as_

\[S_{-i}\coloneqq\{x^{\prime}\in B^{I\setminus\{i\}}:\exists x\in S,x|_{I\setminus\{i\}}=x^{\prime}\}.\]

_We say that the average degree of \(S\) is \(\alpha\) if \(|S|\geq\alpha\cdot|S_{-i}|\) holds for all \(i\). We say that a square \((I,S,\mathcal{A})\) has average degree \(\alpha\) if the average degree of \(S\) is \(\alpha\)._

Regarding the average degree, we have a simple but useful fact.

**Fact 5.8**.: _For \(\alpha,\beta>0\), let \(S\) be a set that has average degree \(\alpha\). Then any subset \(S^{\prime}\subseteq S\) of size \(|S^{\prime}|\geq\beta\cdot|S|\) has average degree \(\alpha\cdot\beta\)._

A crucial component in the Raz-McKenzie simulation connecting thickness and the average degree is the thickness lemma. In our proof, we borrow a version from [10].

**Lemma 5.9** (Thickness lemma [10]).: _Let \(\alpha,\delta>0\) be parameters. Let \(\emptyset\neq I\subseteq[s]\) and \(S\subseteq B^{I}\). If \(S\) has average degree \(\alpha\), then there is a \((\delta\cdot\alpha/s)\)-thick set \(S^{\prime}\subseteq S\) of size \(|S^{\prime}|\geq(1-\delta)\cdot|S|\)._

We also fix \(\delta=1/2\) and \(\alpha=\sqrt{n}\), together with \(r=10\cdot\log n\). Recall that \(s\leq n^{1/2-\epsilon}\) for some \(\epsilon>0\). In this regime of parameters, we have that \(\delta\cdot\alpha/s\geq n^{\epsilon}/2\geq 10\log n=r\). Hence, as long as we maintain a square with average degree \(\alpha\), we are able to apply the thickness lemma.
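To make Definitions 5.4 and 5.7 concrete, the following short sketch spells out thickness and average degree operationally for a set of tuples. It is our own illustration; the example set (the full square \(B^{I}\)) is chosen only because its thickness and average degree are easy to predict.

```
from itertools import product

def is_r_thick(S, r):
    # Every x in S must have at least r elements of S (including itself)
    # agreeing with it on all coordinates outside each i.
    if not S:
        return False
    m = len(next(iter(S)))
    for i in range(m):
        for x in S:
            siblings = sum(1 for y in S
                           if all(y[j] == x[j] for j in range(m) if j != i))
            if siblings < r:
                return False
    return True

def average_degree(S):
    # Largest alpha such that |S| >= alpha * |S_{-i}| for every coordinate i.
    m = len(next(iter(S)))
    alphas = []
    for i in range(m):
        S_minus_i = {x[:i] + x[i + 1:] for x in S}
        alphas.append(len(S) / len(S_minus_i))
    return min(alphas)

B = range(4)
S = set(product(B, repeat=2))               # the full square B^2
print(is_r_thick(S, 4), average_degree(S))  # True 4.0
```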
**Lemma 5.10** (Projection lemma).: _Let \(R=X\times Y\) be a rectangle and let \(Q=(I,S,\mathcal{A})\) be a thick square in \(R\). If the set \(S\) has size more than \((3\alpha)^{|I|}\), then there is a square \(Q^{\prime}=(I^{\prime},S^{\prime},\mathcal{A}^{\prime})\) in \(R\) such that_

* \(I^{\prime}\subseteq I\) _and_ \(I^{\prime}\neq\emptyset\)_,_
* \(S^{\prime}\) _has average degree_ \(2\alpha\)_,_
* \(|S^{\prime}|\geq 0.9\cdot(3\alpha)^{|I^{\prime}|-|I|}\cdot|S|\)_._

Proof sketch.: We prove this lemma by a standard structure-vs-pseudorandomness approach. We first describe the process (Algorithm 1) to find the set \(I^{\prime}\) and \(\tilde{S}\): starting from \(\tilde{S}=S\) and \(I^{\prime}=I\), while there is a coordinate \(i\in I^{\prime}\) with \(|\tilde{S}|<3\alpha\cdot|\tilde{S}_{-i}|\), update \(\tilde{S}\leftarrow\tilde{S}_{-i}\) and \(I^{\prime}\leftarrow I^{\prime}\setminus\{i\}\). We note that the average degree of \(\tilde{S}\) is at least \(3\alpha\); otherwise, the algorithm would not stop. Following this algorithm, it is also clear that \(|\tilde{S}|\geq|S|\cdot(3\alpha)^{|I^{\prime}|-|I|}\). This implies that \(I^{\prime}\neq\emptyset\) because \(|S|>(3\alpha)^{|I|}\).

Now, for each \(i\in I\setminus I^{\prime}\), we randomly pick a set \(A_{i}\subseteq B\) by independently including each element with probability \(1/2\). Let \(\mathcal{A}^{\prime}=\mathcal{A}\cup\{A_{i}:i\in I\setminus I^{\prime}\}\) and let \(S^{\prime}\subseteq\tilde{S}\) consist of those strings \(z\in\tilde{S}\) such that there exist inputs \(x\in X\) and \(y\in Y\) with:

* \(x|_{I^{\prime}}=z\) and, for all \(i\in[s]\setminus I^{\prime}\), \(x_{i}\in A_{i}\);
* \(y|_{I^{\prime}}=z\) and, for all \(i\in[s]\setminus I^{\prime}\), \(y_{i}\in B\setminus A_{i}\).

We show that with high probability, the square \((I^{\prime},S^{\prime},\mathcal{A}^{\prime})\) is a witness for this lemma. We already argued that \(|\tilde{S}|\geq|S|\cdot(3\alpha)^{|I^{\prime}|-|I|}\); now we show that for every \(z\in\tilde{S}\),

\[\Pr_{\{A_{i}\}_{i\in I\setminus I^{\prime}}}[z\in S^{\prime}]\geq 1-O(1/n).\]

This inequality uses the fact that \(S\) is \((10\log n)\)-thick, then a Chernoff bound on each \(A_{i}\), and a union bound over all \(i\in I\setminus I^{\prime}\). We omit the details here and will include them in the full version. Once this is established, by an averaging argument, there is a choice of \(\{A_{i}\}_{i\in I\setminus I^{\prime}}\) such that

* \(|S^{\prime}|\geq(1-O(1/n))\cdot|\tilde{S}|\geq 0.9\cdot|S|\cdot(3\alpha)^{|I^{\prime}|-|I|}\);
* \(S^{\prime}\) has average degree \(2\alpha\), by Fact 5.8, since \(\tilde{S}\) has average degree \(3\alpha\) and \(|S^{\prime}|\geq 0.9\cdot|\tilde{S}|\).

Now we are ready to explain how to find a long path in the communication tree.

### Finding a long path in a communication tree

Before presenting our algorithm, we first fix some notation.

**Definition 5.11**.: _Let \(Q=(I,S,\mathcal{A})\) be a square in a rectangle \(R\). For any sub-rectangle \(R^{\prime}=X^{\prime}\times Y^{\prime}\) of \(R\), the sub-square \(Q|_{R^{\prime}}=(I^{\prime},S^{\prime},\mathcal{A}^{\prime})\) is defined as follows:_

* _Keep_ \(I^{\prime}=I\) _and_ \(\mathcal{A}^{\prime}=\mathcal{A}\) _the same._
* \(S^{\prime}\subseteq S\) _contains all of those_ \(z\in S\) _such that there exist inputs_ \(x\in X^{\prime}\) _and_ \(y\in Y^{\prime}\) _with_
\[x|_{I^{\prime}}=z\;\text{and, for all }i\in[s]\setminus I^{\prime},x_{i}\in A_{i},\]
\[y|_{I^{\prime}}=z\;\text{and, for all }i\in[s]\setminus I^{\prime},y_{i}\in B\setminus A_{i}.\]

**Definition 5.12** (Density function).: _For a square \(Q=(I,S,\mathcal{A})\), we define its density as_

\[E(Q)=\log\left(\frac{|S|}{|B|^{|I|}}\right).\]

Now we describe how to find a long path in the communication tree.
Recall that every node in a communication tree has an associated rectangle. Starting from the root, we find a path as follows:

1. We maintain a square in each intermediate node.
2. For each intermediate node, the path always visits the left or right child whose associated rectangle maximizes the density.

The pseudo-code is given in Algorithm 2.

```
1: Initialize \(v\leftarrow\) root of communication tree \(\Pi\) and square \(Q_{0}\leftarrow([s],B^{s},\emptyset)\)
2: Set \(t\gets 0\)
3: while \(R_{v}\) is not a monochromatic rectangle do
4:   Let \(Q_{t}=(I_{t},S_{t},\mathcal{A}_{t})\) be the currently maintained square.
5:   Let \(v_{0},v_{1}\) be the children of \(v\) in \(\Pi\).
6:   if Alice sends a bit at \(v\) then
7:     Let \(X_{v_{0}},X_{v_{1}}\) be the partition of \(X_{v}\) according to Alice's partition.
8:     Let \(R_{v_{0}}\gets X_{v_{0}}\times Y_{v}\) and \(R_{v_{1}}\gets X_{v_{1}}\times Y_{v}\).
9:   end if
10:  if Bob sends a bit at \(v\) then
11:    Let \(Y_{v_{0}},Y_{v_{1}}\) be the partition of \(Y_{v}\) according to Bob's partition.
12:    Let \(R_{v_{0}}\gets X_{v}\times Y_{v_{0}}\) and \(R_{v_{1}}\gets X_{v}\times Y_{v_{1}}\).
13:  end if
14:  if \(E(Q_{t}|_{R_{v_{0}}})\geq E(Q_{t}|_{R_{v_{1}}})\) then
15:    Update \(v\gets v_{0}\) and \(Q^{\prime}_{t}\gets Q_{t}|_{R_{v_{0}}}\)
16:  else
17:    Update \(v\gets v_{1}\) and \(Q^{\prime}_{t}\gets Q_{t}|_{R_{v_{1}}}\)
18:  end if
19:  Let \(\tilde{Q}_{t}\) be an \(r\)-thick square obtained by applying Lemma 5.9 on \(Q^{\prime}_{t}\)
20:  if the average degree of \(\tilde{Q}_{t}\) is smaller than \(2\alpha\) then
21:    Let \(Q_{t+1}\) be the square obtained by applying Lemma 5.10 on \(\tilde{Q}_{t}\)
22:  else
23:    Let \(Q_{t+1}\leftarrow\tilde{Q}_{t}\)
24:  end if
25:  Update \(t\gets t+1\)
26: end while
```

**Algorithm 2** Finding a Long Path

Now we argue a lower bound on \(t^{*}\), the step at which Algorithm 2 terminates, by analyzing the changes to the density function.
We consider two types of density changes, which are called simulation and projection respectively.

* _Simulation_. In each round \(t\), we obtain a square \(\tilde{Q}_{t}\) from the square \(Q_{t}\). For every \(t\), we have \(|\tilde{S}_{t}|\geq|S^{\prime}_{t}|/2\) by Lemma 5.9 and \(|S^{\prime}_{t}|\geq|S_{t}|/2\) by the choice on Line 14. Hence,
\[E(\tilde{Q}_{t})\geq E(Q_{t})-2.\]
* _Projection_. Line 21 is a projection. For every \(t\), if \(z=|I_{t}|-|I_{t+1}|>0\), then
\[E(Q_{t+1})\geq E(\tilde{Q}_{t})+z\cdot(\log(|B|/(3\alpha))-2)\geq E(\tilde{Q}_{t})+z\cdot\Omega(\log n).\]

Note that, to apply Lemma 5.9, we need control over the average degrees; to apply Lemma 5.10, we need control over the thickness. Indeed, we will inductively show that the following properties \(P_{1}(t),P_{2}(t),P_{3}(t)\) are true for all \(t\geq 0\):

* \(P_{1}(t)\): the average degree of \(Q_{t}=(I_{t},S_{t},\mathcal{A}_{t})\) is at least \(2\cdot\alpha\).
* \(P_{2}(t)\): the average degree of \(Q^{\prime}_{t}=(I^{\prime}_{t},S^{\prime}_{t},\mathcal{A}^{\prime}_{t})\) is at least \(\alpha\).
* \(P_{3}(t)\): \(\tilde{Q}_{t}=(\tilde{I}_{t},\tilde{S}_{t},\tilde{\mathcal{A}}_{t})\) is \(r\)-thick.

The base case \(P_{1}(0)\) is true because \(S_{0}=B^{s}\) and \(|B|\geq 2\cdot\alpha\). The rest can be proved by applying the thickness lemma and the projection lemma alternately. We skip the proof here and will include it in our full version.

Finally, we observe that we must have \(I_{t^{*}}=\emptyset\) when the algorithm terminates at step \(t^{*}\); otherwise, the final rectangle is not monochromatic by Corollary 5.6. We note that the total density decrease before the algorithm terminates is at most \(2\cdot t^{*}\). On the other hand, the total density increase before termination is at least \(s\cdot\Omega(\log n)\). This implies

\[2\cdot t^{*}\geq s\cdot\Omega(\log n),\]

and the result follows.

### Discussions and open problems

A very interesting follow-up open problem is to study \(s\)-UEQUAL lower bounds for \(s\gtrsim n^{1/2}\). In our proof, the main bottleneck is Lemma 5.9 (the thickness lemma), which requires that \(s\leq\alpha\). Note that \(\alpha\leq|B|\) and \(s\cdot|B|=n\). Hence, Lemma 5.9 only applies to the range \(s\leq n^{1/2}\). In fact, the thickness lemma (or similar lemmas) is also the main barrier in query-to-communication lifting theorems. Lifting theorems usually require a full-range lemma (something similar to Lemma 5.5) to maintain a full simulation on the communication tree. We use the term full simulation to refer to those proofs that aim to construct a decision tree to exactly compute the Boolean function. In contrast, we only attempt to find a long path in the communication tree. This approach was suggested by Yang and Zhang [14]. In our analysis, only Corollary 5.6 (a direct corollary of the full-range lemma) is needed. This is much weaker than the full-range requirement. Recall that the full-range lemma shows the following: for every \(z\in\{0,1\}^{I}\), there is a pair \(x,y\in S\) such that

\[\forall i\in I,\ z_{i}=1\quad\text{iff}\quad x_{i}=y_{i}.\]

For the \(s\)-UEQUAL problem, we only care about a subset \(\{0^{I},e_{1},\ldots,e_{|I|}\}\) of \(\{0,1\}^{I}\). Here \(e_{i}\in\{0,1\}^{I}\) is the indicator vector. This observation may give a chance to avoid the full-range barrier, providing tight lower bounds for any \(s\geq 1\). Overall, we believe that the long-path paradigm may provide more applications beyond the full simulation paradigm.
2309.14278
Fast coherent control of nitrogen-14 spins associated with nitrogen-vacancy centers in diamonds using dynamical decoupling
A nitrogen-vacancy (NV) center in a diamond enables the access to an electron spin, which is expected to present highly sensitive quantum sensors. Although exploiting a nitrogen nuclear spin improves the sensitivity, manipulating it using a resonant pulse requires a long gate time owing to its small gyromagnetic ratio. Another technique to control nuclear spins is a conditional rotation gate based on dynamical decoupling, which is faster but unavailable for nitrogen spins owing to the lack of transverse hyperfine coupling with the electron spin. In this study, we generated effective transverse coupling by applying a weak off-axis magnetic field. An effective coupling depends on the off-axis field; the conditional rotation gate on the nitrogen-14 spins of an NV center was demonstrated within 4.2 {\mu}s under an 1.8% off-axis field and a longitudinal field of approximately 280 mT. We estimated that a population transfer from the electron to nitrogen spins can be implemented with 8.7 {\mu}s. Our method is applicable to an ensemble of NV centers, in addition to a single NV center.
Kosuke Mizuno, Ikuya Fujisaki, Hiroyoshi Tomioka, Hitoshi Ishiwata, Shinobu Onoda, Takayuki Iwasaki, Keigo Arai, Mutsuko Hatano
2023-09-25T16:42:07Z
http://arxiv.org/abs/2309.14278v1
Fast coherent control of nitrogen-14 spins associated with nitrogen-vacancy centers in diamonds using dynamical decoupling ###### Abstract A nitrogen-vacancy (NV) center in a diamond enables access to an electron spin, which is expected to provide highly sensitive quantum sensors. Although exploiting a nitrogen nuclear spin improves the sensitivity, manipulating it using a resonant pulse requires a long gate time owing to its small gyromagnetic ratio. Another technique to control nuclear spins is a conditional rotation gate based on dynamical decoupling, which is faster but unavailable for nitrogen spins owing to the lack of transverse hyperfine coupling with the electron spin. In this study, we generated effective transverse coupling by applying a weak off-axis magnetic field. The effective coupling depends on the off-axis field; the conditional rotation gate on the nitrogen-14 spins of an NV center was demonstrated within \(4.2\,\mu\mathrm{s}\) under a \(1.8\,\%\) off-axis field and a longitudinal field of approximately \(280\,\mathrm{mT}\). We estimated that a population transfer from the electron to nitrogen spins can be implemented within \(8.7\,\mu\mathrm{s}\). Our method is applicable to an ensemble of NV centers, in addition to a single NV center. _Keywords_: NV center, nuclear spin manipulation, dynamical decoupling ## 1 Introduction A nitrogen-vacancy (NV) center in a diamond is a point defect with an isolated electron spin, which is utilized as a highly sensitive quantum magnetometer under ambient conditions [1, 2, 3]. It has received considerable attention for biological applications [4, 5, 6, 7, 8, 9, 10], biomedical applications [11], applied physics [12, 13], and material physics [14, 15]. NV centers are appealing owing to their applicability; however, they are less sensitive than optically pumped magnetometers and superconducting quantum interference devices [16]. The limiting factor for the sensitivity of NV-based sensors is the coherence time of the electron spin, which is generally a few tens of microseconds. Storing the information of the sensor spins in a long-lasting memory is one solution for improving sensor characteristics; for example, the nitrogen atom of an NV center has a spin degree of freedom with a significantly longer coherence time. Utilizing the nitrogen spin as a quantum memory has been demonstrated to extend the capabilities of individual NV centers [17, 18, 19, 20, 21]; in particular, Lovchinsky _et al._ [18] achieved an order-of-magnitude improvement in sensitivity by repetitive readout. These nuclear-assisted protocols require nuclear spin manipulations. Although nuclear spins can be simply controlled by irradiating a resonant radio-frequency (RF) pulse, this requires long gate times, typically a few hundred microseconds [19], owing to their gyromagnetic ratios being approximately one-thousandth of that of the electron spin. An alternative method is an electron-conditional nuclear spin-rotation gate [22] designed with dynamical decoupling [23]. Dynamical decoupling, consisting of multiple \(\pi\) pulses on the electron spin, modifies the magnetic noise that the electron undergoes [23, 24, 25, 26, 27], enabling the frequency-selective detection of magnetic fields [28, 29, 30] and the selective enhancement of coupling to nuclear spins [31, 32, 33, 34, 35, 36, 37]. It has also been demonstrated for controlling proximal carbon-13 spins within a few microseconds [22, 38, 39, 40, 41].
This method utilizes the electron-nuclear spin coupling to control nuclear spins instead of direct RF irradiation, leading to simplicity in the experimental setup. Although the conditional rotation gate relies on transverse coupling between the electron and target spins, the nitrogen spin does not exhibit this type of coupling with the electron. Liu _et al._ [42] presented the perturbation theory of a weak off-axis magnetic field involving effective transverse coupling and demonstrated off-axis field sensing. The sensitivity to off-axis fields can be enhanced near the ground-state level anti-crossing (GSLAC). In this study, we generated effective transverse coupling by applying an off-axis field and demonstrated a conditional rotation gate on the nitrogen spin within a few microseconds. A moderate effective coupling strength is still available far from the GSLAC, enabling fast coherent operations of the nuclear spin. We applied a magnetic field of 280 mT with an off-axis field of 5 mT and observed coherent oscillations of the nitrogen-14 spin. The effective coupling strength was tunable within 10-90 kHz by varying the off-axis field within 2-7 mT. Using our method, the estimated gate time for a population transfer from the electron to the nuclear spin was 8.7 µs. We conducted the same experiments on an ensemble of NV centers, for which the proposed method rapidly controlled the nitrogen spins. The proposed method enables the formation of nuclear-assisted protocols for an ensemble of NV centers. Since this method is free from RF pulses and thus requires no additional instruments such as proximal RF antennas, it could be a way to integrate nuclear-assisted protocols into wide-field [43, 44, 45, 46, 47, 48, 6] or large-ensemble systems [5, 49, 50, 51] that are important for practical sensing applications. The rest of the paper is organized as follows. Section 2 describes NV centers and the principle of our method. Section 3 presents the details of the experimental setup and samples. Section 4 shows the experimental results. Section 5.1 provides a comparison of our method to previous studies. Section 5.2 describes the experimental requirements of the method. Finally, Section 5.3 discusses an application to nuclear-assisted protocols. ## 2 Principle of the nitrogen spin manipulation Fig. 1(a) presents the structure of an NV center in a diamond and its energy diagram. An NV center consists of a substitutional nitrogen atom and an adjacent vacancy that localizes an electron \(S=1\) system. The spin states of the NV centers are initialized and detected using laser illumination. The Hamiltonian of this electron system is as follows: \[\hat{H}_{0}=D_{gs}\hat{S}_{z}+\gamma_{e}(B_{z}\hat{S}_{z}+B_{\perp}\hat{S}_{\perp}), \tag{1}\] Figure 1: (a) An energy diagram of a \({}^{14}\)N–V center in a diamond. The subspace spanned by \(S=\{-1,0\}\) and \(I=\{0,+1\}\) is emphasized for the electron and nitrogen-14 spins, respectively. (b) The hyperfine interaction between the electron and nuclear spins does not include a transverse coupling \(\hat{S}_{z}\hat{I}_{\perp}\), which is required for electron-conditional nuclear spin rotation gates based on dynamical decoupling. A weak off-axis magnetic field perturbatively generates an effective transverse coupling. (c) An example of a nuclear spin control.
The effective transverse interaction \(g\hat{S}_{z}\hat{I}_{\perp}\) under dynamical decoupling rotates the electron and nuclear spins in a conditioned fashion, which allows the universal control of the electron-nuclear spin system. In Eq. (1), we focus on the subspace \(S=\{-1,0\}\) [42]; \(D_{gs}/2\pi=2.87\,\)GHz is the zero-field splitting of the NV center, and \(\gamma_{e}/2\pi=28\,\)GHz/T is the gyromagnetic ratio of the electron. \(\hat{S}_{z}\) and \(\hat{S}_{\perp}\) correspond to the spin operator along the NV center axis (\(Z\) axis) and that in the perpendicular direction, respectively. A magnetic field can be decomposed into an axial component \(B_{z}\) and an off-axis component \(B_{\perp}\). The electron undergoes hyperfine coupling to the nitrogen nuclear spin, and the total Hamiltonian is as follows: \[\hat{H} =\hat{H}_{0}+\omega_{n}\hat{I}_{z}+\hat{H}_{\rm hf},\quad\text{with} \tag{2}\] \[\hat{H}_{\rm hf} =A_{z}\hat{S}_{z}\hat{I}_{z}+A_{\perp}\left(\hat{S}_{x}\hat{I}_{x}+\hat{S}_{y}\hat{I}_{y}\right), \tag{3}\] where \(\omega_{n}\) is the transition energy of the nuclear spin, \(\hat{I}_{i}\,(i=x,y,z)\) is the nuclear spin operator, and \(A_{z}\) and \(A_{\perp}\) correspond to the hyperfine coupling strengths along the longitudinal and transverse axes, respectively. The longitudinal hyperfine interaction (the first term in Eq. (3)) is canceled by dynamical decoupling, and the transverse hyperfine interaction (the second term in Eq. (3)) is suppressed by the rotating wave approximation. Although the nitrogen spin thus has no effect on the electron spin, a weak off-axis field \(B_{\perp}\) rotates the quantization axis and generates an effective transverse coupling \(\hat{S}_{z}\hat{I}_{\perp}\), which can be utilized with dynamical decoupling (Fig. 1(b)). Thus, the hyperfine interaction becomes the following: \[\hat{H}^{\prime}_{\rm hf}=A_{z}\hat{S}_{z}\hat{I}_{z}+\frac{\gamma_{e}B_{\perp}A_{\perp}}{D_{gs}-\gamma_{e}B_{z}}F\,\hat{S}_{z}\hat{I}_{\perp}, \tag{4}\] where \(F\) is a constant determined from second-order perturbation theory [42]. The second term in Eq. (4) oscillates at the frequency \(\omega_{n}\) in the rotating frame defined by \(\omega_{n}\hat{I}_{z}\). Since dynamical decoupling techniques can modify the magnetic spectra that the electron spin undergoes, such an oscillating term can be turned on and off. The magnetic sensitivity under dynamical decoupling is maximized at a frequency \(f_{\rm DD}\) such that \(1/2f_{\rm DD}=2\tau\), where \(2\tau\) is the spacing of the \(\pi\) pulses. Thus, the NV center undergoes the second term under dynamical decoupling satisfying the resonant condition \(1/2f_{\rm DD}=2\tau\) with \(f_{\rm DD}=(\omega_{n}\pm A_{z}/2)/2\pi\); otherwise, it is decoupled from the second term. The deviations \(\pm A_{z}/2\) come from the longitudinal hyperfine coupling. Note that there are two isotopes of nitrogen: \({}^{14}\)N (\(I=1\)) and \({}^{15}\)N (\(I=1/2\)). We used \({}^{14}\)N because it has a quadrupole interaction, resulting in a higher \(\omega_{n}\) and thus more frequent \(\pi\) pulses in dynamical decoupling, which achieves better coherence protection. For nitrogen-14, \(A_{z}/2\pi=2.2\,\)MHz, \(A_{\perp}/2\pi=-2.62\,\)MHz, and \(F\simeq 2.75\) [42]. Moreover, we focus on the subspace \(I=\{0,+1\}\). Note that the nuclear spin was not actively initialized because it was not required for our experiments. Instantaneous dynamics under dynamical decoupling are complicated.
Thus, we focus on the time-averaged Hamiltonian in the rotating frame as follows: \[\bar{H} = g\hat{S}_{z}\hat{I}_{\perp},\quad\text{with} \tag{5}\] \[g = \frac{\gamma_{e}B_{\perp}A_{\perp}}{\pi(D_{gs}-\gamma_{e}B_{z})}F, \tag{6}\] where \(g\) is the effective coupling strength. Fig. 1(c) demonstrates an example of the time-averaged dynamics of an NV center undergoing dynamical decoupling at the resonant condition. Starting from the initial state \(|S=0,I=+1\rangle\), the first \(\pi/2\) pulse creates a superposition state of the electron spin. The electron superposition state evolves under the Hamiltonian (Eq.(5)) and rotates around the \(Z\)-axis of the Bloch sphere. The nuclear spin simultaneously rotates around the \(\pm X\) axes in a fashion conditioned on the electron spin. Denoting the number of \(\pi\) pulses of the dynamical decoupling as \(N_{p}\) and the interaction time as \(T=2\tau N_{p}\), the rotation angle \(\theta=gT\) can be tuned by the following three parameters: \(B_{z}\), \(B_{\perp}\), and \(N_{p}\). When \(\theta=\pi\), the state after the second \(\pi/2\) pulse is \(|S=0,I=0\rangle\); thus, this sequence is an \(X\) gate on the nuclear spin. Fig. 2 shows the effective coupling strength calculated from Eq.(6). While a large coupling is available around the GSLAC at 102.4 mT owing to the divergence, a moderate coupling strength is still available far from the GSLAC. Assuming \(B_{z}=250\) mT and \(B_{\perp}=1\) mT, the coupling strength is larger than 10 kHz, enabling a fast conditional \(\pi\) rotation within \(5\,\mu\mathrm{s}\).

## 3 Materials and Method

A confocal microscope was used in this study (Fig. 3). Laser pulses at 532 nm were chopped by an acousto-optic modulator (AOM) and irradiated onto the NV centers through an objective lens. The fluorescence from the NV centers was collected by the objective, selected through optical filters and a pinhole, and detected by avalanche photodiodes (APDs). A permanent magnet with vertical magnetization was suspended so as to apply a strong magnetic field of around 280 mT along the \(Z\)-axis of the NV centers on the (111) diamond substrates. The magnet was positioned with stepping motors, and we controlled the off-axis field by displacing the planar position of the magnet. Microwave pulses for the electron spin control were generated using digital IQ modulation with a data timing generator (Tektronix, DTG5274), amplified, and then irradiated via a copper wire.

Figure 2: Effective coupling strength plotted (a) as a function of the axial field \(B_{z}\) for several off-axis fields \(B_{\perp}\) and (b) as a two-dimensional map over \((B_{z},B_{\perp})\). While the effective coupling strength diverges around the GSLAC, a moderate coupling strength is available in the strong-field regime far from the GSLAC.

The typical length of a \(\pi\) pulse on the electron spin is 35 ns. Note that the length of the \(\pi\) pulses was ignored in the previous section for simplicity. For a finite \(\pi\) pulse length, \(t_{\pi}>0\), the resonant condition must be modified to \(1/(2f_{\rm DD})=2\tau+t_{\pi}\). Several types of dynamical decoupling are theoretically equivalent; we used the XY8 [52] sequence owing to its simple implementation and robustness against experimental imperfections. Although the fluorescence intensity of the NV centers reflects the electron spin population, several unwanted signals (background light, shot noise in the detectors, and charge dynamics of the NV centers) were also included.
To extract the electron spin population, the fluorescence intensities were normalized in this study. We conducted a measurement sequence such as that shown in Fig. 1(c) and recorded the raw fluorescence intensity \(S_{0}\). We also conducted an additional measurement in which the last \(\pi/2\) pulse was replaced by one of the opposite phase (a \(-X/2\) pulse was used instead of \(X/2\)) and recorded the raw fluorescence intensity \(S_{1}\). The normalized fluorescence intensity was then calculated as \(S=(S_{0}-S_{1})/(S_{0}+S_{1})\), mitigating the effects of the unwanted signals. Two diamond substrates were prepared. A IIa (111) diamond substrate containing individual NV centers was fabricated by implanting \({}^{15}\)N\({}^{+}\) ions accelerated at 40 keV with a dose of \(5\times 10^{8}\) cm\({}^{-2}\) and vacuum annealing at 1000 \({}^{\circ}\)C for 2 h. Although the implanted ion was \({}^{15}\)N\({}^{+}\), this sample contained several \({}^{14}\)N-V centers. The carbon atoms in the substrate were of natural isotopic abundance; thus, the NV centers potentially had several carbon-13 spins in their vicinity. An individual NV center with no strongly coupled carbon-13 spin (\(<1\) MHz) was used, which was verified by optically detected magnetic resonance and Ramsey experiments. On the other hand, a IIa (111) diamond substrate containing high-density, perfectly aligned NV centers was fabricated by chemical vapor deposition. Although we did not investigate the nitrogen distribution in this sample, the growth conditions were nearly identical to those in a previous study [36].

Figure 3: A home-built confocal microscope. (111) diamond substrates were used. A suspended permanent magnet applied a magnetic field along the NV center axes. The transverse magnetic field was controlled by changing the relative planar position between the magnet and the NV centers at the focal spot.

The nitrogen concentration should be 170 ppm, confined in a shallow layer within 10 nm of the surface. Approximately one hundred NV centers were included in the detection spot of the confocal microscope. The sample was grown using an isotopically purified carbon source (\([^{12}\mathrm{C}]\leq 99.995\,\%\)).

## 4 Results

### Experiments on a single NV center

We demonstrated the generation of an effective transverse coupling to the nitrogen spin of a single NV center. As indicated in the Principle section (Fig. 1(c)), the superposition state of the electron spin after the first \(\pi/2\) pulse rotates under an XY8 sequence if the resonant condition is satisfied, and the second \(\pi/2\) pulse maps the rotation angle onto the electron spin population. Thus, with a small \(N_{p}\), we can verify the existence of transverse coupling at a pulse spacing \(\tau\) and the corresponding resonant condition \(f_{\mathrm{DD}}\). Fig. 4(a) demonstrates the fluorescence intensity of the XY8 measurement of a single NV center for several \(f_{\mathrm{DD}}\). The number of \(\pi\) pulses was \(N_{p}=80\). Two peaks were observed; the left (right) peak corresponds to the transition between \(I=\{+1,0\}\) (\(I=\{0,-1\}\)). Note that quantum interpolation [53] was used for Fig. 4(a) to obtain a fine frequency resolution. To determine the effective transverse coupling strength, we measured the fluorescence intensity for several \(N_{p}\) values under the resonant condition of the left peak (Fig. 4(b)). Coherent oscillations of the nuclear spin were obtained.
The oscillation period was \(N_{p}\simeq 160\), corresponding to an effective coupling strength of \(g\simeq 59\,\mathrm{kHz}\); the off-axis field was approximately \(5\,\mathrm{mT}\). To tune the effective coupling, we repeated these experiments at different positions of the permanent magnet. Fig. 4(c) demonstrates the effective coupling strength as a function of the planar position of the permanent magnet. Effective couplings of 10-90 kHz were observed, corresponding to off-axis fields of 2-7 mT. The map of the effective coupling appears as a bowl, and the bottom of the bowl at \((X,Y)=(14.6,13.0)\) may correspond to a point where the NV center axis coincides with the field direction of the permanent magnet. We therefore used an off-axis magnetic field to generate an effective transverse hyperfine coupling.

Figure 4: XY8 measurement of a single NV center. (a) Target frequency of the decoupling sequence versus fluorescence intensity with 80 pulses. (b) Number of pulses versus fluorescence intensity at the resonant condition of the left peak in (a), indicating a coherent oscillation of the nitrogen nuclear spin. (c) Effective coupling strength at several planar positions of the magnet.

A correlation measurement was conducted to confirm that the effective coupling can be utilized for the conditional rotation gate. Fig. 5(a) demonstrates the pulse sequence of the correlation measurement, composed of an XY8 sequence, a free evolution time, and a second XY8 sequence. The \(X/2\) pulse (blue) prepares the electron in a superposition state. The XY8 sequence at the resonant condition for the nuclear spin (gray), for which the rotation angle is \(\pi/2\), entangles the electron and nuclear spins. The \(Y/2\) pulse (red) turns the electron spin back onto the \(Z\)-axis of the Bloch sphere. The resultant state before the free evolution time is an entangled state of the electron and nuclear spins. During the free evolution time, the nitrogen spin rotates with two Larmor frequencies owing to the longitudinal hyperfine coupling between the electron and nitrogen spins. The frequency difference corresponds to the longitudinal coupling \(A_{z}/2\pi\) (the first term in Eq.(4)). The second XY8 sequence between the \(X/2\) and \(Y/2\) pulses translates the rotation angle of the nuclear spin back into the electron spin population [54]. Fig. 5(b) demonstrates the fluorescence intensity of this measurement; its frequency spectrum is shown in Fig. 5(c).

Figure 5: (a) XY8 correlation measurement of the single NV center. Each XY8 was conducted at the resonant condition; the rotation angle was \(\pi/2\). The first XY8 entangles the electron and nuclear spins. The nuclear spin rotates with electron-dependent Larmor frequencies \(\omega_{n}\pm A_{z}/2\) during the free evolution time, owing to the longitudinal hyperfine coupling. The second XY8 maps the rotation angle back onto the electron population. (b,c) Fluorescence intensity versus the evolution time (b) and its spectrum (c). The two peaks, exhibiting a separation of \(2.2\,\mathrm{MHz}\), coincide with the longitudinal hyperfine coupling.

The number of \(\pi\) pulses in each XY8 block was \(N_{p}=40\), and the expected rotation angle was approximately \(\pi/2\). Two peaks with a separation of \(2.2\,\mathrm{MHz}\) were observed, which coincides with the longitudinal hyperfine coupling \(A_{z}/2\pi\). This result indicates that XY8 sequences at the resonant condition rotate the nitrogen spin, achieving the expected conditional rotation gate.
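As a quick plausibility check of these numbers, the sketch below evaluates Eq.(6) at the experimental fields and converts the measured coupling into the conditional \(\pi/2\) gate time quoted later in Section 5.3. This is a minimal numpy sketch, not analysis code from the study: the constants are the values quoted in the text, and all couplings are treated as cyclic frequencies (values quoted as \(\omega/2\pi\)).

```python
import numpy as np

# Plausibility check of Eq. (6) against the single-NV results.
# All frequencies in Hz (cyclic units); fields in T. Values quoted in the text.
D_gs    = 2.87e9    # zero-field splitting
gamma_e = 28e9      # electron gyromagnetic ratio, Hz/T
A_perp  = -2.62e6   # transverse hyperfine coupling of 14N
F       = 2.75      # second-order perturbation constant from Ref. [42]

def g_eff(B_z, B_perp):
    """Effective transverse coupling strength of Eq. (6), in Hz."""
    return gamma_e * B_perp * A_perp * F / (np.pi * (D_gs - gamma_e * B_z))

g_pred = abs(g_eff(280e-3, 5e-3))   # experimental fields: 280 mT axial, 5 mT off-axis
print(f"|g| predicted: {g_pred/1e3:.0f} kHz")  # ~65 kHz, same order as the measured 59 kHz

# A conditional pi/2 rotation is a quarter of the coherent-oscillation period:
g_meas = 59e3
print(f"T(pi/2): {1e6/(4*g_meas):.1f} us")     # ~4.2 us, matching T_CR in Sec. 5.3
```

Evaluating the same expression over the reported 2-7 mT off-axis range gives couplings of order 25-90 kHz, broadly consistent with the 10-90 kHz measured above.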
### Experiments on an ensemble of NV centers

The same experiments were conducted on the ensemble of NV centers. Fig. 6(a) demonstrates the fluorescence intensity of an XY8 measurement as a function of \(f_{\mathrm{DD}}\). The number of \(\pi\) pulses was \(N_{p}=80\). Fig. 6(b, orange) demonstrates the fluorescence intensity of an XY8 measurement as a function of \(N_{p}\) at the resonant condition of the left peak in Fig. 6(a), indicating a coherent oscillation. The oscillation of the ensemble of NV centers decayed more quickly than that of the single NV center, which may be attributed to the difference in the coherence times. The black curve in Fig. 6(b) presents the fluorescence intensity of an XY8 measurement with \(N_{p}=80\) pulses, indicating a coherence time of approximately \(10\,\mu\mathrm{s}\). Fig. 6(c) demonstrates the frequency spectrum of the correlation measurements of the ensemble of NV centers. Two peaks with a separation of \(2.2\,\mathrm{MHz}\) were also observed. This indicates that our method, which generates an effective coupling by an off-axis field and controls the nuclear spin by dynamical decoupling, can be applied to an ensemble of NV centers.

## 5 Discussions

### Comparison to related studies

Here, we compare our work to previous studies. Nuclear spins can be controlled directly by resonant RF pulses; however, this requires strong pulses [55] or tailored RF antennas [56]. Otherwise, long gate operations are required, which is detrimental given the short coherence time of the electron spins.

Figure 6: Measurements with an ensemble of NV centers. (a) An XY8 measurement with varying target frequencies. (b) Orange: a coherent oscillation at the left peak of (a). Black: an XY8 measurement with varying pulse spacings with 80 pulses. (c) A spectrum of the correlation measurement exhibiting a separation of \(2.2\,\mathrm{MHz}\). Inset: time-domain data.

Some previous works have developed a combined strategy that simultaneously decouples an electron spin from the environment and manipulates a nuclear spin by an RF pulse together with a dynamical decoupling sequence [57; 58; 59]. Meanwhile, our method needs no RF drive for the nuclear spins. Although it requires precise control of magnetic fields, as discussed below, it is free from proximal structures such as lithographic patterns or RF antennas that can limit the applicability of NV centers. Besides the above straightforward approaches, the hyperfine interaction between an electron and a nuclear spin can enhance the effective gyromagnetic ratio of the nuclear spin, resulting in fast manipulations of nuclear spins [60; 61; 62; 63; 64]. This effect is maximized at the GSLAC condition for NV centers. Note that a larger gyromagnetic ratio simultaneously means faster dephasing [65]. Our experiments were conducted in a strong-field region (\(B_{z}\sim 280\,\)mT, corresponding to \(|D_{gs}-\gamma_{e}B_{z}|\simeq 2\pi\times 5\,\)GHz) far from the GSLAC. This allows us to avoid GSLAC-induced nuclear spin dephasing. Moreover, our method is available over a broad range of magnetic field conditions except for the GSLAC, because Eq.(6) assumes that \(D_{gs}-\gamma_{e}B_{z}\) is large [42]. This applicability is advantageous for practical sensing applications. Although our work is inspired by Liu _et al._[42], which utilized the GSLAC (\(|D_{gs}-\gamma_{e}B_{z}|\simeq 2\pi\times 100\,\)MHz) to increase sensitivity, we demonstrate that the off-axis field can be utilized to control the nuclear spin even in such a strong-field condition.
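The field-regime argument above can be illustrated by sweeping \(B_{z}\) in the same expression for \(g\). This is an illustrative numeric sketch only; exactly at the GSLAC the perturbative formula of Eq.(6) breaks down, so the divergence there should be read as a trend (as in Fig. 2) rather than a physical value.

```python
import numpy as np

# Trend of Fig. 2: |g(B_z)| from Eq. (6) at a fixed 5 mT off-axis field.
D_gs, gamma_e, A_perp, F = 2.87e9, 28e9, -2.62e6, 2.75  # Hz, Hz/T, Hz, dimensionless
B_perp = 5e-3                                           # off-axis field, T

for B_z in [0.090, 0.100, 0.1024, 0.110, 0.150, 0.250, 0.280]:
    denom = D_gs - gamma_e * B_z           # vanishes at the GSLAC (~102.4 mT)
    g = gamma_e * B_perp * A_perp * F / (np.pi * denom)
    print(f"B_z = {B_z*1e3:6.1f} mT -> |g| = {abs(g)/1e3:9.1f} kHz")
```

The printed coupling diverges near 102.4 mT and settles to moderate values in the strong-field regime, mirroring the broad applicability claimed above.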
### Experimental requirements

There are several limitations and experimental requirements for the conditional rotation gates on the nitrogen spins. First, generating the desired effective transverse coupling is crucial for our method to achieve a conditional rotation gate with a desired rotation angle. The rotation angle can be tuned by the number of pulses and the effective coupling strength; however, the tunability is discretized by the number of pulses. Thus, precise control and stabilization of the magnetic field are particularly important. Second, the timing resolution of the microwave pulses, which was \(0.5\,\)ns in our setup, confines the pulse spacing to discrete values, limiting the tunability of the rotation angle. Quantum interpolation [53] or shaping the pulse waveform [66] will help tune the rotation angle with a higher resolution. Optimal control [67; 68; 69] can be another solution for achieving nuclear spin manipulations via the effective transverse coupling. Note that the rotation angle is proportional to the effective coupling strength and thus proportional to the off-axis field \(B_{\perp}\) (see Eq.(6)). This means that a small field deviation such that \(g\to g(1+\epsilon)\) causes a quadratic error in fidelity [70]. Thus, carefully designing the experimental setup could mitigate experimental imperfections, for example, deviations, temporal instability, spatial inhomogeneity, and gradients of the magnetic field. Additionally, composite pulses [70; 71; 72] and robust quantum control [73; 74; 75; 76] could further mitigate such imperfections. Our method can be applied to various experiments, including wide-field and large-detection-volume setups that are significant for actual sensing applications [77], owing to the simplicity of the electrical components. Such setups are based on an ensemble of NV centers, which exhibits a much shorter dephasing time than a single NV center. Although the dephasing time of the electron spins should limit the fidelity of the conditional rotation gates, the relevant time is the dephasing time \(T_{\rm 2,DD}\) under dynamical decoupling sequences rather than the inhomogeneous dephasing time \(T_{2}^{*}\). Since \(T_{\rm 2,DD}\) can be significantly longer than \(T_{2}^{*}\) for high-density samples, our method is expected to be applicable to high-density NV centers.

### Nuclear-assisted sensing

Finally, the conditional rotation gate helps implement nuclear-assisted protocols, for example, a repetitive readout with the nitrogen spin as a quantum memory [18]. Such nuclear-assisted protocols need a population transfer between the spins, and the conditional rotation gates enable the implementation of such operations. Note that conditional rotation gates, rather than conditional phase gates [78], are necessary to implement the population transfer; the latter additionally require direct control of the nuclear spins. An implementation of this operation comprises a conditional \(\pi/2\) rotation gate, an unconditional \(Z\) gate, and a second conditional \(\pi/2\) rotation gate (see Appendix A). The timescale of the unconditional \(Z\) gate is the inverse of the Larmor frequency of the nuclear spin, which is significantly faster than the conditional rotation. For one of the conditions in our experiment, the conditional \(\pi/2\) rotation gate can be implemented within approximately \(T_{\rm CR}=4.2\,\mu\mathrm{s}\).
Additionally, four \(\pi/2\) pulses on the electron spin are required at the front and end of the sequence and between the conditional and unconditional gates. Thus, we estimate that a population transfer can be implemented with a gate time of \(8.7\,\mu\mathrm{s}\). Note that this gate time depends on the strength of the off-axis field and the resultant effective coupling; thus, a faster gate is available. We chose this moderate operation speed mainly owing to the timing resolution, as previously indicated. Nuclear-assisted protocols require the nuclear spin state to be preserved during electron spin manipulations, including initialization. The nitrogen spin state is disturbed by the laser illumination that is used to initialize the electron spin [79, 80, 81, 82]; the disturbance is smaller for a smaller off-axis magnetic field [83]. Thus, an off-axis field may deteriorate the sensitivity of the nitrogen-assisted repetitive readout. Our method requires a small off-axis field; a further study is required to determine a magnetic field condition that is sufficiently strong for nuclear spin manipulations, yet weak enough to preserve the nuclear spin. Alternatively, applying the off-axis field only during the population transfer and switching it off during laser illumination can be another solution. Note that an off-axis magnetic field degrades the photoluminescence intensity of NV centers [84] and thus affects the sensitivity. This effect is mitigated in fields far from the LAC but depends on the off-axis field strength. A careful design of the magnetic field is necessary to obtain a better sensitivity.

## 6 Conclusions

The nitrogen nuclear spins of NV centers are resources for quantum sensing; however, manipulating nuclear spins ordinarily requires a long gate time. We demonstrated the generation of an effective coupling by an off-axis field and the control of the nitrogen spin via dynamical decoupling. The estimated gate time based on our method is \(8.7\,\mu\mathrm{s}\) for a population transfer, which is significantly faster than direct operations by resonant pulses on the nitrogen spins. Moreover, our method is applicable not only to individual NV centers but also to ensembles of NV centers, which enables nuclear-assisted protocols with NV ensembles.

## Acknowledgments

This work is supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP), Grant Number JPMXS0118067395. KA received funding from JST PRESTO (Grant Number JPMJPR20B1). HI received funding from JST PRESTO (Grant Number JPMJPR17G1).

## Author Contributions

KM, KA, and MH conceived the project. The experiments were conducted by KM and HT. Data analyses and numerical simulations were conducted by KM, HT, and IF. The diamond samples were prepared by HI and SO. This manuscript was prepared by KM, IF, and KA, with review contributions from all the other authors. Overall supervision was provided by KA, TI, and MH.

## Data Availability

The data supporting the findings of this study are available from the corresponding authors upon request.

## Code Availability

The codes used in this study are available from the corresponding authors upon request.

## Appendix A Population transfer from the electron to nuclear spin

The population transfer discussed in the main text is a variant of a nuclear spin operation used in Taminiau _et al._[22]. This operation comprises three \(\pi/2\) gates on the electron, two conditional rotation gates, and a \(Z/2\) gate on the nuclear spin (Fig. A1 and A2).
Note that (unconditional) \(Z\) gates on the nuclear spin can also be implemented by dynamical decoupling [22]. Let \(\theta\) be the rotation angle of the conditional rotation gates. The propagator is the following: \[\hat{U}_{\text{trans}}=\begin{pmatrix}1&0&0&0\\ 0&-i\cos\theta&\sin\theta&0\\ 0&\sin\theta&-i\cos\theta&0\\ 0&0&0&-1\end{pmatrix}. \tag{10}\] Assuming the quantum state before the transfer is \(|\psi\rangle=(c_{0}|0_{e}\rangle+c_{1}|1_{e}\rangle)\otimes|0_{n}\rangle\), this operation converts it into \(\hat{U}_{\text{trans}}|\psi\rangle=(c_{0}|0_{e}\rangle-ic_{1}\cos\theta|1_{e }\rangle)\otimes|0_{n}\rangle+c_{1}\sin\theta|0_{e}1_{n}\rangle\). By tracing out the electron spin state, the nuclear spin state is \(\tilde{p}_{0}|0_{n}\rangle\langle 0_{n}|+\tilde{p}_{1}|1_{n}\rangle\langle 1_{n}|\), where \(\tilde{p}_{0}=|c_{0}|^{2}+|c_{1}|^{2}\cos^{2}\theta\) and \(\tilde{p}_{1}=|c_{1}|^{2}\sin^{2}\theta\). Thus the electron spin population is transferred to the nuclear spin when \(\theta=\pi/2\).
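The algebra above is easy to verify numerically. The sketch below applies \(\hat{U}_{\text{trans}}\) at \(\theta=\pi/2\) and traces out the electron spin; the basis ordering \(\{|0_{e}0_{n}\rangle,|0_{e}1_{n}\rangle,|1_{e}0_{n}\rangle,|1_{e}1_{n}\rangle\}\) is our assumption for the matrix as printed.

```python
import numpy as np

# Numerical check of the population transfer, Eq. (10).
theta = np.pi / 2
c, s = np.cos(theta), np.sin(theta)
U = np.array([[1,     0,     0,  0],
              [0, -1j*c,     s,  0],
              [0,     s, -1j*c,  0],
              [0,     0,     0, -1]])

c0, c1 = 0.6, 0.8                         # arbitrary normalized amplitudes
psi = np.kron([c0, c1], [1, 0])           # (c0|0e> + c1|1e>) (x) |0n>
out = U @ psi

rho = np.outer(out, out.conj()).reshape(2, 2, 2, 2)  # indices (e, n, e', n')
rho_n = np.trace(rho, axis1=0, axis2=2)              # partial trace over the electron
print(np.round(rho_n.real, 6))            # diag -> (|c0|^2, |c1|^2) = (0.36, 0.64)
```

For general \(\theta\), the same script returns the stated populations \(\tilde{p}_{0}=|c_{0}|^{2}+|c_{1}|^{2}\cos^{2}\theta\) and \(\tilde{p}_{1}=|c_{1}|^{2}\sin^{2}\theta\), confirming the full transfer at \(\theta=\pi/2\).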
2309.05375
Toward a Deeper Understanding: RetNet Viewed through Convolution
The success of the Vision Transformer (ViT) has been widely reported on a wide range of image recognition tasks. ViT can learn global dependencies superior to CNN, yet CNN's inherent locality can substitute for expensive training resources. Recently, the outstanding performance of RetNet in the field of language modeling has garnered attention, surpassing that of the Transformer with explicit local modeling and shifting researchers' focus towards Transformers in the CV field. This paper investigates the effectiveness of RetNet from a CNN perspective and presents a variant of RetNet tailored to the visual domain. Similar to RetNet, we improve ViT's local modeling by applying a weight mask on the original self-attention matrix. A straightforward way to locally adapt the self-attention matrix can be realized by an element-wise learnable weight mask (ELM), for which our preliminary experiments show promising results. However, the element-wise learnable weight mask not only induces a non-trivial additional parameter overhead but also increases the optimization complexity. To this end, this work proposes a novel Gaussian mixture mask (GMM), in which one mask has only two learnable parameters and can be conveniently used in any ViT variant whose attention mechanism allows the use of masks. Experimental results on multiple small datasets demonstrate the effectiveness of our proposed Gaussian mask for boosting ViTs for free (almost zero additional parameter or computation cost). Our code is publicly available at https://github.com/CatworldLee/Gaussian-Mixture-Mask-Attention.
Chenghao Li, Chaoning Zhang
2023-09-11T10:54:22Z
http://arxiv.org/abs/2309.05375v2
# Toward a Deeper Understanding: RetNet Viewed through Convolution ###### Abstract The success of the Vision Transformer (ViT) has been widely reported on a wide range of image recognition tasks. ViT can learn global dependencies superior to CNN, yet CNN's inherent locality can substitute for expensive training resources. Recently, the outstanding performance of RetNet in the field of language modeling has garnered attention, surpassing that of the Transformer with explicit local modeling and shifting researchers' focus towards Transformers in the CV field. This paper investigates the effectiveness of RetNet from a CNN perspective and presents a variant of RetNet tailored to the visual domain. Similar to RetNet, we improve ViT's local modeling by applying a weight mask on the original self-attention matrix. A straightforward way to locally adapt the self-attention matrix can be realized by an element-wise learnable weight mask (ELM), for which our preliminary experiments show promising results. However, the element-wise learnable weight mask not only induces a non-trivial additional parameter overhead but also increases the optimization complexity. To this end, this work proposes a novel Gaussian mixture mask (GMM), in which one mask has only two learnable parameters and can be conveniently used in any ViT variant whose attention mechanism allows the use of masks. Experimental results on multiple small datasets demonstrate the effectiveness of our proposed Gaussian mask for boosting ViTs for free (almost zero additional parameter or computation cost). Our code is publicly available at [https://github.com/CatworldLee/Gaussian-Mixture-Mask-Attention](https://github.com/CatworldLee/Gaussian-Mixture-Mask-Attention).

## 1 Introduction

Since the success of AlexNet [22], Convolutional Neural Networks (CNNs) have become the standard for computer vision. Krizhevsky _et al._ show that convolutions are advantageous in visual tasks due to their invariance to spatial translations and their low correlative-inductive bias. Convolutions leverage three important concepts to achieve their effects: _sparse interactions_, _weight sharing_, and _equivariant representations_[15]. On the other hand, transformers are becoming increasingly popular and are a focus of modern machine learning research. Since _"Attention is All You Need"_[38], the research community has noticed an upsurge in transformer- and attention-based research [10, 29, 4]. Transformers were designed specifically for sequence modeling and translation tasks, and their signature feature is the use of attention to model long-distance dependencies in the data.

Figure 1: **Overview of the Gaussian Mixture Mask (GMM) Attention Mechanism.** Firstly, feature vectors are mapped into three matrices \(Q\), \(K\), and \(V\) within the attention module (**bottom**). Subsequently, \(n\) distinct Gaussian masks are defined and linearly combined to form a Gaussian mixture mask. Building upon the foundational self-attention mechanism (**middle**), the shift window of the Gaussian mixture mask is unfolded and expanded into the corresponding attention scores; this step resembles the feature vector undergoing an **element-wise convolution** operation, resulting in the attention map for each patch. Finally, the output patch feature is calculated as the dot product of the matrix \(V\) and the attention map (**top**).
Their huge success in the language domain has prompted researchers to explore transformers in the computer vision domain [11, 7, 27, 40, 14, 17]. With their competitive modeling capacity, visual transformers have achieved remarkable performance improvements over CNNs on multiple benchmarks. Recently, RetNet [32] has emerged as a potent successor to the Transformer in the realm of large-scale language models by embracing three computational paradigms: _parallel_, _recurrent_, and _chunkwise recurrent_. Concurrently, Fan _et al._[12] have attempted to introduce the RetNet paradigm into the visual domain. This study endeavors to understand RetNet from the novel perspective of CNNs and presents a new variant of the ViT paradigm that integrates the essence of RetNet. The intrinsic local inductive bias inherent in CNNs is not possessed by Transformers. CNNs have gained an advantage on strongly spatial natural data, such as small-scale imagery and video, by employing their formidable inherent local modeling capability [26]. While Transformers can learn this inductive bias from scratch on large datasets such as ImageNet [11] with little effort, they usually cannot attain the performance of CNNs on small-scale datasets. A common technique to tackle the small-data problem is to utilize the paradigm of pre-training followed by fine-tuning [34], enabling the model to incorporate inductive bias as well as adapt to the data distribution. Nevertheless, the distributions of some small datasets depart far from the mainstream situation, as with medical image datasets [37], highlighting the importance of training ViT from scratch on small datasets. It has been demonstrated that explicitly modeling locality in ViT can improve its performance on smaller datasets [26, 23, 16]. The attention mechanism allows models to automatically learn associations from the input: it establishes a weight matrix, i.e., the attention scores produced by the softmax operation, to automatically adjust the affinity between patches. RetNet introduces a pivotal change by replacing the softmax operation, essential for Transformers' self-attention, with a Hadamard product and an innovative D-matrix, followed by GroupNorm [32]. In order to sharpen the distribution of the attention scores, Lee _et al._[23] propose adding a learnable temperature parameter to the attention scores. We propose an even more direct approach that obtains better results by adding an element-wise learnable mask (ELM) after the attention scores. The results demonstrate that the resulting model performs remarkably well. As the cost of improving the local modeling capability of a model, such element-wise learnable masks greatly increase the number of trainable parameters, consuming more computing power and time. In our preliminary experiments, element-wise learnable masks showed two characteristics in their training results, _locality_ and _extroversion_, which in turn reveal two major issues of the self-attention mechanism of ViTs: * _Locality_: The dependencies between adjacent patches become more pronounced, while plain self-attention is insensitive to the spatial structure of images. Images have a stronger locality than text. This locality changes with the depth of the network, showing a stronger local correlation in shallow networks and a stronger global correlation in deep networks [11, 24].
Although the position encoding in ViT can satisfy some spatial information requirements, it cannot satisfy more complex ones; the attention scores call for a stronger spatial modeling ability. * _Extroversion_: Masks tend to suppress the impact of patches upon themselves, prompting the patches to rely more on other patches in the attention layers and thus accelerating the flow of information. In the original ViT, the flow of information between layers is rather sluggish, as residual links preclude the model from transferring its information across patches, which slows down the process of information iteration. Moreover, as the feature vector dimension increases, the gradient of the softmax function in the self-attention mechanism becomes disproportionately small, slowing down the learning process and further deepening the aforementioned speed problem. The scaling factor added to the self-attention mechanism [38] alleviates this issue to some extent, yet there is still room for improvement. To this end, we propose a Gaussian mixture mask (GMM), a dynamic learnable mask which is generated implicitly by learning two parameters, \(\sigma\) and \(\alpha\), to modulate the locality of the attention mechanism. Our study confirms that a particular case of Gaussian masking can be used to represent extroversion. Compared to the ELM, the GMM uses very few parameters and achieves better performance. In addition, since it is plug-and-play, the GMM is in principle applicable to any variant of ViT leveraging self-attention, such as Swin [27] and CaiT [36]. In summary, our contributions are as follows: 1. We propose an Element-wise Learnable Mask and use it to identify two features of the Vision Transformer's self-attention mechanism on small datasets. 2. We propose a Gaussian Mixture Mask to boost ViTs for free on small datasets. ## 2 Related Works ### Visual Transformers Recently, with the remarkable development of transformers in the field of NLP, many works have sought to introduce visual transformers into image classification. Cordonnier _et al._[7] created an original visual transformer composed of a standard transformer and a secondary position encoding. Subsequently, the Vision Transformer [11] introduced the transformer model and achieved excellent performance on the large dataset ImageNet. Following this, variants incorporating hybrid prior knowledge emerged, including the Swin Transformer [27], which adopted a sliding-window approach to achieve local and global modeling and gain multi-scale information while reducing computational complexity. To address the feature granularity neglected in ViT and the high cost of computation, researchers developed Hierarchical Transformers [44, 39, 18, 40, 42], which adopted a hierarchical modeling approach: T2T utilized overlapping expansion operations, while PiT and CvT utilized pooling and convolution to down-sample. Subsequently, Deep Transformers [36, 49, 14, 50] were devoted to improving the learning capability by increasing the model depth, and Transformers with Self-Supervised Learning [4, 25, 1, 17, 5, 3, 43] transferred unsupervised training to the visual transformer field. ### Localness Modeling on Transformers Standard Transformer models already possess implicit or explicit local modeling, accomplished through learnable or non-learnable position embeddings, leading the Transformer to be more inclined towards local patches.
Recent research in the NLP field indicates that explicit local modeling can further improve model performance [32, 30, 20, 13]. RetNet [32] employs explicit distance decay for local modeling and has emerged as a formidable successor to the Transformer in large-scale language models. ASR [31] has shown that localized long-sequence representations perform better in both speech modeling and natural language inference tasks with self-attention models. Moreover, the Gaussian distance model of T-GSA [20] offers superior performance compared to the prevailing Transformer-based recurrent models and LSTMs. DMAN [13] proposed a dynamic masking attention network with a learnable mask matrix that performs local modeling in an adaptive manner. In the CV field, there is also research that benefits from explicit local modeling [12, 6]. RMT [12] introduces RetNet into the visual domain and combines it with the Vision Transformer, offering a model mechanism, Retentive Self-Attention (ReSA), endowed with spatial-locality prior knowledge. Mask2Former [6] takes advantage of masks to locally restrict attention, thereby reducing the research workload while improving model performance on image segmentation tasks. Notably, locality is also used to balance the modeling capability and the efficiency of Transformers [2, 46]. Longformer [2] was proposed to address the limitation of Transformer models on long-sequence processing. Its attention mechanism scales linearly with the sequence length and can thus readily tackle documents of thousands of tokens or longer. BIGBIRD [46] leverages sparse attention mechanisms, including local attention, inspired by sparse techniques from graph structures, and reduces the complexity down to linear, i.e., O(N). ### Visual Transformers on Small Datasets Recently, several methods [26, 16, 35, 23, 24] have been explored to enhance ViT on small datasets. Liu _et al._[26] proposed an auxiliary self-supervision task to extract extra information from images, thereby effectively improving the training of ViT with few samples. Hassani _et al._[16] proposed a new model architecture to boost the performance of ViT on small datasets, including utilizing a small patch size, introducing convolution in the shallow layers, and dropping the classification token. Touvron _et al._[36] proposed a distillation method called DeiT, whose core idea is to distill knowledge from a convolutional network used as the teacher, which yields better performance than using a Transformer-structured teacher network. Li _et al._[24] proposed a method similar to DeiT, introducing local knowledge distillation to improve the performance of ViT models on small datasets; it achieves local guidance by imitating a trained convolutional neural network, making ViT models converge faster and significantly improving performance on small datasets. The work most closely resembling ours is that of Lee _et al._[23], who proposed Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) to effectively address the absence of local inductive bias in ViT. SPT embeds the spatial information between adjacent pixels to provide a wider receptive field to visual tokens. LSA remedies or alleviates the flatness problem of the attention scores by adding a learnable temperature and diagonal masking. In this work, we propose an Element-wise Learnable Mask (ELM) as a generalized version of the LSA introduced by Lee _et al._, which proves to perform better.
Furthermore, we also introduce a Gaussian Mixture Mask (GMM) for further performance improvement with reduced parameters and computation at no extra cost.

## 3 Approach

In this part, we first revisit the attention mask from the perspective of convolution. Subsequently, we introduce the Element-wise Learnable Mask (ELM) and analyze two of its properties in Section 3.1, elaborate the definition of the Gaussian Mixture Mask (GMM) in Section 3.2, and explain how GMM modifies the attention mechanism in ViTs in Section 3.3, accompanied by pseudo-code of the algorithm.

**Element-wise Conv Operation.** If the mask is a circulant matrix with multi-diagonal properties, then applying such a matrix to the attention scores via the Hadamard product can be perceived as performing an operation akin to an element-wise convolution: the _convolution kernel_ is the hidden feature vector, while the _feature map_ is the weight mask. Examples of the 1d and 2d cases are shown in Figure 2.

### Element-wise Learnable Mask

In our preliminary investigation, an element-wise learnable matrix with the same shape as the attention score matrix \((N_{patches}\times N_{patches})\) was applied after the attention scores. The value in its \(i\)-th row and \(j\)-th column indicates the preference of the feature information in the \(i\)-th patch of the subsequent layer for the information of the \(j\)-th patch in the current layer. This mask layer is aimed at dynamically adjusting the tendency of patch attention. After performing numerous experiments, we discovered that the element-wise learnable mask mainly exhibits two tendencies: _locality_ and _extroversion_. 1. _Locality_: Element-wise learnable masks are always characterized by a high degree of locality. In the shallow layers of ViT, the masks are localized and encourage patches to absorb information from neighboring patches. In the deep layers, the masks tend to be more global and encourage patches to absorb information from far-away patches. Additionally, we find that masks not only promote the absorption of information from adjacent patches but can also show a counter-tendency, i.e., inhibit the absorption of adjacent information and absorb information from a distance in some cases. However, regardless of promotion or inhibition, masks always vary according to the distance between patches. 2. _Extroversion_: Element-wise learnable masks frequently suppress the reliance of patches on their own information. Whether the mask strongly promotes or suppresses a patch's dependence on the surrounding patches, it always suppresses the patch's reliance on itself. This suppressive phenomenon does not vary with the distance between patches or the number of patches. Extroversion can be considered a frequently appearing case of locality with an extremely small distance. The illustration in Fig. 3 presents the above two patterns: the single-layer ELM on the left, and the attention map of each patch position on the right. Observably, the concentration of the brightness surrounding each patch decays with distance, with the patch itself appearing particularly faint. This reflects the phenomena of both locality and extroversion.

### Gaussian Mixture Mask

We first define \(K\) Gaussian weight matrices, each of which has two learnable parameters, \(\alpha_{k}\) and \(\sigma_{k}\). The size of each Gaussian weight matrix is \((2N_{\text{patch}}^{\frac{1}{2}}-1)\times(2N_{\text{patch}}^{\frac{1}{2}}-1)\).
\(x\) and \(y\) are the offsets of the horizontal and vertical coordinates from the center point, respectively.

Figure 3: **Left:** A 64\(\times\)64 simple learnable element-wise mask trained with a Tiny-ViT on the CIFAR-10 dataset, reaching up to 94.02% Top-1 accuracy. Each pixel represents a value in the mask, with brighter areas having higher values and darker areas lower values. **Right:** The left figure is folded along its rows into 64 attention maps, which are then arranged into an \(8\times 8\) grid, with each cell consisting of an \(8\times 8\)-pixel attention map. The attention map in each cell corresponds to the patch at that position.

Figure 2: Four examples are provided to intuitively demonstrate this convolution operation. (a) Windowed attention in the 1d case. (b) Local attention in the 1d case. (c) Windowed attention in the 2d case. (d) Local attention in the 2d case.

The value at each location of the Gaussian weight matrix is expressed as: \[M_{xy}=\sum_{k=1}^{K}\alpha_{k}\,e^{-\frac{x^{2}+y^{2}}{2\sigma_{k}^{2}+\epsilon}},\qquad M\in\mathbb{R}^{N\times N} \tag{1}\] Let \(x\) and \(y\) be two variables whose range lies within \((-N_{\text{patch}}^{\frac{1}{2}},N_{\text{patch}}^{\frac{1}{2}})\). The GMM is created through the combination of the \(K\) Gaussian weight matrices. Then, a window of size \(N_{\text{patch}}^{\frac{1}{2}}\) slides from the lower right corner to the upper right corner, and the resulting output is unfolded row by row. The unfolded results are subsequently concatenated along the row direction. The mapping between the horizontal coordinate \(i\) and the vertical coordinate \(j\) of the unfolded GMM and the center offsets \(x\) and \(y\) before unfolding can be formulated as: \[\begin{split} x&=\left|i\%N_{\text{patch}}^{\frac{1}{2}}-j\%N_{\text{patch}}^{\frac{1}{2}}\right|\\ y&=\left|i//N_{\text{patch}}^{\frac{1}{2}}-j//N_{\text{patch}}^{\frac{1}{2}}\right|\end{split} \tag{2}\] In Figure 4, we illustrate how two Gaussian masks can be employed manually to fit the learned ELM shown in Figure 3. We first extract the extroversion trait from the ELM to obtain one mask, with the remainder contributing to the locality mask. Extroversion can be obtained with a small \(\sigma\) and a negative \(\alpha\), which constitutes a particular case of a Gaussian mask. Thereafter, we fit the locality with another Gaussian mask and, finally, blend the two masks together to obtain a manually tailored GMM that incorporates both locality and extroversion.

### GMM Attention

We apply the obtained GMM before the softmax operation and verify this choice through experiments; applying it before or after the softmax has no significant impact on the result. In order to maintain a structure similar to that of the original paper, we treat this operation as a simple masking operation to obtain the Gaussian mixture attention mechanism. To improve the generalizability of GMM, we employ Multi-Head GMM Attention, which uses an independent dynamic GMM mask for each head of the self-attention mechanism.

Figure 4: The **leftmost** figure shows the learned simple learnable mask, while the **rightmost** figure shows the resulting Gaussian mask mixture.
The **middle** figures show the fitting process, which first divides the simple learnable mask into two parts and then fits them with two Gaussian masks, one with parameters \(\alpha_{2}=-0.8\) and \(\sigma_{2}=0.2\), and the other with parameters \(\alpha_{1}=0.6\) and \(\sigma_{1}=2\). The two features can be simultaneously approximated by two Gaussian masks.

Figure 5: Gaussian Mixture Attention Mechanism.

We abandon the use of class tokens and adopt global pooling instead. This is mainly done to keep the attention weight matrix regular while reducing the computational parameters and not sacrificing the performance of the model. The feature vector \(x_{p}\in\mathbb{R}^{N\times D}\) is used as the input to the multi-head Gaussian mixture attention module and is projected through three projection matrices \(W_{q},W_{k},W_{v}\in\mathbb{R}^{D\times D}\) and corresponding biases \(b_{q},b_{k},b_{v}\in\mathbb{R}^{D}\) to obtain the three matrices \(Q\) (query), \(K\) (key), and \(V\) (value). \[Q =W_{q}x_{p}+b_{q} \tag{3}\] \[K =W_{k}x_{p}+b_{k}\] (4) \[V =W_{v}x_{p}+b_{v} \tag{5}\] Then, the attention weight matrix can be computed as: \[A=\frac{QK^{T}}{\sqrt{d_{k}}}\qquad A\in\mathbb{R}^{N\times N} \tag{6}\] with the mask applied by element-wise multiplication: \[B_{ij}=A_{ij}\circ M_{ij} \tag{7}\] Finally, after the softmax and another linear projection, the output is obtained. \[\text{out}=W_{o}\operatorname{Softmax}(B)V+b_{o} \tag{8}\] Overall: \[\text{GMMAttention}(Q,K,V)=\operatorname{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\circ M\right)V \tag{9}\] **Algorithm:** The construction of the GMM is given in Algorithm 1: ``` Input: Number of patches \(N\), number of kernels \(K\) \(\alpha_{k}=\mu_{\alpha}+\sigma_{\alpha}z\), \(\sigma_{k}=\mu_{\sigma}+\sigma_{\sigma}z\) where \(z\sim\mathcal{N}(0,1)\) \(M_{N\times N}=\textbf{0}\) for i = 0 to N-1 do for j = 0 to N-1 do \(\Delta_{x}=i\%N^{\frac{1}{2}}-j\%N^{\frac{1}{2}}\) \(\Delta_{y}=i//N^{\frac{1}{2}}-j//N^{\frac{1}{2}}\) for k = 0 to K-1 do \(M_{ij}+=\alpha_{k}e^{-\frac{\Delta_{x}^{2}+\Delta_{y}^{2}}{2\sigma_{k}^{2}+\epsilon}}\) end for end for end for return \(M\) ``` **Algorithm 1** Gaussian Mixture Mask

## 4 Experiments

Our experiments were mainly carried out on small-scale datasets: CIFAR-10, CIFAR-100, SVHN, and Tiny-ImageNet. Firstly, we conducted experiments on a wide range of ViTs in Section 4.1. Secondly, we conducted experiments on the settings of the GMM hyperparameters in Section 4.2. Lastly, we compared the applicability of GMM to different ViT variants in Section 4.3, focusing on the comparison between GMM and the mainstream local hierarchical ViT (Swin) and deep ViT (CaiT).

### Main Results

**Experimental setup**. We implemented image classification Top-\(1\) accuracy experiments on the small datasets CIFAR-10, CIFAR-100, SVHN, and Tiny-ImageNet using the timm library [41] and Vision Transformer for Small-Size Datasets [23]. The base configuration and training configuration of the vision transformer were adopted for all experiments. Specifically, the patch size was determined based on the size of the input image: \(4\) for \(32\times 32\) images (CIFAR-10, CIFAR-100, and SVHN) and \(8\) for \(64\times 64\) images (Tiny-ImageNet). For the training regime, CutMix [45], Mixup [47], Auto Augment [8], Repeated Augment [9], Label Smoothing [33], Stochastic Depth [19], Random Erase [48], AdamW [21], and a Cosine Learning Rate Scheduler [28] were all employed.
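Before turning to the implementation details, the following minimal numpy sketch of Algorithm 1 and Eq. (9) may make the mechanism concrete. It is an illustrative reimplementation, not the authors' released code (linked in the abstract): the \(\epsilon\) value and the toy dimensions are assumptions, the two kernel parameter pairs are the ones fitted in Figure 4, and the learnability of \(\alpha_{k}\) and \(\sigma_{k}\) is omitted for brevity.

```python
import numpy as np

def gmm_mask(n_patches, alphas, sigmas, eps=1e-6):
    """Build an (n_patches x n_patches) Gaussian mixture mask per Eqs. (1)-(2)."""
    side = int(np.sqrt(n_patches))
    i = np.arange(n_patches)
    dx = np.abs(i[:, None] % side - i[None, :] % side)    # horizontal patch offset
    dy = np.abs(i[:, None] // side - i[None, :] // side)  # vertical patch offset
    M = np.zeros((n_patches, n_patches))
    for a, s in zip(alphas, sigmas):                      # sum of K Gaussian kernels
        M += a * np.exp(-(dx**2 + dy**2) / (2 * s**2 + eps))
    return M

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gmm_attention(Q, K, V, M):
    """GMMAttention(Q,K,V) = Softmax((Q K^T / sqrt(d_k)) o M) V, Eq. (9)."""
    A = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(A * M) @ V

N, d = 64, 32                                             # 8x8 patches, toy head dim
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
M = gmm_mask(N, alphas=[0.6, -0.8], sigmas=[2.0, 0.2])    # the two kernels of Fig. 4
print(gmm_attention(Q, K, V, M).shape)                    # (64, 32)
# Parameter accounting: this GMM costs 2*K = 4 scalars vs N*N = 4096 for an ELM.
```

In a trainable setting, each attention head would simply register its own \(\alpha_{k}\) and \(\sigma_{k}\) as learnable parameters, so the mask adapts per head during training.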
**Implementation details.** GMM can theoretically be added to any ViT with a self-attention mechanism, but the implementation differs among variants. In the vanilla ViT [11], to conform to the GMM matrix, the class token is dropped and global pooling is used for the final classifier. In Swin [27], GMM is applied to each attention window, and the size of the GMM matrix is determined by the size of the window; in the shifted-window attention module, GMM is placed after the scaling operation and before the shifted-window attention mask is applied. In CaiT [36], the specialty is the last class-attention layer, and GMM is adopted only in the self-attention layers preceding the class-attention layer. In PiT [18], the class token is retained and the size of the GMM matrix varies from stage to stage. In the implementation of T2T [44], the T2T transformer structure is preserved, and GMM is inserted directly into the self-attention; in the last transformer structure, the class token is dropped and replaced with global pooling, and GMM is added. The results of the above five ViTs and their GMM variants are presented in Table 1.

### Hyperparameter Setting

**Parameter initialization.** Through simple experiments without deliberate tuning, it was found that better results are obtained when the learnable parameter \(\alpha\) is initialized from a normal distribution with mean 0 and standard deviation 2, and \(\sigma\) from a normal distribution with mean 10 and standard deviation 10: \[\alpha\sim\mathcal{N}(0,4)\] \[\sigma\sim\mathcal{N}(10,100)\] **The number of Gaussian mixture kernels** applied in the model affects its local modeling ability and effectiveness. We train GMM-ViT on the CIFAR-100 and Tiny-ImageNet datasets, varying the number of kernels in each GMM from 1 to 10. Experiments on small datasets show that as the number of Gaussian kernels increases from one, the accuracy of the model rises quickly and eventually saturates. When an appropriate number of kernels is applied to each layer, the generalization performance of the model is better across different types of datasets. The line plots in Figure 6 show that too few Gaussian kernels cannot fulfil the local modeling requirements of the model. When the number of Gaussian kernels reaches around 5, they become redundant in terms of performance on small datasets, and further increasing the number of Gaussian kernels does not improve the performance of the model.

### GMM for Different Variants of ViT

**Comparison to Swin.** After adding a GMM with 150 parameters to a 2.5M ViT, its performance on CIFAR-10 is close to that of a 7.1M Swin Transformer. Applying the GMM to the Swin model still increases its accuracy. From Table 2, we can see that the Top-1 accuracy of the GMM-ViT model is 95.06%, higher than the standard ViT model's 93.65%, and the GMM-Swin model's Top-1 accuracy is 95.42%, higher than the standard Swin model's 95.11%.
This proves that the GMM module has a strong effect in improving the accuracy of the ViT and Swin models. In addition, the numbers of parameters of the GMM-ViT and GMM-Swin models increase by only 150 and 240, respectively, which shows the low-parameter advantage of the GMM module.

\begin{table} \begin{tabular}{l|c c c c|c c c} \hline \hline **Model** & **CIFAR-10** & **CIFAR-100** & **SVHN** & **Tiny-ImageNet** & **Parameters** & **MACs** & **Depth** \\ \hline ViT & 93.65\% & 75.36\% & 97.93\% & 59.89\% & 2.7M & 170.9M & 9 \\ GMM-ViT & **95.06\%** & **77.81\%** & **98.01\%** & **62.27\%** & 2.7M & 170.9M & 9 \\ \hline Swin & 95.26\% & 77.88\% & 97.89\% & 60.45\% & 7.1M & 236.9M & 12 \\ GMM-Swin & **95.39\%** & **78.26\%** & **97.90\%** & **61.03\%** & 7.1M & 236.9M & 12 \\ \hline CaiT & 94.79\% & 78.42\% & 98.13\% & 62.46\% & 5.1M & 305.9M & 26 \\ GMM-CaiT & **95.15\%** & **78.97\%** & 98.09\% & **63.64\%** & 5.1M & 305.9M & 26 \\ \hline PiT & 93.68\% & 72.82\% & 97.78\% & 57.63\% & 7.0M & 239.1M & 12 \\ GMM-PiT & **94.41\%** & **74.16\%** & **97.82\%** & **58.37\%** & 7.0M & 239.1M & 12 \\ \hline T2T & 95.32\% & 78.10\% & 97.99\% & 61.50\% & 6.5M & 417.4M & 13 \\ GMM-T2T & **96.16\%** & **79.91\%** & 97.98\% & **63.33\%** & 6.5M & 417.4M & 13 \\ \hline \hline \end{tabular} \end{table} Table 1: Top-1 accuracies of different ViTs and GMM variants obtained on small datasets (%).

\begin{table} \begin{tabular}{c|c|c} \hline \hline **Model** & **Top-1 Acc** & **Number of Parameters** \\ \hline ViT & 93.65\% & 2.5M \\ Swin & 95.11\% & 7.1M \\ \hline GMM-ViT & 95.06\% & 2.5M (+150 params) \\ GMM-Swin & 95.42\% & 7.1M (+240 params) \\ \hline \hline \end{tabular} \end{table} Table 2: Performance comparison of four models on CIFAR-10. Swin performs better than ViT but at the cost of more than tripling the parameters. The lightweight GMM module helps ViT achieve almost the same performance as Swin with only 150 additional parameters. Swin can be further improved by our GMM module.

Figure 6: Comparison of the Top-1 accuracy of GMM-ViT with different numbers of Gaussian kernels on the CIFAR-100 and Tiny-ImageNet datasets. \(n\) is the number of Gaussian kernels: GMM-ViT with \(n\) kernels denotes the standard ViT model with a GMM composed of \(n\) Gaussian kernels.

**GMM in deep ViTs.** Table 3 investigates the impact of GMM on ViTs of various depths at a comparable parameter scale. When the number of layers is increased and the dimension of the feature vectors is reduced, the accuracy of deep ViT models drops significantly; GMM can significantly improve this situation, with the accuracy of the 60-layer ViT pulled back to the level of the 15-layer standard ViT after adding the GMM. GMM also has the plasticity to accommodate increased model depth. A GMM can be separately generated with a small \(\sigma\) and a dynamically changing \(\alpha\) to modulate the weight a patch assigns to its own information. This kind of mask can in some way enhance the residual connection or offset some negative effects brought by the residual connection. The self-attention mechanism tends to act locally in shallow networks and globally in deep networks [11, 24]. In deep networks, the residual connection helps the next layer retain more information from the previous layer, thus stabilizing the learning process. Such a sharply peaked, small-\(\sigma\) mask can make each patch retain more of its own information after updating, thus achieving a similar effect. In shallow networks, this self-retaining tendency may delay the learning process.
In that case, a sharply peaked mask with a negative \(\alpha\) can offset this effect, thus dynamically regulating the information interaction between layers. In comparison with CaiT, GMM-ViT, as shown in Table 4, can converge to a higher accuracy at deeper layers under the same parameter and depth budget.

## 5 ELM _vs._ GMM

**Attention maps in three settings.** We trained three versions of ViT on Tiny-ImageNet: the standard ViT, ViT with ELM, and ViT with GMM, with the patch size set to 4. The Top-1 accuracies are shown in Table 5. We visualized the last-layer attention maps of the three versions of ViT on the ImageNet dataset, shown in Figure 7, and conclude that GMM-ViT has stronger expressive power than ELM-ViT and the standard ViT in terms of attention maps.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Model** & **CIFAR-10** & **Depth** & **Hidden-dim** & **\#Params** \\ \hline CaiT & 95.38\% & 26(2SA+24CA) & 256 & 9,020,618 \\ GMM-ViT & **95.45\%** & 30 & 192 & 8,917,750 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of the performance of GMM-ViT and CaiT at the same level of parameters and depth on CIFAR-10.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Model** & **CIFAR-10** & **Depth** & **Hidden-dim** & **\#Params** \\ \hline ViT-base & 94.32\% & 6 & 252 & 3,091,798 \\ GMM-ViT & **94.68\%** & 6 & 252 & 3,091,858 \\ \hline ViT-base & 94.17\% & 9 & 192 & 2,692,042 \\ GMM-ViT & **94.87\%** & 9 & 192 & 2,692,096 \\ \hline ViT-base & 93.65\% & 15 & 144 & 2,523,610 \\ GMM-ViT & **95.06\%** & 15 & 144 & 2,523,760 \\ \hline ViT-base & 93.60\% & 30 & 108 & 2,838,790 \\ GMM-ViT & **94.54\%** & 30 & 108 & 2,838,970 \\ \hline ViT-base & 90.91\% & 60 & 72 & 2,531,890 \\ GMM-ViT & **93.32\%** & 60 & 72 & 2,532,250 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of ViTs with different depths on CIFAR-10; the dimension of the hidden vector is controlled to keep the parameter count at the same order of magnitude as the number of layers changes.

Figure 7: In each image group, the **left** image shows the input image, the **second left** image shows the standard ViT's attention map, the **second right** image shows the attention map of ViT with ELM, and the **rightmost** image shows the attention map of ViT with GMM.
We first introduce an Element-wise Learnable Mask that dynamically regulates the locality of the attention mechanism and enhances ViT training from scratch on small datasets, and we summarize two characteristics of the learned masks. Based on these two characteristics, we further propose a Gaussian Mixture Mask, which achieves much higher performance than the Element-wise Learnable Mask while reducing the additional parameter and computation overhead to almost zero.
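To make the parameter counting above concrete, the sketch below is a minimal illustrative reconstruction of a Gaussian-mixture attention mask, not the authors' released code: it assumes the patches lie on a square grid, that the mask is a sum of \(K\) Gaussians of the inter-patch distance with one learnable amplitude \(\alpha_{k}\) and width \(\sigma_{k}\) each (hence \(2\times K\) parameters), and that the mask is added to the attention logits before the softmax; the exact functional form in the paper may differ.

```python
# A minimal sketch of a Gaussian-mixture attention mask (illustrative only).
import torch
import torch.nn as nn

class GaussianMixtureMask(nn.Module):
    def __init__(self, num_kernels: int = 4, grid_size: int = 8):
        super().__init__()
        # 2*K learnable scalars: one amplitude and one width per kernel.
        self.alpha = nn.Parameter(torch.ones(num_kernels))
        self.sigma = nn.Parameter(torch.linspace(1.0, 4.0, num_kernels))
        # Precompute squared Euclidean distances between patch-grid positions.
        ys, xs = torch.meshgrid(torch.arange(grid_size),
                                torch.arange(grid_size), indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
        d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)          # (N, N)
        self.register_buffer("d2", d2)

    def forward(self, attn_logits: torch.Tensor) -> torch.Tensor:
        # Sum of K Gaussians of the inter-patch distance -> one (N, N) mask,
        # added to the attention logits before the softmax (our assumption).
        mask = (self.alpha[:, None, None]
                * torch.exp(-self.d2[None] / (2 * self.sigma[:, None, None] ** 2))).sum(0)
        return attn_logits + mask

# Usage: 64 patches (8x8 grid) masked with 4 kernels = 8 learnable parameters.
gmm = GaussianMixtureMask(num_kernels=4, grid_size=8)
logits = torch.randn(2, 3, 64, 64)   # (batch, heads, N, N) attention logits
print(gmm(logits).shape)             # torch.Size([2, 3, 64, 64])
```

With \(K=4\) kernels the module carries only 8 learnable scalars, which matches the linear \(2\times K\) growth argued above, in contrast to the \(N\times N\) parameters of an ELM.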
2308.16746
Post-experiment coincidence detection techniques for direct detection of two-body correlations
It is a challenge to develop experimental techniques for direct detection of the many-body correlations of strongly correlated electrons, which exhibit a variety of unsolved mysteries. In this article, we present a post-experiment coincidence counting method and propose two post-experiment coincidence detection techniques, post-experiment coincidence angle-resolved photoemission spectroscopy (cARPES) and post-experiment coincidence inelastic neutron scattering (cINS). By coincidence detection of two photoelectric processes or two neutron-scattering processes, the post-experiment coincidence detection techniques can detect directly the two-body correlations of strongly correlated electrons in the particle-particle channel or the two-spin channel. The post-experiment coincidence detection techniques can be implemented upon the pulse-resolved angle-resolved photoemission spectroscopy (ARPES) or inelastic neutron scattering (INS) experimental apparatus with a pulsed photon or neutron source. When implemented experimentally, they will be powerful techniques to study the highly esoteric high-temperature superconductivity and the highly coveted quantum spin liquids.
Dezhong Cao, Yuehua Su
2023-08-31T14:06:23Z
http://arxiv.org/abs/2308.16746v4
# Post-experiment coincidence counting method for coincidence detection techniques ###### Abstract Recently, two coincidence detection techniques, the coincidence angle-resolved photoemission spectroscopy (cARPES) and the coincidence inelastic neutron scattering (cINS), have been proposed to detect directly the two-body correlations of strongly correlated electrons in the particle-particle channel or the two-spin channel. In the original proposals, there is a coincidence detector which records the coincidence probability of two photoelectric processes or two neutron-scattering processes. In this article, we present a _post-experiment_ coincidence counting method for the proposed coincidence detection techniques without a coincidence detector. It requires a time-resolved _pulse_ photon or neutron source. Suppose \(I_{d_{1}}^{(1)}\) records the emitted photoelectron or the scattered neutron arriving at the detector \(D_{1}\), and similarly \(I_{d_{2}}^{(1)}\) records the counting arriving at the detector \(D_{2}\), within one time window between two sequential incident pulses. The coincidence counting can be defined by \(I_{d}^{(2)}=I_{d_{1}}^{(1)}\times I_{d_{2}}^{(1)}\), which records the coincidence probability of two photoelectric processes or two neutron-scattering processes within this time window. Therefore, \(I_{d}^{(2)}\) involves the two-body correlations of the target electrons. The previously proposed cARPES and cINS can be implemented upon the time-resolved angle-resolved photoemission spectroscopy (ARPES) and inelastic neutron scattering (INS) experimental apparatuses with pulse sources. In the field of condensed matter physics, it is a challenge to develop experimental and theoretical methods to study the many-body physics of strongly correlated electrons, which lies beyond the traditional theories [1; 2; 3; 4; 5; 6; 7; 8]. Recently, some coincidence detection techniques have been proposed to detect directly the two-body correlations of strongly correlated electrons. The coincidence angle-resolved photoemission spectroscopy (cARPES) is designed to detect directly the two-body correlations of the target electrons in the particle-particle channel [9; 10], therefore it can be developed to study unconventional superconductivity. The coincidence inelastic neutron scattering (cINS) is proposed to detect directly the two-spin correlations [11], thus it can be developed to investigate novel quantum spin liquids. The original proposals for the coincidence detection techniques are schematically illustrated in Fig. 1 (a). In the original proposal for the cARPES [9], two incident photons excite two photoelectrons which are detected by two single-photoelectron detectors \(D_{1}\) and \(D_{2}\), respectively. An additional coincidence detector \(D_{1\otimes 2}\) records the coincidence counting of the emitted photoelectrons arriving at these two detectors. The cINS is designed similarly to detect the coincidence probability of two neutron-scattering processes [11]. It should be remarked that the two incident photons or neutrons can come from one single time-resolved _pulse_ source. In this case, the coincidence detector records the coincidence probability of two photoelectric processes or two neutron-scattering processes within each finite time window between two sequential pulses. This is the proposal for the _simultaneous_ coincidence detection techniques. 
In this article, we will present a _post-experiment_ coincidence counting method for the coincidence detection techniques without a coincidence detector. The experimental apparatus is schematically shown in Fig. 1 (b). Let us first consider the cARPES. Suppose the incident photons come from a time-resolved _pulse_ source. At times \(t_{n}=t_{0}+n\Delta t_{d}\) with \(n=0,1,2,\cdots,N\), the photon source emits photon pulses sequentially, where \(\Delta t_{d}\) is the time window between two sequential pulses. Each photon-pulse state is a multi-photon state, which will cause many photoelectric processes. Suppose at time \(t_{n-1}\), the photon source emits one photon pulse. Figure 1: Schematic illustration of the coincidence detection techniques: (a) the _simultaneous_ coincidence detection technique [9; 11] with a coincidence detector \(D_{1\otimes 2}\); (b) the _post-experiment_ coincidence detection technique with a time-resolved _pulse_ source and two counting recorders \(R_{1}\) and \(R_{2}\). Here \(D_{1}\) and \(D_{2}\) are two single-photoelectron or single-neutron detectors. At the same time, two counting recorders \(R_{1}\) and \(R_{2}\) begin to record the emitted photoelectrons arriving at the two single-photoelectron detectors \(D_{1}\) and \(D_{2}\), respectively. The \(n\)-th counting window closes before the beginning of the next photon pulse. Define two variables, \(I_{d_{1}}^{(1)}\) and \(I_{d_{2}}^{(1)}\), for the recorded counting data in the two respective recorders \(R_{1}\) and \(R_{2}\). Thus, we have two sequences of recorded counting data, \(\{a_{1}(n),n=1,2,\cdots,N\}\) for \(I_{d_{1}}^{(1)}\) and \(\{a_{2}(n),n=1,2,\cdots,N\}\) for \(I_{d_{2}}^{(1)}\). This is schematically shown in Table 1. With these time-resolved recorded data, we will introduce the following coincidence counting method. The coincidence counting of the \(n\)-th pair, \(a_{1}(n)\) and \(a_{2}(n)\), is defined by \(I_{d}^{(2)}=a_{1}(n)\times a_{2}(n)\), which describes the coincidence probability of the two photoelectric processes within the \(n\)-th time window \(t\in(t_{n-1},t_{n})\). The statistical average of the coincidence probability is defined by \(\langle I_{d}^{(2)}\rangle=\frac{1}{N}\sum_{n=1}^{N}a_{1}(n)\times a_{2}(n)\). It involves the two-body correlations of the target electrons in the particle-particle channel. Define another two statistical averages, \(\langle I_{d_{1}}^{(1)}\rangle=\frac{1}{N}\sum_{n=1}^{N}a_{1}(n)\) and \(\langle I_{d_{2}}^{(1)}\rangle=\frac{1}{N}\sum_{n=1}^{N}a_{2}(n)\). The intrinsic two-body correlations can be obtained by \(I_{d}^{(2,c)}=\langle I_{d}^{(2)}\rangle-\langle I_{d_{1}}^{(1)}\rangle\times \langle I_{d_{2}}^{(1)}\rangle\). This is a _post-experiment_ cARPES coincidence detection technique. All of the above discussions can be similarly made for the cINS, thus we can also have a _post-experiment_ cINS coincidence detection technique. A similar coincidence counting method has been widely used to study multi-photon correlations in quantum optics [12; 13; 14]. One more remark is given on three time scales: \(t_{c}\), the characteristic time scale of the physics we are interested in; \(\Delta t_{p}\), the time width of the pulses; and \(\Delta t_{d}\), the time window between two sequential pulses. 
In order to study the dynamics of the physics we are interested in, we should choose \(\Delta t_{p}\leq t_{c}\), and in order to resolve the two photoelectric processes or two neutron-scattering processes from each pulse, \(\Delta t_{d}\gg t_{c}\) is required. The above _post-experiment_ coincidence counting method is based upon the following coincidence probability expression: \[\overline{\Gamma}^{(2)}=\Gamma^{(2)}\cdot I_{\chi}^{(2)}\cdot I_{d}^{(2)}, \tag{1}\] where \(\Gamma^{(2)}\) is the coincidence probability obtained previously for the cARPES or the cINS [9; 11], which can be regarded as a target-electron form factor, \(I_{\chi}^{(2)}\) defines the incident-particle-state factor, and \(I_{d}^{(2)}=I_{d_{1}}^{(1)}\times I_{d_{2}}^{(1)}\) defines the emitted- or scattered-particle-state factor. It is \(I_{d}^{(2)}=I_{d_{1}}^{(1)}\times I_{d_{2}}^{(1)}\) that makes the _post-experiment_ coincidence counting method scientifically reasonable. Below, we will show that the cARPES and the cINS coincidence detection techniques follow Eq. (1). Let us first consider the cARPES following reference [9]. Suppose the electron-photon interaction [9; 15] relevant to the photoelectric processes is defined by \(V_{A}=\sum_{{\bf k}{\bf q}\sigma\lambda}g_{A}({\bf k};{\bf q},\lambda)d_{{\bf k }+{\bf q}\sigma}^{\dagger}c_{{\bf k}\sigma}a_{{\bf q}\lambda}\), where \(d_{{\bf k}\sigma}^{\dagger}\) is the creation operator for the photoelectron with momentum \({\bf k}\) and spin \(\sigma\), \(c_{{\bf k}\sigma}\) is the annihilation operator for the electron in the target matter, and \(a_{{\bf q}\lambda}\) is the annihilation operator for the photon with momentum \({\bf q}\) and polarization \(\lambda\). Introduce the electron-photon interaction relevant \(S\)-matrix \(S_{A}=T_{t}\exp[-\frac{i}{\hbar}\int_{-\infty}^{+\infty}dtV_{A}^{(I)}(t)]\), where \(V_{A}^{(I)}(t)=\big{[}e^{iH_{A,0}t/\hbar}V_{A}e^{-iH_{A,0}t/\hbar}\cdot F(t) \big{]}\). Here \(T_{t}\) is a time-ordering operator, \(H_{A,0}\) includes the Hamiltonians of the target electrons, the incident photons and the emitted photoelectrons, and \(F(t)=\theta(t+\Delta t_{d}/2)-\theta(t-\Delta t_{d}/2)\) defines one time window for the sequential incident photon pulses, where \(\theta\) is the step function. Suppose the incident photons from the pulse source have momentum \({\bf q}\) and polarization \(\lambda\) with a distribution function \(P_{A,\chi}({\bf q},\lambda)\) and the emitted photoelectron is focused with the fixed momentum \({\bf k}\) and spin \(\sigma\). The photoemission probability in one single-photoelectric process is defined by \[\overline{\Gamma}_{A,IF}^{(1)}=\big{|}\langle\Phi_{A,F}^{(1)}|S_{A}^{(1)}|\Phi _{A,I}^{(1)}\rangle\big{|}^{2}, \tag{2}\] where \(S_{A}^{(1)}\) is the first-order expansion of the \(S_{A}\) matrix, \(|\Phi_{A,I}^{(1)}\rangle=|\Psi_{\alpha}\rangle\otimes|\chi_{i}({\bf q}\lambda) \rangle\otimes|0^{(d)}\rangle\) is the initial state and \(|\Phi_{A,F}^{(1)}\rangle=|\Psi_{\beta}\rangle\otimes|\chi_{f}({\bf q}\lambda) \rangle\otimes|n_{{\bf k}\sigma}^{(d)}\rangle\) is the final state. Here \(|\Psi_{\alpha}\rangle\) and \(|\Psi_{\beta}\rangle\) are the target-electron eigen-states with the respective eigen-energies \(E_{\alpha}\) and \(E_{\beta}\), \(\chi_{i}({\bf q}\lambda)\) and \(\chi_{f}({\bf q}\lambda)\) are the photon initial and final states, and \(n_{{\bf k}\sigma}^{(d)}=0\) or \(1\) is defined for the photoelectron states. 
It should be remarked that \(\overline{\Gamma}_{A,IF}^{(1)}\) defines the photoemission probability of _one_ single-photoelectric process in a realistic ARPES measurement. It can be shown that \[\overline{\Gamma}_{A,IF}^{(1)}=\Gamma_{A,\alpha\beta}^{(1)}\cdot I_{A,\chi}^{(1 )}\cdot I_{A,d}^{(1)}, \tag{3}\] where \(\Gamma_{A,\alpha\beta}^{(1)}\) is a target-electron form factor, \(I_{A,\chi}^{(1)}\) is a photon-state factor and \(I_{A,d}^{(1)}\) is a photoelectron-state factor, the latter two of which are defined by \[I_{A,\chi}^{(1)} = \big{|}\langle\chi_{f}({\bf q}\lambda)|a_{{\bf q}\lambda}|\chi_{i} ({\bf q}\lambda)\rangle\big{|}^{2},\] \[I_{A,d}^{(1)} = \big{|}\langle n_{{\bf k}\sigma}^{(d)}|d_{{\bf k}\sigma}^{\dagger} |0^{(d)}\rangle\big{|}^{2}. \tag{4}\] The statistical average of the photoemission probability is shown to be \[\overline{\Gamma}^{(1)}_{A}=\sum_{IF}P_{A,\chi}({\bf q},\lambda)\cdot\Gamma^{(1)} _{A}\cdot I^{(1)}_{A,\chi}\cdot I^{(1)}_{A,d}, \tag{5}\] where \(\sum_{IF}\equiv\sum_{{\bf q}\lambda\chi_{i}\chi_{f}n^{(d)}}\), and \(\Gamma^{(1)}_{A}\equiv\frac{1}{Z}\sum_{\alpha\beta}e^{-\beta E_{\alpha}}\Gamma ^{(1)}_{A,\alpha\beta}\) defines the photoemission probability of the ARPES obtained previously [9], \[\Gamma^{(1)}_{A}=\frac{|g_{A}|^{2}\Delta t_{d}}{\hbar}A({\bf k}-{\bf q},\sigma; E^{(1)}_{A})\cdot n_{F}(E^{(1)}_{A}). \tag{6}\] Here \(A({\bf k},\sigma;E)=-2\,{\rm Im}G_{\sigma}({\bf k},E+i\delta^{+})\) is the spectrum function of the Green's function \(G_{\sigma}({\bf k},\tau)=-\langle T_{\tau}c^{\dagger}_{{\bf k}\sigma}(\tau)c_ {{\bf k}\sigma}(0)\rangle\), \(n_{F}(E)\) is the Fermi-Dirac distribution function, \(g_{A}\equiv g_{A}({\bf k}-{\bf q};{\bf q},\lambda)\), and \(E^{(1)}_{A}\) is the transferred energy in the photoelectric process. \(E^{(1)}_{A}\) is defined by \(E^{(1)}_{A}=\varepsilon^{(d)}_{\bf k}+\Phi-\hbar\omega_{\bf q}\), where \(\varepsilon^{(d)}_{\bf k}\) is the photoelectron energy, \(\Phi\) is the work function, and \(\hbar\omega_{\bf q}\) is the photon energy. It should be remarked that the photoelectron-state factor \(I^{(1)}_{A,d}\) allows us to obtain the _absolute_ counting for the photoemission probability in a realistic ARPES measurement, with zero counting when \(n^{(d)}_{{\bf k}\sigma}=0\) and \(I^{(1)}_{A,d}=0\), and finite counting when \(n^{(d)}_{{\bf k}\sigma}=1\) and \(I^{(1)}_{A,d}=1\). This is different from the usual ARPES measurement, where only the signals with \(n^{(d)}_{{\bf k}\sigma}=1\) and \(I^{(1)}_{A,d}=1\) are recorded and only the _relative_ photoemission probability can be obtained. Moreover, as shown below, this trick of introducing the photoelectron-state factors leads us to understand why the coincidence counting method presented above is scientifically sound for the cARPES. Let us now consider the coincidence detection of the two photoelectric processes within one time window between two sequential pulses, where the incident photons still have momentum \({\bf q}\) and polarization \(\lambda\) with the distribution function \(P_{A,\chi}({\bf q},\lambda)\) and the two emitted photoelectrons have fixed momenta and spins (\({\bf k}_{1}\sigma_{1}\)) and (\({\bf k}_{2}\sigma_{2}\)), respectively. 
The coincidence probability of the given two photoelectric processes is defined by \[\overline{\Gamma}^{(2)}_{A,IF}=\big{|}\langle\Phi^{(2)}_{A,F}|S^{(2)}_{A}| \Phi^{(2)}_{A,I}\rangle\big{|}^{2}, \tag{7}\] where \(S^{(2)}_{A}\) is the second-order expansion of the \(S_{A}\) matrix [9], and \(|\Phi^{(2)}_{A,I}\rangle\) and \(|\Phi^{(2)}_{A,F}\rangle\) are the corresponding initial and final states, which are defined by \(|\Phi^{(2)}_{A,I}\rangle=|\Psi_{\alpha}\rangle\otimes|\chi_{i}({\bf q} \lambda)\rangle\otimes|0^{(d)}\rangle\) and \(|\Phi^{(2)}_{A,F}\rangle=|\Psi_{\beta}\rangle\otimes|\chi_{f}({\bf q}\lambda) \rangle\otimes|n^{(d)}_{{\bf k}_{1}\sigma_{1}}n^{(d)}_{{\bf k}_{2}\sigma_{2}}\rangle\). Similarly, \(\overline{\Gamma}^{(2)}_{A,IF}\) can be shown to follow \[\overline{\Gamma}^{(2)}_{A,IF}=\Gamma^{(2)}_{A,\alpha\beta}\cdot I^{(2)}_{A, \chi}\cdot I^{(2)}_{A,d}, \tag{8}\] where the target-electron form factor \(\Gamma^{(2)}_{A,\alpha\beta}\) follows \[\Gamma^{(2)}_{A,\alpha\beta}=\frac{|g_{A,1}g_{A,2}|^{2}}{\hbar^{4}}\big{|} \Phi^{(2)}_{A,\alpha\beta}({\bf k}_{A,1}\sigma_{1},{\bf k}_{A,2}\sigma_{2}; \Omega_{A},\omega_{A})\big{|}^{2} \tag{9}\] with \({\bf k}_{A,1}\equiv{\bf k}_{1}-{\bf q}\), \({\bf k}_{A,2}\equiv{\bf k}_{2}-{\bf q}\), \(g_{A,1}\equiv g_{A}({\bf k}_{A,1};{\bf q},\lambda)\) and \(g_{A,2}\equiv g_{A}({\bf k}_{A,2};{\bf q},\lambda)\). Here, in order to describe the coincidence probability \(\Gamma^{(2)}_{A,\alpha\beta}\), we have introduced a two-body Bethe-Salpeter wave function [16; 17], \[\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1}t_{1};{\bf k}_{2}\sigma_{2}t_{ 2})=\langle\Psi_{\beta}|T_{t}c_{{\bf k}_{2}\sigma_{2}}(t_{2})c_{{\bf k}_{1} \sigma_{1}}(t_{1})|\Psi_{\alpha}\rangle. \tag{10}\] Introducing \(t_{c}=\frac{1}{2}(t_{1}+t_{2})\) and \(t_{r}=t_{2}-t_{1}\), we obtain another expression of the two-body Bethe-Salpeter wave function, \(\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1},{\bf k}_{2}\sigma_{2};t_{c},t_ {r})=\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1}t_{1};{\bf k}_{2}\sigma_{2 }t_{2})\). \(\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1},{\bf k}_{2}\sigma_{2};\Omega,\omega)\) is the Fourier transformation of \(\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1},{\bf k}_{2}\sigma_{2};t_{c},t_ {r})\), defined as follows: \[\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1},{\bf k}_{2}\sigma_{2};\Omega,\omega)=\iint_{-\infty}^{+\infty}dt_{c}dt_{r}\,\Phi^{(2)}_{A,\alpha\beta}({\bf k}_{1}\sigma_{1},{\bf k}_{2}\sigma_{2};t_{c},t_{r})e^{i\Omega t_{c}+i\omega t_{r}}. \tag{11}\] In Eq. (9), the center-of-mass frequency \(\Omega_{A}\) and the relative frequency \(\omega_{A}\) are defined by \(\Omega_{A}=\frac{1}{\hbar}(E_{A,1}+E_{A,2}),\ \omega_{A}=\frac{1}{2\hbar}(E_{A,2}-E_{A,1})\), where the two transferred energies in the two photoelectric processes are defined by \(E_{A,1}=\varepsilon^{(d)}_{{\bf k}_{1}}+\Phi-\hbar\omega_{\bf q}\) and \(E_{A,2}=\varepsilon^{(d)}_{{\bf k}_{2}}+\Phi-\hbar\omega_{\bf q}\), respectively. In Eq. 
(8), the photon-state factor \(I^{(2)}_{A,\chi}\) is defined by \[I^{(2)}_{A,\chi}=\big{|}\langle\chi_{f}({\bf q}\lambda)|a^{2}_{{\bf q}\lambda}| \chi_{i}({\bf q}\lambda)\rangle\big{|}^{2}, \tag{12}\] and the photoelectron-state factor \(I^{(2)}_{A,d}\) is defined as \[I^{(2)}_{A,d}=I^{(1)}_{A,d_{1}}\times I^{(1)}_{A,d_{2}}, \tag{13}\] where \[I^{(1)}_{A,d_{1}} = \big{|}\langle n^{(d)}_{{\bf k}_{1}\sigma_{1}}|d^{\dagger}_{{\bf k }_{1}\sigma_{1}}|0^{(d)}\rangle\big{|}^{2},\] \[I^{(1)}_{A,d_{2}} = \big{|}\langle n^{(d)}_{{\bf k}_{2}\sigma_{2}}|d^{\dagger}_{{\bf k }_{2}\sigma_{2}}|0^{(d)}\rangle\big{|}^{2}. \tag{14}\] The statistical average of the coincidence probability of the two photoelectric processes from the sequential photon pulses is given by \[\overline{\Gamma}^{(2)}_{A}=\frac{1}{Z}\sum_{IF}e^{-\beta E_{\alpha}}P_{A,\chi}({ \bf q},\lambda)\cdot\Gamma^{(2)}_{A,\alpha\beta}\cdot I^{(2)}_{A,\chi}\cdot I^{(2)} _{A,d}, \tag{15}\] where \(\sum_{IF}\equiv\sum_{\alpha\beta}\sum_{{\bf q}\lambda\chi_{i}\chi_{f}}\sum_{n^{(d)}_{1} n^{(d)}_{2}}\). It should be remarked that \(\overline{\Gamma}^{(2)}_{A,IF}\) has the same structure as \(\overline{\Gamma}^{(2)}\) in Eq. (1), thus the cARPES can be designed into a _post-experiment_ coincidence detection technique. Let us now turn to the cINS following the reference [11]. The spin-dependent electron-neutron interaction relevant to the neutron-scattering processes can be written as \(V_{B}=\sum_{\mathbf{q}_{i}\mathbf{q}_{f}\sigma_{i}\sigma_{f}}g(\mathbf{q})\,f_{\mathbf{q}_{f}\sigma_{f}}^{\dagger}f_{\mathbf{q}_{i}\sigma_{i}}\,\langle\sigma_{f}|\boldsymbol{\tau}|\sigma_{i}\rangle\cdot\mathbf{S}_{\perp}(\mathbf{q})\), with the transferred momentum \(\mathbf{q}=\mathbf{q}_{f}-\mathbf{q}_{i}\). Here \(f_{\mathbf{q}\sigma}\) and \(f_{\mathbf{q}\sigma}^{\dagger}\) are the neutron annihilation and creation operators with momentum \(\mathbf{q}\) and spin \(\sigma\), \(\mathbf{\tau}\) is the Pauli matrix, and \(\mathbf{S}_{\perp}(\mathbf{q})\) is the target-electron spin relevant operator. \(\mathbf{S}_{\perp}(\mathbf{q})\) is defined as \(\mathbf{S}_{\perp}(\mathbf{q})=\mathbf{S}(\mathbf{q})-\widehat{\mathbf{q}} \left[\mathbf{S}(\mathbf{q})\cdot\widehat{\mathbf{q}}\right]\), where \(\mathbf{S}(\mathbf{q})=\sum_{l}\mathbf{S}_{l}e^{-i\mathbf{q}\cdot\mathbf{R}_{ l}}\) with \(\mathbf{S}_{l}\) being the target-electron spin operator at position \(\mathbf{R}_{l}\). The electron-neutron scattering \(S\)-matrix is defined by \(S_{B}=T_{t}\exp[-\frac{i}{\hbar}\int_{-\infty}^{+\infty}dtV_{B}^{(I)}(t)]\), where \(V_{B}^{(I)}(t)=\left[e^{iH_{B,0}t/\hbar}V_{B}e^{-iH_{B,0}t/\hbar}\cdot F(t)\right]\) with \(H_{B,0}\) being the Hamiltonian of the combined neutrons and target-electron spin system. Consider a single neutron-scattering process in an INS measurement with the initial state \(|\Phi_{B,I}^{(1)}\rangle=|\Psi_{\alpha}\rangle\otimes|n_{\mathbf{q}_{i}\sigma _{i}}\rangle\) and the final state \(|\Phi_{B,F}^{(1)}\rangle=|\Psi_{\beta}\rangle\otimes|n_{\mathbf{q}_{f}\sigma _{f}}\rangle\). Here \(n_{\mathbf{q}\sigma}=0\) or \(1\) is defined for the neutron states. The scattering probability of this single neutron-scattering process can be defined by \(\overline{\Gamma}_{B,IF}^{(1)}=\left|\langle\Phi_{B,F}^{(1)}|S_{B}^{(1)}|\Phi_ {B,I}^{(1)}\rangle\right|^{2}\), where \(S_{B}^{(1)}\) is the first-order expansion of the \(S_{B}\) matrix. Following the above procedure for the ARPES and the previous derivation for the INS [11], we can show that \[\overline{\Gamma}_{B,IF}^{(1)}=\Gamma_{B,\alpha\beta}^{(1)}\cdot I_{B,\chi}^{( 1)}\cdot I_{B,d}^{(1)}, \tag{16}\] where \(\Gamma_{B,\alpha\beta}^{(1)}\) is a target-electron spin form factor, \(I_{B,\chi}^{(1)}=\left|\langle 0|f_{\mathbf{q}_{i}\sigma_{i}}|n_{\mathbf{q}_{i} \sigma_{i}}\rangle\right|^{2}\) defines the incident-neutron-state factor, and \(I_{B,d}^{(1)}=\left|\langle n_{\mathbf{q}_{f}\sigma_{f}}|f_{\mathbf{q}_{f}\sigma_{f}}^{\dagger}|0\rangle\right|^{2}\) defines the scattered-neutron-state factor. 
Suppose the incident neutrons from the neutron pulses follow a distribution \(P_{B,\chi}^{(1)}(\mathbf{q}_{i},\sigma_{i})=P_{B,\chi}^{(1)}(\mathbf{q}_{i}) \cdot P_{B,\chi}^{(1)}(\sigma_{i})\), where the neutron spins are in the mixed states defined by \(\sum_{\sigma_{i}}P_{B,\chi}^{(1)}(\sigma_{i})|\sigma_{i}\rangle\langle\sigma_{ i}|=\frac{1}{2}\left(|\uparrow\rangle\langle\uparrow|+|\downarrow\rangle \langle\downarrow|\right)\), and suppose the scattered neutrons arriving at the single-neutron detector have fixed momentum \(\mathbf{q}_{f}\) but arbitrary spin \(\sigma_{f}\). The statistical average of the single neutron-scattering probability for INS can be shown to follow \[\overline{\Gamma}_{B}^{(1)}=\sum_{IF}P_{B,\chi}^{(1)}(\mathbf{q}_{i})\cdot \Gamma_{B}^{(1)}\cdot I_{B,\chi}^{(1)}\cdot I_{B,d}^{(1)}, \tag{17}\] where \(\sum_{IF}\equiv\sum_{\mathbf{q}_{i}n_{i}n_{f}}\), and \(\Gamma_{B}^{(1)}\equiv\frac{1}{Z}\sum_{\alpha\beta}e^{-\beta E_{\alpha}}\Gamma _{B,\alpha\beta}^{(1)}\) follows \[\Gamma_{B}^{(1)}=\frac{|g(\mathbf{q})|^{2}\Delta t_{d}}{\hbar}\chi_{B}(\mathbf{ q},E_{B}^{(1)})\cdot n_{B}(E_{B}^{(1)}). \tag{18}\] Here \(\chi_{B}(\mathbf{q},E)=-2\,\mathrm{Im}D(\mathbf{q},E+i\delta^{+})\) is the spectrum function of the target-electron spin Green's function \(D(\mathbf{q},\tau)=-\sum_{ij}\langle T_{\tau}\mathbf{S}_{i}(\mathbf{q},\tau)\mathbf{S}_{j}( \mathbf{q},0)\rangle(\delta_{ij}-\widehat{\mathbf{q}}_{i}\widehat{\mathbf{q}}_{j})\), and \(n_{B}(E)\) is the Bose-Einstein distribution function. In the derivations of Eqs. (17) and (18), we have used the identity \(\frac{1}{2}\sum_{\sigma_{i}\sigma_{f}}\langle\sigma_{i}|\tau^{l}| \sigma_{f}\rangle\langle\sigma_{f}|\tau^{l^{\prime}}|\sigma_{i}\rangle=\delta _{ll^{\prime}}\) for the non-polarized neutrons. Let us now consider the coincidence probability of two neutron-scattering processes for cINS [11]. For one coincidence detection with the initial state \(|\Phi_{B,I}^{(2)}\rangle=|\Psi_{\alpha}\rangle\otimes|n_{\mathbf{q}_{i_{1}} \sigma_{i_{1}}}n_{\mathbf{q}_{i_{2}}\sigma_{i_{2}}}\rangle\) and the final state \(|\Phi_{B,F}^{(2)}\rangle=|\Psi_{\beta}\rangle\otimes|n_{\mathbf{q}_{f_{1}} \sigma_{f_{1}}}n_{\mathbf{q}_{f_{2}}\sigma_{f_{2}}}\rangle\), the coincidence probability of the two neutron-scattering processes from one neutron pulse, defined by \(\overline{\Gamma}_{B,IF}^{(2)}=\left|\langle\Phi_{B,F}^{(2)}|S_{B}^{(2)}|\Phi_ {B,I}^{(2)}\rangle\right|^{2}\) with \(S_{B}^{(2)}\) being the second-order expansion of the \(S_{B}\) matrix, can be shown to follow \[\overline{\Gamma}_{B,IF}^{(2)}=\Gamma_{B,\alpha\beta}^{(2)}\cdot I_{B,\chi}^{(2 )}\cdot I_{B,d}^{(2)}, \tag{19}\] where \(\Gamma_{B,\alpha\beta}^{(2)}\) is a target-electron spin form factor, \(I_{B,\chi}^{(2)}=\left|\langle 0|f_{\mathbf{q}_{i_{1}}\sigma_{i_{1}}}|n_{ \mathbf{q}_{i_{1}}\sigma_{i_{1}}}\rangle\right|^{2}\cdot\left|\langle 0|f_{\mathbf{q}_{i_{2}}\sigma_{i_{2}}}|n_{\mathbf{q}_{i_{2}} \sigma_{i_{2}}}\rangle\right|^{2}\) is an incident-neutron-state factor, and \(I_{B,d}^{(2)}\) is a scattered-neutron-state factor defined by \[I_{B,d}^{(2)}=I_{B,d_{1}}^{(1)}\times I_{B,d_{2}}^{(1)}, \tag{20}\] where \(I_{B,d_{1}}^{(1)}=\left|\langle n_{\mathbf{q}_{f_{1}}\sigma_{f_{1}}}|f_{ \mathbf{q}_{f_{1}}\sigma_{f_{1}}}^{\dagger}|0\rangle\right|^{2}\) and \(I_{B,d_{2}}^{(1)}=\left|\langle n_{\mathbf{q}_{f_{2}}\sigma_{f_{2}}}|f_{ \mathbf{q}_{f_{2}}\sigma_{f_{2}}}^{\dagger}|0\rangle\right|^{2}\). It is remarked that \(\overline{\Gamma}_{B,IF}^{(2)}\) has the same structure as Eq. (1). 
Suppose the two incident neutrons from the sequential neutron pulses have momentum and spin distribution functions \(P_{B,\chi}^{(2)}(\mathbf{q}_{i_{1}},\mathbf{q}_{i_{2}})=P_{B,\chi}^{(1)}( \mathbf{q}_{i_{1}})\cdot P_{B,\chi}^{(1)}(\mathbf{q}_{i_{2}})\) and \(P_{B,\chi}^{(2)}(\sigma_{i_{1}},\sigma_{i_{2}})=P_{B,\chi}^{(1)}(\sigma_{i_{1}}) \cdot P_{B,\chi}^{(1)}(\sigma_{i_{2}})\). Here \(P_{B,\chi}^{(1)}(\sigma_{i})\) is defined as in the above INS case for the neutron-spin mixed states. Suppose the two scattered neutrons have fixed momenta \((\mathbf{q}_{f_{1}},\mathbf{q}_{f_{2}})\) but arbitrary spins \((\sigma_{f_{1}},\sigma_{f_{2}})\). The statistical average of the coincidence probability of the two neutron-scattering processes from the sequential neutron pulses can be shown to follow \[\overline{\Gamma}_{B}^{(2)}=\sum_{IF}P_{B,\chi}^{(2)}(\mathbf{q}_{i_{1}},\mathbf{ q}_{i_{2}})\cdot\Gamma_{B}^{(2)}\cdot I_{B,\chi}^{(2)}\cdot I_{B,d}^{(2)}, \tag{21}\] where \(\sum_{IF}\equiv\sum_{\mathbf{q}_{i_{1}}\mathbf{q}_{i_{2}}}\sum_{n_{i_{1}}n_{i_{2}} n_{f_{1}}n_{f_{2}}}\), and \(\Gamma_{B}^{(2)}\equiv\frac{1}{Z}\sum_{\alpha\beta}e^{-\beta E_{\alpha}}\Gamma_{B, \alpha\beta}^{(2)}\). The target-electron spin form factor can be expressed through the two-spin Bethe-Salpeter-like wave functions \(\phi_{\alpha\beta}^{(ij)}(\mathbf{q}_{1}t_{1},\mathbf{q}_{2}t_{2})\) with \(t_{c}=\frac{1}{2}(t_{1}+t_{2})\) and \(t_{r}=t_{2}-t_{1}\). The two contributions, \(\Gamma_{B,1}^{(2)}\) and \(\Gamma_{B,2}^{(2)}\), come from two different classes of microscopic neutron-scattering processes, the former corresponding to the neutron-state changes \(|\mathbf{q}_{i_{1}}\sigma_{i_{1}}\rangle\rightarrow|\mathbf{q}_{f_{1}}\sigma _{f_{1}}\rangle\) and \(|\mathbf{q}_{i_{2}}\sigma_{i_{2}}\rangle\rightarrow|\mathbf{q}_{f_{2}}\sigma _{f_{2}}\rangle\), and the latter to \(|\mathbf{q}_{i_{1}}\sigma_{i_{1}}\rangle\rightarrow|\mathbf{q}_{f_{2}}\sigma _{f_{2}}\rangle\) and \(|\mathbf{q}_{i_{2}}\sigma_{i_{2}}\rangle\rightarrow|\mathbf{q}_{f_{1}}\sigma _{f_{1}}\rangle\). In Eq. (23), the transferred momenta are defined by \(\mathbf{q}_{1}=\mathbf{q}_{f_{1}}-\mathbf{q}_{i_{1}}\), \(\mathbf{q}_{2}=\mathbf{q}_{f_{2}}-\mathbf{q}_{i_{2}}\), \(\overline{\mathbf{q}}_{1}=\mathbf{q}_{f_{1}}-\mathbf{q}_{i_{2}}\), \(\overline{\mathbf{q}}_{2}=\mathbf{q}_{f_{2}}-\mathbf{q}_{i_{1}}\), and the transferred frequencies are defined by \(\Omega_{B}=\frac{1}{\hbar}(E_{B,1}+E_{B,2}),\ \omega_{B}=\frac{1}{2\hbar}(E_{B,2 }-E_{B,1}),\ \overline{\Omega}_{B}=\frac{1}{\hbar}(\overline{E}_{B,1}+\overline{E}_{B,2}),\ \overline{\omega}_{B}=\frac{1}{2\hbar}(\overline{E}_{B,2}-\overline{E}_{B,1})\), where the transferred energies are defined as \(E_{B,1}=\mathcal{E}(\mathbf{q}_{f_{1}})-\mathcal{E}(\mathbf{q}_{i_{1}}),\ E_{B,2 }=\mathcal{E}(\mathbf{q}_{f_{2}})-\mathcal{E}(\mathbf{q}_{i_{2}}),\ \overline{E }_{B,1}=\mathcal{E}(\mathbf{q}_{f_{1}})-\mathcal{E}(\mathbf{q}_{i_{2}}),\ \overline{E}_{B,2}=\mathcal{E}(\mathbf{q}_{f_{2}})-\mathcal{E}(\mathbf{q}_{i_ {1}})\). Here \(\mathcal{E}(\mathbf{q}_{i})\) and \(\mathcal{E}(\mathbf{q}_{f})\) are the incident and the scattered neutron energies, respectively. The two constants \(C_{1}\) and \(C_{2}\) are given by \(C_{1}=\frac{1}{\hbar^{2}}|g(\mathbf{q}_{1})g(\mathbf{q}_{2})|^{2}\) and \(C_{2}=\frac{1}{\hbar^{2}}|g(\overline{\mathbf{q}}_{1})g(\overline{\mathbf{q}} _{2})|^{2}\). From the coincidence probabilities of the two neutron-scattering processes in cINS, \(\overline{\Gamma}_{B,IF}^{(2)}\) and \(\overline{\Gamma}_{B}^{(2)}\), it is clear that the cINS can be designed into a _post-experiment_ coincidence detection technique. 
In summary, we have shown that the coincidence probabilities of the cARPES and the cINS both follow Eq. (1), thus the cARPES and the cINS can be designed into _post-experiment_ coincidence detection techniques. When there is a time-resolved photon or neutron pulse source, the coincidence detection by the cARPES or the cINS can be implemented upon the time-resolved ARPES or INS experimental apparatuses. _Acknowledgements_ We thank Prof. Yuan Li and Prof. Shan Qiao for invaluable discussions. This work was supported by the National Natural Science Foundation of China (Grants No. 11774299 and No. 11874318) and the Natural Science Foundation of Shandong Province (Grant No. ZR2023MA015).
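In practice, the post-experiment counting reduces to simple arithmetic over the per-window records \(\{a_{1}(n)\}\) and \(\{a_{2}(n)\}\). The sketch below is our toy illustration with synthetic records (the pair and singles rates are hypothetical, not measured data); it extracts the coincidence average, the accidental background, and the connected part \(I_{d}^{(2,c)}\) defined earlier:

```python
# Toy post-experiment coincidence counting over N pulse windows.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                       # number of pulse windows

# Toy model: with probability p_pair a window yields a correlated pair
# (both detectors fire); independent singles occur with probability p_single.
p_pair, p_single = 0.02, 0.05     # hypothetical rates for the illustration
pair = rng.random(N) < p_pair
a1 = (pair | (rng.random(N) < p_single)).astype(int)   # record R1
a2 = (pair | (rng.random(N) < p_single)).astype(int)   # record R2

avg_I2  = np.mean(a1 * a2)        # <I_d^(2)>, coincidence probability
avg_I1a = np.mean(a1)             # <I_d1^(1)>
avg_I1b = np.mean(a2)             # <I_d2^(1)>
connected = avg_I2 - avg_I1a * avg_I1b   # I_d^(2,c), intrinsic two-body part

print(f"<I2> = {avg_I2:.5f}, accidental = {avg_I1a*avg_I1b:.5f}, "
      f"connected = {connected:.5f} (true pair rate {p_pair})")
```

The connected average suppresses the accidental product of the two singles rates and recovers, up to small corrections, the injected pair rate, which is exactly the role of \(I_{d}^{(2,c)}=\langle I_{d}^{(2)}\rangle-\langle I_{d_{1}}^{(1)}\rangle\langle I_{d_{2}}^{(1)}\rangle\) in the text.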
2308.00124
Doped 2D diamond: properties and applications
In the present paper, we investigate the structural, thermodynamic, dynamic, elastic, and electronic properties of doped 2D diamond C$_4$X$_2$ (X = B or N) nanosheets in both AA$'$A$''$ and ABC stacking configurations, by first-principles calculations. Those systems are composed of 3 diamond-like graphene sheets, with an undoped graphene layer between two 50% doped ones. Our results, based on the analysis of ab-initio molecular dynamics simulations, phonon dispersion spectra, and Born's criteria for mechanical stability, revealed that all four structures are stable. Additionally, their standard enthalpy of formation values are similar to that of pristine 2D diamond, recently synthesized by compressing three graphene layers. The C$_4$X$_2$ (X = B or N) systems exhibit high elastic constant values and stiffness comparable to that of diamond. The C$_4$N$_2$ nanosheets present wide indirect band gaps that could be advantageous for applications similar to those of hexagonal boron nitride (h-BN), such as a substrate for high-mobility 2D devices. On the other hand, the C$_4$B$_2$ systems are semiconductors with direct band gaps, in the 1.6 - 2.0 eV range, and small effective masses, which are characteristics that may be favorable to high carrier mobility and optoelectronics applications.
Bruno Ipaves, João F. Justo, Biplab Sanyal, Lucy V. C. Assali
2023-07-31T19:46:00Z
http://arxiv.org/abs/2308.00124v2
# Doped two-dimensional diamond: properties and potential applications ###### Abstract This paper examines the structural, thermodynamic, dynamic, elastic, and electronic properties of doped 2D diamond C\({}_{4}\)X\({}_{2}\) (X = B or N) nanosheets in both AA\({}^{\prime}\)A\({}^{\prime\prime}\) and ABC stacking configurations, by first-principles calculations. Those systems consist of three diamond-like graphene sheets, with an undoped graphene layer between two 50% doped ones. Our results, based on the analysis of _ab-initio_ molecular dynamics simulations, phonon dispersion spectra, and Born's criteria for mechanical stability, revealed that all four structures are stable. Additionally, their standard enthalpy of formation values are similar to that of the pristine 2D diamond, recently synthesized by compressing three graphene layers together. The C\({}_{4}\)X\({}_{2}\) (X = B or N) systems exhibit high elastic constant values and stiffness comparable to that of diamond. The C\({}_{4}\)N\({}_{2}\) nanosheets present wide indirect band gaps that could be advantageous for applications similar to those of hexagonal boron nitride (h-BN), such as a substrate for high-mobility 2D devices. On the other hand, the C\({}_{4}\)B\({}_{2}\) systems are semiconductors with direct band gaps, in the 1.6 - 2.0 eV range, and small effective masses, which are characteristics favorable to high carrier mobility and optoelectronics applications. ## I Introduction Graphene is the most popular two-dimensional (2D) material, being a zero-gap semimetal with a honeycomb carbon structure and \(sp^{2}\) hybridization. It carries a unique combination of physical properties in nature, such as high electrical conductivity, tensile strength, and optical transparency. Additionally, it is the elementary structure for several other nanomaterials, such as fullerenes, nanotubes, graphite, and the single-layer diamond (2D diamond) [1; 2; 3]. As a result of recent developments in the synthesis and characterization of 2D materials, the 2D diamond has received great attention, with promising applications in several fields, such as batteries, quantum computing, nano-optics, and nanoelectronics [4]. The stabilization of 2D diamond often requires surface functionalization, leading to a variety of structures, which have received different labels, such as diamane, diamene, diamond, and diamondene [4; 5]. 2D diamonds can also be built out of bilayer graphene (BLG) or few-layer graphene (FLG) through different techniques. For example, the hydrogenated (HD) and fluorinated (FD) 2D diamonds can be synthesized at ambient pressure without a substrate, in which the HD can be produced using hot filament chemical vapor deposition (CVD) [6], while FD can be produced by combining FLG and gaseous ClF\({}_{3}\)[7]. The pristine 2D diamond (PD) is hard to synthesize as high pressures are required to transform \(sp^{2}\) bonds from graphene layers into interlayer \(sp^{3}\) ones [5]. Nevertheless, the PD has recently been synthesized without a substrate, by compressing three graphene layers [8]. Additionally, a theoretical investigation has shown that it is possible to stabilize the 2D diamond made of two graphene layers with nitrogen substitution [9]. For example, the NCCN 2D structure, composed of two carbon layers functionalized with nitrogen ones on both sides, has also been investigated, suggesting it could be used as a selective ammonia sensor [10; 11; 12]. 
The physical properties of 2D diamonds may vary considerably, depending on the synthesis methods, leading to structures with different configurations, functional groups, and heteroatoms [4; 5]. At room temperature, the thermal conductivity of HD is high and the heat transport arises from the acoustic phonon modes. On the other hand, under the same conditions, the thermal conductivity of FD is lower than that of HD and the heat transport is controlled by the optical phonon modes [13]. 2D diamonds also present remarkable mechanical properties, with stiffness and Young's modulus similar to those of graphene and bulk diamond [14]. Furthermore, unlike graphene, 2D diamonds have band gap features that depend on the stacking arrangement, the number of layers, and the functional groups present in the structures [4; 15]. Despite several recent experimental and theoretical investigations on 2D diamonds, the origin of all these peculiar properties has been the subject of debate [4; 5]. In this paper, we present a study of the physical properties of 2D diamonds doped with substitutional N or B atoms. The reference systems consist of three graphene sheets: an undoped graphene layer between two 50% doped ones, where the C-C bonds between neighboring layers are strong covalent bonds. Here, we considered four structure configurations labeled AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\), ABC-C\({}_{4}\)N\({}_{2}\), AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\), and ABC-C\({}_{4}\)B\({}_{2}\). Their structural, thermodynamic, dynamic, elastic, and electronic properties and potential applications are explored and discussed in depth. ## II Computational methods This investigation was performed using first-principles calculations based on Density Functional Theory (DFT) [16], using the plane-wave basis set and projector augmented-wave (PAW) method [17], as implemented in the Quantum ESPRESSO computational package [18; 19]. We utilized the Perdew-Burke-Ernzerhof generalized gradient approximation (GGA-PBE) for the exchange-correlation functional [20] and the Dion _et al._ scheme [21] optimized by Klimes _et al._ (optB88-vdW [22]) to properly describe the effects of the dispersive van der Waals (vdW) interactions. For an accurate description of the energy band gap values, we employed the hybrid Heyd-Scuseria-Ernzerhof (HSE) functional [23] on the relaxed structures obtained from the optB88-vdW approximation. The plane-wave energy cutoff was set to 1100 eV with a convergence threshold of 0.1 meV/atom for the total energy. We used a \(16\times 16\times 1\)\(k\)-point mesh to describe the irreducible Brillouin zone [24], and the forces on atoms were converged down to 1 meV/Å. To obtain the phonon dispersion curves, we used the Density Functional Perturbation Theory (DFPT) [25] with an \(8\times 8\times 1\)\(q\)-point mesh. The primitive hexagonal cells of the 2D structures were constructed using 6 atoms. To determine the cell parameters in the \(xy\)-plane, a variable-cell optimization was carried out with the BFGS quasi-Newton algorithm. In order to avoid interactions among cell images, the lattice parameter perpendicular to the sheets (\(z\)-axis) was fixed at 25 Å. This approach has been successfully applied to similar 2D systems in previous studies [26; 10; 27]. In order to determine the elastic properties of the systems, we built a rectangular cell with 12 atoms and used the strain-energy method [28; 29]. 
Accordingly, for isotropic structures and small deformations (\(\epsilon\)) near their equilibrium configurations, the elastic energy, per unit area, was approximated as \[E(\epsilon)-E(0)\approx\frac{1}{2}E^{(2)}\epsilon^{2}, \tag{1}\] where \(E(\epsilon)\) is the total energy of strained configurations, while \(E(0)\) is the total energy of the respective unstrained ones. We applied two in-plane deformations, ranging from -1.2% to 1.2%, in order to obtain \(E^{(2)}\), which allowed us to obtain the elastic constants after fitting a second-order polynomial to the data. Herein, \(E^{(2)}=C_{11}\) for the zigzag axial deformation, while \(E^{(2)}=2(C_{11}+C_{12})\) for the biaxial planar deformation [28; 29]. The thermal stability was studied by computing the standard enthalpy of formation, per atom, of the structures at 0 GPa (\(\Delta H_{f}^{0}\)), given by \[\Delta H_{f}^{0}=\frac{E_{t}(\mathrm{C_{4}X_{2}})-4E_{t}(\mathrm{C})-2E_{t}( \mathrm{X})}{6}, \tag{2}\] where \(E_{t}(\mathrm{C_{4}X_{2}})\) is the total energy of the 2D nanosheet, with 4 C atoms and 2 X atoms (X = B or N) in the primitive cell. \(E_{t}(\mathrm{C})\) and \(E_{t}(\mathrm{X})\) are the total energies, per atom, of the respective C and X standard ground states, i.e., graphite for C, and crystalline boron in the trigonal structure (\(\beta\)-boron) or the isolated N\({}_{2}\) molecule for X. This procedure to determine enthalpies and/or energies of formation has been successfully used to investigate several other systems [30; 31; 32; 33; 34]. Additionally, _ab-initio_ molecular dynamics simulations (AIMD) were carried out using the Vienna _ab initio_ simulation package (VASP) [35], where a 6 \(\times\) 6 \(\times\) 1 hexagonal 216-atom supercell was adopted to allow possible structural reconstructions. A Nosé-Hoover thermostat in the NVT ensemble was employed, from 300 to 1000 K, for 5 ps with a simulation time step of 1 fs. ## III Results and discussion Initially, we explored the physical properties of pristine 2D diamond in AA\({}^{\prime}\)A\({}^{\prime\prime}\) and ABC stacking structural configurations, composed of three graphene layers in which the C atoms between layers are covalently bonded with a near \(sp^{3}\) hybridization. When started from the diamond-like configuration, the simulations converged, after relaxation of the atomic positions, to trilayer graphene systems with vdW interactions between layers (graphite-like). This behavior has also been found in a previous theoretical investigation of 2D diamond, starting the simulations with two graphene layers [36]. These results can be understood as a consequence of the absence of external pressure and/or surface passivation to promote the \(sp^{2}\) to \(sp^{3}\) hybridization transformation [36]. Those pristine structures represent the reference systems used here to study and understand the effects of their functionalization. Then, we explored the properties of C\({}_{4}\)X\({}_{2}\) (X = B or N) systems, which can be described as three graphene sheets in which four C atoms are bonded covalently (2D diamond-like) in each unit cell. The two external layers are 50% doped with substitutional X atoms, hence, each X atom is bonded to three C atoms. 
Figure 1 presents a schematic representation of the optimized and relaxed C\({}_{4}\)X\({}_{2}\) (X = B or N) systems, in both AA\({}^{\prime}\)A\({}^{\prime\prime}\) and ABC stacking configurations, as well as the respective labels given to the intraplanar bond angle (\(\theta\)), intralayer (\(d_{\rm C-X}\)) and interlayer (\(h_{\rm C-X}\)) distances, and systems' thickness (\(\Delta h\)). The optimized structural parameters of C\({}_{4}\)X\({}_{2}\) (X = B or N) nanosheets are shown in Table 1, where the distance labels are consistent with the ones defined in figure 1. It can be observed that all the nanosystems functionalized with N atoms keep the lattice constants almost unchanged when compared to the PD ones. Additionally, for both stacking configurations of the C\({}_{4}\)N\({}_{2}\), the intraplanar bond angle (\(\theta\)) values are close to the \(sp^{3}\) hybridization ones (\(109.47^{\circ}\)), leading to a thickness of \(\approx 4.7\) Å. Nevertheless, the C\({}_{4}\)B\({}_{2}\) nanosheet lattice parameters are slightly greater than those of HD and FD systems, with \(\theta\) close to the value of \(120^{\circ}\), i.e., the B atoms bonded to three adjacent C atoms present an \(sp^{2}\)-type hybridization, and hence we observed a smaller thickness of \(\approx 4.2\) Å as compared to the N-functionalized structures. We now discuss the stability of the C\({}_{4}\)X\({}_{2}\) nanosheets. To study the thermal stability of those structures, we computed the standard enthalpy of formation \(\Delta H_{f}^{0}\) using equation (2). Herein, we found positive \(\Delta H_{f}^{0}\) of 424, 365, 348, and 333 meV/atom for AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\), ABC-C\({}_{4}\)B\({}_{2}\), AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\), and ABC-C\({}_{4}\)N\({}_{2}\), respectively, displayed in Table 1, indicating possible thermodynamic instability. However, the literature has reported 2D materials synthesized by endothermic processes (\(\Delta H_{f}^{0}>0\)), such as graphene, germanene, and silicene [29; 33]. Also, the \(\Delta H_{f}^{0}\) values of C\({}_{4}\)X\({}_{2}\) nanosheets are similar to the 300 meV/atom of PD with three graphene layers at 0 GPa (Table 1), which was recently synthesized [8], and slightly higher than that of NCCN, which was theoretically proposed to stabilize the 2D diamond without any passivation [9]. The thermodynamic stability of the systems was also investigated by AIMD simulations. The results exhibited a small total-energy variation during 5 ps at 300 K, as shown in figure 2, indicating that the structural integrity of those systems is maintained under those conditions. At 1000 K, the same behavior was observed for the C\({}_{4}\)N\({}_{2}\) systems, while the C\({}_{4}\)B\({}_{2}\) nanosheets presented some broken bonds, suggesting some structural degradation. Furthermore, the dynamic stability of the systems was investigated using phonon theory, in which a system is considered stable when its vibrational spectrum contains only positive frequencies. The phonon dispersion curves of C\({}_{4}\)B\({}_{2}\) and C\({}_{4}\)N\({}_{2}\) compounds, in both AA\({}^{\prime}\)A\({}^{\prime\prime}\) and ABC stacking configurations, are presented in figure 3. All spectra show 18 phonon branches, related to the 6 atoms present in the primitive cell. All those systems are dynamically stable since there are only positive frequencies. 
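As a quick numerical illustration of Eq. (2), the snippet below evaluates \(\Delta H_{f}^{0}\) for hypothetical total energies; the values are invented for the example and are not the paper's DFT outputs:

```python
# Toy evaluation of Eq. (2): dHf0 = [E(C4X2) - 4 E(C) - 2 E(X)] / 6 per atom.
E_C4X2 = -51.3   # eV, total energy of the 6-atom C4X2 cell (hypothetical)
E_C    = -9.2    # eV/atom, graphite reference (hypothetical)
E_X    = -8.3    # eV/atom, beta-boron or N2 reference (hypothetical)

dHf0 = (E_C4X2 - 4 * E_C - 2 * E_X) / 6
print(f"Delta_Hf0 = {dHf0 * 1000:.0f} meV/atom")  # positive -> endothermic
```

With these invented inputs the result is +350 meV/atom, i.e., the same order as the values reported above; a positive sign signals an endothermic formation, as discussed for graphene, germanene, and silicene.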
Next, we computed the elastic constants of the systems using equation (1) to verify their mechanical stability, according to the Born stability criteria (\(C_{11}>0\) and \(C_{12}<C_{11}\)) [37]. Table 2 presents the elastic constants \(C_{11}\), \(C_{12}\), and \(C_{44}=\left(C_{11}-C_{12}\right)/2\), Young's modulus \(Y^{\rm 2D}=\left(C_{11}^{2}-C_{12}^{2}\right)/C_{11}\), and the Poisson ratio \(\sigma=C_{12}/C_{11}\) of C\({}_{4}\)X\({}_{2}\) trilayers (X = B or N), as well as those of several other 2D materials for comparison. Accordingly, the C\({}_{4}\)X\({}_{2}\) structures are mechanically stable since they satisfy the Born criteria, agreeing with the phonon dispersion spectra shown in figure 3. The C\({}_{4}\)X\({}_{2}\) nanosheets present high Young's modulus values and characteristics of isotropic systems, since their Poisson ratio \(\sigma\) values are lower than 0.5 [40; 41]. \begin{table} \begin{tabular}{l c c c c c c c} System & \(a\) & \(d_{\rm C-X}\) & \(d_{\rm C-C}\) & \(h_{\rm C-C}\) & \(\Delta h\) & \(\theta\) & \(\Delta H_{f}^{0}\) \\ \hline AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) & 2.42 & 1.49 & 1.49 & 1.60 & 4.74 & 108.9 & 348 \\ ABC-C\({}_{4}\)N\({}_{2}\) & 2.44 & 1.50 & 1.50 & 1.57 & 4.66 & 109.0 & 333 \\ AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\) & 2.66 & 1.55 & 1.62 & 1.66 & 4.25 & 118.4 & 424 \\ ABC-C\({}_{4}\)B\({}_{2}\) & 2.67 & 1.55 & 1.63 & 1.65 & 4.18 & 118.5 & 365 \\ PD\({}^{\rm a}\) & 2.43 & Y\({}^{\rm a}\) & 1.54 & 1.65 & — & Y\({}^{\rm a}\) & 300 \\ HD\({}^{\rm b}\) & 2.53 & 1.56 & — & 1.56 & — & — & — \\ FD\({}^{\rm b}\) & 2.56 & 1.55 & — & 1.55 & — & — & — \\ NCCN & 2.39\({}^{\rm c}\) & 1.47\({}^{\rm c}\) & — & 1.58\({}^{\rm c}\) & 2.59\({}^{\rm d}\) & 108.8\({}^{\rm c}\) & 211\({}^{\rm d}\) \\ \end{tabular} \({}^{\rm a}\) Reference [8]. The (\(\overline{2}110\))-oriented h-diamane exhibits two \(d_{\rm C-X}\) and \(\theta\) values. Y: the bond lengths are 1.35 and 1.54 Å, with the angles presenting \(sp^{3}\) and \(sp^{2}\) hybridizations. \({}^{\rm b}\)Reference [15] \({}^{\rm c}\)Reference [10] \({}^{\rm d}\)Reference [9] \end{table} Table 1: Structural properties of C\({}_{4}\)X\({}_{2}\) (X = B or N): lattice parameter (\(a\)), intralayer (\(d\)) and interlayer (\(h\)) distances, thickness (\(\Delta h\)), and the intraplanar bond angle (\(\theta\)), labeled according to figure 1. The distances are given in Å and angles in degrees. The standard enthalpies of formation (\(\Delta H_{f}^{0}\)) at 0 GPa are given in meV/atom. For PD, HD, and FD, X = C. Figure 1: Schematic illustration of the C\({}_{4}\)X\({}_{2}\) (X = B or N) systems. (a) Top and (b) side views of AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)X\({}_{2}\), (c) top and (d) side views of ABC-C\({}_{4}\)X\({}_{2}\). The black and gray spheres represent the C and X atoms, respectively. The red lines denote the simulation unit cell limits, with the rectangle cells used to determine the elastic properties. The graphs also indicate the labels given to the intralayer (\(d_{\rm C-X}\)) and interlayer (\(h_{\rm C-X}\)) distances, structure thickness (\(\Delta h\)), and the intraplanar bond angle (\(\theta\)). 
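The strain-energy fit behind these constants (Eq. (1)) can be reproduced with a few lines. In the sketch below, the "data" are toy energies generated from the Table 2 constants of AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) rather than DFT total energies; the second-order polynomial fit then recovers \(C_{11}\), \(C_{12}\), and the derived quantities, including the acoustic velocities of Eq. (3) below:

```python
# Toy strain-energy fit for the two deformations of Eq. (1).
import numpy as np

eps = np.linspace(-0.012, 0.012, 9)             # strains from -1.2% to 1.2%
C11_in, C12_in = 816.0, 85.0                    # N/m, AA'A''-C4N2 (Table 2)

E_axial = 0.5 * C11_in * eps**2                 # zigzag axial: E2 = C11
E_biax  = 0.5 * 2 * (C11_in + C12_in) * eps**2  # biaxial: E2 = 2(C11 + C12)

C11 = 2 * np.polyfit(eps, E_axial, 2)[0]        # leading coefficient = E2/2
C12 = np.polyfit(eps, E_biax, 2)[0] - C11       # (C11 + C12) - C11

C44   = (C11 - C12) / 2
Y2D   = (C11**2 - C12**2) / C11                 # Young's modulus
sigma = C12 / C11                               # Poisson ratio

rho2D = 24.8e-7                                 # kg/m^2 (Table 2)
v_LA = np.sqrt(C11 / rho2D) / 1e3               # km/s
v_TA = np.sqrt(C44 / rho2D) / 1e3

print(f"C11={C11:.0f}, C12={C12:.0f}, C44={C44:.0f} N/m, Y2D={Y2D:.0f} N/m, "
      f"sigma={sigma:.2f}, v_LA={v_LA:.1f} km/s, v_TA={v_TA:.1f} km/s")
```

Run as is, this recovers \(C_{44}\approx 366\) N/m, \(Y^{\rm 2D}\approx 808\) N/m, \(\sigma\approx 0.10\), \(v_{\rm LA}\approx 18.1\) km/s, and \(v_{\rm TA}\approx 12.1\) km/s, consistent with the Table 2 entries.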
\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline **Model** & — \end{tabular} \end{table} \begin{tabular}{l c c c c c c c c} System & \(C_{11}\) & \(C_{12}\) & \(C_{44}\) & \(Y^{\rm 2D}\) & \(\sigma\) & \(\rho_{\rm 2D}\) & \(v_{\rm LA}\) & \(v_{\rm TA}\) \\ \hline AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) & 816 & 85 & 366 & 808 & 0.10 & 24.8 & 18.1 & 12.1 \\ ABC-C\({}_{4}\)N\({}_{2}\) & 777 & 82 & 348 & 769 & 0.11 & 24.5 & 17.8 & 11.9 \\ AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\) & 627 & 88 & 270 & 615 & 0.14 & 18.9 & 18.2 & 11.9 \\ ABC-C\({}_{4}\)B\({}_{2}\) & 609 & 92 & 259 & 595 & 0.15 & 18.7 & 18.0 & 11.7 \\ Graphene & 354\({}^{\rm a}\) & 60\({}^{\rm a}\) & 147\({}^{\rm a}\) & 340\({}^{\rm b}\) & 0.18\({}^{\rm c}\) & 7.55\({}^{\rm c}\) & 21.6\({}^{\rm a}\) & 13.9\({}^{\rm a}\) \\ HD & 474\({}^{\rm c}\) & 36\({}^{\rm c}\) & 219\({}^{\rm e}\) & 471\({}^{\rm a}\) & 0.08\({}^{\rm c}\) & 14.9\({}^{\rm c}\) & 17.8\({}^{\rm c}\) & 12.2\({}^{\rm c}\) \\ ABC-HD & 718\({}^{\rm e}\) & 58\({}^{\rm c}\) & 330 & 713\({}^{\rm a}\) & 0.08\({}^{\rm c}\) & 22.2\({}^{\rm c}\) & 18.0\({}^{\rm c}\) & 12.2\({}^{\rm c}\) \\ FD & 485\({}^{\rm d}\) & 49\({}^{\rm e}\) & 218\({}^{\rm e}\) & 480\({}^{\rm d}\) & 0.10\({}^{\rm e}\) & & 14.0\({}^{\rm e}\) & 9.3\({}^{\rm e}\) \\ NCCN & 568\({}^{\rm f}\) & 66\({}^{\rm f}\) & 243\({}^{\rm f}\) & 560\({}^{\rm e}\) & 0.12\({}^{\rm e}\) & & & \\ Diamond & 1079\({}^{\rm f}\) & 124\({}^{\rm f}\) & 578\({}^{\rm f}\) & & & & 18.3\({}^{\rm c}\) & 12.4\({}^{\rm c}\) \\ \end{tabular} Table 2: Elastic constants \(C_{11}\), \(C_{12}\), and \(C_{44}\), Young’s modulus \(Y^{\rm 2D}\), Poisson ratio \(\sigma\), formal density \(\rho_{\rm 2D}\), longitudinal \(v_{\rm LA}\) and transverse \(v_{\rm TA}\) acoustic velocities of C\({}_{4}\)X\({}_{2}\) (X = B or N), graphene, and other 2D diamonds. Elastic constants and Young’s modulus are given in N/m, the Poisson ratio is dimensionless, and the formal density and velocities are given in \(10^{-7}\) kg/m\({}^{2}\) and km/s, respectively. The results with the \({}^{*}\) symbols were obtained using the data from the table and the equations described in this paper. Additionally, we estimated the longitudinal and the transverse acoustic velocities, given respectively by \[v_{\rm LA}=\left(\frac{C_{11}}{\rho_{\rm 2D}}\right)^{1/2}\ \ \ \text{and}\ \ \ v_{\rm TA}=\left(\frac{C_{44}}{\rho_{\rm 2D}} \right)^{1/2}\!\!, \tag{3}\] where \(\rho_{\rm 2D}\) is the formal density, allowing comparison among systems, independent of their thickness [4]. The velocity values, listed in Table 2, suggest that the stiffness of the C\({}_{4}\)X\({}_{2}\) systems is comparable with that of diamond. Next, we studied the electronic band structures and the projected density of states (PDOS) of the C\({}_{4}\)X\({}_{2}\) systems, displayed in figure 4 (a)-(d). The electronic band structures and PDOS of AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) and ABC-C\({}_{4}\)N\({}_{2}\) nanosheets, presented in figures 4 (a) and (b), exhibit some differences, despite their analogous PDOS, showing that the N \(p\)-orbitals dominate at the valence band maximum (VBM), while the conduction band minimum (CBM) is mostly characterized by a mixture of \(s\)- and \(p\)-orbitals of the N and C atoms. Both systems present the VBM around the \(\Gamma\)-point with a Mexican-hat dispersion, in which the two peaks lie on the \(\Gamma\)-K and \(\Gamma\)-M lines. 
The height of the Mexican-hat band at the \(\Gamma\) point is 0.01 and 0.001 eV for AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) and ABC-C\({}_{4}\)N\({}_{2}\), respectively. However, the CBM of ABC-C\({}_{4}\)N\({}_{2}\) is well defined at the M-valley, while in AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\), although the CBM is located at the M-valley, the energy of the K-point is very close to the M-point one. On the other hand, the AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\) and ABC-C\({}_{4}\)B\({}_{2}\) nanosheets have very similar band structures, as shown in figures 4 (c) and (d). Both systems present direct band gaps, where the doubly degenerate VBM and the CBM are located at the \(\Gamma\)-point, in which the B \(p\)-orbitals dominate at the CBM and the VBM is described by a combination of B \(p\)-orbitals and C \(p\)-orbitals. As discussed in the introduction, the 2D diamond systems present non-zero band gaps with characteristics that depend on several factors, such as doping with different functional groups. Herein, we are working with B and N atoms as X-doping elements, where the B atom belongs to the group-III elements of the periodic table, with a \(2s^{2}2p^{1}\) valence electronic configuration, and the N atom belongs to the group-V elements, with a \(2s^{2}2p^{3}\) valence electronic configuration. As a result, we found a wider indirect band gap for the C\({}_{4}\)N\({}_{2}\) nanosheets, and a narrower direct band gap for the C\({}_{4}\)B\({}_{2}\) systems when compared to PD. Table 3 displays the band gap values of the C\({}_{4}\)X\({}_{2}\) nanosheets obtained with the optB88-vdW [22] (\(E_{g}^{\rm vdW}\)) and the hybrid HSE [23] (\(E_{g}^{\rm HSE}\)) functional approaches for the exchange-correlation energy. Figure 4: Electronic band structures along the main high-symmetry directions of the BZ and PDOS, obtained with the optB88-vdW approach for the exchange-correlation energy: (a) AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\), (b) ABC-C\({}_{4}\)N\({}_{2}\), (c) AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\), and (d) ABC-C\({}_{4}\)B\({}_{2}\). The PDOS on the C and X \(s\)-orbitals are given in purple and blue, respectively, and on the C and X \(p\)-orbitals are given in red and green, respectively. E\({}_{\rm v}\) represents the VBM. Energies and PDOS are given in eV and states/eV, respectively. For comparison, we also included the HD and FD band gap values acquired with the PBE functional [15] and that of the three-layer graphene PD obtained with the hybrid HSE functional [8]. The band gap width of the latter is 2.70 eV, while for the AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\), ABC-C\({}_{4}\)N\({}_{2}\), AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\), and ABC-C\({}_{4}\)B\({}_{2}\) functionalized compounds they are 5.56, 5.42, 1.64, and 1.97 eV, respectively, using the same exchange-correlation functional. The C\({}_{4}\)N\({}_{2}\) nanosheets have band gap width values, obtained with the optB88-vdW approach, similar to those of the HD and FD systems obtained with the PBE approximation. 
Furthermore, since effective masses (\(m^{*}\)) can be used to investigate electronic transport under the influence of electric fields or carrier gradients, we estimated them by fitting parabolic functions to the CBM and VBM via the following formula: \[\frac{1}{m^{*}}=\frac{1}{\hbar^{2}}\Bigg{|}\frac{\partial^{2}E(k)}{\partial k ^{2}}\Bigg{|}, \tag{4}\] where \(E(k)\) and \(k\) are the energy and the wave vector of the CBM or VBM. The values of the effective masses depend on the curvature radius of the electronic band around the band-edge position, i.e., a larger curvature radius implies a heavier effective mass. The electron (\(m^{*}_{e}\)) and hole (\(m^{*}_{h}\)) effective masses, in \(m_{0}\) units, calculated with the optB88-vdW approach, are presented in table 3. In both stacking structures of the C\({}_{4}\)B\({}_{2}\) nanosheets, the effective masses of electrons and holes are comparable to those of the HD and FD systems, which present extraordinary carrier mobility [15]. Furthermore, these estimated effective masses are similar to the \(m^{*}_{e}=1.06\,m_{0}\) and \(m^{*}_{h}=0.59\,m_{0}\) of silicon at a temperature of 4 K [42]. Regarding the C\({}_{4}\)N\({}_{2}\) nanosheets, the \(m^{*}_{h}\) is the magnitude of the effective mass at the \(\Gamma\)-point, i.e., we fitted a parabolic function considering the minimum located at the \(\Gamma\)-point [43]. Accordingly, the \(m^{*}_{h}\) depends on the height of the Mexican-hat band and the radius centered at the \(\Gamma\)-point around the band edges [43]. On the other hand, the \(m^{*}_{e}\) displays two effective masses around the M-valley. \(m^{*M\rightarrow\Gamma}_{e}\) shows a high electron effective mass, much higher than \(m^{*M\to K}_{e}\). \(m^{*M\rightarrow\Gamma}_{e}\) is five times larger than \(m^{*M\to K}_{e}\) for ABC-C\({}_{4}\)N\({}_{2}\) and nine times larger for AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\). Considering the stable structures presented previously and their physical properties, it is interesting to explore their potential applications. Further investigations could explore the applicability of C\({}_{4}\)N\({}_{2}\) and C\({}_{4}\)B\({}_{2}\) structures as building blocks for 2D/3D systems, such as van der Waals heterostructures, with different properties [10; 44; 45]. Moreover, wide band gap materials, such as hexagonal boron nitride (h-BN), serve as a substrate for high-mobility 2D devices [46], a host material for single-photon emitter defect-centers for quantum computing and biosensors [4], etc. Therefore, the C\({}_{4}\)N\({}_{2}\) nanosheets seem appropriate for these kinds of applications. Finally, the C\({}_{4}\)B\({}_{2}\) nanosheets presented direct band gaps, in the 1.6 - 2.0 eV range, being more favorable for optoelectronics applications than the C\({}_{4}\)N\({}_{2}\) ones, which have indirect band gaps in the 5.4 - 5.6 eV range [46]. In particular, the small effective masses and high elastic modulus of the C\({}_{4}\)B\({}_{2}\) systems may contribute to high electron mobility [15], making them suitable for photovoltaic cells. Table 4 summarizes their properties and the respective potential applications. In conclusion, we performed an _ab-initio_ investigation on the structural, thermodynamic, dynamic, elastic, and electronic properties of C\({}_{4}\)X\({}_{2}\) (X = B or N) systems. According to AIMD simulations, phonon calculations, and the Born stability criteria, all the nanosheets are thermodynamically, dynamically, and mechanically stable. 
In conclusion, we performed an _ab-initio_ investigation of the structural, thermodynamic, dynamic, elastic, and electronic properties of C\({}_{4}\)X\({}_{2}\) (X = B or N) systems. According to AIMD simulations, phonon calculations, and the Born stability criteria, all the nanosheets are thermodynamically, dynamically, and mechanically stable. Furthermore, the systems presented standard enthalpies of formation close to that of the recently synthesized pristine 2D diamond composed of three graphene layers. The elastic properties indicated that these nanosheets possess high Young's modulus values and the characteristics of isotropic systems, and the estimated longitudinal and transversal acoustic velocities revealed that their stiffness is comparable with that of diamond. Finally, the systems' electronic properties presented some differences: the C\({}_{4}\)N\({}_{2}\) structures exhibited wide indirect band gaps and heavier effective masses, while the C\({}_{4}\)B\({}_{2}\) ones had narrow direct band gaps and lighter effective masses. These results provide chemical routes to tune the electronic properties of 2D diamonds by doping them for specific applications, such as optoelectronic devices.

\begin{table} \begin{tabular}{l c c c c c c c c} System & \(E^{\rm vdW}_{g}\) & \(E^{\rm PBE}_{g}\) & \(E^{\rm HSE}_{g}\) & Band gap & \(m^{*}_{h}\) & \(m^{*}_{e}\) & \(m^{*M\rightarrow\Gamma}_{e}\) & \(m^{*M\rightarrow K}_{e}\) \\ \hline AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)N\({}_{2}\) & 4.40 & & 5.56 & indirect & 6.23 & & 2.94 & 0.30 \\ ABC-C\({}_{4}\)N\({}_{2}\) & 4.13 & & 5.42 & indirect & 16.77 & & 2.61 & 0.47 \\ AA\({}^{\prime}\)A\({}^{\prime\prime}\)-C\({}_{4}\)B\({}_{2}\) & 0.53 & & 1.64 & direct & 0.34 (0.68) & 1.22 & & \\ ABC-C\({}_{4}\)B\({}_{2}\) & 0.84 & & 1.97 & direct & 0.36 (0.80) & 1.30 & & \\ HD\({}^{\rm a}\) & & 3.32 & & direct & 0.21 (0.58) & 1.11 & & \\ FD\({}^{\rm a}\) & & 4.04 & & direct & 0.37 (1.13) & 0.55 & & \\ PD\({}^{\rm b}\) & & & 2.70 & indirect & & & & \\ \end{tabular} \({}^{\rm a}\) Reference [15]. \({}^{\rm b}\) Reference [8]. The (\(\overline{2}\)110)-oriented h-diamane with 3 graphene layers. \end{table} Table 3: Electronic band gap values of C\({}_{4}\)X\({}_{2}\) (X = B or N) nanosheets (in eV), obtained with two different approximations for the exchange-correlation functional: optB88-vdW (\(E^{\rm vdW}_{g}\)) [22] and HSE (\(E^{\rm HSE}_{g}\)) [23]. The PD band gap value is also displayed. The electron (\(m^{*}_{e}\)) and hole (\(m^{*}_{h}\)) effective masses, in \(m_{0}\) units, obtained with the optB88-vdW approach are also shown. The VBM of the C\({}_{4}\)B\({}_{2}\) systems is doubly degenerate at \(\Gamma\) and gives two values for the hole carrier. The CBM of the C\({}_{4}\)N\({}_{2}\) systems displays two effective masses around the M-valley, \(m^{*M\rightarrow\Gamma}_{e}\) and \(m^{*M\rightarrow K}_{e}\).

###### Acknowledgements.

Brazilian Federal Government Agencies CAPES (Grants 88882.332907/2019-01 and 88887.371193/2019-00), CNPq (Grants 314884/2021-1, 302800/2022-0, and 150595/2023-9) and FAPESP (Grant 22/10095-8) partially supported this investigation. The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the Santos Dumont supercomputer, the Centro Nacional de Processamento de Alto Desempenho em São Paulo (CENAPAD-SP, Brazil), and the SNIC-UPMAX, SNIC-HPC2N, and SNIC-NSC centers under the Swedish National Infrastructure for Computing (SNIC) resources.
2309.07562
On dynamics of the Chebyshev's method for quartic polynomials
Let $p$ be a normalized (monic and centered) quartic polynomial with non-trivial symmetry groups. It is already known that if $p$ is unicritical, with only two distinct roots with the same multiplicity or having a root at the origin then the Julia set of its Chebyshev's method $C_p$ is connected and symmetry groups of $p$ and $C_p$ coincide~[Nayak, T., and Pal, S., Symmetry and dynamics of Chebyshev's method, \cite{Sym-and-dyn}]. Every other quartic polynomial is shown to be of the form $p_a (z)=(z^2 -1)(z^2-a)$ where $a \in \mathbb{C}\setminus \{-1,0,1\}$. Some dynamical aspects of the Chebyshev's method $C_a$ of $p_a$ are investigated in this article for all real $a$. It is proved that all the extraneous fixed points of $C _a$ are repelling which gives that there is no invariant Siegel disk for $C_a$. It is also shown that there is no Herman ring in the Fatou set of $C_a$. For positive $a$, it is proved that at least two immediate basins of $C_a$ corresponding to the roots of $p_a$ are unbounded and simply connected. For negative $a$, it is however proved that all the four immediate basins of $C_a$ corresponding to the roots of $p_a$ are unbounded and those corresponding to $\pm i\sqrt{|a|}$ are simply connected.
Tarakanta Nayak, Soumen Pal
2023-09-14T09:45:14Z
http://arxiv.org/abs/2309.07562v1
# On dynamics of the Chebyshev's method for quartic polynomials

###### Abstract

Let \(p\) be a normalized (monic and centered) quartic polynomial with non-trivial symmetry groups. It is already known that if \(p\) is unicritical, has only two distinct roots with the same multiplicity, or has a root at the origin, then the Julia set of its Chebyshev's method \(C_{p}\) is connected and the symmetry groups of \(p\) and \(C_{p}\) coincide [Nayak, T., and Pal, S., Symmetry and dynamics of Chebyshev's method, [5]]. Every other quartic polynomial is shown to be of the form \(p_{a}(z)=(z^{2}-1)(z^{2}-a)\) where \(a\in\mathbb{C}\setminus\{-1,0,1\}\). Some dynamical aspects of the Chebyshev's method \(C_{a}\) of \(p_{a}\) are investigated in this article for all real \(a\). It is proved that all the extraneous fixed points of \(C_{a}\) are repelling, which gives that there is no invariant Siegel disk for \(C_{a}\). It is also shown that there is no Herman ring in the Fatou set of \(C_{a}\). For positive \(a\), it is proved that at least two immediate basins of \(C_{a}\) corresponding to the roots of \(p_{a}\) are unbounded and simply connected. For negative \(a\), it is proved that all four immediate basins of \(C_{a}\) corresponding to the roots of \(p_{a}\) are unbounded and that those corresponding to \(\pm i\sqrt{|a|}\) are simply connected.

_Keywords:_ Quartic polynomials; Fatou and Julia sets; Symmetry; Chebyshev's method.

AMS Subject Classification: 37F10, 65H05

## 1 Introduction

A root-finding method is a function from the space of all polynomials that assigns a rational map \(F_{p}\) to a polynomial \(p\) such that each root of \(p\) is an attracting fixed point of \(F_{p}\), i.e., if \(z_{0}\) is a root of \(p\) then \(F_{p}(z_{0})=z_{0}\) and \(|F_{p}^{\prime}(z_{0})|<1\). Though several such methods appear in the literature, the family of König's methods [2] and the Chebyshev–Halley methods [3] seem to be comparatively well studied among them. The Newton method \(N_{p}\) is the first member of the König's methods and its order of convergence (i.e., the local degree of \(N_{p}\) at each of the simple roots of \(p\)) is two. Further, it has no finite extraneous fixed point, i.e., each finite fixed point of \(N_{p}\) is a root of \(p\). Note that the sequence of forward iterates of every root-finding method converges to a root of a polynomial in a suitably small neighborhood of the root. The non-existence of finite extraneous fixed points has been found to be crucial in the study of the global dynamics of the Newton method (i.e., not only in a neighborhood of the roots of the polynomial but in all of \(\widehat{\mathbb{C}}\)). For example, this is precisely the reason why the Julia set (the set of all points in \(\widehat{\mathbb{C}}\) at which the sequence of iterates \(\{N_{p}^{n}\}_{n>0}\) is not normal [1]) of the Newton method applied to a polynomial is connected [9]. These are possibly some of the reasons why this method has drawn a good amount of attention from researchers. However, there are root-finding methods whose order of convergence is three and which have finite extraneous fixed points. One such method, namely the Chebyshev's method, is the subject of this article. For a polynomial \(p\), its Chebyshev's method is given by

\[C_{p}(z)=z-\left[1+\frac{1}{2}L_{p}(z)\right]\frac{p(z)}{p^{\prime}(z)},\]

where \(L_{p}(z)=\frac{p(z)p^{\prime\prime}(z)}{[p^{\prime}(z)]^{2}}\).
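As a side illustration (not part of the paper), the iteration defined above is straightforward to run numerically; the sample polynomial \(z^{2}-2\) and the starting point below are arbitrary choices.

```python
from numpy.polynomial import Polynomial

def chebyshev_step(p, z):
    """One step of Chebyshev's method:
    C_p(z) = z - (1 + L_p(z)/2) * p(z)/p'(z), with L_p = p * p'' / (p')^2."""
    dp, d2p = p.deriv(), p.deriv(2)
    L = p(z) * d2p(z) / dp(z) ** 2
    return z - (1 + 0.5 * L) * p(z) / dp(z)

p = Polynomial([-2, 0, 1])   # p(z) = z^2 - 2
z = 1.0
for _ in range(4):
    z = chebyshev_step(p, z)
    print(z)                 # approaches sqrt(2); the convergence is of order three
```

Each root of the quartic \(p_{a}\) studied below is a superattracting fixed point of the corresponding map \(C_{a}\), so the same iteration converges to a root from any point in its basin.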
The Fatou set of a rational map \(R\), denoted by \(\mathcal{F}(R)\), is the set of all points in \(\widehat{\mathbb{C}}\) in a neighborhood of which \(\{R^{n}\}_{n>0}\) is normal. Its complement in \(\widehat{\mathbb{C}}\) is known as the Julia set of \(R\) and is denoted by \(\mathcal{J}(R)\). A maximal connected open subset \(U\) of the Fatou set, called a Fatou component, is said to be \(p\)-periodic if \(R^{p}(U)=U\). It is well known that for every Fatou component \(U\) of a rational map \(R\), there is a \(k\) such that \(R^{k}(U)\) is periodic. A periodic Fatou component can be an attracting domain, a parabolic domain, a Siegel disk or a Herman ring. Other properties of these Fatou components can be found in [1].

We are concerned with the dynamics (the Fatou and the Julia sets) of the Chebyshev's method applied to polynomials. The nature of extraneous fixed points and some other dynamical aspects of Chebyshev's method applied to quadratic and cubic polynomials have been studied in [7]. The existence of superattracting cycles for Chebyshev's method applied to cubic polynomials is investigated in [8]. The first systematic study of the dynamics of Chebyshev's method applied to cubic polynomials can be found in [4]. There, the family of Chebyshev's methods applied to cubic polynomials is parametrized in terms of the multiplier of an extraneous fixed point and its dynamics is determined for parameters in \([-1,1]\). A discussion of the Chebyshev's method applied to some quartic polynomials appeared in [5], which mainly deals with the relation between the group of Euclidean isometries preserving the Julia set of a polynomial and that of its Chebyshev's method, whenever the former is non-trivial. It is shown there that these two groups are isomorphic for all centered cubic and quartic polynomials with non-trivial symmetry groups.

This article deals with quartic polynomials. First, we parametrize the family of maps arising as the Chebyshev's method of quartic polynomials. A root-finding method \(F_{p}\) is said to satisfy the Scaling theorem (see [4]) if for every affine map \(T\) and every non-zero constant \(\lambda\), \(F_{p}=T\circ F_{q}\circ T^{-1}\) where \(q=\lambda p\circ T\). A quartic polynomial is of the form \(p(z)=az^{4}+bz^{3}+cz^{2}+dz+e\) where \(a(\neq 0),b,c,d,e\in\mathbb{C}\). It is well known that every polynomial \(g\) can be transformed into a monic and centered (called normalized) polynomial by composing with an affine map \(T\) and then multiplying by a suitable non-zero constant \(\lambda\). Indeed, by taking \(\lambda\) to be the reciprocal of the leading coefficient of \(g\) and \(T(z)=z+\zeta\), where \(\zeta\) is the centroid of \(g\) (see page 205, [1]), it can be seen that \(\lambda g\circ T\) is normalized. As the Chebyshev's method satisfies the Scaling theorem (see Theorem 2.2, [4]), \(C_{g}=T\circ C_{\lambda g\circ T}\circ T^{-1}\). Therefore, without loss of generality we assume that \(p\) is normalized, i.e., \(a=1\) and \(b=0\). Then

\[p(z)=z^{4}+cz^{2}+dz+e. \tag{1}\]

Though the ongoing discussion is on polynomials, we define the symmetry group of the Julia set of a rational map \(R\). It is denoted by \(\Sigma R\) and is defined as \(\Sigma R=\{\sigma:\sigma\) is a holomorphic Euclidean isometry and \(\sigma(\mathcal{J}(R))=\mathcal{J}(R)\}\). A normalized polynomial \(p\) can be written as

\[p(z)=z^{\alpha}p_{0}(z^{\beta}) \tag{2}\]

where \(p_{0}\) is a monic polynomial, and \(\alpha\in\mathbb{N}\cup\{0\}\) and \(\beta\in\mathbb{N}\) are maximal.
We call this the normal form of \(p\); it is unique. Then by Theorem 9.5.4, [1],

\[\Sigma p=\{z\mapsto\lambda z:\lambda^{\beta}=1\}.\]

Note that \(\beta=1\) if and only if \(\Sigma p\) is trivial. If \(d=0\) then \(p(z)=z^{4}+cz^{2}+e\). Further, if \(c=0=e\) then \(p\) is a monomial, whose Chebyshev's method is a linear map, and this case is not of interest. As seen in the following, \(\Sigma p\) is non-trivial in all other situations.

Case 1: If \(c=0,e\neq 0\) then \(p(z)=z^{4}+e\) and \(\beta=4\).

Case 2: If \(c\neq 0,e=0\) then \(p(z)=z^{2}(z^{2}+c)\) and \(\beta=2\).

Case 3: If \(c\neq 0,e\neq 0\) then \(p(z)=z^{4}+cz^{2}+e\) and \(\beta=2\).

For the quartic polynomial \(p\) of the form given in Equation 1, let \(d\neq 0\). If both \(c\) and \(e\) are non-zero then \(p(z)=z^{4}+cz^{2}+dz+e\) is in its normal form with \(\alpha=0,\beta=1\) and \(p_{0}(z)=p(z)\). If \(c\neq 0\) and \(e=0\) then \(p\) in its normal form is \(z(z^{3}+cz+d)\), with \(\alpha=1=\beta\) and \(p_{0}(z)=z^{3}+cz+d\). Similarly, if \(c=0\) and \(e\neq 0\) then \(p(z)=z^{4}+dz+e\) is in its normal form with \(\alpha=0\) and \(\beta=1\). On the other hand, if both \(c,e\) are zero then \(p(z)=z(z^{3}+d)\) and \(\alpha=1,\beta=3\). This is the only case for \(d\neq 0\) where \(\Sigma p\) is non-trivial.

Case 4: \(p(z)=z(z^{3}+d)\), with \(\alpha=1\) and \(\beta=3\).

In this article, we attempt to understand the dynamics of \(C_{p}\) for all quartic \(p\) with non-trivial \(\Sigma p\). The non-identity elements of \(\Sigma p\) are used to understand the dynamics of \(C_{p}\). This approach of determining the dynamics by the symmetries does not work when \(\Sigma p\) is trivial. This is one reason why that case needs to be dealt with differently, and we postpone it to future work. The polynomials in Case 1 are unicritical. Those in Cases 2 and 4 have one of their roots at the origin (the centroid of \(p\)) and have non-trivial symmetry. The dynamics of the Chebyshev's method applied to these polynomials, and to those in Case 3 having exactly two distinct roots, follows from Theorem 1.3, [5]. In all these cases, the Julia set is found to be connected and \(\Sigma p=\Sigma C_{p}\). Whether these remain true in the remaining cases is not known.

The polynomials given in Case 3 cannot have exactly three distinct roots, because zero is not a root and the roots appear in pairs symmetric about the origin. Note that, in Case 3, \(p\) can be expressed as \(p(z)=(z^{2}-\gamma_{1})(z^{2}-\gamma_{2})\), where \(\gamma_{1},\gamma_{2}\in\mathbb{C}\setminus\{0\}\). The Chebyshev's method of \(\frac{1}{\gamma_{1}^{2}}p(\sqrt{\gamma_{1}}z)\) is conjugate to \(C_{p}\) by the Scaling theorem. Therefore, assume without loss of generality that \(p(z)=(z^{2}-1)(z^{2}-a)\) where \(a:=\frac{\gamma_{2}}{\gamma_{1}}\in\mathbb{C}\setminus\{0\}\). If \(a=-1\) then \(p(z)=z^{4}-1\) is of the form given in Case 1; if \(a=1\) then \(p(z)=(z-1)^{2}(z+1)^{2}\) has exactly two roots with the same multiplicity, which is already dealt with in Theorem 1.3 (1) of [5]. Hence it is sufficient to consider \(p(z)=(z^{2}-1)(z^{2}-a)\) where \(a\in\mathbb{C}\setminus\{-1,0,1\}\). Further, if \(|a|>1\) then consider the polynomial \(\frac{1}{a^{2}}p(\sqrt{a}z)=(z^{2}-\frac{1}{a})(z^{2}-1)\), whose Chebyshev's method is conjugate to \(C_{p}\) by the Scaling theorem. Therefore, it is enough to consider

\[p_{a}(z)=(z^{2}-1)(z^{2}-a) \tag{3}\]

where \(|a|\leq 1\) and \(a\notin\{-1,0,1\}\). We denote the Chebyshev's method of \(p_{a}\) by \(C_{a}\).
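The reduction just described is mechanical; the following small sketch (ours, not from the paper) maps a Case 3 polynomial \((z^{2}-\gamma_{1})(z^{2}-\gamma_{2})\) to the normal representative \(p_{a}\) with \(|a|\leq 1\). The function name and the sample values of \(\gamma_{1},\gamma_{2}\) are arbitrary.

```python
def reduce_to_normal_a(gamma1, gamma2):
    """Given p(z) = (z^2 - gamma1)(z^2 - gamma2), return the parameter a with
    |a| <= 1 such that C_p is conjugate to C_a, following the two scaling
    steps in the text (both justified by the Scaling theorem)."""
    a = gamma2 / gamma1        # first rescaling, z -> sqrt(gamma1) z
    if abs(a) > 1:
        a = 1 / a              # second rescaling, z -> sqrt(a) z
    if a in (-1, 0, 1):
        raise ValueError("degenerate case, handled separately in the text")
    return a

print(reduce_to_normal_a(4.0, 1.0))        # -> 0.25
print(reduce_to_normal_a(1 + 1j, 2 - 1j))  # -> a complex a with |a| <= 1
```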
This paper takes up the case when \(a\) is real. All extraneous fixed points of \(C_{a}\) are shown to be repelling. As a consequence, it follows that there is no invariant Siegel disk for \(C_{a}\). This is because every such Siegel disk requires an indifferent fixed point, whereas all fixed points are found to be either attracting (when these are the roots of \(p_{a}\)) or repelling (when these are extraneous). It is proved that every periodic Fatou component on which a multiply connected Fatou component lands is an attracting or parabolic domain corresponding to a real periodic point. Here, we say a Fatou component \(U^{\prime}\) lands on a Fatou component \(U\) if there is a \(k\geq 0\) such that \(C_{a}^{k}(U^{\prime})=U\). In particular, this means that there is no Herman ring in the Fatou set of \(C_{a}\). It is also found that the Julia set of \(C_{a}\) is connected if and only if the Julia component containing a non-zero pole is unbounded. For \(a>0\), we have proved that the immediate basins of \(1\) and \(-1\) are unbounded and that at least two immediate basins (out of the four corresponding to the roots of \(p_{a}\)) are simply connected. For \(a<0\), all the immediate basins corresponding to the four roots of \(p_{a}\) are found to be unbounded and those corresponding to the purely imaginary roots are shown to be simply connected. Under the assumption that the Fatou set is the union of the basins of attraction of the fixed points corresponding to the roots of \(p_{a}\), it is shown that \(\Sigma C_{a}=\Sigma p_{a}\) for all positive \(a\). For negative \(a\), this is proved under the additional assumption that the largest positive extraneous fixed point and the purely imaginary extraneous fixed points do not have the same absolute value.

Section 2 describes some useful properties of \(C_{a}\). Results on fixed points and dynamics of \(C_{a}\) are stated and proved in the first part of Section 3; the main results are then proved in its two subsections. By conjugacy, we mean conformal conjugacy throughout this article.

## 2 Basic properties of \(C_{a}\)

For the polynomial \(p_{a}\) in Equation 3,

\[L_{p_{a}}(z)=\frac{6z^{6}-7(a+1)z^{4}+(a^{2}+8a+1)z^{2}-a(a+1)}{2z^{2}\{2z^{2}-(a+1)\}^{2}},\]

\[1+\frac{1}{2}L_{p_{a}}(z)=\frac{Q(z)}{4z^{2}\{2z^{2}-(a+1)\}^{2}}\]

where \(Q(z)=22z^{6}-23(a+1)z^{4}+(5a^{2}+16a+5)z^{2}-a(a+1)\), and

\[L_{p^{\prime}_{a}}(z)=\frac{12z^{2}\{2z^{2}-(a+1)\}}{\{6z^{2}-(a+1)\}^{2}}.\]

The Chebyshev's method \(C_{a}\) of \(p_{a}\) is

\[C_{a}(z)=z-\left[1+\frac{1}{2}L_{p_{a}}(z)\right]\frac{p_{a}(z)}{p^{\prime}_{a}(z)}=z-\frac{Q(z)p_{a}(z)}{8z^{3}\{2z^{2}-(a+1)\}^{3}} \tag{4}\]

\[=\frac{42z^{10}+Az^{8}+Bz^{6}+Cz^{4}+Dz^{2}+a^{2}(a+1)}{8z^{3}\{2z^{2}-(a+1)\}^{3}} \tag{5}\]

where \(A=-51(a+1)\), \(B=4(5a^{2}+3a+5)\), \(C=-3(a^{3}-7a^{2}-7a+1)\) and \(D=-6a(a^{2}+3a+1)\). Now, we enumerate some basic properties of \(C_{a}\).

**Lemma 2.1**.:

1. _(Degree) The map_ \(C_{a}\) _is an odd rational map of degree ten._

2. _(Critical points) The map_ \(C_{a}\) _has eighteen critical points, counting with multiplicities. The multiple critical points are the four (simple) roots of_ \(p_{a}\) _and the three poles of_ \(C_{a}\)_. Each of these has multiplicity two as a critical point of_ \(C_{a}\)_. The other four critical points are simple. If_ \(a\) _is real then these four critical points are neither real nor purely imaginary and the poles are all real._

3. _(Fixed points) The roots of_ \(p_{a}\) _are the superattracting fixed points of_ \(C_{a}\)_.
_The point at infinity is a repelling fixed point of_ \(C_{a}\) _with multiplier_ \(\frac{32}{21}\)_. The extraneous fixed points are the solutions of_ \(Q(z)=22z^{6}-23(a+1)z^{4}+(5a^{2}+16a+5)z^{2}-a(a+1)=0\)_._

Proof.:

1. It follows from Equation 5 that \(C_{a}\) is an odd rational map of degree ten.

2. By Theorem 2.7.1, [1], \(C_{a}\) has eighteen critical points counting with multiplicity. Recall that the multiplicity of a critical point is one less than the local degree of the function at that point. The derivative of \(C_{a}\) is

\[C_{a}^{\prime}(z)=\frac{(L_{p_{a}}(z))^{2}}{2}[3-L_{p_{a}^{\prime}}(z)]=\frac{3(z^{2}-1)^{2}(z^{2}-a)^{2}\{28z^{4}-8(a+1)z^{2}+(a+1)^{2}\}}{8z^{4}\{2z^{2}-(a+1)\}^{4}} \tag{6}\]

The critical points of \(C_{a}\) are \(\pm 1\), \(\pm\sqrt{a}\), each of multiplicity 2 (these are the roots of \(p_{a}\)), \(0,\pm\sqrt{\frac{a+1}{2}}\) with multiplicity 2 each (these are the poles of \(C_{a}\)), and the solutions of

\[28z^{4}-8(a+1)z^{2}+(a+1)^{2}=0. \tag{7}\]

The above equation has four distinct roots, namely the solutions of \(z^{2}=\frac{(2\pm i\sqrt{3})(a+1)}{14}\), and each is a simple critical point of \(C_{a}\). It is obvious that the simple critical points are neither real nor purely imaginary and that the non-zero poles are real whenever \(a\in(-1,1)\setminus\{0\}\).

3. As the roots of \(p_{a}\) are fixed points as well as critical points of \(C_{a}\), their multiplier is zero. In other words, these are superattracting fixed points of \(C_{a}\). It is evident from Equation 5 that the degree of the numerator is bigger than that of the denominator of \(C_{a}\), giving that \(\infty\) is a fixed point of \(C_{a}\). Its multiplier is \(\frac{32}{21}\) (see page 41, [1] for the formula). Thus \(\infty\) is a repelling fixed point of \(C_{a}\). The extraneous fixed points of \(C_{a}\) are the solutions of

\[22z^{6}-23(a+1)z^{4}+(5a^{2}+16a+5)z^{2}-a(a+1)=0 \tag{8}\]

(see Equation 4).

**Remark 2.1**.:

1. _(Free critical points) The critical points_ \(\pm 1\) _and_ \(\pm\sqrt{a}\) _are superattracting fixed points of_ \(C_{a}\)_. The poles_ \(0\) _and_ \(\pm\sqrt{\frac{a+1}{2}}\) _are also critical points and are mapped to_ \(\infty\)_, which is a repelling fixed point of_ \(C_{a}\)_. Hence the poles are in the Julia set._ _In order to determine the existence of Fatou components different from the basins of the superattracting fixed points, the forward orbits of the four simple critical points of_ \(C_{a}\) _need to be followed. These are_ \(c_{1},\ -c_{1}\) _and_ \(c_{2},-c_{2}\)_, where_ \(c_{1}\) _and_ \(c_{2}\) _are the principal square roots of_ \(\frac{(2+i\sqrt{3})(a+1)}{14}\) _and_ \(\frac{(2-i\sqrt{3})(a+1)}{14}\) _respectively. Following the general practice, we call these free critical points._

2. _If_ \(c\) _is a free critical point then_

\[C_{a}(c)=\frac{(42c^{8}+Ac^{6}+Bc^{4}+Cc^{2}+D)c^{2}+a^{2}(a+1)}{8c^{3}\{2c^{2}-(a+1)\}^{3}}.\]

_For_ \(c=c_{1}\)_,_

\[C_{a}(c)=\frac{7^{3}[(42c^{8}+Ac^{6}+Bc^{4}+Cc^{2}+D)(2+i\sqrt{3})+14a^{2}]}{8c(a+1)^{3}(2+i\sqrt{3})(-5+i\sqrt{3})^{3}}.\]

_Further, if_ \(a\) _is real then_

\[C_{a}(c)=\frac{R(a)+i\sqrt{3}S(a)}{8^{3}c(a+1)^{3}(-47+i8\sqrt{3})}\]

_where_ \(R(a)=-(1345a^{4}+28508a^{3}+48838a^{2}+28508a+1345)\) _and_ \(S(a)=-3(111a^{4}-732a^{3}+3802a^{2}-732a+111)\)_. Since_ \(c_{1}=\bar{c_{2}}\)_,_

\[C_{a}(c_{2})=-\frac{R(a)-i\sqrt{3}S(a)}{8^{3}c(a+1)^{3}(47+i8\sqrt{3})}.\]

_Note that if_ \(C_{a}(c)=0\) _then_ \(R(a)=0\) _and_ \(S(a)=0\)_. But_ \(R(a)\) _and_ \(S(a)\) _can be shown not to have any common root.
_Hence the critical values corresponding to the free critical points are non-zero whenever_ \(a\) _is real._

## 3 Dynamics of \(C_{a}\) for real parameter

In view of Equation 3, for every real \(\lambda\notin\{-1,0,1\}\), \(C_{\lambda}\) is conjugate to \(C_{a}\) for some \(a\in(-1,0)\cup(0,1)\). The study of the dynamics of \(C_{a}\) is undertaken in this section. We start with an almost obvious but useful observation. We say a Fatou component \(U^{\prime}\) lands on a Fatou component \(U\) if there is a \(k\geq 0\) such that \(C_{a}^{k}(U^{\prime})=U\). Note that every Fatou component lands on each of its iterated forward images and, in particular, periodic Fatou components land on themselves. By Sullivan's No Wandering Domains theorem, every Fatou component of a rational map lands on a periodic Fatou component.

**Lemma 3.1**.: _For every non-zero real \(a\), \(C_{a}\) preserves the real as well as the imaginary axis. Further, the following are true about the Fatou set of \(C_{a}\)._

1. _If a Fatou component intersects either the real axis or the imaginary axis then it does not land on any rotation domain (Siegel disk or Herman ring)._

2. _No Fatou component intersects both the real and the imaginary axis._

3. _The Julia component containing the origin is unbounded._

Proof.: Since all the coefficients of the denominator and numerator of \(C_{a}\) are real, \(C_{a}(\mathbb{R})\subseteq\mathbb{R}\). Now for every \(y\in\mathbb{R}\),

\[C_{a}(iy)=i\left[\frac{42y^{10}-Ay^{8}+By^{6}-Cy^{4}+Dy^{2}-a^{2}(a+1)}{8y^{3}\{2y^{2}+(a+1)\}^{3}}\right] \tag{9}\]

where \(A,\ B,\ C\) and \(D\) are defined in Equation 5. This shows that the imaginary axis is preserved by \(C_{a}\).

1. We prove this assuming that a Fatou component intersects the real axis; the proof in the other case, when the Fatou component intersects the imaginary axis, is exactly the same. Let a Fatou component \(U^{\prime}\) intersect the real axis and let \(r^{\prime}\in U^{\prime}\cap\mathbb{R}\). If \(U^{\prime}\) lands on a rotation domain \(U\) with period \(p\), then \(C_{a}^{n^{\prime}}(U^{\prime})=U\) for some \(n^{\prime}\geq 0\). Now \(C_{a}^{kp}(r)\in U\) for all \(k\), where \(r=C_{a}^{n^{\prime}}(r^{\prime})\in U\). Since \(C_{a}^{kp}:U\to U\) is conformally conjugate to an irrational rotation \(z\mapsto e^{i2\pi\theta}z\) (for some irrational number \(\theta\)) on a disk or on an annulus about the origin, the set \(\{C_{a}^{kp}(r)\}_{k>0}\) is dense in a Jordan curve contained in \(U\). Since \(C_{a}^{kp}(r)\) is real for all \(k\), this Jordan curve can only be \(\mathbb{R}\cup\{\infty\}\). But \(\infty\) is not in the Fatou set, leading to a contradiction.

2. If a Fatou component of \(C_{a}\) intersects both the real and the imaginary axis then the periodic Fatou component \(U\) on which it lands intersects both the axes and hence cannot be a rotation domain by (1) of this lemma. Therefore, it must be an attracting or parabolic domain. The corresponding periodic point is in the intersection of both the axes, because both the axes are invariant under \(C_{a}\). In other words, the periodic point can only be either \(0\) or \(\infty\). Since \(C_{a}(0)=\infty\) and \(\infty\) is a fixed point, the periodic point must be \(\infty\). But \(\infty\) is a repelling fixed point, leading to a contradiction.

3. If the Julia component containing the origin is bounded then a Jordan curve can be found in the Fatou set surrounding the origin, i.e., the bounded component of the complement of the Jordan curve contains the origin.
The Fatou component containing this Jordan curve has to intersect both the real and the imaginary axes, which is not possible by (2) of this lemma. Therefore, the Julia component containing the origin is unbounded.

There are some important consequences. We say a Fatou component surrounds a point if the point is in a bounded component of its complement.

**Theorem 3.2**.: _If \(U^{\prime}\) is a multiply connected Fatou component of \(C_{a}\) then there is an \(m\) such that \(C_{a}^{m}(U^{\prime})\) is a Fatou component surrounding a non-zero pole. If \(U\) is a periodic Fatou component on which \(C_{a}^{m}(U^{\prime})\) lands then \(U\) is an attracting or parabolic domain corresponding to a real periodic point. In particular, there is no Herman ring for \(C_{a}\)._

Proof.: Consider a Jordan curve \(\gamma\subset U^{\prime}\) such that each of its complementary components intersects the Julia set. Since the Julia set is the closure of the backward orbit of any point in it (see Theorem 4.2.7 (ii), [1]) and all the poles are in the Julia set of \(C_{a}\), there is an \(m\) such that \(C_{a}^{m}(\gamma)\) surrounds a pole of \(C_{a}\). Indeed, \(m\) can be taken as the smallest natural number such that \(C_{a}^{m}\) is analytic in the bounded component of the complement of \(\gamma\), and the preceding statement follows from the Open Mapping Theorem. The curve \(C_{a}^{m}(\gamma)\) cannot surround the pole at the origin by Lemma 3.1(3), and therefore it surrounds a non-zero pole. The Fatou component \(C_{a}^{m}(U^{\prime})\) containing \(C_{a}^{m}(\gamma)\) clearly surrounds this non-zero pole of \(C_{a}\). If \(U\) is a periodic Fatou component on which \(C_{a}^{m}(U^{\prime})\) lands then \(U\) intersects the real line, because the real line is invariant under \(C_{a}\) and \(C_{a}^{m}(U^{\prime})\) intersects the real line. Since \(U\) cannot be a rotation domain, it is either an attracting domain or a parabolic domain. Again, the invariance of the real line under \(C_{a}\) gives that the periodic point corresponding to \(U\) must be real. Since a Herman ring is multiply connected, it would have to land on an attracting or parabolic domain, which is absurd. Therefore \(C_{a}\) does not have any Herman ring.

Note that there are two non-zero poles, and the Julia component containing one is bounded if and only if that containing the other is bounded. This is because \(C_{a}\) is an odd function, giving that \(z\in\mathcal{J}(C_{a})\) if and only if \(-z\in\mathcal{J}(C_{a})\). The boundedness of such Julia components determines the connectedness of the whole Julia set.

**Corollary 3.2.1**.: _The Julia set of \(C_{a}\) is connected if and only if the Julia component containing a non-zero pole is unbounded._

Proof.: If the Julia set of \(C_{a}\) is connected then there is only a single Julia component. Since \(\infty\) and the non-zero poles are in the Julia set, the Julia component containing a non-zero pole is unbounded. Conversely, if the Julia set is not connected then there is a multiply connected Fatou component, say \(U^{\prime}\). By Theorem 3.2, there is an \(m\) such that \(C_{a}^{m}(U^{\prime})\) surrounds a non-zero pole. In particular, the Julia component containing this non-zero pole is bounded. In other words, if the Julia component containing a non-zero pole is unbounded then the Julia set is connected.

For \(0<a<1\), all the roots of \(p_{a}\) are real, whereas for \(-1<a<0\), \(p_{a}\) has two real and two purely imaginary roots. These two cases need to be treated differently.
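As an aside (our illustration, not part of the paper), the axis invariance from Lemma 3.1 is easy to check numerically from the closed form of \(C_{a}\) in Equation 5; the parameter and the test points below are arbitrary and avoid the poles.

```python
def C_a(z, a):
    """Chebyshev's method of p_a(z) = (z^2 - 1)(z^2 - a), per Equation 5."""
    A = -51 * (a + 1)
    B = 4 * (5 * a**2 + 3 * a + 5)
    C = -3 * (a**3 - 7 * a**2 - 7 * a + 1)
    D = -6 * a * (a**2 + 3 * a + 1)
    num = 42 * z**10 + A * z**8 + B * z**6 + C * z**4 + D * z**2 + a**2 * (a + 1)
    return num / (8 * z**3 * (2 * z**2 - (a + 1)) ** 3)

a = 0.5
for t in (0.3, -1.7, 2.4):                          # none of these is a pole
    assert abs(C_a(complex(t, 0), a).imag) < 1e-12  # real axis -> real axis
    assert abs(C_a(complex(0, t), a).real) < 1e-12  # imaginary axis -> imaginary axis
print("axis invariance verified numerically for a =", a)
```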
Before we consider them separately, two useful lemmas are presented.

**Lemma 3.3** (Symmetry about the coordinate axes).: _The map \(z\mapsto-\bar{z}\) preserves the Fatou and the Julia sets of \(C_{a}\) for all real non-zero \(a\). In particular, if a Fatou component contains a real (or a purely imaginary) number then it is symmetric about the real axis (or the imaginary axis, respectively)._

Proof.: Since \(C_{a}\) is odd and all the coefficients of its denominator as well as its numerator are real, \(C_{a}(-\bar{z})=-\overline{C_{a}(z)}\). Therefore, if \(\psi(z)=-\bar{z}\) then \(C_{a}^{n}(z)=\psi^{-1}\circ C_{a}^{n}\circ\psi(z)\) for all \(n\) and all \(z\). This gives that \(\psi(\mathcal{F}(C_{a}))=\mathcal{F}(C_{a})\) and \(\psi(\mathcal{J}(C_{a}))=\mathcal{J}(C_{a})\).

If \(z\) is an extraneous fixed point of \(C_{a}\) then its multiplier is given by the formula \(\lambda(z)=2[3-L_{p^{\prime}_{a}}(z)]\) (see [4]), which is nothing but

\[\lambda(z)=2\left[3-\frac{12z^{2}\{2z^{2}-(a+1)\}}{\{6z^{2}-(a+1)\}^{2}}\right]. \tag{10}\]

**Lemma 3.4**.: _All the real extraneous fixed points of \(C_{a}\) are repelling for every non-zero \(a\in(-1,1)\)._

Proof.: For every \(a\in(-1,1)\setminus\{0\}\), if \(\xi\) denotes the positive square root of \(\frac{a+1}{2}\) then \(\lambda(z)\) can be rewritten as

\[\lambda(z)=2\left[3-\frac{6z^{2}\{z^{2}-\xi^{2}\}}{\{3z^{2}-\xi^{2}\}^{2}}\right].\]

In order to determine the multipliers of the real extraneous fixed points, we need to analyse this function on the real line. It suffices to do so on \(\{x\in\mathbb{R}:x\geq 0\}\), as the function \(z\mapsto\lambda(z)\) is even. The derivative \(\lambda^{\prime}(x)=-\frac{24\xi^{2}x(x^{2}+\xi^{2})}{(3x^{2}-\xi^{2})^{3}}>0\) for all \(x\in(0,\frac{\xi}{\sqrt{3}})\), and therefore \(\lambda(x)\) is increasing in this interval. Consequently, \(\lambda(x)>6\) for all \(x\in(0,\frac{\xi}{\sqrt{3}})\). Similarly, \(\lambda^{\prime}(x)=-\frac{24\xi^{2}x(x^{2}+\xi^{2})}{(3x^{2}-\xi^{2})^{3}}<0\) for all \(x>\frac{\xi}{\sqrt{3}}\). Since \(\lim\limits_{x\rightarrow+\infty}\lambda(x)=\frac{14}{3}\), \(\lambda(x)\) is a strictly decreasing function in \((\frac{\xi}{\sqrt{3}},+\infty)\) with infimum \(\frac{14}{3}\). Consequently, \(\lambda(x)>\frac{14}{3}\) for all real \(x\). Thus the multiplier of every real extraneous fixed point of \(C_{a}\) is at least \(\frac{14}{3}\). Hence every real extraneous fixed point of \(C_{a}\) is repelling. The graphs of \(\lambda(x)\) are given in Figure 1.

Figure 1: The graph of \(\lambda\)

### Positive parameter

First, we determine the location of the extraneous fixed points. Recall that there are six extraneous fixed points of \(C_{a}\) and these are the solutions of \(Q(z)=0\), where \(Q(z)\) is as given in Lemma 2.1(3).

**Lemma 3.5**.: _All the extraneous fixed points of \(C_{a},0<a<1\) are in \((-1,1)\) and hence are repelling._

Proof.: In view of Lemma 3.4, it is enough to show that all the extraneous fixed points of \(C_{a}\) are in \((-1,1)\). Let \(w=z^{2}\) and

\[f(w)=22w^{3}-23(a+1)w^{2}+(5a^{2}+16a+5)w-a(a+1)\]

(see Lemma 2.1(3)). Then \(f(0)=-a(a+1)<0\), \(f(a)=4a(1-a)^{2}>0\), \(f(\frac{a+1}{2})=-\frac{1}{2}(1-a)^{2}(1+a)<0\) and \(f(1)=4(1-a)^{2}>0\). Therefore, \(f\) has a root in each of the intervals \((0,a)\), \((a,\frac{a+1}{2})\) and \((\frac{a+1}{2},1)\), and the square roots of these roots are precisely the extraneous fixed points of \(C_{a}\).
If \(a_{1},a_{2},a_{3}\) are the positive extraneous fixed points in decreasing order then

\[0<a_{3}<\sqrt{a}<a_{2}<\sqrt{\frac{a+1}{2}}<a_{1}<1,\]

and the other three extraneous fixed points \(-a_{1},-a_{2},-a_{3}\) satisfy

\[-1<-a_{1}<-\sqrt{\frac{a+1}{2}}<-a_{2}<-\sqrt{a}<-a_{3}<0.\]

Thus all the extraneous fixed points of \(C_{a}\) for \(0<a<1\) are in \((-1,1)\).

The graph of \(C_{0.5}(x)\), \(x\in\mathbb{R}\), is shown in Figure 2(a). Green dots represent the extraneous fixed points, blue dots along with \(\pm 1\) are the superattracting fixed points of \(C_{0.5}\) corresponding to the roots of \(p_{0.5}\), whereas the poles are indicated by the red dots.

From Equation (6), for \(x\in\mathbb{R}\), we have

\[C^{\prime}_{a}(x)=\frac{3(x^{2}-1)^{2}(x^{2}-a)^{2}\{28x^{4}-8(a+1)x^{2}+(a+1)^{2}\}}{8x^{4}\{2x^{2}-(a+1)\}^{4}}.\]

As \(28x^{4}-8(a+1)x^{2}+(a+1)^{2}\) is not zero for any real \(x\) and is positive at \(x=0\) (see Equation 7), \(C^{\prime}_{a}(x)\geq 0\) for all real \(x\). Therefore, \(C_{a}\) is increasing in \(\mathbb{R}\). Further,

\[C_{a}(x)-x=-\frac{11(x^{2}-1)(x^{2}-a)(x^{2}-a_{1}^{2})(x^{2}-a_{2}^{2})(x^{2}-a_{3}^{2})}{32x^{3}(x^{2}-\frac{a+1}{2})^{3}} \tag{11}\]

where \(a_{1},\ a_{2},\ a_{3}\) are as mentioned in Lemma 3.5. Recall that \(C_{a}\) has four superattracting fixed points, namely \(-1,\ -\sqrt{a},\ \sqrt{a},\ 1\), and these are all real. Let \(\mathcal{A}_{-1},\ \mathcal{A}_{-\sqrt{a}},\ \mathcal{A}_{\sqrt{a}},\ \mathcal{A}_{1}\) be their respective immediate basins of attraction.

**Theorem 3.6**.: _The immediate basins \(\mathcal{A}_{-1}\) and \(\mathcal{A}_{1}\) contain \((-\infty,-a_{1})\) and \((a_{1},\infty)\) respectively. In particular, these are unbounded and their respective boundaries contain a pole._

Proof.: It follows from Equation 11 that for every \(x<-1\), \(C_{a}(x)-x>0\). Since \(C_{a}:(-\infty,-1)\to(-\infty,-1)\) is strictly increasing, \(C_{a}^{n}(x)>x\) for every \(n\in\mathbb{N}\). This implies that \(\{C_{a}^{n}(x)\}_{n>0}\) converges to \(-1\) for every \(x\in(-\infty,-1)\), i.e., \((-\infty,-1]\subset\mathcal{A}_{-1}\). Now \(C_{a}\) maps \([-1,-a_{1})\) onto itself and \(C_{a}(x)<x\) there by Equation 11. Since \(C_{a}\) is strictly increasing in this interval, \(\lim\limits_{n\to\infty}C_{a}^{n}(x)=-1\) for all \(x\in[-1,-a_{1})\). In other words, \((-\infty,-a_{1})\subset\mathcal{A}_{-1}\) and in particular \(\mathcal{A}_{-1}\) is unbounded. Since \(\mathcal{A}_{-1}\) is invariant and unbounded, it follows from Lemma 4.3, [4] that its boundary contains a pole. As \(C_{a}\) is odd, \((a_{1},\infty)\subset\mathcal{A}_{1}\) and \(\mathcal{A}_{1}\) is also unbounded. Further, its boundary contains a pole.

**Remark 3.1**.:

1. _A similar analysis using Equation 11, as done in the proof of Theorem 3.6, gives that \((-a_{2},-a_{3})\subset\mathcal{A}_{-\sqrt{a}}\) and \((a_{3},a_{2})\subset\mathcal{A}_{\sqrt{a}}\)._

2. _Since \(C_{a}\) is strictly increasing in \((-a_{1},-\sqrt{\frac{a+1}{2}})\) and \(C_{a}(-a_{1})=-a_{1}\), the left hand limit of \(C_{a}\) at \(-\sqrt{\frac{a+1}{2}}\) is \(+\infty\). Hence \(C_{a}\) has a unique root in \((-a_{1},-\sqrt{\frac{a+1}{2}})\). Now \((-a_{2},-a_{3})\) is contained in \(\mathcal{A}_{-\sqrt{a}}\) and therefore does not contain any root of \(C_{a}\) (as roots are in the Julia set). Repeating the same argument for \(C_{a}\) in \((-a_{3},0)\), it is found that \(C_{a}\) has a unique root between \(-a_{3}\) and \(0\).
Since \(C_{a}\) is odd, it has two positive roots, one in \((0,a_{3})\) and the other in \((\sqrt{\frac{a+1}{2}},a_{1})\)._

Recall from Equation 9 that \(C_{a}(iy)=i\varphi_{a}(y)\) where

\[\varphi_{a}(y)=\frac{42y^{10}-Ay^{8}+By^{6}-Cy^{4}+Dy^{2}-a^{2}(a+1)}{8y^{3}\{2y^{2}+(a+1)\}^{3}}.\]

Here \(A,B,C,D\) are as given in Equation 5. Further, \(\varphi_{a}\) is a real-valued function defined on the real axis and \(\varphi_{a}=\alpha^{-1}\circ C_{a}\circ\alpha\), where \(\alpha(y)=iy\). Note that for all real non-zero \(y\),

\[\varphi_{a}^{\prime}(y)=\frac{3(y^{2}+1)^{2}(y^{2}+a)^{2}\{28y^{4}+8(a+1)y^{2}+(a+1)^{2}\}}{8y^{4}\{2y^{2}+(a+1)\}^{4}}>0. \tag{12}\]

It is enough to study the function \(\varphi_{a}\) on the real line in order to understand \(C_{a}\) on the imaginary axis. First we look at the possible zeros of \(C_{a}\) on the imaginary axis.

**Lemma 3.7**.: _The function \(C_{a}\) has exactly two purely imaginary roots._

Proof.: Consider \(q(x)=42x^{5}-Ax^{4}+Bx^{3}-Cx^{2}+Dx-a^{2}(a+1)\). Then \(q(0)=-a^{2}(a+1)<0\) and \(q(1)=42-A+B-C+D-a^{2}(a+1)=4\{29+5a(1-a)+a(4-a^{2})\}>0\). Since \(\lim\limits_{y\to 0^{+}}\varphi_{a}(y)=-\infty\), there is a \(y_{0}\in(0,1)\) such that \(\varphi_{a}(y_{0})<0\). As \(\varphi_{a}(1)=\frac{q(1)}{8(2+a+1)^{3}}>0\), \(\varphi_{a}\) has a root in \((0,1)\). This root is unique as \(\varphi_{a}\) is strictly increasing by Equation 12. It follows from the discussion preceding this lemma that \(C_{a}\) has two purely imaginary roots.

Here is a remark.

**Remark 3.2**.: _Note that \(0\) is a critical point of \(\varphi_{a}\), and \(\varphi_{a}\) is an increasing function on the negative real axis. As \(C_{a}\) has no fixed point on the imaginary axis, the same is true for \(\varphi_{a}\). Since \(\lim\limits_{y\to 0^{-}}\varphi_{a}(y)=+\infty\), \(\varphi_{a}(y)>y\) for all \(y<0\). As \(\varphi_{a}\) is an odd function, \(\varphi_{a}(y)<y\) for all \(y>0\) (see Figure 2b)._

Though the imaginary axis does not contain any fixed point of \(C_{a}\), the existence of periodic or pre-periodic points on the imaginary axis cannot be ruled out.

**Observation 1**.: _The imaginary axis contains two-periodic points of \(C_{a}\), and those are repelling._

Proof.: Let \(\zeta\) be the positive root of \(\varphi_{a}\) and \(I=(0,\zeta)\). Then \(\varphi_{a}(I)=(-\infty,0)\) and \(\varphi_{a}(\varphi_{a}(I))=\mathbb{R}\). In fact, \(\varphi_{a}^{2}\) maps \(I\) bijectively onto \(\mathbb{R}\). The branch \(g\) of \((\varphi_{a}^{2})^{-1}\) such that \(g(\mathbb{R})=I\) is a contraction. By the Contraction Mapping Principle, \(\varphi_{a}^{2}\) has a fixed point in \(I\). As \(\varphi_{a}\) does not have any fixed point on the real line by Remark 3.2, this fixed point of \(\varphi_{a}^{2}\) is a two-periodic point of \(\varphi_{a}\). Further, it is attracting for \(g\) and hence repelling for \(\varphi_{a}^{2}\). Since \(\varphi_{a}(y)=-iC_{a}(iy)\), \(C_{a}\) has a repelling 2-periodic point on the positive imaginary axis.

**Remark 3.3**.: _Using similar arguments, it can be seen that \(\varphi_{a}\) has a two-periodic point in \((\zeta,+\infty)\). Indeed, it is in the same cycle as the two-periodic point mentioned in Observation 1._

As \(\varphi_{a}^{\prime}(y)\) is real for every real \(y\), \(C_{a}^{\prime}(z)\) is a real number for every purely imaginary \(z\). Therefore, the periodic points of \(C_{a}\) lying on the imaginary axis cannot be irrationally indifferent. Thus, these are attracting, rationally indifferent or repelling.
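A numerical companion to Observation 1 (ours, not part of the paper): for a sample parameter, the two-cycle of \(\varphi_{a}\) in \(I\) can be located by bisection and its multiplier evaluated via Equation 12. The choice \(a=0.5\), the bracketing intervals and the step counts are arbitrary.

```python
def phi(y, a):
    """phi_a(y) = -i * C_a(iy), per Equation 9."""
    A = -51 * (a + 1)
    B = 4 * (5 * a**2 + 3 * a + 5)
    C = -3 * (a**3 - 7 * a**2 - 7 * a + 1)
    D = -6 * a * (a**2 + 3 * a + 1)
    num = 42 * y**10 - A * y**8 + B * y**6 - C * y**4 + D * y**2 - a**2 * (a + 1)
    return num / (8 * y**3 * (2 * y**2 + (a + 1)) ** 3)

def dphi(y, a):
    """phi_a'(y), per Equation 12."""
    return (3 * (y**2 + 1) ** 2 * (y**2 + a) ** 2
            * (28 * y**4 + 8 * (a + 1) * y**2 + (a + 1) ** 2)
            / (8 * y**4 * (2 * y**2 + (a + 1)) ** 4))

def bisect(f, lo, hi, steps=200):
    """Plain bisection; assumes a sign change of f on [lo, hi]."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = 0.5
zeta = bisect(lambda y: phi(y, a), 1e-6, 1.0)                   # positive root of phi_a
p = bisect(lambda y: phi(phi(y, a), a) - y, 1e-3, zeta * (1 - 1e-9))
mult = dphi(phi(p, a), a) * dphi(p, a)                          # multiplier of the 2-cycle
print(p, phi(p, a), mult)                                       # expect |mult| > 1 (repelling)
```

The printed multiplier exceeding one in modulus is consistent with the two-periodic point on the imaginary axis being repelling.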
If these are repelling then we have an important consequence.

**Lemma 3.8**.: _If all the periodic points of \(C_{a}\) lying on the imaginary axis are repelling then the imaginary axis is in the Julia set of \(C_{a}\)._

Proof.: Suppose on the contrary that there is a Fatou component \(U^{\prime}\) intersecting the imaginary axis. Let \(U\) be the periodic Fatou component on which \(U^{\prime}\) lands. Then \(U\) intersects the imaginary axis and, by Lemma 3.1, \(U\) cannot be a rotation domain. The other possibility, that \(U\) is an attracting domain or a parabolic domain, would imply that the corresponding attracting or parabolic periodic point must be purely imaginary, which is contrary to the hypothesis. This completes the proof.

Though each multiply connected Fatou component is restricted in the sense that it lands on a Fatou component intersecting the real axis, their existence cannot be completely ruled out. We are able to show that not all immediate basins of the superattracting fixed points of \(C_{a}\) corresponding to the roots of \(p_{a}\) are multiply connected.

**Theorem 3.9**.: _At least two immediate basins of attraction corresponding to the roots of \(p_{a}\) are simply connected._

Proof.: If none of the immediate basins \(\mathcal{A}_{1},\mathcal{A}_{-1},\ \mathcal{A}_{\sqrt{a}},\mathcal{A}_{-\sqrt{a}}\) contains any free critical point then these are simply connected by Theorem 3.9, [6]. If there is a free critical point, say \(\eta\), in an immediate basin of attraction \(\mathcal{A}_{\zeta}\) of \(\zeta\in\{-1,-\sqrt{a},\sqrt{a},1\}\) then \(\bar{\eta}\in\mathcal{A}_{\zeta}\). This is because each Fatou component containing a real number is symmetric about the real axis by Lemma 3.3. Further, it follows from the same lemma that \(-\eta,-\bar{\eta}\in\mathcal{A}_{-\zeta}\). Therefore, the other two immediate basins of superattracting fixed points contain no critical points other than the respective roots of \(p_{a}\). Hence these two immediate basins of attraction (corresponding to the roots of \(p_{a}\)) are simply connected by Theorem 3.9, [6].

Here is a remark.

**Remark 3.4**.: _Let the boundary of \(\mathcal{A}_{1}\) (or \(\mathcal{A}_{-1}\)) contain a non-zero pole. If \(\mathcal{A}_{1}\) is simply connected then the Julia component containing this non-zero pole is unbounded, and it follows from Corollary 3.2.1 that the Julia set is connected. If \(\mathcal{A}_{1}\) is not simply connected then \(\mathcal{A}_{\sqrt{a}}\) and \(\mathcal{A}_{-\sqrt{a}}\) are simply connected by Theorem 3.9._

We have the following result about the symmetry group of the Julia set of \(C_{a}\).

**Theorem 3.10**.: _If the Fatou set of \(C_{a}\) consists only of the basins of attraction of the superattracting fixed points of \(C_{a}\) then \(\Sigma p_{a}=\Sigma C_{a}\)._

Proof.: It is known that \(\Sigma p_{a}\subseteq\Sigma C_{a}\) and that every element of \(\Sigma C_{a}\) is a rotation about the origin, the centroid of \(p_{a}\) (Theorem 1.1, [5]). It is shown in Theorem 3.6 that the immediate basins \(\mathcal{A}_{\pm 1}\) are unbounded. As \(\infty\) is a repelling fixed point, by Lemma 3.2, [5], every other Fatou component landing on these immediate basins \(\mathcal{A}_{\pm 1}\) is bounded. If \(\mathcal{A}_{\pm\sqrt{a}}\) are bounded then every Fatou component landing on them is bounded, as \(\infty\) is a fixed point. On the other hand, if \(\mathcal{A}_{\pm\sqrt{a}}\) are unbounded then Lemma 3.2, [5] gives that every other Fatou component landing on \(\mathcal{A}_{\pm\sqrt{a}}\) is bounded.
Hence the Fatou set of \(C_{a}\) contains at most four unbounded components. Let \(\sigma\in\Sigma C_{a}\). Then \(\sigma(\mathcal{A}_{1})\) cannot be equal to \(\mathcal{A}_{-\sqrt{a}}\) or \(\mathcal{A}_{\sqrt{a}}\), and therefore \(\sigma(\mathcal{A}_{1})=\mathcal{A}_{1}\) or \(\mathcal{A}_{-1}\). Thus \(\sigma\) is either the identity or \(z\mapsto-z\). Since \(z\mapsto-z\) is the only non-identity element of \(\Sigma p_{a}\), we have \(\sigma\in\Sigma p_{a}\). Thus \(\Sigma C_{a}\subseteq\Sigma p_{a}\) and hence \(\Sigma C_{a}=\Sigma p_{a}\).

**Remark 3.5**.:

1. _If \(a>0\) and the Fatou set of \(C_{a}\) consists only of the basins of the fixed points of \(C_{a}\) corresponding to the roots of \(p_{a}\) (all of which are real) then there is no Fatou component intersecting the imaginary axis. This is because the imaginary axis is invariant and no Fatou component can intersect both the axes. Therefore, the imaginary axis is in the Julia set of \(C_{a}\) whenever \(a>0\) and \(\mathcal{F}(C_{a})\) consists only of the basins of the fixed points of \(C_{a}\) corresponding to the roots of \(p_{a}\)._

2. _It is not known whether the non-zero poles are on the boundary of \(\mathcal{A}_{\pm 1}\) or not._

3. _For \(a>0\), it is believed (but not yet proved) that the immediate basins \(\mathcal{A}_{\pm\sqrt{a}}\) are unbounded. This is supported by Figure 3(a), which is generated using MATLAB._

Figure 3(a) illustrates the Fatou and the Julia sets of \(C_{0.5}\). The largest regions in deep blue, blue, yellow and deep yellow represent the immediate basins of attraction of \(-1,-\sqrt{a},\sqrt{a}\) and \(1\) respectively. All the smaller regions in deep blue belong to the basin (but not the immediate basin) of \(-1\); the same holds for the smaller regions in the other three colours. The Julia set is the complement of the union of these four basins.

Figure 3: The Julia sets of \(C_{a}\)

### Negative parameters

We now deal with \(p_{a}(z)=(z^{2}-1)(z^{2}-a)\) for \(-1<a<0\). Let \(a=-b\) where \(b\in(0,1)\), so that

\[p_{-b}(z)=(z^{2}-1)(z^{2}+b)\]

for \(0<b<1\). Then the Chebyshev's method of \(p_{-b}\), denoted by \(C_{-b}\), is given by

\[C_{-b}(z)=\frac{42z^{10}+\tilde{A}z^{8}+\tilde{B}z^{6}+\tilde{C}z^{4}+\tilde{D}z^{2}+b^{2}(1-b)}{8z^{3}\{2z^{2}-(1-b)\}^{3}} \tag{13}\]

where \(\tilde{A}=-51(1-b)\), \(\tilde{B}=4(5b^{2}-3b+5)\), \(\tilde{C}=3(b^{3}+7b^{2}-7b-1)\) and \(\tilde{D}=6b(b^{2}-3b+1)\). The critical points of \(C_{-b}\) are \(\pm 1,\pm i\sqrt{b},0,\pm\sqrt{\frac{1-b}{2}}\), each with multiplicity two, and the solutions of \(28z^{4}-8(1-b)z^{2}+(1-b)^{2}=0\) (see Equation 7), which are all simple. If \(z\) is such a solution then \(z^{2}=\frac{2\pm i\sqrt{3}}{14}(1-b)\). We prove, as in the positive parameter case, that all extraneous fixed points are repelling.

**Lemma 3.11**.: _All the extraneous fixed points of \(C_{a}\) for \(-1<a<0\) are repelling._

Proof.: The extraneous fixed points are the solutions of Equation 8. Let \(w=z^{2}\) and \(f(w)=22w^{3}-23(1-b)w^{2}+(5b^{2}-16b+5)w+b(1-b)\). Then \(f(-b)=-4b(1+b)^{2}<0\), \(f(0)=b(1-b)>0\), \(f(\frac{1-b}{2})=-\frac{1}{2}(1-b)(1+b)^{2}<0\) and \(f(1)=4(1+b)^{2}>0\). Therefore \(f\) has a root in each of the intervals \((-b,0),\ (0,\frac{1-b}{2})\) and \((\frac{1-b}{2},1)\), and the square roots of these roots are precisely the extraneous fixed points of \(C_{-b}\). There are four real and two purely imaginary extraneous fixed points.
If \(b_{1},b_{2}\) are the positive extraneous fixed points in decreasing order then

\[-1<-b_{1}<-\sqrt{\frac{1-b}{2}}<-b_{2}<0<b_{2}<\sqrt{\frac{1-b}{2}}<b_{1}<1.\]

It follows from Lemma 3.4 that these four (real) extraneous fixed points are repelling. If \(ib_{3},-ib_{3}\) are the purely imaginary extraneous fixed points with \(b_{3}>0\), then \(ib_{3}\) is a square root of the negative root of \(f\) (lying in \((-b,0)\)) and therefore

\[-\sqrt{b}<-b_{3}<0<b_{3}<\sqrt{b}.\]

If \(z=iy\) is an extraneous fixed point of \(C_{-b}\) then its multiplier is given by \(2\left[3-\frac{6y^{2}\{y^{2}+\tilde{\xi}^{2}\}}{\{3y^{2}+\tilde{\xi}^{2}\}^{2}}\right]\), where \(\tilde{\xi}=\sqrt{\frac{1-b}{2}}>0\). Let \(\tilde{\lambda}:\mathbb{R}\rightarrow\mathbb{R}\) be defined by

\[\tilde{\lambda}(y)=2\left[3-\frac{6y^{2}\{y^{2}+\tilde{\xi}^{2}\}}{\{3y^{2}+\tilde{\xi}^{2}\}^{2}}\right].\]

Then

\[\tilde{\lambda}^{\prime}(y)=\frac{24\tilde{\xi}^{2}y(y^{2}-\tilde{\xi}^{2})}{(3y^{2}+\tilde{\xi}^{2})^{3}}. \tag{14}\]

Since \(\tilde{\lambda}\) is even, it is enough to analyse it in \([0,+\infty)\). Since \(C_{-b}^{\prime}(ib_{3})=\tilde{\lambda}(b_{3})\), \(ib_{3}\) is a repelling fixed point of \(C_{-b}\) if and only if \(|\tilde{\lambda}(b_{3})|>1\). We are going to establish this by showing that \(\tilde{\lambda}(y)>1\) for all \(y\in(0,\sqrt{b})\); this suffices because \(0<b_{3}<\sqrt{b}\). There are two cases, depending on whether \(0<b\leq\frac{1}{3}\) or \(\frac{1}{3}<b<1\).

If \(0<b\leq\frac{1}{3}\) then \(\tilde{\xi}^{2}\geq b\) and \(\tilde{\lambda}^{\prime}(y)<0\) for all \(y\in(0,\sqrt{b})\). Therefore \(\tilde{\lambda}\) is a decreasing function in \((0,\sqrt{b})\) and consequently, \(\tilde{\lambda}(y)\geq\tilde{\lambda}(\sqrt{b})=6\left[\frac{21b^{2}+6b+1}{(5b+1)^{2}}\right].\) Letting \(\tilde{s}(b)=\frac{21b^{2}+6b+1}{(5b+1)^{2}}\), it is seen that \(\tilde{s}^{\prime}(b)=\frac{12b-4}{(5b+1)^{3}}<0\) in \((0,\frac{1}{3})\) and the minimum value is attained at \(b=\frac{1}{3}\). Therefore \(\tilde{s}(b)\geq\tilde{s}(\frac{1}{3})=\frac{3}{4}\) for all \(0<b\leq\frac{1}{3}\). This gives that \(\tilde{\lambda}(y)\geq\frac{9}{2}>1\) for all \(y\in(0,\sqrt{b})\).

If \(\frac{1}{3}<b<1\) then \(\tilde{\xi}^{2}<b\) and \(\tilde{\lambda}\) has a critical point \(\tilde{\xi}\) in the interval \((0,\sqrt{b})\) (see Equation 14). Indeed, \(\tilde{\lambda}\) decreases in \((0,\tilde{\xi})\) and then increases in \((\tilde{\xi},\sqrt{b})\), attaining its minimum at \(\tilde{\xi}\). Therefore \(\tilde{\lambda}(y)\geq\tilde{\lambda}(\tilde{\xi})=\frac{9}{2}>1\) for any \(y\in(0,\sqrt{b})\). This concludes the proof.

Unlike the case of positive parameter, all the superattracting basins corresponding to the roots of \(p_{a}\) are found to be unbounded in this case.

**Theorem 3.12**.: _All immediate basins corresponding to the superattracting fixed points of \(C_{-b}\) are unbounded._

Proof.: Recall that the superattracting fixed points of \(C_{-b}\) are \(-1,1,i\sqrt{b}\) and \(-i\sqrt{b}\). In view of Lemma 3.3, it is enough to prove that the immediate basins \(\mathcal{A}_{1}\) and \(\mathcal{A}_{i\sqrt{b}}\) corresponding to \(1\) and \(i\sqrt{b}\), respectively, are unbounded. To show that \(\mathcal{A}_{1}\) is unbounded, we need to analyse the iterative behaviour of \(C_{-b}\) on \(\mathbb{R}\).
For \(x\in\mathbb{R}\),

\[C^{\prime}_{-b}(x)=\frac{3(x^{2}-1)^{2}(x^{2}+b)^{2}\{28x^{4}-8(1-b)x^{2}+(1-b)^{2}\}}{128x^{4}\{x^{2}-\tilde{\xi}^{2}\}^{4}},\]

where \(\tilde{\xi}\) is the positive square root of \(\frac{1-b}{2}\). Since all the simple critical points of \(C_{-b}\) are non-real, \(C^{\prime}_{-b}(x)>0\) for every \(x\in\mathbb{R}\setminus\{\pm 1,0,\pm\tilde{\xi}\}\). In particular, \(C_{-b}\) is increasing in \([1,\infty)\). Also,

\[C_{-b}(x)-x=-\frac{11(x^{2}-1)(x^{2}+b)(x^{2}-b_{1}^{2})(x^{2}-b_{2}^{2})(x^{2}+b_{3}^{2})}{32x^{3}(x^{2}-\tilde{\xi}^{2})^{3}}\]

where \(\pm b_{1},\pm b_{2}\) are the real and \(\pm ib_{3}\) the purely imaginary extraneous fixed points of \(C_{-b}\) (see Lemma 3.11). It follows from Lemma 3.11 that \(b_{1}^{2},b_{2}^{2},\tilde{\xi}^{2}<1\), which gives that \(C_{-b}(x)<x\) for all \(x>1\). Now \(\{C_{-b}^{n}(x)\}_{n>0}\) is a decreasing sequence which is bounded below by \(1\) for each \(x\in[1,\infty)\). Therefore, \(\lim\limits_{n\to\infty}C_{-b}^{n}(x)=1\) and hence \([1,\infty)\subseteq\mathcal{A}_{1}\). Figure 4(a) illustrates the case when \(b=0.5\).

To show that \(\mathcal{A}_{i\sqrt{b}}\) is unbounded, first note that \(C_{-b}(iy)=i\varphi_{-b}(y)\) for \(y\in\mathbb{R}\), where

\[\varphi_{-b}(y)=\frac{42y^{10}-\tilde{A}y^{8}+\tilde{B}y^{6}-\tilde{C}y^{4}+\tilde{D}y^{2}-b^{2}(1-b)}{8y^{3}\{2y^{2}+(1-b)\}^{3}},\]

with \(\tilde{A}=-51(1-b)\), \(\tilde{B}=4(5b^{2}-3b+5)\), \(\tilde{C}=3(b^{3}+7b^{2}-7b-1)\) and \(\tilde{D}=6b(b^{2}-3b+1)\). This follows from Equation 13. The dynamics of \(C_{-b}\) on the imaginary axis is the same as that of \(\varphi_{-b}\) on the real line. The unboundedness of \(\mathcal{A}_{i\sqrt{b}}\) will be proved by showing that for each \(y\in(\sqrt{b},\infty)\), \(\lim\limits_{n\to\infty}\varphi_{-b}^{n}(y)=\sqrt{b}\). Observe that

\[\varphi_{-b}^{\prime}(y)=\frac{3(y^{2}+1)^{2}(y^{2}-b)^{2}\{28y^{4}+8(1-b)y^{2}+(1-b)^{2}\}}{128y^{4}\{y^{2}+\tilde{\xi}^{2}\}^{4}}.\]

The equation \(28y^{4}+8(1-b)y^{2}+(1-b)^{2}=0\) has no real root (else \(y^{2}\) would be equal to \(\frac{-2\pm i\sqrt{3}}{14}(1-b)\), which is not possible). Therefore \(\varphi_{-b}^{\prime}(y)>0\) for every \(y\in\mathbb{R}\setminus\{0,\pm\sqrt{b}\}\). Since

\[\varphi_{-b}(y)-y=-\frac{11(y^{2}+1)(y^{2}-b)(y^{2}+b_{1}^{2})(y^{2}+b_{2}^{2})(y^{2}-b_{3}^{2})}{32y^{3}(y^{2}+\tilde{\xi}^{2})^{3}}<0,\]

we have \(\varphi_{-b}(y)<y\) for all \(y>\sqrt{b}\). Therefore \(\{\varphi_{-b}^{n}(y)\}_{n>0}\) is a decreasing sequence which is bounded below by \(\sqrt{b}\), and hence \(\lim\limits_{n\to\infty}\varphi_{-b}^{n}(y)=\sqrt{b}\) for all \(y>\sqrt{b}\) (see Figure 4b).

**Remark 3.6**.: _Following an argument similar to the one used in the proof of Theorem 3.12, it can also be shown that \(\lim\limits_{n\to\infty}C_{-b}^{n}(x)=1\) whenever \(x\in(b_{1},1)\), where \(b_{1}\) is the extraneous fixed point of \(C_{-b}\) lying in \((\tilde{\xi},1)\), and that \(\lim\limits_{n\to\infty}\varphi_{-b}^{n}(y)=\sqrt{b}\) for all \(y\in(b_{3},\sqrt{b})\), where \(ib_{3}\) is the purely imaginary extraneous fixed point of \(C_{-b}\) lying on \((0,i\sqrt{b})\). Thus \((b_{1},\infty)\subset\mathcal{A}_{1}\) and \((ib_{3},\infty)\subset\mathcal{A}_{i\sqrt{b}}\), where \((ib_{3},\infty)\) denotes an interval on the imaginary axis._

The next theorem establishes the simple connectedness of the immediate basins of the purely imaginary superattracting fixed points of \(C_{-b}\). The connectedness of the Julia set is also proved under a condition.
**Theorem 3.13**.: _The immediate basins \(\mathcal{A}_{i\sqrt{b}}\) and \(\mathcal{A}_{-i\sqrt{b}}\) are simply connected. If there is a non-zero pole on the boundary of any of these immediate basins then the Julia set of \(C_{-b}\) is connected._ Proof.: In view of Lemma 3.3, it is enough to prove this theorem for \(\mathcal{A}_{i\sqrt{b}}\). The immediate basin \(\mathcal{A}_{i\sqrt{b}}\) does not intersect the real line by Lemma 9 and all the poles of \(C_{-b}\) are real. It follows from the arguments used in the proof of Theorem 3.2 that it is simply connected. The Julia component containing the origin is unbounded by Lemma 3.1(3). If there is a non-zero pole on the boundary of \(\mathcal{A}_{i\sqrt{b}}\) then this non-zero pole is also in the unbounded Julia component. We are now done by Corollary 3.2.1. That the symmetry groups of \(p_{-b}\) and \(C_{-b}\) coincide in some case is now proved. **Theorem 3.14**.: _If the Fatou set of \(C_{-b}\) consists only of the basins of attraction of the superattracting fixed points of \(C_{-b}\) and \(b_{1}\neq b_{3}\) where \(b_{1}\) is the largest positive extraneous fixed point \(C_{-b}\) and \(ib_{3}\) is the extraneous fixed point of \(C_{-b}\) lying on the imaginary axis then \(\Sigma p_{-b}=\Sigma C_{-b}\)._ Proof.: First note that \(\Sigma p_{-b}\subseteq\Sigma C_{-b}\) and every element of \(\Sigma C_{-b}\) is a rotation about the origin, the centroid of \(p_{-b}\) (Theorem 1.1. [5]). Following the proof of Theorem 3.10, we get that there are exactly four unbounded components in \(\mathcal{F}(C_{-b})\). Therefore, for any \(\sigma\in\Sigma C_{-b}\), \(\sigma(\mathcal{A}_{1})\) is either \(\mathcal{A}_{\pm i\sqrt{b}}\) or \(\mathcal{A}_{-1}\). From Remark 3.6, we get \((b_{1},\infty)\subset\mathcal{A}_{1}\), whereas, the interval \((ib_{3},\infty)\) on the imaginary axis is in \(\mathcal{A}_{i\sqrt{b}}\). Thus if \(\sigma(\mathcal{A}_{1})=\mathcal{A}_{i\sqrt{b}}\) then \(\sigma((b_{1},\infty))=(ib_{3},\infty)\). As the extraneous fixed points \(b_{1}\) and \(ib_{3}\) are in the Julia set, this can only possible whenever \(b_{1}=b_{3}\), that contradicts our assumption. Therefore \(\sigma(\mathcal{A}_{1})\neq\mathcal{A}_{i\sqrt{b}}\). Since two purely imaginary extraneous fixed points are with same modulus, by the similar argument we get \(\sigma(\mathcal{A}_{1})\neq\mathcal{A}_{-i\sqrt{b}}\). Therefore \(\sigma(\mathcal{A}_{1})=\mathcal{A}_{-1}\). As \(\sigma\) is an arbitrary element in \(\Sigma C_{-b}\), we get \(\Sigma C_{-b}=\{I,z\mapsto-z\}\). Since \(z\mapsto-z\) is the only non-identity element of \(\Sigma p_{-b}\), \(\Sigma C_{-b}\subseteq\Sigma p_{-b}\). The Fatou set of \(C_{-0.5}\) is given in Figure 3b. The regions with deep blue, blue, yellow and deep yellow signify the basins of attraction of the four super attracting fixed points of \(C_{-0.5}\). The largest region of each colour is the respective immediate basin. The Julia set of \(C_{-0.5}\) is the complement of the union of these four basins. Lastly, we provide the following table illustrating some comparisons between the two cases: \(a<0\) and \(a>0\). \begin{table} \begin{tabular}{c|c} \hline \hline \(a>0\) & \(a<0\) \\ \hline \hline The real and imaginary axes are invariant and \({\cal J}(C_{a})\) is symmetric with respect to both the axes. \\ \hline All roots of \(p_{a}\) and poles of \(C_{a}\) are critical points of \(C_{a}\) with multiplicity two each. 
## 4 Declarations

### Funding
The second author is supported by the University Grants Commission, Govt. of India.

### Conflicts of interest/Competing interests
Not Applicable.

### Data Availability statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

### Code availability
Not Applicable
2305.19489
Spiking ruby revisited: self-induced periodic spiking oscillations leading to chaotic state in a Cr:Al2O3 laser with cw 532-nm pumping
This paper reexamines a 60-year-old mystery of spiking behavior in ruby lasers with a cw 532-nm pump, paying special attention to mode matching between the pump and lasing beam within a ruby crystal placed in a semi-confocal laser cavity. Periodic spiking oscillations were observed in a limited pump power regime, where spikes obeying the generic asymmetric hyperbolic function appeared at a repetition rate around 50 kHz and with a 130-150 ns width and 0.1-0.6 microjoule energy depending on the pump power. The physics of the spiking behavior based on Kleinman's mechanical approach and a plausible interpretation for the periodic spiking oscillation in terms of self-induced mode matching between the pump and laser beams through the self-induced Kerr-lens effect are addressed. The statistical nature inherent to spiking and the associated self-organized critical behavior in quasi-periodic spiking oscillations, as well as chaotic states occurring outside the periodic spiking regime, are also explored from a nonlinear dynamics point of view.
Kenju Otsuka, Seiichi Sudo
2023-05-31T01:56:50Z
http://arxiv.org/abs/2305.19489v3
Spiking ruby revisited: self-induced periodic spiking oscillations leading to chaotic state in a Cr:Al\({}_{2}\)O\({}_{3}\) laser with cw 532-nm pumping

###### Abstract

This paper reexamines a 60-year-old mystery of spiking behavior in ruby lasers with a cw 532-nm pump, paying special attention to mode matching between the pump and lasing beam within a ruby crystal placed in a semi-confocal laser cavity. Periodic spiking oscillations were observed in a limited pump power regime, where spikes obeying the generic asymmetric hyperbolic function appeared at a repetition rate around 50 kHz and with a 130-150 ns width and 0.1-0.6 \(\upmu\)J energy depending on the pump power. The physics of the spiking behavior based on Kleinman's mechanical approach and a plausible interpretation for the periodic spiking oscillation in terms of self-induced mode matching between the pump and laser beams through the self-induced Kerr-lens effect are addressed. The statistical nature inherent to spiking and the associated self-organized critical behavior in the quasi-periodic spiking oscillations, as well as chaotic states occurring outside the periodic spiking regime, are clarified from a nonlinear dynamics point of view, and intriguing statistical properties inherent to chaotic oscillations in usual solid-state lasers subjected to external modulations are shown to be present in the self-induced instabilities of the ruby laser.

## 1 Introduction

The tremendous progress that has been witnessed in the laser sciences and optical technologies all started with T. Maiman's ruby laser [1]. A certain problem that arose a few years after the development of the ruby laser was nearly forgotten until the advent of the Ar-laser-pumped cw ruby laser [2-6]. That is, when this laser was first demonstrated, its output was regarded as being completely stable under cw pumping at helium or nitrogen temperatures [2, 4]. Later, other investigators disagreed and reported dynamical instabilities [3, 5, 6]. It has become clear since then that ruby lasers can operate under both conditions. Not only can the laser be completely stable, but it can also operate in two instability conditions of chaotic and regular pulsing output [7]. The repetitive self-switching as well as the related undamped spiking phenomena still remain interesting unsolved mysteries in ruby and other lasers. There is to date still no definite understanding of the physical mechanism driving the instability. On the other hand, for cw argon-ion laser pumping at 514.5 nm, a two-order-of-magnitude spectral narrowing of the fluorescence linewidth is required, which is achieved by cooling the rod to liquid-nitrogen temperatures. Recently, two research groups have realized room-temperature cw ruby lasers by pumping at the optimum of the absorption profile at 405 nm with GaN laser diodes [8, 9], where both employed ruby crystals with a Cr\({}_{2}\)O\({}_{3}\) concentration of 0.05% and a c-axis at 90\({}^{\circ}\) relative to the rod axis. These cw ruby laser oscillations with GaN laser-diode pumping raise the intriguing question of whether room-temperature LD pumping at 405 nm can solve the over-60-year-old mystery of the spiking behaviors in ruby lasers. Here, we performed systematic experiments towards room-temperature cw operation of ruby lasers pumped with a frequency-doubled Nd:YAG laser at 532 nm. Unfortunately, non-spiking cw oscillations were not achieved, while self-induced periodic spiking oscillations leading to quasi-periodic spiking and chaotic states were observed.
The physics of the spiking behavior based on Kleinman's mechanical approach and a plausible interpretation of the periodic spiking oscillation in terms of self-induced mode matching between the pump and laser beams are addressed.

## 2 Experimental results

### Experimental apparatus and input-output characteristics

The experimental setup is shown in Fig. 1(a). A TEM\({}_{00}\) second-harmonic output (wavelength: 532 nm) of an LD-pumped, linearly-polarized TEM\({}_{00}\)-mode Nd:YAG laser (CNI MG-F-532; TEM\({}_{00}\) mode, M\({}^{2}=1.2\)), whose polarization direction was set perpendicular to the ruby c-axis by a half-wave plate, was focused on a 7-mm-thick, 10-mm-diameter a-cut ruby single crystal (Cr\({}^{3+}\):Al\({}_{2}\)O\({}_{3}\)) with a 0.03 wt% Cr concentration (SHINKOSHA Co., Ltd.); the absorption coefficient at 532 nm was measured to be \(\alpha\) = 2.1 cm\({}^{-1}\). The end surface was directly coated with a dielectric mirror M\({}_{1}\) (transmission at 532 nm: 88%; reflectance at 694 nm: 99.8%). A concave mirror M\({}_{2}\) (reflectance at 694 nm: 99%; radius of curvature R\({}_{2}=5\) cm) was placed at an optical length of nearly 2.5 cm from mirror M\({}_{1}\) to form the most stable semi-confocal laser cavity. The pump-beam spot size averaged over the crystal was made close to the semi-confocal cavity spot size (around 70 \(\mu\)m) by controlling the pump-beam diameter at a lens of focal length f = 15 cm, thereby decreasing the threshold pump power. The lasing pattern was measured by a mode profiler, and the output was detected by a Si photodiode (New Focus 1801M, 125-MHz bandwidth) followed by a digital phosphor oscilloscope (Tektronix DPO 2024, 200-MHz bandwidth). The pumping intensity fluctuation measured on the same time scale as the following experiment was at most \(\pm\)1% over the entire pump power regime. The input-output characteristics are shown in Fig. 1(b), where the far-field lasing pattern exhibited a TEM\({}_{00}\) transverse mode, as shown in the inset. The dynamic behavior was found to change with increasing pump power, as depicted in Fig. 1(b).

### Dynamic behavior

The present ruby laser exhibited multi-longitudinal-mode operation with modes separated by \(\Delta\)f = c/2nl = 12.1 GHz (i.e., \(\Delta\lambda=\lambda^{2}\)/2nl = 0.2 Å in spectrometer measurement), which resulted from the wavelength-selection effect of the 7-mm-thick active "crystal etalon" (c: velocity of light; refractive index n = 1.77; crystal length l = 7 mm) [10]. Such multimode oscillations in homogeneously broadened solid-state lasers, e.g., ruby lasers, were explained in terms of the spatial hole-burning effect of population inversions by C. L. Tang _et al._ in 1963 [11]. In addition, individual modes have been proven to exhibit self-organized collective behaviors such that the total output behaves just like that of a single-mode laser through the cross-saturation of population inversions, both in the vicinity of the stationary state and in large-signal regimes including chaos [12-14].

Figure 1: (a) Experimental apparatus. (b) Input-output characteristics. QS: quasi-periodic spiking regime, PS: periodic spiking regime, CS: chaotic state.

In the present experiment, dynamical properties were measured for the total output power focused on a Si photodiode without using a spectrometer. Therefore, the following dynamical behaviors are considered to represent the generic properties of a single-mode laser.
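As a quick arithmetic check of the quoted mode spacing, the following sketch evaluates \(\Delta f=c/2nl\) and \(\Delta\lambda=\lambda^{2}/2nl\) using the values stated above; the 694-nm wavelength is the ruby R\({}_{1}\) lasing line.

```python
# Quick check of the longitudinal-mode spacing quoted above.
c = 2.998e8        # speed of light (m/s)
n = 1.77           # refractive index of ruby
l = 7e-3           # crystal length (m)
lam = 694e-9       # R1 lasing wavelength (m)

df = c / (2 * n * l)                # mode spacing in frequency
dlam = lam**2 / (2 * n * l)         # equivalent spacing in wavelength
print(f"df   = {df / 1e9:.1f} GHz")             # ~12.1 GHz
print(f"dlam = {dlam / 1e-10:.2f} Angstrom")    # ~0.2 Angstrom
```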
Periodic spiking oscillations appeared in a limited pump power regime, PS, while the periodicity was undermined in the lower pump regime, QS, which featured quasi-periodicities; a chaotic state appeared in the higher pump regime, CS, above the PS regime, as depicted in Fig. 1(b). An example of a periodic spiking oscillation is shown in Fig. 2(a), together with the corresponding power spectrum and a map of the phase-space attractor [7]. As for the mapping, we used a total data length of \(T_{total}=\Delta\tau\times 10^{5}\), which was restricted by the digital oscilloscope, by setting the measurement time interval to \(\Delta\tau=4\) ns to reproduce the narrow spiking waveforms accurately. The pump-dependent repetition frequency, spike pulse width, and associated spike energy are shown in Fig. 2(b), where the peak output power was calibrated from the averaged output power shown in Fig. 1(b). The peak power of spikes within the cavity reached 100\(\sim\)400 W, reflecting extremely low duty cycles of 0.6\(\sim\)0.75%.

Figure 2: Periodic spiking oscillation. (a) Waveform, power spectrum, and Poincaré section at a pump power of P = 1040 mW. (b) Pump-dependent repetition frequency, peak power, pulse width, and energy of the spiking pulse.

Figure 3: (a) Quasi-periodic spiking oscillation featuring two competing frequencies, f\({}_{1}\) = 9.5 kHz and f\({}_{2}\) = 48.75 kHz, whose higher harmonic components merge in the vicinity of 390 kHz. (b) Chaotic state featuring alternating soft- and hard-mode chaos over time, which will be discussed again in Section 5.

Results for the quasi-periodic spiking and chaotic state are summarized in Fig. 3. They capture the qualitative difference in the phase-space trajectories such that isomorphic "multiple loops" appear in the quasi-periodic spiking, while "chaotic loops" featuring small and large orbits, which indicate alternate appearances of soft- and hard-mode chaos (see Section 5), appear in the chaotic state.

## 3 Statistical nature of spiking pulse

To provide physical insight into the periodic excitation of the spiking oscillations, let us introduce the Toda potential for the laser rate equations. According to Kleinman's mechanical approach, relevant to four-level as well as three-level systems like the ruby laser, which are governed by standard rate equations for population inversions and photon density [15, 16], the dynamics of the photon density can be understood in analogy with the motion of a particle in the following laser Toda potential, \(V\), after a logarithmic transformation of the photon density, \(u(t)\equiv\ln s(t)\) [17]:

\[\mathrm{d}^{2}u/\mathrm{d}t^{2}+\kappa(\mathrm{d}u/\mathrm{d}t)+\partial V/\partial u=F_{D}(t), \tag{1}\]
\[V=K[(e^{u}-1)-(w-1)u], \tag{2}\]
\[\kappa=1+e^{u}, \tag{3}\]

where \(\mathrm{w}=\mathrm{P}/\mathrm{P}_{\mathrm{th}}\) (P: pump power, P\({}_{\mathrm{th}}\): threshold pump power), \(\mathrm{K}=\tau/\tau_{\mathrm{p}}\) (\(\tau\): fluorescence lifetime, \(\tau_{\mathrm{p}}\): photon lifetime), and \(F_{D}\) is a driving force on the particle; the effect of spontaneous emission is neglected for the sake of brevity. Here, the damping rate, \(\kappa\), increases as the photon density, \(s(t)=e^{u(t)}\), increases, whereas it does not depend on \(u(t)\) in the original Toda oscillator [18]. The pump-dependent laser Toda potentials, \(V(u)\), are shown in Fig. 4(a). The particle moves within a highly asymmetric laser Toda potential with a damping rate, \(\kappa\).
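The damped relaxation toward the ground state discussed next can be reproduced by direct integration of Eqs. (1)-(3). The sketch below sets \(F_{D}=0\) and uses an illustrative stiffness \(K=10^{3}\) (the experimental \(K=\tau/\tau_{\mathrm{p}}\approx 1.7\times 10^{6}\) behaves qualitatively the same but makes the system much stiffer); time is in units of the fluorescence lifetime.

```python
# Undriven (F_D = 0) laser Toda oscillator, Eqs. (1)-(3); time in units of the
# fluorescence lifetime. K = 1e3 is an illustrative stiffness only.
import numpy as np
from scipy.integrate import solve_ivp

w, K = 2.0, 1.0e3                        # pump ratio w = P/P_th, stiffness K

def toda(t, state):
    u, v = state                         # u = ln s(t), v = du/dt
    kappa = 1.0 + np.exp(u)              # Eq. (3): intensity-dependent damping
    dVdu = K * (np.exp(u) - (w - 1.0))   # dV/du from Eq. (2)
    return [v, -kappa * v - dVdu]        # Eq. (1) with F_D = 0

u0 = np.log(w - 1.0) + 2.0               # displaced from the potential minimum
sol = solve_ivp(toda, (0.0, 10.0), [u0, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], np.log(w - 1.0))     # u relaxes to ln(w - 1), i.e. damped
                                         # relaxation oscillations (soft mode)
```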
Without a driving force, \(F_{D}\), in Fig. 4(a), the particle approaches the ground state, exhibiting damped relaxation oscillations. Hamiltonian motion around the ground state, i.e., periodic relaxation oscillations (soft mode), is established if a periodic driving force is applied at the relaxation oscillation frequency, \(f_{\mathrm{RO}}=(1/2\pi)[(\mathrm{w}-1)/\tau\tau_{\mathrm{p}}]^{1/2}\), such that the damping force, \(\kappa\left(\mathrm{d}u/\mathrm{d}t\right)\), balances the periodic driving force, \(F_{D}\) [17]. In addition to the soft mode, a spike-like waveform builds up within the asymmetric potential by tuning the strength and frequency of the driving force in the large-signal regime. In fact, periodic spiking oscillations (hard mode) were realized in semiconductor lasers through the use of deep injection-current modulation [19] and in solid-state lasers through the use of deep pump [20] or loss modulation [17] at \(f_{\mathrm{SP}}\) (\(<f_{\mathrm{RO}}\)).

Figure 4: (a) Pump-dependent laser Toda potentials. \(\mathrm{w}=\mathrm{P}/\mathrm{P}_{\mathrm{th}}\). (b) Zoomed-in view of spike waveform, together with the hyperbolic fitting curve. (c) Intensity probability distribution for the periodic spiking oscillations shown in Fig. 2(a).

Let us examine whether the self-induced periodic spiking oscillations shown in Fig. 2(a) obey the generic dynamic properties of spiking oscillations reported so far. The spike-pulse waveform is confirmed to be well fitted by the following hyperbolic function, as in ref. [17]:

\[s(t)=s_{p}\,\mathrm{sech}^{2}\left(\sqrt{\frac{s_{p}}{2\tau\tau_{\mathrm{p}}}}\,(t-t_{0})\right), \tag{4}\]

where \(t_{0}\) is the time at which the peak photon number occurs. A magnified view of a single spike-pulse waveform in the ruby laser and the hyperbolic fitting curve are shown in Fig. 4(b), assuming \(\tau=3.4\) ms and \(\tau_{\mathrm{p}}=2\) ns. A large coefficient of determination of \(\mathrm{R}^{2}=0.99\) is attained in this case. The intensity probability distribution \(P(s)\) for the periodic spiking oscillation corresponding to Fig. 2(a) reaches a minimum at the inflection point, \(s_{c}=s_{p}\,\mathrm{sech}^{2}\left(\mathrm{arctanh}\frac{1}{\sqrt{3}}\right)=\frac{2}{3}s_{p}\), as shown in Fig. 4(c), and \(P(s)\) increases monotonically in the region \(s>s_{c}\) [14].
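The fit of Eq. (4) can be reproduced with SciPy's `curve_fit`, as sketched below with \(\tau=3.4\) ms and \(\tau_{\mathrm{p}}=2\) ns; the spike record here is synthetic (normalized units), so the recovered parameters are purely illustrative.

```python
# Fit of the spike waveform, Eq. (4), with scipy; tau and tau_p as in the text.
# The spike record below is synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

tau, tau_p = 3.4e-3, 2e-9                     # fluorescence / photon lifetimes (s)

def spike(t, s_p, t0):                        # Eq. (4)
    return s_p / np.cosh(np.sqrt(s_p / (2 * tau * tau_p)) * (t - t0))**2

t = np.linspace(0.0, 2e-6, 1000)              # 2-microsecond record
rng = np.random.default_rng(0)
data = spike(t, 700.0, 1e-6) + 5.0 * rng.normal(size=t.size)

popt, _ = curve_fit(spike, t, data, p0=[500.0, 0.9e-6],
                    bounds=([1.0, 0.0], [5000.0, 2e-6]))
print(popt)                                   # recovered [s_p, t0] ~ [700, 1e-6]
```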
## 4 Transverse mode formation within ruby crystal

### Thermal and Kerr lens effects

A peculiar statistical property relevant to the laser's spiking oscillations shown in the previous section strongly suggests that an appropriate self-induced driving force forms so as to cancel the damping force and thereby establish periodic motions preserving the potential energies, \(V(u_{\mathrm{M}})=V(u_{\mathrm{m}})=E\), over time, as in Fig. 4(a). Ever since the first report in 1968, self-induced pulsations in ruby lasers have drawn interest and have been given a number of plausible explanations in terms of "self-Q-switching" (SQS), based on the saturable-absorber effect inherent to impurities or the ground-state reabsorption effect in the unpumped region, as well as cavity misalignment [6, 21-23]. Here, we investigate the mechanism of self-induced periodic spiking oscillations in the present ruby laser by focusing on the matching between the pump and lasing beam profiles within the ruby crystal. First of all, let us examine the thermal lens effect, which modifies the transverse eigenmode to be formed in the laser resonator. The focal length of the thermal lens is given by

\[f_{T}=\frac{\pi w_{p}^{2}K_{T}}{2Q\left(\frac{dn}{dT}\right)}, \tag{5}\]
\[Q=P_{abs}\left(1-\frac{\nu_{l}}{\nu_{p}}\right), \tag{6}\]

where \(P_{abs}\) is the absorbed pump power, \(\nu_{p}\) and \(\nu_{l}\) are the pump and laser optical frequencies, \(w_{p}\) is the average pump spot size in the crystal, and \(K_{T}\) is the thermal conductivity [7]. The estimated focal length is shown by the red curve in Fig. 5(b) as a function of the pump power over the periodic spiking regime of Fig. 2(b), assuming \(K_{T}=0.092\) cal/cm\(\cdot^{\circ}\)C\(\cdot\)s and \(dn/dT=1.2\times 10^{-5}/^{\circ}\)C for the ruby crystal. In the case of a ruby laser, on the other hand, the complex nonlinear refractive index originates from the light-induced population changes in the excited and ground states of the Cr\({}^{3+}\) ion. Indeed, a huge _resonance-enhanced_ nonlinear refractive index of \(n_{2}=1.25\times 10^{-12}\) m\({}^{2}\)/W was reported in an experiment based on the differential interferometric technique using an argon-ion laser for the absorption band at 514.5 nm [24], which resulted in self-induced superluminal and subluminal group-velocity propagation in the pink ruby [25, 26]. Optical bistability was demonstrated in a Fabry-Perot cavity containing the pink ruby for input light nearly resonant with the R\({}_{1}\) line (694 nm) at input power levels below 20 mW, resulting from this huge nonlinear refractive index [27]. A large amount of frequency chirping was observed in a Q-switched ruby laser operating at 694 nm [28], which would correspond to a _resonance-enhanced_ \(n_{2}\)-value on the order of \(10^{-16}\) m\({}^{2}\)/W under lasing conditions, in reference to the values of various materials measured by the nonlinear transmittance method [29]. Here, we examine the Kerr-lens effect based on a nonlinear refractive index due to the dispersive contribution, which is expected to result from the population-density changes caused by high-intensity spikes, to provide new insights into self-induced spiking behaviors. The time-dependent focal length of a thin piece (thickness \(d\)) of a Kerr lens is given by

\[f_{K}=\pi w_{e}^{4}/8n_{2}dP_{c}(t), \tag{7}\]

where \(w_{e}\) is the effective lasing beam spot size and \(P_{c}(t)\) is the circulating intracavity laser power within the ruby crystal [30]. The focal length of the Kerr lens is determined by \(P_{c}\) on the basis of the experimental peak-power versus pump-power relation shown in Fig. 2(b), assuming 1% transmittance of the output mirror M\({}_{2}\), \(w_{e}(\cong w_{p})=72\) \(\mu\)m, and \(n_{2}d=10^{-18}\) m\({}^{3}\)/W.

### Mode matching between pump and lasing beams

In Fig. 5(b), the Kerr-lens effect, shown by the green curve, dominates the thermal-lens effect in the formation of the transverse eigenmode in the laser cavity. The equivalent laser cavity configuration incorporating the thermal- and Kerr-lens effects is depicted in Fig. 5(a), where the effective radius of curvature of the coated end surface is given by \(R_{1}=2f_{c}=2[f_{T}\cdot f_{K}/(f_{T}+f_{K})]\), with \(f_{c}\) being the focal length of the combined lens. The resultant pump-dependent beam-waist spot size, \(w_{0}\), and its position, \(z_{1}\), given by the following equations, are shown in Figs. 5(c) and 5(d), respectively.

Figure 5: (a) Equivalent laser-cavity configuration featuring thermal and Kerr lens effects.
(b) Pump-dependent focal lengths of thermal, Kerr, and combined lenses. Pump and effective lasing beam spot sizes are assumed to be 72 \(\mu\)m. (c), (d) Pump-dependent beam-waist spot size and its position within the cavity.

\[w_{0}=\sqrt{\frac{\lambda_{0}}{\pi}}\sqrt[4]{\frac{L(R_{1}+R_{2}-L)(R_{1}-L)(R_{2}-L)}{(R_{1}+R_{2}-2L)^{2}}}, \tag{8}\]
\[z_{1}=\frac{L(R_{2}-L)}{R_{1}+R_{2}-2L}. \tag{9}\]

The lasing mode profiles within the ruby laser crystal calculated using the \(w_{0}(z_{1})\) values are shown in Fig. 6 for the thermal lens only (upper traces) and for the combined lens incorporating the thermal and Kerr lens effects (lower traces), assuming \(R_{2}=5\) cm and an optical cavity length \(L=2.5\) cm, together with the pump profile. The mode matching between the pump and laser beams is found to be greatly improved by the Kerr-lens focusing for the spiking mode, where the gain medium itself acts as a "soft aperture" for the cw mode, in the terminology of Kerr-lens mode-locking. On the other hand, the laser's slope efficiency, \(S_{e}\), is known to depend on the _modal coupling efficiency_, \(\eta_{c}\), which is determined by the three-dimensional overlap integral of the pump beam, \(r(x,y,z)\), and the laser beam, \(s(x,y,z)\), in the form [31]

\[\eta_{c}=\frac{\left(\iiint r(x,y,z)\,s(x,y,z)\,\mathrm{d}V\right)^{2}}{\iiint r(x,y,z)^{2}\,\mathrm{d}V\,\iiint s(x,y,z)^{2}\,\mathrm{d}V}.\]

From this relation, the cavity loss decreases in proportion to the modal coupling efficiency \(\eta_{c}\). The equivalent time-dependent effective loss reduction associated with the time-dependent Kerr-lens effect (i.e., the lower traces in Fig. 6) is calculated to be \(\delta L_{c}/L_{c}=2.3\times 10^{-2}\) (2.3%) and \(5.1\times 10^{-2}\) (5.1%) with respect to the thermal lens only (i.e., the upper traces in Fig. 6). Here, \(\eta_{c}\) was calculated based on the experimentally determined beam profiles shown in Fig. 6(a) and Fig. 6(b). Such a dynamical loss reduction provides the laser with the possibility of encouraging spiking operation rather than cw operation through the Kerr-lens-mediated balance of the gain and cavity loss, which includes the ground-state reabsorption in the unpumped region in Fig. 6. An analysis similar to that of the periodic loss modulation in a laser Toda potential [17] shows that a spike-mediated driving force given by

\[F_{D}\left(t\right)=K\gamma\left(P_{c}^{\prime}(t)+P_{c}(t)\right) \tag{10}\]

arises, as depicted by the arrow in Fig. 4(a), where \(\gamma\) is determined in proportion to the \(n_{2}\)-value in the form \(\gamma\propto 8n_{2}d/w_{e}^{4}\).
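The chain of estimates in Eqs. (5)-(9) can be reproduced numerically, as sketched below. The absorbed pump power and the circulating intracavity power are illustrative values chosen to be of the order of the pump powers and spike peak powers quoted in Section 2; all other parameters follow the values stated in the text.

```python
# Chain of estimates, Eqs. (5)-(9): thermal lens, Kerr lens, combined lens, and
# the resulting cavity waist. P_abs and P_c are illustrative values only.
import numpy as np

lam = 694e-9                  # lasing wavelength (m)
w_p = 72e-6                   # average pump spot size in the crystal (m)
K_T = 0.092 * 418.4           # 0.092 cal/(cm.C.s) converted to W/(m.K)
dndT = 1.2e-5                 # dn/dT of ruby (1/K)
n2d = 1e-18                   # n2*d (m^3/W), as assumed in the text
R2, L = 5e-2, 2.5e-2          # mirror curvature and optical cavity length (m)

P_abs = 0.8                                     # absorbed pump power (W)
Q = P_abs * (1.0 - 532.0 / 694.0)               # Eq. (6): heat load (quantum defect)
f_T = np.pi * w_p**2 * K_T / (2.0 * Q * dndT)   # Eq. (5): thermal-lens focal length
P_c = 200.0                                     # intracavity spike power (W)
f_K = np.pi * w_p**4 / (8.0 * n2d * P_c)        # Eq. (7): Kerr-lens focal length
f_c = f_T * f_K / (f_T + f_K)                   # combined lens
R1 = 2.0 * f_c                                  # effective curvature of mirror M1

w0 = np.sqrt(lam / np.pi) * (L * (R1 + R2 - L) * (R1 - L) * (R2 - L)
                             / (R1 + R2 - 2.0 * L)**2) ** 0.25   # Eq. (8)
z1 = L * (R2 - L) / (R1 + R2 - 2.0 * L)                          # Eq. (9)
print(f_T, f_K, w0, z1)                         # f_K < f_T: the Kerr lens dominates
```

With these illustrative numbers, \(f_{K}<f_{T}\) and the computed waist is of the order of the \(\sim\)70-\(\mu\)m semi-confocal spot size, consistent with the mode-matching picture above.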
In the present ruby laser, the scaling factor is extremely large, \(K=1.7\times 10^{6}\), and a pronounced driving force is expected to counteract the dissipative resistive force, as if the particle were moving through a viscous medium. Note that self-Q-switching operation based on the Kerr-lens effect was reported most recently in laser-diode-pumped Nd:LuAG lasers, where SQS was further verified to occur in a numerical simulation of the laser rate equations including a photon-density-dependent intracavity loss similar to the one discussed above, accounting for the soft-aperture loss and the self-focusing caused by the Kerr lens [32, 33].

## 5 Statistical analysis of self-induced instability in ruby laser

Finally, let us examine the nonlinear dynamics, i.e., the quasi-periodic spiking oscillations and the chaotic state, observed outside the periodic spiking regime addressed in Section 2.2. Time-dependent analyses of singular-value decomposition (SVD) spectra [34] for Fig. 3(b) are shown in Fig. 7(a), where the analysis was carried out over a 4096-point data window for each calculation, shifted by 1024 points for the next calculation. The steep exponential decay in the first segment, followed by the noise floor, clearly indicates the existence of chaos in the present ruby laser.

Figure 7: (a) SVD spectra for the chaotic state. 3D statistical graphics for (b) quasi-periodic spiking and (c) the chaotic state. Fitting parameter values [\(E_{p,0}\), a, b, c, d]: (b) [-\(5.0\times 10^{6}\), \(2.91\times 10^{7}\), \(35.26\), -\(4.05\times 10^{9}\), -\(5.97\times 10^{7}\)]; (c) [-\(2.53\times 10^{6}\), \(3.80\times 10^{-7}\), \(8.41\), -\(4.48\times 10^{9}\), -\(4.02\times 10^{6}\)].

The intriguing relation between the three quantities of peak intensity, pulse width, and pulse energy (area) of the spikes is shown as 3D statistical graphics in Figs. 7(b) and 7(c) for the quasi-periodic spiking and the chaotic state. Note that individual pulses with different energies are self-organized to lie on the two-dimensional parabolic surface \(E_{p}=E_{p,0}+a\,s_{p}+b\,\Delta\tau_{p}+c\,s_{p}^{2}+d\,\Delta\tau_{p}^{2}\) in both cases. In particular, \(\mathrm{R}^{2}=0.999\) was obtained for the quasi-periodic spiking oscillation. In the case of chaotic self-spiking (hard-mode) oscillations in solid-state lasers subjected to strong external modulation, a high degree of self-organization, with \(\mathrm{R}^{2}=0.996\), was shown to underlie the inverse-power-law universality of their intensity probability distributions [35, 17], while occasional interruptions by small-amplitude chaotic relaxation oscillations (soft mode) during chaotic spiking oscillations (hard mode) result in a breakup of the inverse power law, giving rise to the appearance of a peculiar "slope" [17]. On the other hand, the present ruby laser violates the inverse power law, as shown in Fig. 8, where the peculiar "slope" appears to represent a quiet region of \(s(t)\) between the upper bound of the soft-mode chaotic intensity fluctuations and the lower bound of the hard-mode chaotic fluctuations. Accordingly, the self-organization is considered to be degraded to \(\mathrm{R}^{2}=0.963\). Note that the intriguing statistical nature inherent to chaotic oscillations, which are brought about in solid-state lasers usually by external modulations, is present in the self-induced instabilities in the three-level ruby laser.
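The parabolic-surface fit of Figs. 7(b) and 7(c) amounts to ordinary least squares on the basis \([1,\,s_{p},\,\Delta\tau_{p},\,s_{p}^{2},\,\Delta\tau_{p}^{2}]\). The sketch below demonstrates this on synthetic spikes whose width and energy follow the sech\(^{2}\) relations of Eq. (4); it does not use the experimental data.

```python
# Ordinary least squares for the parabolic surface of Figs. 7(b),(c):
# E_p = E_{p,0} + a*s_p + b*dtau + c*s_p**2 + d*dtau**2. The spikes below are
# synthetic, generated from the sech^2 relations of Eq. (4).
import numpy as np

tau, tau_p = 3.4e-3, 2e-9
rng = np.random.default_rng(1)
s_p = rng.uniform(200.0, 800.0, 200)            # normalized peak photon densities
Omega = np.sqrt(s_p / (2 * tau * tau_p))        # sech^2 rate from Eq. (4)
dtau = 1.76 / Omega                             # FWHM of a sech^2 pulse
E_p = 2 * s_p / Omega * (1 + 0.01 * rng.normal(size=s_p.size))   # pulse areas

A = np.column_stack([np.ones_like(s_p), s_p, dtau, s_p**2, dtau**2])
coef, *_ = np.linalg.lstsq(A, E_p, rcond=None)
pred = A @ coef
R2 = 1 - np.sum((E_p - pred)**2) / np.sum((E_p - E_p.mean())**2)
print(coef, R2)                                 # [E_p0, a, b, c, d], R^2 near 1
```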
## 6 Summary and outlook

In summary, self-induced spiking instability leading to chaos was demonstrated to occur in a ruby laser pumped with a continuous 532-nm laser. Dynamic behaviors in the form of periodic and quasi-periodic spiking oscillations and chaotic states were characterized by plotting waveforms, power spectra, and Poincaré sections. The intensity probability distribution of the periodic spiking oscillations was found to result from the inherent asymmetric hyperbolic spiking waveform. Spiking oscillations were explained in terms of a particle moving in the laser Toda potential. Precise analyses of the experimental results revealed that the lasing cavity modes were formed by the pump-induced thermal lens effect for the cw mode and that high-intensity spiking was associated with the dynamic Kerr-lens effect. The resulting enhanced mode matching between the pump and lasing beams was described in terms of the mode coupling efficiency by computing the overlap integrals of the two interacting beams, and the dynamics of the photon density were found to be analogous to those of a particle experiencing a driving force while moving through a viscous medium.

The quasi-periodic spiking and chaotic dynamics observed at pump powers outside the periodic spiking regime in the ruby laser were characterized statistically. A time-dependent singular-value decomposition (SVD) of an experimental time series showed that the chaotic spiking state appeared as the pump power increased.

Figure 8: Intensity probability distribution of the chaotic state.

3D statistical graphical analyses revealed an intriguing relation between the peak intensity, width, and energy of the spikes in the quasi-periodic and chaotic states, wherein individual pulse energies are self-organized so as to lie on a parabolic surface with a large coefficient of determination. Moreover, statistical properties inherent to chaotic oscillations, usually brought about in solid-state lasers by external modulation, were found in the self-induced instabilities of the three-level ruby laser.

The results of the present study pose a crucial question regarding stable cw operation of ruby lasers: How do different dynamical behaviors (i.e., either non-spiking oscillation or self-induced spiking oscillation) take place within essentially the same laser cavity configuration but at different pump wavelengths, e.g., 514.5 nm, 405 nm, and 532 nm? To the best of our knowledge, this until-now hidden property of pink ruby crystals has not been reported in other solid-state lasers, including ones made from four-level or quasi-three-level materials. A possible answer could lie in the critical dependence of the dynamic stability on the relation between the pump and lasing beam profiles within the ruby crystal, reflecting the pump-wavelength-dependent absorption coefficient as well as the pump beam focus on the crystal.

**Acknowledgements:** The authors thank Prof. Jing-Yuan Ko, National Kaohsiung Normal University, Taiwan, for his support in the SVD analysis.

**Disclosures:** The authors declare no conflicts of interest.

**Data availability:** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2310.00346
Fostering new Vertical and Horizontal IoT Applications with Intelligence Everywhere
Intelligence Everywhere is predicated on the seamless integration of IoT networks transporting a vast amount of data streams through many computing resources across an edge-to-cloud continuum, relying on the orchestration of distributed machine learning models. The result is an interconnected and collective intelligent ecosystem where devices, systems, services, and users work together to support IoT applications. This paper discusses the state-of-the-art research and the principles of the Intelligence Everywhere framework for enhancing IoT applications in vertical sectors such as Digital Health, Infrastructure, and Transportation/Mobility in the context of intelligent society (Society 5.0). It also introduces a novel perspective for the development of horizontal IoT applications, capable of running across various IoT networks while fostering collective intelligence across diverse sectors. Finally, this paper provides comprehensive insights into the challenges and opportunities for harnessing collective knowledge from real-time insights, leading to optimised processes and better overall collaboration across different IoT sectors.
Hung Cao, Monica Wachowicz, Rene Richard, Ching-Hsien Hsu
2023-09-30T11:59:39Z
http://arxiv.org/abs/2310.00346v1
# Fostering new Vertical and Horizontal IoT Applications with Intelligence Everywhere

###### Abstract

Intelligence Everywhere is predicated on the seamless integration of IoT networks transporting a vast amount of data streams through many computing resources across an edge-to-cloud continuum, relying on the orchestration of distributed machine learning models. The result is an interconnected and collective intelligent ecosystem where devices, systems, services, and users work together to support IoT applications. This paper discusses the state-of-the-art research and the principles of the Intelligence Everywhere framework for enhancing IoT applications in vertical sectors such as Digital Health, Infrastructure, and Transportation/Mobility in the context of intelligent society (Society 5.0). It also introduces a novel perspective for the development of horizontal IoT applications, capable of running across various IoT networks while fostering collective intelligence across diverse sectors. Finally, this paper provides comprehensive insights into the challenges and opportunities for harnessing collective knowledge from real-time insights, leading to optimised processes and better overall collaboration across different IoT sectors.

IoT collective intelligence, machine learning, edge intelligence, cloud computing, learning models, vertical IoT, IoT network, horizontal IoT applications, Society 5.0

_This article will appear in Collective Intelligence Journal._

## Introduction

The Internet of Things (IoT) has secured its position as one of the most promising technologies supporting many smart city applications. In many IoT applications, communication technologies play a key role in transmitting large amounts of data streams from sensors to far-edge, edge, fog, and cloud computing resources. Currently, we have countless IoT applications in many vertical sectors, which include but are not limited to digital health, agriculture, retail, manufacturing/industrial, supply chain, energy, transportation/mobility, and intelligent homes. All of these sectors utilise a certain communication standard and a mainstream set of protocols in order to send and receive continuous and accumulated data streams Ding et al. (2020). IoT networks are crucial for enabling effective machine learning (ML) models, ensuring they can handle the necessary ML tasks for any IoT application. However, many questions still remain around how IoT networks can support ML models in order to generate the collaboration and aggregation of diverse perspectives needed for collective intelligence to emerge. Typical questions are: Which type of IoT network fits the data life-cycle of an ML model, enabling the gathering of continuous versus accumulated data streams so that ML models can foster collective intelligence? Can an ML model improve the QoS of both IoT applications and IoT networks? Which learning capability (e.g., incremental learning or federated learning) should we choose for new horizontal IoT applications? Although more than 620 IoT platforms are available across academia and industry Scully (2020), most of the implemented ML tasks have been confined to optimization, prediction, and automation solutions.
To the best of our knowledge, no IoT platform has previously been specifically designed to fully support an Intelligence Everywhere ecosystem that promotes continuous data-driven decision-making, automation, and learning across a continuum that extends from the edge (where data is generated) to the cloud (where data is stored and analyzed). Therefore, the design of an Intelligence Everywhere ecosystem to foster new IoT applications remains an open research issue, posing several significant challenges. Limited knowledge still exists regarding the optimal selection of the hardware, software, and communication components of IoT networks that can lead to the collaborative pooling of data, ML models, and insights, enabling devices and users to learn from each other and generate intelligent collective behavior. In this paper, we introduce our proposed Intelligence Everywhere learning paradigm, which can lead to advances in developing new IoT applications and the IoT network itself. The time is ripe for exploring learning capabilities combined with IoT networking that, if harnessed properly, may deliver on expectations across many vertical IoT sectors and build up new horizontal IoT applications that allow users and IoT networks to make collaborative decisions. Towards this end, we have adopted Narayanan's definition of collective intelligence Narayanan et al. (2022), considering it as a decision-making approach in which intelligent and distributed IoT networks generate insights and feedback from their immediate environment and users. Together, they make collective decisions to perform tasks that lead to a common and desirable outcome. The contributions of our work in this paper are as follows.

* We explore the various specificities of many IoT networks and their interplay in the context of an Intelligence Everywhere learning paradigm. This paradigm is envisaged to preserve several data life-cycles of automated ML tasks by taking into account both the learning and resource capabilities that are connected by IoT networks.
* We compare federated learning and incremental learning to understand how they harness collective intelligence from distributed data sources, fostering collaborative decision-making and knowledge sharing across different IoT applications.
* We review three vertical sectors and their best-suited IoT networks for adopting Intelligence Everywhere learning paradigms in new IoT applications.
* As the final contribution, we present a comprehensive assessment of the challenges and opportunities that lie ahead, guiding future research directions in the field.

The remainder of this paper is organised as follows. Section II discusses the foundation of Intelligence Everywhere and our vision of its ecosystem. Section III discusses the important role of IoT networks in supporting Intelligence Everywhere learning paradigms. Section IV compares Intelligence Everywhere learning architectures. Next, we give several examples of potential IoT applications in vertical sectors and delineate the future perspective of horizontal IoT applications. Then, Section VI discusses several research challenges and opportunities from the viewpoint of Intelligence Everywhere Learning. Finally, the paper is concluded in Section VII.

## Learning through Intelligence Everywhere

Integrating machine learning in IoT networks containing millions of sensors can lead to significant advances in IoT applications and the network itself.
Accumulated and continuous data streams need to be analysed as they are being transferred through various computing resources at the far-edge, edge, fog, or cloud nodes of IoT networks. Data streams play a critical role in machine learning-driven IoT networks due to their impact on model accuracy and performance. Developing efficient intelligent models is becoming increasingly important to address future demands arising from IoT applications. This is especially critical due to the surge in large-scale and rapid data streams, which escalate model intricacy and lead to exponential growth in computational needs. Gill et al. (2022) reiterate that next-generation computing systems for machine learning need to consider hardware accelerators and non-volatile memory to provide a good match between ML models and the underlying storage and computational resources. We argue that the next generation of IoT networks for machine learning may take full advantage of our proposed Intelligence Everywhere learning paradigm, which will rely on the integration and interoperability of four capabilities as described below:

* _Resource Capability_, which consists of a network of distributed far-edge, edge, fog, and cloud nodes connected to IoT sensors that can provide computation, storage, I/O, and processing power for the execution of automated ML tasks.
* _Analytical Capability_, which consists of ML models that are deployed for the execution of automated ML analytical tasks. These tasks are integrated to enable devices and users to learn from each other and generate intelligent collective behavior.
* _Learning Capability_, specifically in the context of incremental and federated learning approaches, which plays a crucial role in enabling IoT networks to operate in dynamic, privacy-sensitive, and resource-constrained settings.
* _Data Life-Cycle Capability_, which manages the changes that streamed data go through during the execution of the automated ML tasks.

Fig. 1 illustrates our vision of an ecosystem where the global Intelligence Everywhere learning paradigm encompasses many platforms that are co-existing and sharing resource, analytical, learning, and data life-cycle capabilities towards an overarching goal of enabling data diversity, enhancing contextual understanding, and supporting distributed decision-making. A sound understanding of the intrinsic capabilities of an Intelligence Everywhere ecosystem depends on a systematic approach to gathering the massive data streams being generated from as many vertical IoT applications as possible, placing them in the context of the automated ML tasks at the right moment, and knowledgeably acting upon them. Among the latest machine learning techniques, ensemble learning has been the most researched, given its potential for dealing with the small sample sizes, high dimensionality, and complex data structures that can be found in various IoT applications. Finally, it is crucial to highlight the significance of exploring how these capabilities can foster collective intelligence within our proposed dynamic learning paradigm. Towards this end, feedback loops will play a pivotal role in strengthening the collective intelligence and effectiveness of IoT networks, paving the way for the development of the next generation of IoT applications.
## The important role of IoT Networks

In this section, we consider three specificities of IoT network technologies, known as Network Architecture Design (e.g., security protocols, associated protocols, topology, architecture), Network Performance Monitoring (e.g., latency, QoS, bandwidth per channel, data rate, packet length), and Network Management (e.g., power ratio, power-saving mechanism).

### Network Architecture Design

It refers to the process of planning, designing, and building an entire network. This process depends upon a set of protocols, such as associated and security protocols. Associated protocols are the ones that are working with or are supported by the respective communication technology. Security protocols are those that are solely responsible for the security of the network with respect to the communication technology. For example, routing protocols for low-power and lossy networks (RPL) and the advanced encryption standard (AES) are IoT-specific protocols for Wi-Fi. Depending on distinct network requirements, several types of network topologies can be used, such as point-to-point, bus, star, ring or circular, and hybrid. In this process, the connected resource capabilities will require always-on reachability, since a heterogeneous set of sensors will be connected to far-edge and edge resources using short-range networks. The data streams will be expected to stay in-memory for a limited period of time if required by an automated ML task, which will also depend on the data latency and the data rate of a communication network. To address these challenges, Guo et al. (2021) suggest hyper-convergence as the best-suited network architecture for analysing real-time data streams due to its unique virtualisation and higher fault-tolerance characteristics. More research is needed to study reachability as a critical requirement for returning well-timed and synchronised ML tasks. Additionally, the lack of standardization, particularly at edge resources and IoT sensors, is currently hampering the achievement of the always-on reachability required for implementing the proposed Intelligence Everywhere Learning paradigm.

### Network Performance Monitoring

Monitoring network performance is one of the key criteria in implementing our Intelligence Everywhere Learning paradigm. The scalability of these models relies on having the same network performance parameters, including latency, QoS, bandwidth per channel, data rate, compression, fragmentation capability, and packet length/capacity. The total latency is different for each communication technology because it is calculated by summing up three factors: (1) the time taken by the data stream itself to propagate from one device to another; (2) the time taken by the transmitter to transmit the content; and (3) the time taken by an end node to process the received content. QoS is another important attribute of a communication technology that usually enhances the comprehensive performance of the network by giving preferential treatment to higher-priority traffic over the network (i.e., end-to-end). However, it is critical to re-think how current practices in optimising network performance could be adapted to better suit the data life-cycle capabilities of our Intelligence Everywhere Learning ecosystem. The data life-cycles are processes for designing a seamless flow of data streams that serve as the data input and output of a sequence of distributed automated ML tasks in a network.
For example, depending on the requirements of an automated ML task, we could assign priority to specific traffic such as accumulated data streams. The majority of communication technologies support QoS features, which can easily be used to manage the performance of the automated ML tasks over the network. Data rate and packet length/capacity also play a critical role in strengthening the performance of ML tasks. Generally, lossy networks suffer performance issues with longer packet sizes (e.g., the IPv6 1280-byte MTU), and communication technologies such as 6LoWPAN have endeavored to create smaller packet sizes to improve performance.

Figure 1: The vision of our proposed Intelligence Everywhere learning ecosystem.

### Network Management

Once an IoT network infrastructure is deployed and secured, network management is focused on the reliability, efficiency, and capacity of the data transfer channels, taking into account the protocols, applications, and computational resources of a network. For example, target wake time (TWT) is a power-saving mechanism used in Wi-Fi 6. With the help of NetFlow, a network management mechanism that tracks all the flows (i.e., streams of packets with the same attributes, like source/destination address, port, and protocol), one can analyse the number of applications being used and the bandwidth used by individual applications. In contrast, very little is known about the behaviour of the data streams during the execution of automated ML tasks. Logical specifications are needed to reflect what actually happens to both the input and output data streams when running automated ML tasks. Hernandez et al. (2020) propose the use of Petri Nets to expose the actual control-flow patterns behind the execution of automated ML tasks running on an edge platform. This approach provides a basis for accurate conformance checking that can enable us to foster higher confidence levels in the accuracy of executed automated ML tasks. However, the challenge still remains to differentiate when bottlenecks occur due to inefficient network performance or due to the actual execution of an automated ML task.

## Intelligence Everywhere Learning Architectures

The key concept of Intelligence Everywhere learning is to replace communication in which data streams are sent to a cloud with peer-to-peer communication capable of sending data streams between any edge, far-edge, and cloud resources. These data streams might contain raw data from a sensor or the input/output of an automated ML task. The network topology can be represented as a connected graph in which the nodes are any computational resources and an edge indicates a communication channel between two resources. In contrast to a star topology, a network graph will send/receive data streams to/from a small number of nodes. In the context of machine learning, there is no longer a global ML model but instead multiple ML models that are designed to support automated ML tasks and incrementally reach a consensus at a global level Cao (2019). This opens the opportunity to combine a variety of learning techniques, including multi-task learning, transfer learning, active learning, representation learning, online learning, and ensemble learning.
It also opens the prospect of designing Intelligence Everywhere Learning paradigms that integrate descriptive, diagnostic, predictive, and prescriptive analytics by handling a variety of data sources, described as one of the following:

* _IoT network signaling data streams:_ audio, image, and video data transmission is always accompanied by control messages, also known as signaling data. Learning about network signaling patterns will ensure the regularity, reliability, efficiency, and security of networks, as well as of automated ML tasks at the far-edge computational nodes. This is where signaling and IoT sensor data can be analysed to provide vital information to many IoT applications, such as emergency response, medical retrieval, and telecare.
* _IoT network traffic streams:_ Traffic data can be analysed from many perspectives, from that of the device model, the service provider, or the user. In general, traffic data contains several main features and characteristics, including traffic volume, downlink and uplink traffic, network access time, subscribers, flow logs, and requests. Machine learning models are useful to enhance network management, uncover diurnal patterns, and enable performance analysis and prediction, security management, and failure detection. This knowledge is paramount for ensuring the performance of automated ML tasks for IoT applications in the industrial sector.
* _IoT localization data streams:_ GPS sensors, Wi-Fi, Bluetooth signals, call detail records, cell change updates, network measurement reports, and base station GIS info are examples of localization data streams. Automated ML tasks can balance network load and optimise network utilisation. This data also provides support for urban planning, intelligent transportation system management, rapid emergency responses, crime prevention activities, and demographic analyses.
* _IoT network waveform data streams:_ The 5G massive Multiple Input Multiple Output (MIMO) system is not only a communication system but also a source of modulated waveforms that can be analysed at the massive MIMO base stations in order to estimate the mobility of a user, whether motionless, at a slow speed, at a nearly constant speed, or at a high speed. It is tremendously useful for learning human behavior in the context of future smart cities.
* _IoT network operational and alarm data streams:_ Some peculiar communication systems, such as land mobile radio systems and RF systems, create massive amounts of system alarms and operational data daily. This is vital information for remote asset management. Analytics of this data can help to solve problems of system reliability and high maintenance costs.

It is worth noting that the Intelligence Everywhere Learning paradigm outlined in this paper still requires a learning architecture for setting up automated ML tasks that can collaboratively train an ML model running among a set of computational resources and IoT applications. Many learning mechanisms might fit into our learning paradigm, but the two most feasible options to consider are federated learning Mawuli et al. (2023) and incremental learning van de Ven et al. (2022), as shown in Table 1.
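To make the federated option concrete before the detailed comparison, the following minimal sketch (plain numpy) illustrates the "share model updates, not raw data" pattern summarised in Table 1; the linear model, node data, and constants are illustrative only, not a production federated-learning stack.

```python
# Minimal federated-averaging sketch (plain numpy): each node trains locally on
# its own data stream and sends only model updates upstream, as in the
# federated column of Table 1. Model, data, and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []                                  # each IoT node keeps raw data local
for _ in range(5):
    X = rng.normal(size=(50, 2))
    nodes.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

global_w = np.zeros(2)
for _round in range(30):                    # rounds orchestrated by the cloud
    updates = []
    for X, y in nodes:                      # local training on each node
        w = global_w.copy()
        for _ in range(5):                  # a few local gradient steps
            w -= 0.05 * (2 * X.T @ (X @ w - y) / len(y))
        updates.append(w)                   # only the update leaves the node
    global_w = np.mean(updates, axis=0)     # federated averaging
print(global_w)                             # converges to ~ [2.0, -1.0]
```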
Incremental learning allows an ML model to continuously learn from new data without forgetting previously acquired knowledge. One of the primary advantages of utilizing incremental learning models is their ability to handle concept drift, which is crucial for maintaining accuracy over time and facilitating the generation of collective intelligent behavior in vertical IoT applications. In contrast, federated learning is a distributed learning approach that enables multiple IoT devices or computing nodes to collaboratively train a global model while keeping their data local. The result is a learning capability that allows the global model to learn from diverse data sources, making it adaptable to supporting collective intelligent behaviour in horizontal IoT applications.

| | Federated Learning | Incremental Learning |
| --- | --- | --- |
| Data Distribution | Accumulated data streams are explored on multiple computational resources | Continuous and accumulated data streams are gathered on multiple computational resources |
| Data Partitioning | Only vertical learning | Horizontal and vertical learning |
| Training Setting | Training a model on multiple computational resources; only sending updates to the cloud | Training a model on multiple computational resources; sending updates to any other computational resource |
| Orchestration | Training organised in the cloud | Training organised by any computational resource at the edge, far-edge, or the cloud |
| ML Tasks | Only one ML task is performed (e.g., classification) | Different ML tasks are performed (e.g., classification and clustering) |
| Learning Process | Centralised learning but a distributed training process | Decentralised learning and a distributed training process |

Table 1: Main Characteristics of Federated and Incremental Learning Architectures

It is still premature to decide which learning architecture will prevail in the future. Human-oriented design and experimental machine learning data-driven approaches are the main paths to be explored when developing new horizontal IoT applications using these learning architectures. A holistic interdisciplinary perspective is needed to first identify which vertical sector needs have to be addressed, and then to combine these needs with communication technologies and machine learning models to amplify the potential for collective intelligence generation.

## Vertical and Horizontal IoT Applications

There are many different vertical sectors adopting IoT applications. Personal IoT applications have been deployed in smart homes, allowing owners to control appliances or devices around the house, and wearable devices are offering personal healthcare applications for monitoring physical activities. In general, they have shown a slow adoption trend in the last decade. In contrast, government and industrial IoT applications have been widely adopted, despite requiring collaboration among many stakeholders. The leading vertical sectors in IoT applications are manufacturing/industrial, transportation/mobility, energy, retail, digital health, supply chain, and agriculture Scully (2020). This section reviews several potential vertical sectors and their best-suited IoT networks for adopting Intelligence Everywhere learning paradigms: the digital health, infrastructure, and transportation sectors.

### Digital Health Sector

Table 2 summarises the most suitable, stable, and efficient communication technologies available for wearable-based IoT applications in the digital health sector. In a short-range communication scenario (e.g., wireless human sensing, gesture recognition), an RFID solution fits well because of its unique ID. With the advancement in Wi-Fi technologies, Wi-Fi 6 is a favorable solution for non-intrusive human sensing Zhang et al. (2022). Depending on the requirements of a wearable application, various network types (e.g., BAN, WPAN, and WLAN) can be selected. Aiming to address congestion and interference issues, academic and industry researchers are working on Wi-Fi 7 technology that operates in the 2.4 GHz, 5 GHz, and 6 GHz bands, with the promise of speeds four times faster than Wi-Fi 6 Chen et al. (2022). Z-Wave has a -20 to 0 dBm power ratio and a sleep/awake power-saving mechanism, making it an appropriate match for wearable applications from a network management perspective. Since users could manage smart lighting from their smartwatches (part of a BAN), Z-Wave (WPAN) would then give access to a smart hub. Considering energy harvesting and the low cost of tags, NFC is crucial for privacy, contactless payments, and other use cases such as fruit-ripeness sensing and NFC-based pH monitoring Bouda et al. (2022).
Regarding coverage range, RF and WirepasMesh can give us the flexibility to go beyond a 100-meter range with IoT wearables. From a security perspective, 6LoWPAN is an exemplary option as it only works with IPv6. Fig. 2 shows a communication scenario for IoT health applications operating in a BAN, where EOG, ECG, and EMG sensors are attached in the form of a small chipset within wearable devices like smartwatches, wristbands, smart shoes, clothes, or earbuds. These devices sense, interact, and transfer data to the receiving devices through Bluetooth 5 and Wi-Fi 6. Bluetooth 5 works on the 2.4 GHz band with a half/full-duplex mode of transmission and supports IPv6 and AES-128-bit security. Moreover, through polling, it provides QoS with a minimum of 3 ms latency and a 3 Mbps data rate within a 100-meter coverage area. Wi-Fi 6, and the integration of Wi-Fi 6 with IoT wearables, is one of the fastest-growing communication technologies because it brings key benefits to security, interoperability, flexibility, and improvement in data throughput. Most IoT applications are currently focusing on monitoring continuous data streams coming from wearable sensors for activity recognition Lee and Noor (2022). The aim is to intuitively gather information about the behavior of users and engage them in healthy physical activities. Several ML techniques, such as one-class K-means clustering, SVM, and CNN, are used for human/physical activity recognition. There is also a trend to analyze continuous data streams at the edge rather than in the cloud due to privacy issues Cao and Wachowicz (2019). The adoption of wearables in IoT applications in the digital health sector has recently increased in light of the COVID-19 pandemic. We expect increased demand for new IoT applications, such as wearable-enabled assistance, in response to increased care needs. Other applications, such as Aging in Place, where people will have the health and social support and services they need to live safely and independently in their homes and communities, will
### Infrastructure Sector

Table 3 sketches the significant communication technologies that are appropriate for medium-range coverage. Except for Wi-Fi HaLow and WirelessHART, the majority of them support IPv6. Several types of network, such as WLAN, LAN, CAN, and HAN, can be considered when designing and implementing medium-range intelligent home applications Du et al. (2022). Based on data rate, latency, power consumption, and QoS specifications, Z-Wave, BLE, Zigbee, and Thread are the preferred technologies for medium-range IoT systems Domb (2019). Regarding wired communication, Ethernet and optical fiber are considered the backbone for such IoT use cases. WirepasMesh and DigiMesh are both well matched for a mesh network, although WirepasMesh does not support a power-saving mechanism, which is an important disadvantage.

\begin{table}
\begin{tabular}{l l l l l l l l l l}
\hline \hline
**Communication Technologies** & **IPv6 Support** & **Duplex Mode** & **Type of Network** & **QoS** & **Latency (ms)** & **Data Rate (Mbps)** & **Power Ratio (dBm)** & **Power Saving Mechanism** & **Coverage Range (m)** \\
\hline
ANT & No & Full & WPAN & No & 7.5 - 15 & 1 & \(<\)17 & Yes & 30 \\
BLE (Bluetooth 4.x, 5) & Yes & Half / Full & BAN / WPAN & Yes & 3, 6 & 1, 2, 3 & 0 - 10 & Yes & 15 - 30, 50 - 70, 100 \\
Bluetooth & No & Half / Full & BAN / WPAN & No & 100 & 1 & 0 - 10 & Yes & 1, 10, 100 \\
EnOcean & Yes & Half & WPAN & No & 40 - 100 & 0.125 & 10 & Yes & 30 \\
Infrared & Yes & Half & WLAN & Yes & 175 & 12500 & 24 & No & 0.001 - 30 \\
NFC & Yes & Half & BAN / WPAN & Yes & 125 & 0.106, 0.212, 0.424 & 20 & No & 0.1 - 0.2 \\
RF & Yes & Half / Full & N/A & Yes & 4.2 & 0.02 - 10000 & 20 - 40 & No & 0.0001 - 100000000 \\
RFID & Yes & Half / Full & WPAN & Yes & 20 & 0.5 & 1.8 & Yes & 0.1 - 5 \\
Thread & Yes & Half & HAN / WPAN & No & 100 & 0.25 & 21 & Yes & 30 \\
Wi-Fi 6 & Yes & Half & LAN / CAN / WPAN / WLAN & Yes & 1.5 & \(>\)1000 & 32 & Yes & 60 - 1000 \\
WirepasMesh & Yes & Half / Full & WPAN & Yes & 10 & 1 & 5 & No & 100 - 10000 \\
Zigbee & Yes & Half / Full & WPAN & No &  & 0.02, 0.04, 0.25 & 20 & Yes & 10 - 1000 \\
Z-Wave & Yes & Half & WPAN & No & 3000 & 0.0096, 0.04, 0.1 & -20 - 0 & Yes & 30, 100 \\
6LoWPAN & Yes & Half & WPAN & Yes & 50 - 250 & 0.25 & 3 - 22 & Yes & 10 - 100 \\
\hline \hline
\end{tabular}
\end{table} Table 2: Communication Technologies Suited for Wearable Devices

Figure 2: Communication scenario for IoT wearable applications.
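Returning to the learning architectures compared in Table 1, the following minimal numerical sketch (ours, not from the article) contrasts the two training settings on a toy linear-regression task: a federated orchestrator averages locally trained client models, so raw data never leaves a client, while an incremental learner refines one model sample by sample as the stream arrives.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """One client's local training: plain SGD on a squared-error loss."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * (w @ xi - yi) * xi   # gradient of 0.5*(w.x - y)^2
    return w

def federated_round(w_global, clients):
    """Federated setting: raw data stays local; only model updates move."""
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = [len(X) for X, _ in clients]
    # FedAvg-style aggregation performed by the cloud orchestrator
    return np.average(local_models, axis=0, weights=sizes)

def incremental_update(w, x_new, y_new, lr=0.1):
    """Incremental setting: any node refines the model per arriving sample."""
    return w - lr * (w @ x_new - y_new) * x_new

# Toy data: two clients observing the same relation y = 2*x0 - x1.
rng = np.random.default_rng(0)
clients = [
    (X, X @ np.array([2.0, -1.0]))
    for X in (rng.normal(size=(50, 2)) for _ in range(2))
]

w_fed = np.zeros(2)
for _ in range(10):                        # ten federated rounds
    w_fed = federated_round(w_fed, clients)
print("federated estimate:  ", w_fed)      # approaches [2, -1]

w_inc = np.zeros(2)
for X, y in clients:                       # same data consumed as one stream
    for xi, yi in zip(X, y):
        w_inc = incremental_update(w_inc, xi, yi)
print("incremental estimate:", w_inc)
```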
As an example of a medium-range intelligent home application, Fig. 3 illustrates the communication technologies deployed in a HAN combined with the Analytics Everywhere framework to support automated analytical tasks. In intelligent homes, as in other IoT use cases, seamless connectivity is an important factor that gives users the flexibility to govern smart sensors from any location Mao and Chang (2023). Advanced communication tactics can minimize a smart device's power consumption and augment its communication efficiency Sovacool and Del Rio (2020). Wi-Fi 6 and Wi-Fi 6E provide a 1000 Mbps data rate with an almost negligible latency of 1.5 ms. Moreover, Wi-Fi 6E uses the 6 GHz band, which suffers less interference compared to other technologies and provides excellent QoS with the help of mechanisms like the hybrid coordination function (HCF), which includes enhanced distributed channel access (for prioritized QoS services) and HCF controlled access (for parameterized QoS services). The Federal Communications Commission (FCC) recently adopted new rules for unlicensed use of the 6 GHz band. The 6 GHz spectrum is already used by some licensed services (point-to-point microwave links, fixed satellite systems, etc.), which could increase the chance of harmful interference with unlicensed services (specifically outdoor operations). The FCC permits unlicensed devices to operate at very low power for indoor operations, and at standard power with an automated frequency coordination (AFC) mechanism for outdoor operations across the 6 GHz band, in order to avoid collisions with licensed services. The key advantages of Wi-Fi 6 for IoT are target wake time (TWT), dual carrier modulation (DCM), and the ability to offer lower bandwidth to IoT sensors (a single resource unit of 2 MHz can be offered to provide 375 kbps, which is ideal for IoT sensors); this effectively improves the link budget by 8 dB, hence improving the range. Wi-Fi 6 is based on orthogonal frequency-division multiple access (OFDMA), allowing simultaneous sessions to transmit together using resource units and trigger frames. Considering all these facts, Wi-Fi 6 can potentially help in reaching the final goal of intelligent homes, which is to increase the comfort and quality of life of their residents. In a HAN, accumulated and continuous IoT data streams from smart devices can be analyzed at the far edge and in the cloud using an Analytics Everywhere framework Cao et al. (2019). The insights from this process can support robbery detection, energy optimization, and automated gardening.

### Transportation/Mobility Sector

The most significant IoT applications in this sector concern navigation and route optimisation, including real-time fleet monitoring, transit operation optimisation, parking availability monitoring, and driver behaviour tracking. Table 4 presents different long-range communication technologies that provide MAN/WAN/LPWAN networking. Cellular, Cat-M, Cat-0, and 5G are the principal WAN technologies, while Weightless W/N/P, LoRaWAN, SigFox, NB-IoT, and EC-GSM-IoT are LPWAN communication technologies. Most LPWAN network types serve QoS except LoRaWAN, which is based on slotted ALOHA. Most of the LPWAN networks are equipped with better security mechanisms, and NB-IoT is the preference for QoS among LPWAN technologies Kanj et al. (2020). LoRaWAN follows a separate network architecture and, depending on the sensor node, requires a gateway to communicate with another endpoint, usually in a star network topology Miles et al. (2020). LoRaWAN supports 20-65 km (dependent on many conditions, e.g.
spreading factor) of coverage range in rural areas, with 13 km of stable distance and 500 ms of latency. In contrast, SigFox has a higher latency of 2 s in an almost 50 km rural coverage area with an 11 km stable range. In terms of data rate, LoRaWAN outperforms SigFox. In recent years, Weightless-W has been one of the rarely used LPWAN communication technologies, though it has some advantages, such as the maximum data rate and minimum power ratio among LPWAN protocols. However, LoRaWAN and SigFox appear to own the market in the unlicensed spectrum. Therefore, the choice of communication technology depends on the type of IoT application being used.

5G paves the way for IoT with the highest connection density, 1 million devices/km\({}^{2}\), compared to other communication technologies Van Hilten and Wolfert (2022). That such an immense number of devices can communicate with the Internet within a given space is a gigantic step forward for an application like the Internet of Vehicles (IoV). Fig. 4 illustrates the networking within the context of IoV, where the far-edge/near-edge/cloud computing continuum can support complex automated ML tasks. In an IoV application, 5G technology plays a pivotal role since it provides a special QoS profile for IoV called V2X QoS. It also provides more than 1000 Mbps of data rate and supports vehicle telematics, autonomous vehicles, and in-vehicle infotainment applications. In Cellular V2X (C-V2X), there are two interfaces: Uu (long-range cellular network communication) and PC5 (short-range, network-less, direct communication). Uu utilizes the 5G spectrum, while PC5 is short-range, similar to what was supported by DSRC (dedicated short-range communications). V2V, V2I, and V2P communications operate in ITS bands (e.g., 5.9 GHz), which are independent of the cellular network, whereas V2N communication operates in licensed mobile-broadband spectrum; together these modes (V2V, V2I, V2P, V2N) constitute V2X.

Table 3: Communication Technologies Suited for Intelligent Home Applications

Transportation is, in fact, a unique vertical sector because it is already adopting new universal architectures that rely on different communication networks to serve the needs of smart cities Duan et al. (2020). However, the increasing number of moving smart vehicles at different speeds is already causing problems in packet delivery to sink nodes, as well as in packet delivery rate. This causes a ripple effect on the gathering of continuous data streams, the performance of ML tasks, and the development of future V2X applications. Many ML algorithms have been proposed for smart transportation applications, including AdaBoost, Bayesian Network Seasonal Autoregressive Integrated Moving Average (BN-SARIMA), Coupled Hidden Markov Model (CHMM), Convolutional Neural Network (CNN) and Deep CNN (DCNN), and Decision Trees. Please refer to Zantalis et al. (2019) for an extensive review.
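Driver-behaviour tracking of the kind listed above typically reduces to analyzing a continuous telemetry stream close to the vehicle. As a minimal, hedged illustration (ours, not from the article; the sample values are invented), the sketch below maintains running statistics over a speed stream with Welford's online algorithm and flags outliers with a simple 3-sigma rule — the sort of lightweight incremental task that fits a far-edge node.

```python
import math

class OnlineStats:
    """Welford's online algorithm: running mean/variance for a data stream."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self):
        return math.sqrt(self.m2 / self.n) if self.n else 0.0

speeds = OnlineStats()
for v in [52.0, 55.5, 61.2, 48.9, 90.3]:   # km/h samples from a CAN stream
    # flag harsh driving against the statistics seen so far (3-sigma rule)
    if speeds.n >= 3 and abs(v - speeds.mean) > 3 * speeds.std():
        print(f"anomalous speed sample: {v} km/h")
    speeds.update(v)
print(f"running mean = {speeds.mean:.1f} km/h, std = {speeds.std():.1f} km/h")
```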
With the Intelligence Everywhere learning paradigm, some of the aforementioned ML algorithms can easily be adapted to fit federated and incremental learning mechanisms and thereby provide better smart transportation applications. However, it is important to point out that the potential of V2X for developing new IoT applications serving the future needs of smart cities has not yet been fully investigated. Other network performance metrics (e.g., throughput, end-to-end delay, latency) also need to be investigated for developing innovative real-time IoV applications, taking privacy and security attacks into account.

\begin{table}
\begin{tabular}{l l l l l l l l l l}
\hline \hline
**Communication Technologies** & **IPv6 Support** & **Duplex Mode** & **Type of Network** & **QoS** & **Latency (ms)** & **Data Rate (Mbps)** & **Power Ratio (dBm)** & **Power Saving Mechanism** & **Coverage Range (m)** \\
\hline
Cellular & Yes & Full & MAN / WAN & Yes & 10 - 50 & 1G: 0.002; 2G: 0.064; 3G: 0.614; 4G: 100 - 1000 &  &  &  \\
Cat-M & No & Half / Full & MAN / WAN & Yes & 10 - 15 & 1 & 23 & Yes & 5000 \\
Cat-0 & No & Half / Full & MAN / WAN & Yes & 18 & 1 & 23 & Yes & 5000 \\
DigiMesh & Yes & Half / Full & WPAN & No & 13 & 0.1152 & 10 - 18 & Yes & 90 - 1600 \\
EC-GSM-IoT & Yes & Half / Full & LPWAN & Yes & 700 - 2000 & 0.24, 0.35 & 23 - 33 & Yes & 15000 \\
LoRaWAN & Yes & Half / Full & LPWAN & Yes & 500 & 0.02, 0.027 & 20 & Yes & 5000, 13000, 20000 \\
NB-IoT & Yes & Half & LPWAN & Yes & \(<\)1000 & 0.22, 0.24, 0.2048 & 23 & Yes & 100 - 10000 \\
Optical Fiber & Yes & Half / Full & LAN / MAN / WAN & Yes & 0.005 & 100 - 40000000 & 30 - 37 & Yes & 550, 1000, 2000, 10000 \\
RF & Yes & Half / Full & N/A & Yes & 4.2 & 0.02 - 10000 & 20 - 40 & No & 0.0001 - 100000000 \\
SigFox & Yes & Half & LPWAN & Yes & 2000 & 0.0001, 0.0006 & 21.7 & Yes & 1000, 10000, 50000 \\
WirepasMesh & Yes & Half / Full & WPAN & Yes & 10 & 1 & 5 & No & 100 - 10000 \\
WiMAX & Yes & Full & WAN & Yes & 40 - 60 & 30, 75, 100 & 23 - 43 & Yes & 3500, 10000 \\
Wi-Fi HaLow & No & Half & CAN / WLAN & Yes & 64 & 0.15, 0.465, 7.8 & \(\sim\)30 & Yes & 100 - 1000 \\
Weightless-N & No & Full & LPWAN & No & 8000 - 12000 & 0.03 - 0.1 & 17 & No & 2000 \\
Weightless-W & No & Half & LPWAN & Yes & 8000 - 12000 & 0.001 - 10 & 17 & No & 5000 \\
Weightless-P & No & Full & LPWAN & Yes & 8000 - 12000 & 0.002 - 0.1 & 17 & No & 2000 \\
5G & Yes & Full & MAN / WAN & Yes & 1 - 10 & \(>\)1000 & 33 - 43 & Yes & 2000, 4500, 60000 \\
\hline \hline
\end{tabular}
\end{table} Table 4: Communication Technologies Suited for IoV Applications

Figure 3: Communication scenario for intelligent home applications.

### A Vision for Future Horizontal IoT Applications

IoT platforms have been developed globally to support different applications ranging from industrial/manufacturing and transportation/mobility to energy, healthcare, and supply chains. The current landscape of IoT platforms highlights that most of them focus on specific sectors, also known as verticals Scully (2020).
The current trend in IoT is moving towards the adoption of horizontal applications that span different sectors, promoting interoperability across diverse IoT networks. These horizontal IoT applications will play a key role in supporting complex use cases involving multiple sectors simultaneously. For example, we might see a new IoT application that creates synergy between the healthcare and smart-building sectors, or an IoT application embedded in the transportation/mobility, energy, and supply-chain sectors. As we move towards Society 5.0 in the very near future Fukuda (2020), horizontal IoT applications will play a core role in transforming the large amounts of data streamed across multiple vertical sectors into collective intelligence that is transferable and inter-connectable, and that collectively creates more value.

Figure 5 depicts an example of a horizontal IoT application crossing multiple vertical sectors, including the digital health, infrastructure, and transportation sectors. We envisage that the collective intelligence obtained from the analytical activities or ML models in one vertical IoT application (e.g., digital health) can be re-used and transferred to other vertical IoT applications (e.g., infrastructure or transportation) as input for other analytical activities or ML models. For example, through the concept of collective intelligence, health-analytics information about home/building users is learned at the edge and transmitted to the cloud, where it is aggregated and processed to optimize the home/building environment, leading to improved health outcomes for the users. Simultaneously, the knowledge gained from the smart home/building sector is fed back to the users at the edge, enhancing their awareness of other issues, such as energy optimization and behaviour changes, and fostering a more sustainable and energy-efficient home/building ecosystem. The collective intelligence system continually learns from data gathered across multiple homes/buildings and user interactions. This ongoing learning process enables the system to improve its recommendations and responses over time, enhancing the overall effectiveness and intelligence of the IoT networks.

Figure 4: Communication scenario for IoV applications.

From the Intelligence Everywhere perspective, data and insights can traverse multiple vertical applications using a well-defined data life-cycle and the IoT networks outlined in Tables 2, 3, and 4. Learning and analytical capabilities can assist developers in providing a mix of applications in which analytical results from the digital health or transportation sector can be used/transferred as input for the smart home/building, and vice versa.

## Challenges and Opportunities

In this section, we identify the open challenges and opportunities in achieving an Intelligence Everywhere learning paradigm. This list is not meant to be exhaustive, but it gives an indication of topics for future research and development.

### Challenges

A large number of important questions remain open on the topics of security, privacy, scalability, compatibility, reliability, and resilience. They are explained in more detail below.

* _Security_: Securing millions of heterogeneous IoT devices remains an open problem. The Manufacturer Usage Description specification (MUD, RFC 8520) provides features like device visibility and context-specific access policies, which reduce the threat surface for IoT devices.
Intelligence Everywhere learning paradigms will play an important role in developing new approaches to IoT security, mainly because the next generation of far-edge/near-edge/cloud ML tasks will be running in real time. However, new security challenges will be exposed in the domain of collective intelligence once humans and IoT systems start to collaborate in the near future.

* _Privacy_: IoT devices usually cultivate, fetch, and transmit sensitive data streams. These data could be personal, private, or business-critical. Thus, maintaining confidentiality at all levels during transport and manipulation is a pertinent task, and it is imperative in the context of collective intelligence. With the next generation of encryption algorithms we are moderately safe, but IoT will require new privacy-aware algorithms at the device level, the analytical-capability level, and the user level. NISTIR 8228 gives great insight into cybersecurity and privacy risk mitigation for the IoT environment. Enforcement of the GDPR (General Data Protection Regulation) in IoT imposes a very large and encompassing set of requirements — personally identifiable information handling, privacy, and data protection by design — across the entire fabric of IoT. Fulfilling such prerequisites enhances privacy in IoT. We would like to point out that only when the IoT industry overcomes these privacy issues will it be able to earn the trust level required to bring new business models to horizontal IoT applications.

* _Scalability_: With the deployment of more advanced IoT devices in the near future, the interconnected network will also expand. Therefore, the design of an IoT network architecture requires scalability, which remains a technical challenge. If an IoT network is not scalable, an Intelligence Everywhere learning paradigm is not scalable either, and it remains unclear how collective intelligence will impact the scalability of IoT applications. Without scalability, we will only be able to serve a very limited number of devices and run very simple automated ML tasks under a single infrastructure, forcing a separate, independent environment for each batch of new IoT devices. A scalable IoT network is a cost-effective solution that conveniently accommodates new smart IoT devices within the same network.

* _Compatibility_: Another challenge is compatibility, which is related to better communication and analytical accuracy. An old version of the firmware/software of an IoT sensor, or an old version of an ML algorithm, needs to be compatible with new versions, as some vendors provide over-the-air firmware/software updates. If a mismatch of firmware versions prevents two devices from talking, the resulting communication breakdown can cause problems in the network and for analytics. Hence the importance of addressing new compatibility solutions in IoT networks.

* _Reliability_: When two devices communicate with each other in an IoT use case, there is always a chance of data loss due to certain factors, like collisions, that can curtail reliability. Research on reliability could be expanded by a finer choice of communication technologies that have better mechanisms for error handling and re-transmission of lost information [8]. From the Intelligence Everywhere learning perspective, process mining becomes imperative to simultaneously monitor both the computing resources and the data life-cycles.

Figure 5: Communication scenario for horizontal IoT applications.
* _Resilience_: Always-connected devices are a crucial requirement in IoT, since failures and connection loss can cause fatal accidents. Due to different fault types (physical, interaction, transient, permanent, and development), it is challenging to detect anomalies and improve the resilience of IoT networks Ratasich et al. (2019).

### Opportunities

At the same time, new opportunities emerge in the fields of human-machine collaboration, infrastructure supervision, and context awareness.

* _Human-Machine Collaboration for Exploitation-Exploration_: Intelligence Everywhere promises to evolve into an interconnected and collectively intelligent ecosystem where devices, systems, services, and users work together to support IoT applications. Normally, this requires a trade-off between the utilization of known solutions (exploitation) and new solutions (exploration) to achieve the highest level of efficiency for collective intelligence. Artificial/machine intelligence and human intelligence are usually seen as complementary forms of intelligence Casadei (2023). Therefore, Intelligence Everywhere, with its variety of IoT networks and multiple models of learning mechanisms in combination with human input, will potentially open many opportunities and directions in the search for new ways to achieve common goals, such as leveraging the wisdom of crowds, multi-agent systems, markets, and swarm intelligence.

* _Infrastructure Supervision_: Supervising millions of endpoints should be a top priority for IoT networks because, once all the nodes are deployed and are transferring new insights, we need to know whether they are all still up and running properly and efficiently. Infrastructure supervision covers both device monitoring and network monitoring. IoT network observation is an equally compelling issue to address, as the communication process of the IoT relies on underlying networks. Infrastructure supervision can slash downtime and help engineers to quickly troubleshoot problems. It will be a win-win situation for any IoT application if we have end-to-end device and network visibility.

* _Context Awareness_: After a connection is established between IoT devices, the correct choice of communication technologies can make data transfer very agile, with limited latency. Intelligence Everywhere learning supports on-premise and real-time data processing as needed, with mobility features. With the help of context sensing, collective intelligence, and incremental/federated learning, it opens new doors for ML tasks like deep visibility. Therefore, the far-edge/near-edge/cloud continuum can enhance the routes of automation in IoT networking and provide visibility to the end user/application. Likewise, device comparison improves the performance and security of IoT devices and network elements. Collective intelligence is being embedded in all aspects of the end-to-end IoT ecosystem, from the device all the way to the cloud. Finally, merging IoT networking and ML models can lead to enormous opportunities for IoT research and development, supporting the continued scaling and build-out of IoT into nearly everything.

## Conclusions

In the fast-paced development of the Internet of Things, the importance of collective intelligence becomes evident as communication networks evolve from their traditional role of connecting "things" to becoming vehicles for creating and transferring valuable "insights."
Embracing this vision, this article introduces the Intelligence Everywhere learning paradigm, emphasizing its four core capabilities: resource, analytical, learning, and data life-cycle capabilities. By focusing on these capabilities, the aim is to assist developers in building new platforms that not only enhance existing vertical IoT applications but also envision and pave the way for pioneering horizontal IoT applications. The incorporation of collective intelligence through the Intelligence Everywhere paradigm empowers IoT networks to harness the collaborative potential of diverse devices, systems, and users, ultimately leading to more intelligent and innovative IoT applications that address complex challenges and foster seamless interoperability across multiple domains. We have identified the state of the art in IoT networking and machine learning from a data scientist's perspective, culminating in a collection of open challenges and opportunities. However, it is important to point out that achieving an Intelligence Everywhere learning paradigm can only be successful if users actually feel safe in an intelligent society and know exactly what is happening with their data.

## Acknowledgements

This research work was supported by the NSERC/Cisco Industrial Research Chair, Grant IRCPJ 488403-1, and was partially supported by the NBIF Talent Recruitment Fund, Grant TRF 2003-001. We would like to thank the Director of Cisco Systems Canada, Robert Barton, the CTO of the Cisco IoT group, Russ Gyurek, and Rohan Upadhyay from Amazon Canada for the fruitful discussions on this research domain.
2309.12621
The rational hull of modules
In this paper, we provide several new characterizations of the maximal right ring of quotients of a ring by using the relatively dense property. As a ring is embedded in its maximal right ring of quotients, we show that the endomorphism ring of a module is embedded into that of the rational hull of the module. In particular, we obtain new characterizations of rationally complete modules. The equivalent condition for the rational hull of the direct sum of modules to be the direct sum of the rational hulls of those modules under a certain assumption is presented. For a right $H$-module $M$ where $H$ is a right ring of quotients of a ring $R$, we provide a sufficient condition for $\text{End}_R(M)=\text{End}_H(M)$ to hold. Also, we give a condition for the maximal right ring of quotients of the endomorphism ring of a module to be the endomorphism ring of the rational hull of a module.
Gangyong Lee
2023-09-22T05:09:48Z
http://arxiv.org/abs/2309.12621v1
# The rational hull of modules

###### Abstract.

In this paper, we provide several new characterizations of the maximal right ring of quotients of a ring by using the relatively dense property. As a ring is embedded in its maximal right ring of quotients, we show that the endomorphism ring of a module is embedded into that of the rational hull of the module. In particular, we obtain new characterizations of rationally complete modules. The equivalent condition for the rational hull of the direct sum of modules to be the direct sum of the rational hulls of those modules under a certain assumption is presented. For a right \(H\)-module \(M\) where \(H\) is a right ring of quotients of a ring \(R\), we provide a sufficient condition for \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\) to hold. Also, we give a condition for the maximal right ring of quotients of the endomorphism ring of a module to be the endomorphism ring of the rational hull of a module.

2020 Mathematics Subject Classification: Primary 16D70; 16S50, Secondary 16D50 Key Words: rational hull, injective hull, maximal right ring of quotients

## 1. Introduction

The theory of rings of quotients has its origin in the work of Ø. Ore [11] and K. Asano [2] on the construction of the total ring of fractions, in the 1930's and 40's. But the subject did not really develop until the end of the 1950's, when a number of important papers appeared (by R.E. Johnson [6], Y. Utumi [15], A.W. Goldie [5], J. Lambek [8] et al.). In particular, Johnson (1951), Utumi (1956), and Findlay & Lambek (1958) studied the maximal right ring of quotients of a ring, which is a ring extension of the base ring. As a well-known example, the maximal right ring of quotients of the integers is the rational numbers, which coincides with the injective hull of the integers. For a commutative ring, the classical right ring of quotients of a ring is to its total quotient ring as the maximal right ring of quotients of a ring is to its complete ring of quotients. The study of the rational hull of a module is, in module-theoretic form, the same as the study of the maximal right ring of quotients. Also, just as every module has an injective hull, it is known that every module has a rational hull [4, Theorem 2.6]. Now, we briefly introduce the definition of the rational hull of a module and present its well-known properties. Let \(M\) be a right \(R\)-module and \(T=\operatorname{End}_{R}(E(M))\). Put \[\widetilde{E}(M)=\{x\in E(M)\,|\,\vartheta(M)=0\;\text{with}\;\vartheta\in T\;\Rightarrow\;\vartheta(x)=0\}=\bigcap_{\begin{subarray}{c}M\subseteq\operatorname{Ker}\vartheta\\ \vartheta\in T\end{subarray}}\operatorname{Ker}\vartheta=\mathbf{r}_{E(M)}\left(\mathbf{l}_{T}(M)\right).\] Then \(\widetilde{E}(M)\) is the unique maximal rational extension of \(M\). We call it the _rational hull_ of \(M\). Also, it is known that \(\mathbf{r}_{E(M)}(J(T))\leq\mathbf{r}_{E(M)}\left(\mathbf{l}_{T}(M)\right)=\widetilde{E}(M)\) because \(\mathbf{l}_{T}(M)\subseteq J(T)\), where \(J(T)=\{\alpha\in T\,|\,\operatorname{Ker}\alpha\leq^{\text{ess}}E(M)\}\) is the Jacobson radical of the ring \(T\). Note that the maximal right ring of quotients of \(R\) is \(Q(R)=\mathbf{r}_{E(R)}(\mathbf{l}_{H}(R))\) where \(H=\operatorname{End}_{R}(E(R))\) (see [8, Proposition 2]). After the necessary background, history, results, and notations in this section, we provide several characterizations of the rational hull of a module in Section 2 (see Theorem 2.3 and Corollary 2.10).
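As a quick sanity check of this definition (our illustration, not taken from the paper), one can recover the classical example \(\widetilde{E}(\mathbb{Z})=\mathbb{Q}\) directly. Here \(E(\mathbb{Z})=\mathbb{Q}\), and every \(\mathbb{Z}\)-endomorphism of \(\mathbb{Q}\) is multiplication by a rational number, so \[T=\operatorname{End}_{\mathbb{Z}}(\mathbb{Q})\cong\mathbb{Q},\qquad\vartheta_{q}(x)=qx\ \text{for}\ q\in\mathbb{Q}.\] Since \(\vartheta_{q}(\mathbb{Z})=0\) forces \(q=0\), we get \(\mathbf{l}_{T}(\mathbb{Z})=0\), and therefore \[\widetilde{E}(\mathbb{Z})=\mathbf{r}_{E(\mathbb{Z})}\big(\mathbf{l}_{T}(\mathbb{Z})\big)=\mathbf{r}_{\mathbb{Q}}(0)=\mathbb{Q}=Q(\mathbb{Z}).\]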
In addition, characterizations of rationally complete modules are presented. As a corollary, we obtain several new characterizations of the maximal right ring of quotients of a ring. In particular, we show that the endomorphism ring of a module is embedded into that of the rational hull of the module, as the inherited property of its maximal right ring of quotients (see Theorem 2.15). Our focus, in Section 3, is on the question of when the rational hull of a direct sum of modules is the direct sum of the rational hulls of those modules. For \(M=\bigoplus_{k\in\Lambda}M_{k}\), we prove that \(\widetilde{E}(M)=\bigoplus_{k\in\Lambda}\widetilde{E}(M_{k})\) if and only if \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\) when either \(R\) is right noetherian or \(|\Lambda|\) is finite (see Theorem 3.6). In the last section, we obtain a condition for \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\) to hold, where \(H\) is a right ring of quotients of a ring \(R\) (Theorem 4.1); this condition is the relatively dense property with respect to a module. Also, we provide a sufficient condition for the maximal right ring of quotients of the endomorphism ring of a module to be the endomorphism ring of the rational hull of the module (see Theorem 4.5). Throughout this paper, \(R\) is a ring with unity and \(M\) is a unital right \(R\)-module. For a right \(R\)-module \(M\), \(S=\operatorname{End}_{R}(M)\) denotes the endomorphism ring of \(M\); thus \(M\) can be viewed as a left \(S\)-, right \(R\)-bimodule. For \(\varphi\in S\), \(\operatorname{Ker}\varphi\) and \(\operatorname{Im}\varphi\) stand for the kernel and the image of \(\varphi\), respectively. The notations \(N\leq M\), \(N\leq^{\operatorname{ess}}M\), \(N\leq^{\operatorname{den}}M\), or \(N\leq^{\oplus}M\) mean that \(N\) is a submodule, an essential submodule, a dense submodule, or a direct summand of \(M\), respectively. By \(E(M)\), \(\widehat{M}\), and \(\widetilde{E}(M)\) we denote the injective hull, the quasi-injective hull, and the rational hull of \(M\), respectively, and \(T=\operatorname{End}_{R}(E(M))\). \(Q(R)\) denotes the maximal right ring of quotients of \(R\). The direct sum of \(\Lambda\) copies of \(M\) is denoted by \(M^{(\Lambda)}\) where \(\Lambda\) is an arbitrary index set. \(\operatorname{\mathsf{CFM}_{\mathbb{N}}}(F)\) denotes the \(\mathbb{N}\times\mathbb{N}\) column finite matrix ring over a field \(F\). By \(\mathbb{Q}\), \(\mathbb{Z}\), and \(\mathbb{N}\) we denote the sets of rational numbers, integers, and natural numbers, respectively. \(\mathbb{Z}_{n}\) denotes the \(\mathbb{Z}\)-module \(\mathbb{Z}/n\mathbb{Z}\). For \(x\in M\), \(x^{-1}K=\{r\in R\,|\,xr\in K\}\leq R_{R}\) with a right \(R\)-submodule \(K\) of \(M\). We also denote \(\mathbf{r}_{M}(I)=\{m\in M\,|\,Im=0\}\) for \(I\leq S\) and \(\mathbf{l}_{S}(N)=\{\varphi\in S\,|\,\varphi N=0\}\) for \(N\leq M\). We give some properties of dense submodules. Recall that a submodule \(N\) of \(M\) is said to be _dense_ in \(M\) if for any \(x,0\neq y\in M\), there exists \(r\in R\) such that \(xr\in N\) and \(0\neq yr\). **Proposition 1.1** ([3, Proposition 1.3.6]).: _Let \(N\leq M\) be right \(R\)-modules. Then the following conditions are equivalent:_ 1. \(N\) _is dense in_ \(M\)_;_ 2. \(\operatorname{Hom}_{R}(M/N,E(M))=0\)_;_ 3. _for any submodule_ \(P\) _such that_ \(N\leq P\leq M\)_,_ \(\operatorname{Hom}_{R}(P/N,M)=0\)_._ **Proposition 1.2** ([7, Proposition 8.7]).: _Let \(L,N\) be submodules of a right \(R\)-module \(M\):_ 1.
_If_ \(L\leq^{\operatorname{den}}M\) _and_ \(N\leq^{\operatorname{den}}M\) _then_ \(L\cap N\leq^{\operatorname{den}}M\)_._ 2. _Let_ \(L\leq V\leq M\)_. Then_ \(L\leq^{\operatorname{den}}M\) _if and only if_ \(L\leq^{\operatorname{den}}V\) _and_ \(V\leq^{\operatorname{den}}M\)_._ **Proposition 1.3** ([3, Proposition 1.3.7]).: _Let \(M\) be a right \(R\)-module and \(M\leq V\leq E(M)\). Then \(M\leq^{\operatorname{den}}V\) if and only if \(V\leq\widetilde{E}(M)\)._ We remind of some important characterizations of the rational hull of a module. **Proposition 1.4**.: _The following hold true for a right \(R\)-module \(M\) and \(T=\operatorname{End}_{R}(E(M))\):_ 1. ([9, Exercises 5])__\(\widetilde{E}(M)=\{x\in E(M)|\ \vartheta|_{M}=1_{M}\ \text{with}\ \vartheta\in T\ \Rightarrow\ \vartheta(x)=x\}\)_._ 2. ([7, Proposition 8.16])__\(\widetilde{E}(M)=\{x\in E(M)\,|\ \forall y\in E(M)\setminus\{0\},\ y\cdot x^{-1}M \neq 0\}\)_._ ## 2. The rational hull of a module As the injective hull of a module \(M\) is the minimal injective module including \(M\), the next result shows that the rational hull of a module \(M\) is the minimal rationally complete module including \(M\). Recall that a right \(R\)-module \(M\) is said to be _rationally complete_ if it has no proper rational (or dense) extensions, or equivalently \(\widetilde{E}(M)=M\). Thus, the rational hull \(\widetilde{E}(M)\) of a module \(M\) is rationally complete. **Theorem 2.1**.: _The following conditions are equivalent for right \(R\)-modules \(M\) and \(F\):_ 1. \(F\) _is maximal dense over_ \(M\)_;_ 2. \(F\) _is rationally complete, and is dense over_ \(M\)_;_ 3. \(F\) _is minimal rationally complete, and is essential over_ \(M\)_._ _Note that a right \(R\)-module \(F\) is exactly the rational hull of a module \(M\) if \(F\) satisfies any one of the above equivalent conditions._ Proof.: (a)\(\Rightarrow\)(b) From Proposition 1.3, it is easy to see that \(F\) has no proper dense extension. So, \(F\) is a rationally complete module. (b)\(\Rightarrow\)(c) Let \(F^{\prime}\) be rationally complete such that \(M\leq F^{\prime}\leq F\). Since \(M\leq^{\text{den}}F\), from Proposition 1.2(ii) \(M\leq^{\text{den}}F^{\prime}\leq^{\text{den}}F\). Thus, from Proposition 1.3\(F\leq^{\text{den}}\widetilde{E}(F^{\prime})=F^{\prime}\) because \(F^{\prime}\) is rationally complete. Therefore \(F=F^{\prime}\). (c)\(\Rightarrow\)(a) Let \(F\) be minimal rationally complete over \(M\). Since \(F\) is essential over \(M\), \(M\leq F\leq E(M)\). Since \(M\leq^{\text{den}}\widetilde{E}(M)\), \(\text{Hom}_{R}(\widetilde{E}(M)/M,E(M))=0\). Also, since \(E(F)=E(M)\), \(\text{Hom}_{R}(\widetilde{E}(M)/M,E(F))=0\). From [7, Theorem 8.24], an inclusion map \(\iota:M\to F\) extends to \(\rho:\widetilde{E}(M)\to F\) as \(F\) is rationally complete (see also Proposition 2.13). Note that \(\rho\) is a monomorphism. Since \(\widetilde{E}(M)\) is rationally complete and \(F\) is minimal, \(\widetilde{E}(M)=F\). The next example shows that the condition "essential over \(M\)" in Theorem 2.1(c) is not superfluous. **Example 2.2**.: Let \(M=\mathbb{Z}\) and \(F=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{p}\) be right \(\mathbb{Z}\)-modules where \(\mathbb{Z}_{(p)}\) is the localization of \(\mathbb{Z}\) at the prime ideal \((p)\). It is easy to see that \(M\) is not essential in \(F\), so \(F\) is not a rational hull of \(M\). 
In fact, \(F\) is minimal rationally complete over \(M\): From [7, Example 8.21], \(F\) is rationally complete because \(F\) is the rational hull of \(L=\mathbb{Z}\oplus\mathbb{Z}_{p}\). It is enough to show that \(F\) is minimal over \(M\): Let \(K\) be a rationally complete module such that \(M\leq K\leq F\). Hence \(1=\text{u.dim}(M)\leq\text{u.dim}(K)\leq\text{u.dim}(F)=2\). Assume that \(\text{u.dim}(K)=1.\) Then \(M\leq^{\text{ess}}K\), and hence \(K\) is nonsingular since \(M\) is nonsingular. Thus \(M\leq^{\text{den}}K\), which implies that \(K\cong\mathbb{Q}\) since \(K\) is rationally complete and \(\widetilde{E}(M)=\mathbb{Q}\). It follows that \(\mathbb{Q}\) can be embedded into \(F=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{p}\), a contradiction. Therefore, \(\text{u.dim}(K)=2.\) Then \(K\leq^{\text{ess}}F\), and hence \(K\cap\mathbb{Z}_{p}\neq 0\). Thus \(\mathbb{Z}_{p}\leq K\), which implies that \(L=\mathbb{Z}\oplus\mathbb{Z}_{p}\leq K\). Note that \(L\leq^{\text{den}}F\) since \(F=\widetilde{E}(L)\). Hence \(K\leq^{\text{den}}F\), so that \(K=F\) due to the fact that \(K\) is rationally complete. We provide another characterization for the rational hull of a module using the relatively dense property. A right ideal \(I\) of a ring \(R\) is called _relatively dense to a right \(R\)-module_\(M\) (or \(M\)_-dense_) in \(R\) if for any \(r\in R\) and \(0\neq m\in M\), \(m\cdot r^{-1}I\neq 0\). It is denoted by \(I\leq^{\text{den}}_{M}R\). **Theorem 2.3**.: _Let \(M\) be a right \(R\)-module. Then \(\widetilde{E}(M)=\{x\in E(M)\,|\,x^{-1}M\leq^{\text{den}}_{M}R\}\)._ Proof.: Let \(x\in\widetilde{E}(M)\) be arbitrary. Consider a right ideal \(x^{-1}M\leq R\). Let \(0\neq m\in M\) and \(r\in R\). Since \(M\leq^{\text{den}}\widetilde{E}(M)\), there exists \(s\in R\) such that \(ms\neq 0\) and \((xr)s=x(rs)\in M\), that is, \(rs\in x^{-1}M\). Hence \(x^{-1}M\leq^{\text{den}}_{M}R\). For the reverse inclusion, let \(x\in E(M)\) such that \(x^{-1}M\leq^{\text{den}}_{M}R\). For an arbitrary nonzero element \(0\neq y\in E(M)\), it suffices to show that \(y\cdot x^{-1}M\neq 0\). As \(M\leq^{\text{ess}}E(M)\), \(0\neq yr\in M\) for some \(r\in R\). Since \(x^{-1}M\leq^{\text{den}}_{M}R\), there exists \(s\in R\) such that \(yrs\neq 0\) and \(rs\in x^{-1}M\). Hence \(0\neq yrs\in y\cdot x^{-1}M\). Therefore \(x\in\widetilde{E}(M)\). The next definition was shown in [4, pp79] as \(N\leq M(K)\), so we call a submodule \(N\) relatively dense to a module \(K\) in a module \(M\). (For details, see [17].) **Definition 2.4**.: A submodule \(N\) of a right \(R\)-module \(M\) is said to be _relatively dense to a right \(R\)-module \(K\)_ (or \(K\)-_dense_) in \(M\) if for any \(m\in M\) and \(0\neq x\in K\), \(x\cdot m^{-1}N\neq 0\), denoted by \(N\leq_{K}^{\operatorname{den}}M\). Note that \(N\) is \(M\)-dense in \(M\) if and only if \(N\) is dense in \(M\). We provide some characterizations of the relative density property. One can compare the following characterizations to Proposition 1.1. The equivalence (a)\(\Leftrightarrow\)(c) in the following proposition is provided by [4, pp79]. **Proposition 2.5**.: _The following are equivalent for right \(R\)-modules \(M,K\) and \(N\leq M\):_ * \(N\) _is_ \(K\)_-dense in_ \(M\)_;_ * \(\operatorname{Hom}_{R}(M/N,E(K))=0\)_;_ * _for any submodule_ \(P\) _such that_ \(N\leq P\leq M\)_,_ \(\operatorname{Hom}_{R}(P/N,K)=0\)_._ Proof.: (a)\(\Rightarrow\)(b) Assume that there exists \(0\neq\alpha\in\operatorname{Hom}_{R}(M,E(K))\) with \(\alpha N=0\). 
Since \(\alpha M\cap K\neq 0\), there exist \(x\in M\) and \(0\neq y\in K\) such that \(\alpha(x)=y\). Since \(N\) is \(K\)-dense in \(M\), there exists \(r\in R\) such that \(xr\in N\) and \(0\neq yr\). However, \(0=\alpha(xr)=\alpha(x)r=yr\neq 0\), a contradiction. Hence \(\operatorname{Hom}_{R}(M/N,E(K))=0\). (b)\(\Rightarrow\)(c) Assume, to the contrary, that there exist a submodule \(P\) with \(N\leq P\leq M\) and some \(0\neq\eta\in\operatorname{Hom}_{R}(P/N,K)\). Then, by the injectivity of \(E(K)\), we can extend \(\eta\) to a nonzero homomorphism from \(M/N\) to \(E(K)\), a contradiction. Hence \(\operatorname{Hom}_{R}(P/N,K)=0\) for every such \(P\). (c)\(\Rightarrow\)(a) Assume that \(y\cdot x^{-1}N=0\) for some \(x\in M\) and \(0\neq y\in K\). We define \(\gamma:N+xR\to K\) given by \(\gamma(n+xr)=yr\) for \(n\in N\) and \(r\in R\). It is easy to see that \(\gamma\) is a well-defined \(R\)-homomorphism vanishing on \(N\). Since \(N\leq N+xR\leq M\), by hypothesis \(0=\gamma(x)=y\neq 0\), a contradiction. Thus \(N\) is \(K\)-dense in \(M\). We obtain another characterization of the relative density property related to homomorphisms. **Proposition 2.6**.: _Let \(M,K\) be right \(R\)-modules. Then a submodule \(N\) is \(K\)-dense in \(M\) if and only if \(\mathbf{l}_{H}(N)=0\) where \(H=\operatorname{Hom}_{R}(M,E(K))\)._ Proof.: Suppose \(N\) is \(K\)-dense in \(M\). Assume that \(0\neq\varphi\in H\) such that \(\varphi N=0\). Then there exists \(m\in M\setminus N\) such that \(\varphi(m)\neq 0\). Since \(\varphi(m)\in E(K)\), \(0\neq\varphi(m)r\in K\) for some \(r\in R\). Hence there exists \(s\in R\) such that \(mrs\in N\) and \(\varphi(m)rs\neq 0\) because \(N\leq_{K}^{\operatorname{den}}M\). That yields a contradiction: \(0\neq\varphi(m)rs=\varphi(mrs)\in\varphi N=0\). Therefore \(\mathbf{l}_{H}(N)=0\). Conversely, assume that \(x\cdot m^{-1}N=0\) for some \(0\neq x\in K\) and \(m\in M\). We define \(\gamma:N+mR\to E(K)\) by \(\gamma(n+mt)=xt\) for \(n\in N\) and \(t\in R\). Clearly, \(\gamma\) is a nonzero \(R\)-homomorphism vanishing on \(N\). Also, there exists \(\overline{\gamma}:M\to E(K)\) such that \(\overline{\gamma}|_{N+mR}=\gamma\). Since \(0=\overline{\gamma}N\), \(0\neq\overline{\gamma}\in\mathbf{l}_{H}(N)\), a contradiction. Therefore \(x\cdot m^{-1}N\neq 0\). If \(M=R\), the following result is directly provided. **Corollary 2.7** ([14, Proposition 1.1]).: _Let \(K\) be a right \(R\)-module and \(I\) be a right ideal of a ring \(R\). Then \(I\) is \(K\)-dense in \(R\) if and only if \(\mathbf{l}_{E(K)}(I)=0\)._ **Proposition 2.8**.: _Let \(K\) be a right \(R\)-module and \(I\) be an ideal of a ring \(R\). Then \(\mathbf{l}_{K}(I)=0\) if and only if \(\mathbf{l}_{E(K)}(I)=0\)._ Proof.: Since one direction is trivial, we need to show the other direction. Suppose \(\mathbf{l}_{K}(I)=0\). Assume that \(\mathbf{l}_{E(K)}(I)\neq 0\). Then there exists \(0\neq x\in E(K)\) such that \(xI=0\). Also, \(0\neq xr\in K\) for some \(r\in R\) because \(K\leq^{\operatorname{ess}}E(K)\). Since \(xrI\subseteq xI=0\), \(0\neq xr\in\mathbf{l}_{K}(I)\), a contradiction. Therefore \(\mathbf{l}_{E(K)}(I)=0\).
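To illustrate Corollary 2.7 with a small worked example of our own (not taken from the paper), take \(R=\mathbb{Z}\), a prime \(p\), and the right ideal \(I=p\mathbb{Z}\). For \(K=\mathbb{Z}\) we have \(E(K)=\mathbb{Q}\) and \[\mathbf{l}_{\mathbb{Q}}(p\mathbb{Z})=\{x\in\mathbb{Q}\,|\,x\cdot p\mathbb{Z}=0\}=0,\] so \(p\mathbb{Z}\) is \(\mathbb{Z}\)-dense in \(\mathbb{Z}\). For \(K=\mathbb{Z}_{p}\), however, \(E(K)=\mathbb{Z}_{p^{\infty}}\), and the element \(\frac{1}{p}+\mathbb{Z}\in\mathbb{Z}_{p^{\infty}}\) satisfies \[\left(\tfrac{1}{p}+\mathbb{Z}\right)\cdot p\mathbb{Z}=0,\] so \(\mathbf{l}_{E(K)}(p\mathbb{Z})\neq 0\) and \(p\mathbb{Z}\) is not \(\mathbb{Z}_{p}\)-dense in \(\mathbb{Z}\).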
Also, using the characterization of the relatively dense property, new characterization for the rational hull of a module is provided. **Corollary 2.10**.: _Let \(M\) be a right \(R\)-module. Then the following statements hold true:_ 1. ([14, Proposition 1.4(b)])__\(\widetilde{E}(M)=\{x\in E(M)\,|\,\mathbf{l}_{E(M)}(x^{-1}M)=0\}\)_._ 2. \(\widetilde{E}(M)=\{x\in E(M)\,|\,\mathrm{Hom}_{R}(R/x^{-1}M,E(M))=0\}\)_._ Proof.: It directly follows from Theorem 2.3, Corollary 2.7, and Proposition 2.5. Several new characterizations for the maximal right ring of quotients of a ring are provided as the following. **Theorem 2.11**.: _Let \(R\) be a ring. Then the following statements hold true:_ 1. _A right ideal_ \(I\) _is dense in_ \(R\) _if and only if_ \(\mathbf{l}_{E(R)}(I)=0\)_._ 2. \(Q(R)=\{x\in E(R)\,|\,x^{-1}R\leq^{\mathrm{den}}R\}\)_._ 3. \(Q(R)=\{x\in E(R)\,|\,\mathbf{l}_{E(R)}(x^{-1}R)=0\}\)_._ 4. \(Q(R)=\{x\in E(R)\,|\,\mathrm{Hom}_{R}(R/x^{-1}R,E(R))=0\}\)_._ We give characterizations for a rationally complete module. **Theorem 2.12**.: _The following conditions are equivalent for a right \(R\)-module \(M\):_ 1. \(M\) _is a rationally complete module;_ 2. \(\{\overline{x}\in E(M)/M\,|\,\mathbf{l}_{E(M)}(\mathbf{r}_{R}(\overline{x}))= 0\}=\overline{0}\)_;_ 3. _For any_ \(I\leq^{\mathrm{den}}_{M}R\)_,_ \(\varphi\in\mathrm{Hom}_{R}(I,M)\) _can be uniquely extended to_ \(\widetilde{\varphi}\in\mathrm{Hom}_{R}(R,M)\)_._ Proof.: Take \(A:=\{\overline{x}\in E(M)/M\,|\,\mathbf{l}_{E(M)}(\mathbf{r}_{R}(\overline{x} ))=0\}\). (a)\(\Rightarrow\)(b) Assume that \(x\in E(M)\setminus M\) such that \(\overline{x}\in A\). From Corollary 2.7, \(\mathbf{r}_{R}(\overline{x})\leq^{\mathrm{den}}_{M}R\). Since \(\mathbf{r}_{R}(\overline{x})=x^{-1}M\), \(x^{-1}M\leq^{\mathrm{den}}_{M}R\). Hence from Theorem 2.3\(x\in\widetilde{E}(M)=M\) because \(M\) is rationally complete, a contradiction. Therefore \(A=\overline{0}\). (b)\(\Rightarrow\)(c) Assume to the contrary of the condition (c). For \(I\leq^{\mathrm{den}}_{M}R\), since \(M\subseteq E(M)\), there exists \(\varphi\in\mathrm{Hom}_{R}(I,M)\) such that \(\widetilde{\varphi}\in\mathrm{Hom}_{R}(R,E(M))\), \(\widetilde{\varphi}|_{I}=\varphi\), and \(\widetilde{\varphi}(1)\notin M\). Since \(\overline{0}\neq\widetilde{\varphi}(1)+M\in E(M)/M\) and \(I\subseteq\mathbf{r}_{R}(\widetilde{\varphi}(1)+M)\leq^{\mathrm{den}}_{M}R\), \(\mathbf{l}_{E(M)}(\mathbf{r}_{R}(\widetilde{\varphi}(1)+M))=0\) from Corollary 2.7, a contradiction that \(A=\overline{0}\). Therefore \(\varphi\in\mathrm{Hom}_{R}(I,M)\) is extended to \(\widetilde{\varphi}\in\mathrm{Hom}_{R}(R,M)\). For the uniqueness, the proof is similar to that of Proposition 2.13. (c)\(\Rightarrow\)(a) Assume that \(M\) is not rationally complete. Then there exists \(x\in\widetilde{E}(M)\setminus M\) such that \(x^{-1}M\leq^{\mathrm{den}}_{M}R\) from Theorem 2.3. Define \(\varphi:x^{-1}M\to M\) given by \(\varphi(r)=xr\). By hypothesis, \(\widetilde{\varphi}(1)=x1=x\in M\), a contradiction. Therefore \(M\) is rationally complete. Next, as a ring is embedding into its maximal right ring of quotients, we provide the relationship between the endomorphism rings of a module and its rational hull. **Proposition 2.13**.: _Let \(M\) and \(K\) be right \(R\)-modules. For any \(N\leq^{\mathrm{den}}_{K}M\), \(\varphi\in\mathrm{Hom}_{R}(N,K)\) is uniquely extended to \(\widetilde{\varphi}\in\mathrm{Hom}_{R}(M,\widetilde{E}(K))\) and \(\widetilde{\varphi}|_{N}=\varphi\). 
In addition, \(\varphi N\leq^{\mathrm{den}}\widetilde{\varphi}M\)._ Proof.: (Existence) Let \(\varphi\in\mathrm{Hom}_{R}(N,K)\) be arbitrary. Then there exists \(\widetilde{\varphi}\in\mathrm{Hom}_{R}(M,E(K))\) such that \(\widetilde{\varphi}|_{N}=\varphi\). Since \(\widetilde{\varphi}\) induces a surjection from \(M/N\) to \((\widetilde{\varphi}M+\widetilde{E}(K))/\widetilde{E}(K)\) and \(\mathrm{Hom}_{R}(M/N,E(K))=0\) (see Proposition 2.5), \(\mathrm{Hom}_{R}\left(\frac{\widetilde{\varphi}M+\widetilde{E}(K)}{\widetilde{E}(K)},E(K)\right)=0\). Hence \(\widetilde{E}(K)\leq^{\mathrm{den}}\widetilde{\varphi}M+\widetilde{E}(K)\) by Proposition 1.1. As \(\widetilde{E}(K)\) is rationally complete, \(\widetilde{\varphi}M\subseteq\widetilde{E}(K)\). (Uniqueness) Suppose \(\widetilde{\varphi}\) and \(\widetilde{\psi}\) are in \(\operatorname{Hom}_{R}(M,\widetilde{E}(K))\) such that \(\widetilde{\varphi}|_{N}=\widetilde{\psi}|_{N}\). It is enough to show that \(\widetilde{\varphi}=\widetilde{\psi}\). Assume that \(\widetilde{\varphi}(x)\neq\widetilde{\psi}(x)\) for some \(x\in M\). Take \(0\neq y=(\widetilde{\varphi}-\widetilde{\psi})(x)\in\widetilde{E}(K)\). Thus, there exists \(r\in R\) such that \(0\neq yr\in K\). Since \(N\leq^{\operatorname{den}}_{K}M\), there exists \(s\in R\) such that \(xrs\in N\) and \(yrs\neq 0\). This yields a contradiction: \(0\neq yrs=(\widetilde{\varphi}-\widetilde{\psi})(xrs)=(\widetilde{\varphi}|_{N}-\widetilde{\psi}|_{N})(xrs)=0\). Therefore \(\widetilde{\varphi}=\widetilde{\psi}\). In addition, let \(x_{1}\in\widetilde{\varphi}M\) and \(0\neq x_{2}\in\widetilde{\varphi}M\). Then \(\widetilde{\varphi}(m_{1})=x_{1},\widetilde{\varphi}(m_{2})=x_{2}\) for some \(m_{1},m_{2}\in M\). As \(\widetilde{\varphi}M\subseteq\widetilde{E}(K)\), \(0\neq x_{2}r\in K\) for some \(r\in R\). Since \(N\leq^{\operatorname{den}}_{K}M\) and \(m_{1}r\in M\), there exists \(s\in R\) such that \(m_{1}rs\in N\) and \(0\neq x_{2}rs\). Thus \(x_{1}rs=\widetilde{\varphi}(m_{1}rs)\in\varphi N\) and \(0\neq x_{2}rs\). Therefore \(\varphi N\leq^{\operatorname{den}}\widetilde{\varphi}M\). Note that the dense property implies the essential property; however, the relatively dense property does not imply the essential property in general: \(\mathbb{Z}_{p}\leq^{\operatorname{den}}_{\mathbb{Z}}\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\) but \(\mathbb{Z}_{p}\nleq^{\operatorname{ess}}\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\) as a \(\mathbb{Z}\)-module. However, Proposition 2.13 shows that \(\varphi N\leq^{\operatorname{den}}\widetilde{\varphi}M\) when \(N\leq^{\operatorname{den}}_{K}M\) for any \(\varphi\in\operatorname{Hom}_{R}(N,K)\). As a corollary, we have a generalized result of Theorem 2.12((a)\(\Rightarrow\)(c)). **Corollary 2.14**.: _Let \(M\) be a right \(R\)-module. If \(K\) is rationally complete, then for any \(N\leq^{\operatorname{den}}_{K}M\), \(\varphi\in\operatorname{Hom}_{R}(N,K)\) is uniquely extended to \(\widetilde{\varphi}\in\operatorname{Hom}_{R}(M,K)\) and \(\widetilde{\varphi}|_{N}=\varphi\)._ **Theorem 2.15**.: _Let \(M\) be a right \(R\)-module. Then \(\operatorname{End}_{R}(M)\) is considered as a subring of \(\operatorname{End}_{R}(\widetilde{E}(M))\)._ Proof.: Since \(M\leq^{\operatorname{den}}\widetilde{E}(M)\), from Proposition 2.13, \(\varphi\in\operatorname{End}_{R}(M)\) can be uniquely extended to \(\widetilde{\varphi}\in\operatorname{End}_{R}(\widetilde{E}(M))\) because \(\operatorname{End}_{R}(M)\subseteq\operatorname{Hom}_{R}(M,\widetilde{E}(M))\).
Thus we have a one-to-one correspondence between \(\operatorname{End}_{R}(M)\) and \(\{\widetilde{\varphi}\in\operatorname{End}_{R}(\widetilde{E}(M))\,|\,\widetilde{\varphi}|_{M}=\varphi\in\operatorname{End}_{R}(M)\}\) given by \(\Omega(\varphi)=\widetilde{\varphi}\). We need to check that \(\Omega\) is a ring homomorphism. (i) Since \(\Omega(\varphi+\psi)|_{M}=\widetilde{(\varphi+\psi)}|_{M}=\varphi+\psi=\Omega(\varphi)|_{M}+\Omega(\psi)|_{M}=(\Omega(\varphi)+\Omega(\psi))|_{M}\), from the uniqueness of Proposition 2.13 we have \(\Omega(\varphi+\psi)=\Omega(\varphi)+\Omega(\psi)\). (ii) Since \(\Omega(\varphi\circ\psi)|_{M}=\widetilde{(\varphi\circ\psi)}|_{M}=\varphi\circ\psi=\Omega(\varphi)|_{M}\circ\Omega(\psi)|_{M}=(\Omega(\varphi)\circ\Omega(\psi))|_{M}\) because \(\Omega(\psi)M=\psi M\leq M\), from the uniqueness of Proposition 2.13 we have \(\Omega(\varphi\circ\psi)=\Omega(\varphi)\circ\Omega(\psi)\). Thus \(\operatorname{End}_{R}(M)\) is isomorphic to a subring of \(\operatorname{End}_{R}(\widetilde{E}(M))\). Therefore we consider \(\operatorname{End}_{R}(M)\) as a subring of \(\operatorname{End}_{R}(\widetilde{E}(M))\). We conclude this section with results for the rational hulls of quasi-continuous modules and quasi-injective modules. **Theorem 2.16**.: _The following statements hold true for a module \(M\):_ * _If_ \(M\) _is a quasi-continuous module then_ \(\widetilde{E}(M)\) _is a quasi-continuous module._ * _If_ \(M\) _is a quasi-injective module then_ \(\widetilde{E}(M)\) _is a quasi-injective module._ Proof.: (i) Let \(T=\operatorname{End}_{R}(E(\widetilde{E}(M)))=\operatorname{End}_{R}(E(M))\). From [10, Theorem 2.8], we need to show that \(f\widetilde{E}(M)\leq\widetilde{E}(M)\) for all idempotents \(f^{2}=f\in T\): Assume that \(f\widetilde{E}(M)\nleq\widetilde{E}(M)\) for some idempotent \(f^{2}=f\in T\). Then there exists \(x\in\widetilde{E}(M)\) such that \(f(x)\notin\widetilde{E}(M)\). Thus, there exists \(g\in T\) such that \(gM=0\) and \(gf(x)\neq 0\). Since \(gf(x)\in E(M)\), there exists \(r\in R\) such that \(0\neq gf(xr)\in\widetilde{E}(M)\). Thus, as \(M\leq^{\operatorname{den}}\widetilde{E}(M)\) and \(xr\in\widetilde{E}(M)\), there exists \(s\in R\) such that \(0\neq gf(xrs)\) and \(xrs\in M\). Note that \(fM\leq M\) for all idempotents \(f^{2}=f\in T\) because \(M\) is quasi-continuous. However, \(0\neq gf(xrs)\in gfM\leq gM=0\), a contradiction. Therefore \(\widetilde{E}(M)\) is a quasi-continuous module. (ii) The proof is similar to that of part (i) by using [10, Corollary 1.14]. _Remark 2.17_ ([1, Theorem 5.3]).: The rational hull of every extending module is an extending module. Note that if \(M\) is an injective module then \(M=\widetilde{E}(M)\) (see [7, Examples 8.18(1)]). The next examples exhibit that the converses of Theorem 2.16 and Remark 2.17 do not hold true. **Example 2.18**.: (i) Consider \(\mathbb{Z}\) as a \(\mathbb{Z}\)-module. Then \(\widetilde{E}(\mathbb{Z})=\mathbb{Q}\) is (quasi-)injective, while \(\mathbb{Z}\) is not quasi-injective. (ii) ([10, Example 2.9]) Consider the ring \(R=\left(\begin{smallmatrix}F&F\\ 0&F\end{smallmatrix}\right)\) where \(F\) is a field. Then \(\widetilde{E}(R_{R})=\left(\begin{smallmatrix}F&F\\ F&F\end{smallmatrix}\right)\) is injective (hence, quasi-continuous), while \(R_{R}\) is not quasi-continuous. (iii) Consider \(\mathbb{Z}^{(\mathbb{N})}\) as a \(\mathbb{Z}\)-module.
Then \(\widetilde{E}(\mathbb{Z}^{(\mathbb{N})})=\mathbb{Q}^{(\mathbb{N})}\) is injective (hence, extending), while \(\mathbb{Z}^{(\mathbb{N})}\) is not extending. **Corollary 2.19**.: _The maximal right ring of quotients of a quasi-continuous ring is also a quasi-continuous ring._ _Remark 2.20_ ([7, Exercises 13.8]).: The maximal right ring of quotients of a simple (resp., prime, semiprime) ring is also a simple (resp., prime, semiprime) ring. **Open Question 1**.: _Is the rational hull of a continuous module always a continuous module?_ ## 3. Direct sum of rational hulls of modules As we know, the injective hull of the direct sum of two modules is the direct sum of the injective hulls of each module, without any condition. However, the rational hull case is different from the injective hull case. In this section, we discuss the condition for the rational hull of the direct sum of two modules to be the direct sum of the rational hulls of those modules. The next example shows that the rational hull of the direct sum of two modules is not the direct sum of the rational hulls of each module, in general. **Example 3.1**.: Consider \(M=\mathbb{Z}\oplus\mathbb{Z}_{p}\) as a \(\mathbb{Z}\)-module where \(p\) is prime. Then \(\widetilde{E}(\mathbb{Z})=\mathbb{Q}\) and \(\widetilde{E}(\mathbb{Z}_{p})=\mathbb{Z}_{p}\). However, by [7, Example 8.21], \(\widetilde{E}(M)=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{p}\neq\mathbb{Q}\oplus\mathbb{Z}_{p}\) where \(\mathbb{Z}_{(p)}=\{\frac{m}{n}\in\mathbb{Q}\,|\,m,n\in\mathbb{Z},(n,p)=1\}\). Hence \(M\) is not a dense submodule of \(\mathbb{Q}\oplus\mathbb{Z}_{p}\): For \((\frac{1}{p},\overline{0})\) and \(0\neq(0,\overline{1})\in\mathbb{Q}\oplus\mathbb{Z}_{p}\), there is no \(n\in\mathbb{Z}\) such that \(n(\frac{1}{p},\overline{0})\in\mathbb{Z}\oplus\mathbb{Z}_{p}\) and \(n(0,\overline{1})\neq 0\). **Proposition 3.2**.: _Let \(M=\bigoplus_{k\in\Lambda}M_{k}\) where each \(M_{k}\) is a right \(R\)-module and \(\Lambda\) is any index set. If either \(R\) is right noetherian or \(|\Lambda|\) is finite, then \(\widetilde{E}(M)\leq\bigoplus_{k\in\Lambda}\widetilde{E}(M_{k})\)._ Proof.: Suppose \(0\neq m\in\widetilde{E}(M)\). Since \(\widetilde{E}(M)\subseteq E(M)=\oplus_{k\in\Lambda}E(M_{k})\) because \(R\) is right noetherian or \(|\Lambda|\) is finite, there exists \(\ell\in\mathbb{N}\) such that \(m\in\oplus_{i=1}^{\ell}E(M_{i})\). Thus, \(m=(m_{1},\ldots,m_{\ell})\) where \(m_{i}\in E(M_{i})\). Since \((0,\ldots,0,y_{i},0,\ldots,0)\cdot m^{-1}M\neq 0\) for all \(0\neq y_{i}\in E(M_{i})\) and \(m^{-1}M=m_{1}^{-1}M_{1}\cap\cdots\cap m_{\ell}^{-1}M_{\ell}\), we get \(y_{i}\cdot m_{i}^{-1}M_{i}\neq 0\) for all \(0\neq y_{i}\in E(M_{i})\). Thus, \(m_{i}\in\widetilde{E}(M_{i})\) for all \(1\leq i\leq\ell\) from Proposition 1.4. So, \(m=(m_{1},\ldots,m_{\ell})\in\oplus_{i=1}^{\ell}\widetilde{E}(M_{i})\subseteq\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). Therefore \(\widetilde{E}(M)\leq\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). _Remark 3.3_.: Example 3.1 illustrates Proposition 3.2 because \(R=\mathbb{Z}\) is a noetherian ring; that is, \(\widetilde{E}(\mathbb{Z}\oplus\mathbb{Z}_{p})=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{p}\lneq\mathbb{Q}\oplus\mathbb{Z}_{p}=\widetilde{E}(\mathbb{Z})\oplus\widetilde{E}(\mathbb{Z}_{p})\).
However, Example 3.7 shows that the condition "either \(R\) is right noetherian or \(|\Lambda|\) is finite" is not superfluous because \(\widetilde{E}(\oplus_{k\in\Lambda}\mathbb{Z}_{2})=\prod_{k\in\Lambda}\mathbb{Z}_{2}\gneq\oplus_{k\in\Lambda}\mathbb{Z}_{2}=\oplus_{k\in\Lambda}\widetilde{E}(\mathbb{Z}_{2})\) for the non-noetherian ring \(R=\langle\oplus_{k\in\Lambda}\mathbb{Z}_{2},1\rangle\). To get the reverse inclusion of Proposition 3.2, we first provide the properties of the relatively dense property. **Lemma 3.4**.: _Let \(N\leq M\) and let \(K_{i}\) be right \(R\)-modules for all \(i\in\Lambda\). Then \(N\) is \(K_{i}\)-dense in \(M\) for all \(i\in\Lambda\) if and only if \(N\) is \(\bigoplus_{i\in\Lambda}K_{i}\)-dense in \(M\) if and only if \(N\) is \(\bigoplus_{i\in\Lambda}\widetilde{E}(K_{i})\)-dense in \(M\)._ Proof.: Let \(P\) be any submodule such that \(N\leq P\leq M\). Since \(N\) is \(K_{i}\)-dense in \(M\), \(\operatorname{Hom}_{R}(P/N,K_{i})=0\) for all \(i\in\Lambda\) from Proposition 2.5. Consider the exact sequence \(0\to\oplus_{i\in\Lambda}K_{i}\to\prod_{i\in\Lambda}K_{i}\). Then we have \(0\to\operatorname{Hom}_{R}(P/N,\oplus_{i\in\Lambda}K_{i})\to\operatorname{Hom}_{R}(P/N,\prod_{i\in\Lambda}K_{i})\cong\prod_{i\in\Lambda}\operatorname{Hom}_{R}(P/N,K_{i})=0\). Thus \(\operatorname{Hom}_{R}(P/N,\oplus_{i\in\Lambda}K_{i})=0\). Therefore \(N\) is \(\oplus_{i\in\Lambda}K_{i}\)-dense in \(M\) from Proposition 2.5. Conversely, since \(\operatorname{Hom}_{R}(P/N,\oplus_{i\in\Lambda}K_{i})=0\), \(\operatorname{Hom}_{R}(P/N,K_{i})=0\) for each \(i\in\Lambda\). Hence \(N\) is \(K_{i}\)-dense in \(M\) for all \(i\in\Lambda\). For the second equivalence, since \(E(\widetilde{E}(K_{i}))=E(K_{i})\), from Proposition 2.5 it is easy to see that \(N\) is \(K_{i}\)-dense in \(M\) if and only if \(N\) is \(\widetilde{E}(K_{i})\)-dense in \(M\), for all \(i\in\Lambda\). Applying the first equivalence then gives the second equivalence. Using Lemma 3.4, we obtain a characterization for \(\oplus_{k\in\Lambda}N_{k}\) to be a dense submodule of \(\oplus_{k\in\Lambda}M_{k}\) where \(N_{i}\) is a submodule of \(M_{i}\) for each \(i\in\Lambda\). **Proposition 3.5**.: _Let \(N_{i}\leq M_{i}\) be right \(R\)-modules for all \(i\in\Lambda\) where \(\Lambda\) is any index set. Let \(N=\bigoplus_{k\in\Lambda}N_{k}\) and \(M=\bigoplus_{k\in\Lambda}M_{k}\). Then \(N\leq^{\operatorname{den}}M\) if and only if \(N_{i}\) is \(M_{j}\)-dense in \(M_{i}\) for all \(i,j\in\Lambda\)._ Proof.: Suppose \(N\leq^{\operatorname{den}}M\). Then \(N\) is \(M\)-dense in \(M\) by definition. From Lemma 3.4, \(N\) is \(M_{j}\)-dense in \(M\) for all \(j\in\Lambda\). Let \(x_{i}\in M_{i}\) and \(0\neq y_{j}\in M_{j}\) be arbitrary for each \(i,j\in\Lambda\). Since \((0,\dots,0,x_{i},0,\dots)\in M\) and \(0\neq y_{j}\in M_{j}\), there exists \(r\in R\) such that \((0,\dots,0,x_{i},0,\dots)r\in N\) and \(y_{j}r\neq 0\). Since \(x_{i}r\in N_{i}\) and \(y_{j}r\neq 0\), \(N_{i}\) is \(M_{j}\)-dense in \(M_{i}\) for all \(i,j\in\Lambda\). Conversely, suppose \(N_{i}\) is \(M_{j}\)-dense in \(M_{i}\) for all \(i,j\in\Lambda\). From Lemma 3.4, \(N_{i}\) is \(\oplus_{k\in\Lambda}M_{k}\)-dense in \(M_{i}\) for all \(i\in\Lambda\). Let \(x\in M\) and \(0\neq y\in M\) be arbitrary. Then there exists \(\ell\in\mathbb{N}\) such that \(x=(x_{1},\dots,x_{\ell})\in\oplus_{k=1}^{\ell}M_{k}\leq M\). Since \(N_{1}\) is \(M\)-dense in \(M_{1}\), there exists \(r_{1}\in R\) such that \(x_{1}r_{1}\in N_{1}\) and \(0\neq yr_{1}\in M\).
Also, since \(N_{2}\) is \(M\)-dense in \(M_{2}\), there exists \(r_{2}\in R\) such that \(x_{2}r_{1}r_{2}\in N_{2}\) and \(0\neq yr_{1}r_{2}\in M\). Continuing in this way, we obtain \(r=r_{1}r_{2}\cdots r_{\ell}\in R\) such that \(xr\in\oplus_{k=1}^{\ell}N_{k}\leq N\) and \(yr\neq 0\). Therefore \(N\leq^{\operatorname{den}}M\). From Propositions 3.2 and 3.5, we have a characterization for the rational hull of the direct sum of modules to be the direct sum of the rational hulls of each module. **Theorem 3.6**.: _Let \(M=\bigoplus_{k\in\Lambda}M_{k}\) where \(M_{k}\) is a right \(R\)-module and \(\Lambda\) is any index set. If either \(R\) is right noetherian or \(|\Lambda|\) is finite, then \(\widetilde{E}(M)=\bigoplus_{k\in\Lambda}\widetilde{E}(M_{k})\) if and only if \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\)._ Proof.: Suppose \(\widetilde{E}(M)=\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). Since \(M\leq^{\operatorname{den}}\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\), from Proposition 3.5, \(M_{i}\) is \(\widetilde{E}(M_{j})\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\). Thus, \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\) from Lemma 3.4. Conversely, suppose \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\). Then \(M_{i}\) is \(\widetilde{E}(M_{j})\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\) from Lemma 3.4. Thus, from Proposition 3.5, \(M\leq^{\operatorname{den}}\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). Hence \(\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\leq\widetilde{E}(M)\) from Proposition 1.3. Also, from Proposition 3.2, \(\widetilde{E}(M)\leq\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). Therefore \(\widetilde{E}(M)=\oplus_{k\in\Lambda}\widetilde{E}(M_{k})\). The next examples show that the condition "\(R\) is right noetherian or \(|\Lambda|\) is finite" in Theorem 3.6 is not superfluous. **Example 3.7**.: (i) Let \(R=\langle\oplus_{k\in\Lambda}\mathbb{Z}_{2},1\rangle\) and \(M=\oplus_{k\in\Lambda}M_{k}\) where \(M_{k}=\mathbb{Z}_{2}\). Note that \(R\) is not noetherian. Since \(\mathbb{Z}_{2}\) is an injective \(R\)-module, \(\widetilde{E}(\mathbb{Z}_{2})=\mathbb{Z}_{2}\). Thus \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\). However, \(\widetilde{E}(\oplus_{k\in\Lambda}\mathbb{Z}_{2})=\prod_{k\in\Lambda}\mathbb{Z}_{2}\gneq\oplus_{k\in\Lambda}\mathbb{Z}_{2}=\oplus_{k\in\Lambda}\widetilde{E}(\mathbb{Z}_{2})\). (ii) Let \(R=\{(a_{k})\in\prod_{k\in\Lambda}\mathbb{Z}\,|\,a_{k}\text{ is eventually constant}\}\) and \(M=\oplus_{k\in\Lambda}\mathbb{Z}\). Note that \(R\) is not noetherian. Then \(\widetilde{E}(\mathbb{Z})=\mathbb{Q}\) and \(\mathbb{Z}\) is \(\mathbb{Z}\)-dense in \(\widetilde{E}(\mathbb{Z})\). However, \(\widetilde{E}(\oplus_{k\in\Lambda}\mathbb{Z})=\prod_{k\in\Lambda}\mathbb{Q}\gneq\oplus_{k\in\Lambda}\mathbb{Q}=\oplus_{k\in\Lambda}\widetilde{E}(\mathbb{Z})\). The next example illustrates Theorem 3.6. **Example 3.8**.: Consider \(M=\mathbb{Z}\oplus\mathbb{Z}_{p}\) as a \(\mathbb{Z}\)-module where \(p\) is prime. Then \(\mathbb{Z}_{p}\) is \(\mathbb{Z}\)-dense in \(\widetilde{E}(\mathbb{Z}_{p})=\mathbb{Z}_{p}\), but \(\mathbb{Z}\) is not \(\mathbb{Z}_{p}\)-dense in \(\mathbb{Q}\) because for \(\frac{1}{p}\in\mathbb{Q}\) and \(\overline{1}\in\mathbb{Z}_{p}\), there is no element \(t\in\mathbb{Z}\) such that \(t\frac{1}{p}\in\mathbb{Z}\) and \(t\overline{1}\neq 0\).
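To spell out the arithmetic behind the last claim (a one-line elaboration of Example 3.8): for \(t\in\mathbb{Z}\),
\[t\cdot\tfrac{1}{p}\in\mathbb{Z}\;\Longleftrightarrow\;p\mid t\;\Longrightarrow\;t\cdot\overline{1}=\overline{t}=\overline{0}\in\mathbb{Z}_{p},\]
so the two requirements \(t\frac{1}{p}\in\mathbb{Z}\) and \(t\overline{1}\neq 0\) can never be met simultaneously.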
Thus, from Theorem 3.6, \(\widetilde{E}(M)=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{p}\lneq\mathbb{Q}\oplus\mathbb{Z}_{p}=\widetilde{E}(\mathbb{Z})\oplus\widetilde{E}(\mathbb{Z}_{p})\). (See Example 3.1 for details.) **Corollary 3.9**.: _Let \(M\) be a right \(R\)-module. If either \(R\) is right noetherian or \(\Lambda\) is a finite index set, then \(\widetilde{E}(M^{(\Lambda)})=(\widetilde{E}(M))^{(\Lambda)}\)._ **Corollary 3.10**.: _Let \(\{M_{k}\}_{k\in\Lambda}\) be a class of rationally complete right \(R\)-modules for any index set \(\Lambda\). If either \(R\) is right noetherian or \(|\Lambda|\) is finite, then \(M=\bigoplus_{k\in\Lambda}M_{k}\) is rationally complete._ Proof.: Since \(\widetilde{E}(M_{i})=M_{i}\), \(M_{i}\) is \(M_{j}\)-dense in \(\widetilde{E}(M_{i})\) for all \(i,j\in\Lambda\). From Theorem 3.6, \(\widetilde{E}(M)=\oplus_{k\in\Lambda}\widetilde{E}(M_{k})=\oplus_{k\in\Lambda}M_{k}=M\). Therefore \(\oplus_{k\in\Lambda}M_{k}\) is rationally complete. **Proposition 3.11** ([14, Proposition 1.9]).: _Let \(\{S_{i}\}_{i\in\Lambda}\) be a set of nonisomorphic simple modules, representing all singular simple modules. Then every module containing the module \(P=\bigoplus_{i\in\Lambda}S_{i}\) is rationally complete._ ## 4. The endomorphism ring of a module over a right ring of quotients of a ring In this section, we obtain conditions under which \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\) where \(H\) is a right ring of quotients of a ring \(R\). Recall that an extension ring \(H\) of a ring \(R\) is called a _right ring of quotients_ of \(R\) if for any two elements \(x\neq 0\) and \(y\) of \(H\), there exists an element \(r\in R\) such that \(xr\neq 0\) and \(yr\in R\). **Theorem 4.1**.: _Let \(M\) be a right \(H\)-module where \(H\) is a right ring of quotients of a ring \(R\). If \(R\) is \(M_{R}\)-dense in \(H_{R}\) then \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\)._ Proof.: Since \(\operatorname{End}_{H}(M)\subseteq\operatorname{End}_{R}(M)\), it suffices to show that \(\operatorname{End}_{R}(M)\subseteq\operatorname{End}_{H}(M)\): Let \(\varphi\in\operatorname{End}_{R}(M)\) be arbitrary. Assume that \(\varphi\notin\operatorname{End}_{H}(M)\). Then there exist \(m\in M,t\in H\) such that \(\varphi(mt)-\varphi(m)t\neq 0\). Since \(R\) is \(M_{R}\)-dense in \(H_{R}\), there exists \(r\in R\) such that \((\varphi(mt)-\varphi(m)t)r\neq 0\) and \(tr\in R\). Hence \(0\neq(\varphi(mt)-\varphi(m)t)r=\varphi(mt)r-\varphi(m)(tr)=\varphi(mtr)-\varphi(mtr)=0\), a contradiction. Therefore \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\). _Remark 4.2_.: (i) \(R\) is always \(E(R)\)-dense in \(H_{R}\) where \(H\) is a right ring of quotients of \(R\). For, let \(x\in H_{R}\) and \(0\neq y\in E(R)\). Since \(H\leq^{\text{ess}}E(R)_{R}\), there exists \(s\in R\) such that \(0\neq ys\in H\). Also, \(xs\in H\). Since \(R\leq^{\text{den}}H_{R}\), there exists \(t\in R\) such that \(xst\in R\) and \(0\neq yst\). Therefore \(R\) is \(E(R)\)-dense in \(H_{R}\). (ii) If \(M\) is a nonsingular \(R\)-module, then \(R\) is \(M_{R}\)-dense in \(H_{R}\): For, let \(0\neq m\in M\) and \(t\in H\) be arbitrary. Take \(t^{-1}R=\{r\in R\,|\,tr\in R\}\), a right ideal of \(R\). Note that \(t^{-1}R\leq^{\text{ess}}R_{R}\). Since \(M\) is nonsingular, \(\mathbf{r}_{R}(m)\) is not essential in \(R_{R}\); hence \(t^{-1}R\nsubseteq\mathbf{r}_{R}(m)\), so there exists \(r\in t^{-1}R\) with \(r\notin\mathbf{r}_{R}(m)\). Thus, \(tr\in R\) and \(mr\neq 0\). Therefore \(R\) is \(M_{R}\)-dense in \(H_{R}\).
(iii) If \(M\) is a submodule of a projective right \(H\)-module, then \(R\) is \(M_{R}\)-dense in \(H_{R}\). For, let \(P\) be a projective right \(H\)-module containing \(M\), that is, \(M\leq P\) where \(P\leq^{\oplus}H^{(\Lambda)}\) for some index set \(\Lambda\). Then there is a right \(R\)-module \(K\leq E(P)\) such that \(E(P)=E(M)\oplus K\). Since \(R\leq^{\text{den}}H_{R}\), we get that \(R\) is \(H^{(\Lambda)}\)-dense in \(H_{R}\) from Lemma 3.4. Hence \(R\) is \(P\)-dense in \(H_{R}\). Thus \(\operatorname{Hom}_{R}(H/R,E(P))=0\) from Proposition 2.5. Since \(\operatorname{Hom}_{R}(H/R,E(P))\cong\operatorname{Hom}_{R}(H/R,E(M))\oplus\operatorname{Hom}_{R}(H/R,K)\), we obtain \(\operatorname{Hom}_{R}(H/R,E(M))=0\). It follows that \(R\) is \(M_{R}\)-dense in \(H_{R}\). **Corollary 4.3**.: _Let \(M\) be a projective right \(H\)-module where \(H\) is a right ring of quotients of \(R\). Then \(\operatorname{End}_{R}(M)=\operatorname{End}_{H}(M)\)._ The next example illustrates Corollary 4.3. **Example 4.4**.: Let \(Q=\prod_{n=1}^{\infty}\mathbb{Z}_{2}\) and \(R=\{(a_{n})\in Q\mid a_{n}\text{ is eventually constant}\}\). Then \(Q\) is a maximal right ring of quotients of \(R\). Hence from Theorem 4.1, \(\operatorname{End}_{R}(Q^{(\Lambda)})=\operatorname{End}_{Q}(Q^{(\Lambda)})=\operatorname{\sf CFM}_{\Lambda}(Q)\). **Theorem 4.5**.: _Let \(M\) be a finitely generated free \(R\)-module with \(S=\operatorname{End}_{R}(M)\). If either \(R\) is right noetherian or \(\Lambda\) is any finite index set, then \(\operatorname{End}_{R}\left(\widetilde{E}(M^{(\Lambda)})\right)=\operatorname{\sf CFM}_{\Lambda}\left(Q(S)\right)\)._ Proof.: Let \(M=R^{(n)}\) for some \(n\in\mathbb{N}\). From Corollary 3.9, \(\widetilde{E}(R^{(n)})=\widetilde{E}(R)^{(n)}=Q(R)^{(n)}\) as \(\widetilde{E}(R)=Q(R)\). Hence \(\operatorname{End}_{R}\left(\widetilde{E}(M^{(\Lambda)})\right)=\operatorname{End}_{R}\left(\widetilde{E}(M)^{(\Lambda)}\right)=\operatorname{End}_{R}\left((Q(R)^{(n)})^{(\Lambda)}\right)=\operatorname{End}_{Q(R)}\left((Q(R)^{(n)})^{(\Lambda)}\right)=\operatorname{\sf CFM}_{\Lambda}\left(\operatorname{End}_{Q(R)}(Q(R)^{(n)})\right)=\operatorname{\sf CFM}_{\Lambda}\left(\operatorname{\sf Mat}_{n}(Q(R))\right)\) from Theorem 4.1. Therefore \(\operatorname{End}_{R}\left(\widetilde{E}(M^{(\Lambda)})\right)=\operatorname{\sf CFM}_{\Lambda}\left(Q(\operatorname{End}_{R}(M))\right)\) because \(\operatorname{\sf Mat}_{n}(Q(R))=Q(\operatorname{\sf Mat}_{n}(R))\) by [15, 2.3] and \(\operatorname{End}_{R}(M)=\operatorname{\sf Mat}_{n}(R)\). The next result is generalized from [15, 2.3]. **Corollary 4.6**.: _Let \(M\) be a finitely generated free \(R\)-module. Then \(Q(\operatorname{End}_{R}(M))=\operatorname{End}_{R}(\widetilde{E}(M))\)._ The following example shows that the above result cannot be extended to _flat_ modules. This example also shows that \(R\) is not \(M_{R}\)-dense in \(Q_{R}\) where \(Q\) is a right ring of quotients of \(R\). **Example 4.7**.: Let \(Q=\prod_{n=1}^{\infty}\mathbb{Z}_{2}\), \(R=\{(a_{n})\in Q\mid a_{n}\text{ is eventually constant}\}\), and \(I=\{(a_{n})\in Q\mid a_{n}=0\text{ eventually}\}\). Note that \(Q=Q(R)\). Let \(M=Q/I\), which is a flat \(Q\)-module but not projective. We claim that \(\operatorname{End}_{Q}(M)\subsetneq\operatorname{End}_{R}(M)\). Indeed, define \(f:M\to M\) via \[f[(a_{1},a_{2},\dots,a_{n},a_{n+1},\dots)+I]=(a_{1},0,a_{2},0,\dots,a_{n},0,a_{n+1},0,\dots)+I,\] for any \(\overline{a}=a+I=(a_{1},a_{2},\dots,a_{n},a_{n+1},\dots)+I\in M\).
It is easy to see that \(f(\overline{a}+\overline{b})=f(\overline{a})+f(\overline{b})\) for any \(\overline{a}\), \(\overline{b}\in M\). Meanwhile, for any \(r=(r_{1},r_{2},\dots,r_{n},r_{n+1},\dots)\in R\), we have \[(a+I)r=ar+I=\left\{\begin{array}{ll}(0,\dots,0,a_{n},a_{n+1},\dots)+I,&\text{if }r_{n}=r_{n+1}=\dots=1;\\ (0,\dots,0,0,0,\dots)+I,&\text{if }r_{n}=r_{n+1}=\dots=0.\end{array}\right.\] Note that \(a+I=(0,0,\dots,0,a_{n},a_{n+1},\dots)+I\) for some \(n\in\mathbb{N}\). One can easily see that \(f[(a+I)r]=[f(a+I)]r\) for all \(a\in Q\), \(r\in R\). This shows \(f\in\operatorname{End}_{R}(M)\). However, for \(q=(0,1,0,1,\dots)=q^{2}\in Q\), we have \([f(q+I)]q=0+I\) while \(f[(q+I)q]=f(q+I)\neq 0+I\). This means \(f\not\in\operatorname{End}_{Q}(M)\). Thus, \(\operatorname{End}_{Q}(M)\subsetneq\operatorname{End}_{R}(M)\). Note that \(R\) is not \(M_{R}\)-dense in \(Q\). For, let \(q\in Q\setminus R\) and \(m=1+I\in M\). Since \((1+I)r=0+I\) for every \(r\in R\) that is eventually \(0\), the element \(r\) must be eventually \(1\) in order that \(mr\neq 0+I\). However, then \(qr\) agrees with \(q\) from some point on, so \(qr\notin R\). Recall that a module \(M\) is said to be _polyform_ if every essential submodule of \(M\) is a dense submodule. **Lemma 4.8**.: _A module \(M\) is polyform if and only if \(\widetilde{E}(M)\) is a polyform quasi-injective module._ Proof.: Let \(X\) be essential in \(\widetilde{E}(M)\). Then \(X\cap M\leq^{\operatorname{ess}}M\). Hence \(X\cap M\) is a dense submodule of \(M\) because \(M\) is polyform. Since \(X\cap M\leq^{\operatorname{den}}M\leq^{\operatorname{den}}\widetilde{E}(M)\), \(X\cap M\leq^{\operatorname{den}}\widetilde{E}(M)\). Thus \(X\) is a dense submodule of \(\widetilde{E}(M)\) from Proposition 1.2(ii). Therefore \(\widetilde{E}(M)\) is a polyform module. In addition, the quasi-injective hull \(\widehat{M}\) is also a polyform module from [16, 11.1]. Since \(M\leq^{\operatorname{ess}}\widehat{M}\) and \(\widehat{M}\) is polyform, \(M\leq^{\mathrm{den}}\widehat{M}\). Thus \(\widetilde{E}(M)=\widetilde{E}(\widehat{M})\). Since the rational hull of a quasi-injective module is also quasi-injective from Theorem 2.16, \(\widetilde{E}(M)\) is a quasi-injective module. Therefore \(\widetilde{E}(M)\) is a polyform quasi-injective module. Conversely, let \(N\) be any essential submodule of \(M\). Then \(N\) is also essential in \(\widetilde{E}(M)\). Hence \(N\) is a dense submodule of \(\widetilde{E}(M)\) as \(\widetilde{E}(M)\) is polyform. So \(N\) is a dense submodule of \(M\). Therefore \(M\) is polyform. We showed in Theorem 2.15 that there is a canonical embedding of the ring \(\mathrm{End}_{R}(M)\) into the ring \(\mathrm{End}_{R}(\widetilde{E}(M))\). Next, we obtain a condition under which \(\mathrm{End}_{R}(M)\) and \(\mathrm{End}_{R}(\widetilde{E}(M))\) are isomorphic. It is a generalization of [7, Exercises 7.32]. **Proposition 4.9**.: _If \(M\) is a quasi-injective module then \(\mathrm{End}_{R}(M)\stackrel{{\Omega}}{{\cong}}\mathrm{End}_{R}(\widetilde{E}(M))\). In particular, if \(M\) is a polyform module, then the converse holds true._ Proof.: In the proof of Theorem 2.15, we only need to show that \(\Omega:\mathrm{End}_{R}(M)\to\mathrm{End}_{R}(\widetilde{E}(M))\), given by \(\Omega(\varphi)=\widetilde{\varphi}\), is surjective: Let \(\psi\in\mathrm{End}_{R}(\widetilde{E}(M))\) be arbitrary. Then there exists \(\widehat{\psi}\in\mathrm{End}_{R}(E(M))\) such that \(\widehat{\psi}|_{\widetilde{E}(M)}=\psi\). Since \(\widehat{\psi}M\leq M\) as \(M\) is quasi-injective, \(\widehat{\psi}|_{M}=\psi|_{M}\in\mathrm{End}_{R}(M)\).
Thus, \(\Omega(\psi|_{M})=\psi\), which shows that \(\Omega\) is surjective. In addition, suppose that \(M\) is a polyform module. Then from Lemma 4.8, \(\widetilde{E}(M)\) is quasi-injective. Thus, for any \(\vartheta\in\mathrm{End}_{R}(E(M))\), \(\vartheta\widetilde{E}(M)\leq\widetilde{E}(M)\). Since \(\vartheta|_{\widetilde{E}(M)}\in\mathrm{End}_{R}(\widetilde{E}(M))\) and \(\mathrm{End}_{R}(M)\stackrel{{\Omega}}{{\cong}}\mathrm{End}_{R}(\widetilde{E}(M))\), there exists \(\varphi\in\mathrm{End}_{R}(M)\) such that \(\Omega(\varphi)=\vartheta|_{\widetilde{E}(M)}\). Also by Theorem 2.15, \(\vartheta|_{M}=\varphi\). Thus, \(\vartheta M=\varphi M\leq M\). Therefore \(M\) is a quasi-injective module. **Corollary 4.10**.: _If \(M\) is a quasi-injective module, then \(Q(\mathrm{End}_{R}(M))\cong\mathrm{End}_{R}(\widetilde{E}(M))\)._ Proof.: Since \(M\) is a quasi-injective module, \(\mathrm{End}_{R}(M)\) is a right self-injective ring. So, \(Q(\mathrm{End}_{R}(M))=\mathrm{End}_{R}(M)\). Thus, \(Q(\mathrm{End}_{R}(M))\cong\mathrm{End}_{R}(\widetilde{E}(M))\) by Proposition 4.9. Remark that if \(M\) is a quasi-injective module then \(\widetilde{E}(M)\) is a quasi-injective module from Theorem 2.16 and \(\mathrm{End}_{R}(M)\cong\mathrm{End}_{R}(\widetilde{E}(M))\) from Proposition 4.9. However, the next example shows that there exists a quasi-injective module \(M\) such that \(M\neq\widetilde{E}(M)\). **Example 4.11**.: Let \(R=\left(\begin{smallmatrix}F&F\\ 0&F\end{smallmatrix}\right)\) and \(M=\left(\begin{smallmatrix}0&0\\ 0&F\end{smallmatrix}\right)\) where \(F\) is a field. Then \(M\) is a quasi-injective \(R\)-module. However, \(\widetilde{E}(M)=E(M)=\left(\begin{smallmatrix}0&0\\ F&F\end{smallmatrix}\right)\) because \(M\) is nonsingular. Thus \(M\) is a quasi-injective \(R\)-module such that \(M\lneq\widetilde{E}(M)\) and \(\mathrm{End}_{R}(M)\cong\left(\begin{smallmatrix}0&0\\ 0&F\end{smallmatrix}\right)\cong\mathrm{End}_{R}(\widetilde{E}(M))\). Because \(\widetilde{E}(M)=E(M)\) for a right nonsingular module \(M\), we have the following well-known results as a consequence of Proposition 4.9. **Corollary 4.12** ([7, Exercises 7.32]).: _For any nonsingular module \(M\), the following statements hold true:_ 1. _there is a canonical embedding_ \(\Omega\) _of the ring_ \(\mathrm{End}_{R}(M)\) _into the ring_ \(\mathrm{End}_{R}(E(M))\)_._ 2. \(M\) _is a quasi-injective_ \(R\)_-module if and only if_ \(\Omega\) _is an isomorphism._ **Corollary 4.13**.: _Let \(M\) be a right \(H\)-module where \(H\) is a right ring of quotients of a ring \(R\). The following statements hold true:_ 1. _If_ \(M\) _is a nonsingular_ \(R\)_-module then_ \(\mathrm{End}_{R}(M)=\mathrm{End}_{H}(M)\)_._ 2. _If_ \(M\) _is a submodule of a projective_ \(H\)_-module, then_ \(\mathrm{End}_{R}(M)=\mathrm{End}_{H}(M)\)_._ 3. _If_ \(M\) _is a nonsingular quasi-injective_ \(R\)_-module then_ \(\mathrm{End}_{R}(M)\cong\mathrm{End}_{R}(E(M))\) _and_ \(\mathrm{End}_{H}(M)\cong\mathrm{End}_{H}(E(M))\)_._ 4. _If_ \(M\) _is a quasi-injective_ \(R\)_-module and is a submodule of a projective_ \(H\)_-module then_ \(\operatorname{End}_{R}(M)\cong\operatorname{End}_{R}(\widetilde{E}(M))\) _and_ \(\operatorname{End}_{H}(M)\cong\operatorname{End}_{H}(\widetilde{E}(M))\)_._
2309.11328
Transport-based fusion that distinguishes between Majorana and Andreev bound states
It has proven difficult to distinguish between topological Majorana bound states and nontopological Andreev bound states and to measure the unique properties of the former. In this work, we aim to alleviate this problem by proposing and theoretically analyzing a new type of fusion protocol based on transport measurements in a Majorana box coupled to normal leads. The protocol is based on switching between different nanowire pairs being tunnel coupled to one of the leads. For a Majorana system, this leads to switching between different states associated with parity blockade. The charge being transmitted at each switch provides a measurement of the Majorana fusion rules. Importantly, the result is different for a system with nontopological Andreev bound states. The proposed protocol only requires measuring a DC current combined with fast gate-control of the tunnel couplings.
Maximilian Nitsch, Rubén Seoane Souto, Stephanie Matern, Martin Leijnse
2023-09-20T13:57:12Z
http://arxiv.org/abs/2309.11328v1
# Transport-based fusion that distinguishes between Majorana and Andreev bound states ###### Abstract It has proven difficult to distinguish between topological Majorana bound states and nontopological Andreev bound states and to measure the unique properties of the former. In this work, we aim to alleviate this problem by proposing and theoretically analyzing a new type of fusion protocol based on transport measurements in a Majorana box coupled to normal leads. The protocol is based on switching between different nanowire pairs being tunnel coupled to one of the leads. For a Majorana system, this leads to switching between different states associated with parity blockade. The charge being transmitted at each switch provides a measurement of the Majorana fusion rules. Importantly, the result is different for a system with nontopological Andreev bound states. The proposed protocol only requires measuring a DC current combined with fast gate-control of the tunnel couplings. ## I Introduction Finding Majorana bound states (MBSs) in topological superconducting systems has been an intensely pursued goal in condensed matter physics for over a decade [1; 2; 3; 4; 5; 6]. A promising platform to find MBSs is based on one-dimensional superconductor-semiconductor hybrid structures [7; 8; 9; 10]. There have been encouraging results in the form of observations of zero-bias peaks consistent with Majorana physics, see, e.g., Refs. [11; 12; 13; 14; 15; 16; 17]. However, these experiments offer no definite proof of topological MBSs, as topologically trivial systems hosting Andreev bound states (ABSs) can exhibit similar features [18; 19; 20; 21; 22; 23; 24; 25; 26]. To obtain definite proof for the topological nature of the observed states it is necessary to probe their nonabelian properties. Despite a large number of theoretical proposals, see Refs. [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] for a few examples, there has been no experimental realization of a braiding protocol. Instead of aiming for braiding, a simpler but conceptually related concept is fusion, for which there also exist various suggestions for experimental realizations [33; 38; 39; 40; 28; 33; 34; 29; 35; 36; 37; 38]. The idea of fusion protocols is to measure the MBS system in different basis combinations, thereby effectively accessing the nonabelian properties. Unfortunately, some fusion protocols can give the same outcome for zero-energy ABSs as for MBSs [38; 42] and therefore do not offer sufficient proof of a nonabelian topological phase. The main goal of this paper is to provide a proposal for a Majorana fusion experiment that explicitly distinguishes between topological and trivial systems. Our proposal is based on a DC transport measurement and does not require fast or single-shot read-out, only fast gate voltage pulses. The platform we consider is a Majorana box qubit [43; 44], a candidate for scalable topologically protected quantum computing [45; 46; 35; 47]. We connect it in a transport setup with two normal metallic leads (source, drain) and tune the connections between the source and box via a magnetic flux to establish parity blockade. Parity blockade, i.e., destructive interference between two paths via two Majoranas, was introduced in previous works on Majorana box qubits connected to quantum dots [48; 49; 50; 51] and in transport setups [52].
The proposed fusion protocol works as follows: In a transport setup with three MBS wires coupled to the same lead, parity blockade projects the box state onto well-defined blocking states. By repeatedly switching between different configurations of the lead-wire couplings, the system alternates between different blocking states. The projection of one blocking state onto another is similar to MBS fusion and, at the same time, determines the probability for a single electron to be transferred. A DC current measurement reveals the average outcome of the fusion protocol.

Figure 1: (\(a\)) Floating Majorana box with four topological nanowires hosting three MBSs \(\gamma_{0,1,2}\) connected to the source (S) and one \(\gamma_{3}\) to the drain (D) via tunnel couplings \(t_{0,1,2,3}\). Two magnetic fluxes \(\Phi_{01},\Phi_{12}\) are threaded through the loops \(t_{0},t_{1}\) and \(t_{1},t_{2}\) to adjust the relative phases. (\(b\)) Tunnel couplings \(t_{l},\tilde{t}_{l}\) to two Majorana operators \(\gamma_{l},\tilde{\gamma}_{l}\) describing an ABS in a topologically trivial system. (\(c\)) Basis rotation \(U^{\dagger}\) to describe two ABSs via two coupled and two decoupled MBSs.

The paper starts in Sec. II with an introduction to the system and transport setup, the concept of parity blockade, and the quantum master equation (QME) used for the transport simulations. Afterwards, we investigate how to employ parity blockade to distinguish between topological and trivial systems in Secs. III and IV. First, we show that the possibility to block the current via parity blockade is not sufficient to distinguish topological from trivial systems (Sec. III.1). In order to distinguish these cases, we first introduce a simple protocol based on establishing parity blockade followed by switching off different tunnel couplings (Sec. III.2), before moving on to the fusion protocol (Sec. IV). ## II Setup and transport description ### Majorana box We consider a Majorana box consisting of four topological nanowires hosting MBSs at their ends, connected by a small piece of superconductor in the trivial regime, see Fig. 1(\(a\)). The entire system, wires and connecting superconductor, is floating and has a charging energy \(E_{C}\), which we take to be the largest energy scale of the problem. A gate voltage \(V_{g}\) capacitively coupled to the box tunes the amount of energetically favorable charge \(n_{g}\), thereby setting the electron number \(N\) of the ground state. Furthermore, we include a small but finite energy splitting in the degenerate ground state sector due to exponentially small overlaps \(\varepsilon_{ll^{\prime}}\) between MBSs \(\gamma_{l},\gamma_{l^{\prime}}\). The Majorana box Hamiltonian reads \[H_{\rm MB}=\frac{i}{2}\sum_{ll^{\prime}}\varepsilon_{ll^{\prime}}\gamma_{l}\gamma_{l^{\prime}}+E_{C}(N-n_{g})^{2}, \tag{1}\] where the sum runs over \(l<l^{\prime}\). We note that there will in general be four additional MBSs at the points where the nanowires meet the trivial superconductor, and the overlaps with these MBSs might be larger than between MBSs at different nanowire ends. However, in the limit where the overlaps set the smallest energy scale of the problem, the results presented below remain qualitatively the same independent of which MBSs couple, and these additional MBSs can safely be neglected [53]. To enable charge flow, we connect the box to source (S) and drain (D) leads, described by noninteracting electrons, \(H_{\rm res}=\sum_{r=S,D}H_{r}\), with \(H_{r}=\sum_{k}\xi_{rk}c_{rk}^{\dagger}c_{rk}\).
The operators \(c_{rk}^{\dagger},c_{rk}\) create/destroy an electron in lead \(r\) with momentum \(k\) and energy \(\xi_{rk}\). We neglect spin, assuming that either the leads are spin polarized by a large magnetic field or all MBSs have equal spin polarization [52]. The leads are characterized by a temperature \(T_{r}=T\) and a symmetrically applied bias voltage setting the chemical potentials \(\mu_{S,D}=\pm V_{b}/2\). The leads are connected to the box via tunnel amplitudes \(t_{lr}\) between the \(l\)th MBS and lead \(r\), \[H_{\rm MB}^{T}=\sum_{lrk}\gamma_{l}\left(t_{lr}c_{rk}-t_{lr}^{*}c_{rk}^{\dagger}\right). \tag{2}\] We consider wide-band leads with energy-independent tunnel couplings. Only one MBS (\(\gamma_{3}\)) is coupled to the drain, while three MBSs (\(\gamma_{0,1,2}\)) are coupled to the source, and to enable parity blockade these MBSs need to connect to the same channel of the source [52]. The relative phases of \(t_{l}\) and \(t_{l^{\prime}}\) can be controlled via fluxes \(\Phi_{ll^{\prime}}\). Unless stated otherwise, we assume \(|t_{l}|=t>0\) for all \(l\). This introduces a tunneling rate \(\Gamma=2\pi\nu\,t^{2}\), where \(\nu\) is the density of states in the leads, which we take to be energy-independent and equal for source and drain. ### Andreev box For comparison we will also consider an "Andreev box", i.e., a system that is equivalent to the Majorana box, but where the (near) zero-energy states are nontopological ABSs. To facilitate the comparison to the Majorana box, we decompose each ABS into two MBSs, \(\gamma_{l}\) and \(\tilde{\gamma}_{l}\), see Fig. 1(\(b\)). This allows writing the Hamiltonian for the Andreev box in a very similar way to Eq. (1), \[H_{\rm AB}=\frac{i}{2}\sum_{l}\varepsilon_{l}\gamma_{l}\tilde{\gamma}_{l}+E_{C}(N-n_{g})^{2}, \tag{3}\] where the energy \(\varepsilon_{l}\) of ABS \(l\) is included as an overlap between the constituent MBSs and we have neglected overlaps between different ABSs. We assume that the \(\varepsilon_{l}\) constitute the smallest energies in the problem, which is the case where ABSs and MBSs are hard to distinguish. The source and drain leads are described in exactly the same way as for the Majorana box and their coupling to the Andreev box is given by \[H_{\rm AB}^{T}=\sum_{lrk}\left[\gamma_{l}\left(t_{lr}c_{rk}-t_{lr}^{*}c_{rk}^{\dagger}\right)+\tilde{\gamma}_{l}\left(\tilde{t}_{lr}c_{rk}-\tilde{t}_{lr}^{*}c_{rk}^{\dagger}\right)\right], \tag{4}\] where for ABS \(l\), electrons can tunnel into/out of both MBS constituents \(\gamma_{l}\) and \(\tilde{\gamma}_{l}\) with amplitudes \(t_{l}\) and \(\tilde{t}_{l}\), see Fig. 1(\(b\)). For nontopological ABSs, even if \(\varepsilon_{l}\) is small, \(\gamma_{l}\) and \(\tilde{\gamma}_{l}\) have similar spatial distributions [54; 55; 56] and one expects both \(t_{l}\) and \(\tilde{t}_{l}\) to be finite. The situation is complicated by the fact that both the relative amplitudes and phases of \(t_{l}\) and \(\tilde{t}_{l}\) are important, but there is freedom in choosing the way each ABS is decomposed into MBSs, which allows us to fix one of them. We choose the Majorana basis such that \(|t_{l}|=|\tilde{t}_{l}|\), leaving the relative phase \[\tilde{t}_{l}=e^{i\theta_{l}}\,t_{l}, \tag{5}\] as a parameter that is determined by the specific realization of the system. We will show in Secs. III and IV that the value of \(\theta_{l}\) quantifies how well an ABS resembles a single localized MBS.
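For readers who want to experiment with Eq. (1) numerically, the following is a minimal Python sketch (not the authors' code) of how the Majorana box Hamiltonian can be represented in a two-mode Fock space. The Majorana convention \(\gamma=f+f^{\dagger}\), \(\tilde{\gamma}=-i(f-f^{\dagger})\) and all numerical values are illustrative assumptions; only the parameter hierarchy \(\varepsilon\ll E_{C}\) is taken from the text.

```python
import numpy as np

# Two fermionic modes -> 4-dim Fock space spanned by |n01 n23>.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])        # single-mode annihilation

f01 = np.kron(a, I2)                           # mode built from gamma_0, gamma_1
f23 = np.kron(sz, a)                           # Jordan-Wigner sign for mode 2
g = [f01 + f01.conj().T, -1j * (f01 - f01.conj().T),
     f23 + f23.conj().T, -1j * (f23 - f23.conj().T)]   # gamma_0..gamma_3

# Illustrative parameters with the hierarchy eps << E_C from the text
eps = {(0, 1): 1.0e-3, (1, 2): 1.5e-3, (2, 3): 2.0e-3}
E_C, n_g = 1.0e2, 0.5                          # degeneracy point n_g = N + 1/2

Nop = f01.conj().T @ f01 + f23.conj().T @ f23  # electron number on the box
H = sum(0.5j * e * g[l] @ g[lp] for (l, lp), e in eps.items())
H = H + E_C * (Nop - n_g * np.eye(4)) @ (Nop - n_g * np.eye(4))

print(np.linalg.eigvalsh(H))  # near-degenerate low-energy sector, split ~eps
```

Diagonalizing this \(H\) exhibits the nearly degenerate ground-state sector at the charge degeneracy point, split only by the small overlaps \(\varepsilon_{ll^{\prime}}\), which is the starting point for the transport description below.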
### Transport description via QME To calculate the transport properties of the Majorana and Andreev box, we use a QME approach based on leading-order perturbation theory in \(\Gamma\). After tracing out the lead degrees of freedom, the equation of motion of the Majorana box density matrix is given by \[\partial_{t}\rho=\mathcal{L}\rho=-i[H_{\text{XB}},\rho]+\mathcal{D}\,\rho. \tag{6}\] The Liouvillian \(\mathcal{L}\) consists of two general parts: the unitary time evolution, determined by the box Hamiltonian \(H_{\text{XB}}\), either \(H_{\text{MB}}\) (Eq. 1) or \(H_{\text{AB}}\) (Eq. 3), and a dissipative part \(\mathcal{D}\) introduced by the coupling to the leads. The QME we are using is a Redfield-type approach (called 1st order von Neumann in Ref. [57]), which is equivalent to the first order of real-time diagrammatics [58; 59; 60]. The procedure for the numerical solution is as follows: We start by diagonalizing \(H_{\text{XB}}\), thereby obtaining the eigenenergies and many-body eigenstates of the disconnected system. In the next steps, we express Eq. (6) in the many-body eigenbasis. Based on the tunneling Hamiltonians in Eqs. (2) and (4), we calculate the tunnel matrix elements between eigenstates \(\ket{a}\) and \(\ket{b}\) as \[T_{b\to a}^{r,\text{MB}}=\sum_{l}t_{lr}\bra{a}\gamma_{l}\ket{b} \tag{7}\] for the Majorana box and as \[T_{b\to a}^{r,\text{AB}}=\sum_{l}\left[t_{lr}\bra{a}\gamma_{l}\ket{b}+\tilde{t}_{lr}\bra{a}\tilde{\gamma}_{l}\ket{b}\right] \tag{8}\] for the Andreev box. Afterwards follows the calculation of the dissipative part \(\mathcal{D}\) of Eq. (6), according to the 1st order von Neumann method. See Appendix A and [57] for details on the calculation of \(\mathcal{D}\). Equation (6) is now expressed in superoperator notation, \[\partial_{t}|\rho)=\hat{\mathcal{L}}|\rho). \tag{9}\] The density matrix is rearranged into the vector \(|\rho)\) and the Liouvillian is expressed as a matrix \(\hat{\mathcal{L}}\) called the kernel. We obtain the solution of this equation via numerical diagonalization of the kernel. As the kernel is non-Hermitian, the left and right eigenvectors \((l_{h}|\), \(|r_{h})\) for a given eigenvalue \(\chi_{h}\) are not guaranteed to be the same. We calculate both and obtain the solution as \[|\rho)(t)=|\rho)_{\text{ss}}+\sum_{h>0}e^{\chi_{h}t}c_{h}|r_{h}), \tag{10}\] where \(c_{h}\) is obtained from the initial state \(|\rho_{0})\) as \[c_{h}=(l_{h}|\rho_{0}). \tag{11}\] The solution consists of two parts: the stationary state solution \(|\rho)_{\text{ss}}\) and the finite-time contribution. Due to the non-Hermiticity of the kernel, its eigenvalues are complex-valued. For the solution to be physical, the eigenvalues have to fulfill two conditions. First, there needs to be a zero eigenvalue \(\chi_{0}=0\), yielding the stationary state solution. Second, the real parts of the remaining eigenvalues are strictly negative and lead to a decay of all finite-eigenvalue contributions. Depending on the system, this decay can be decorated with oscillations due to an imaginary part. As a last step, we calculate the current through the system from the density matrix as in [57].
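As a concrete illustration of the diagonalization step, the sketch below (plain NumPy/SciPy; a minimal reimplementation for illustration, not the code used for the simulations) implements Eqs. (9)–(11), assuming the kernel \(\hat{\mathcal{L}}\) has already been assembled as a dense matrix `Lmat` acting on the vectorized density matrix:

```python
import numpy as np
from scipy.linalg import eig

def evolve(Lmat, rho0_vec, times):
    """Solve d|rho)/dt = L|rho) by diagonalizing the non-Hermitian kernel,
    following Eqs. (9)-(11) of the text."""
    w, vl, vr = eig(Lmat, left=True, right=True)
    # Biorthonormalize so that (l_h | r_h) = 1 for every eigenvalue chi_h.
    vr = vr / np.einsum('ih,ih->h', vl.conj(), vr)
    c = vl.conj().T @ rho0_vec                 # c_h = (l_h|rho_0), Eq. (11)
    h0 = np.argmin(np.abs(w))                  # chi_0 ~ 0: stationary state
    rho_ss = c[h0] * vr[:, h0]
    # Eq. (10): summing over all h includes rho_ss since exp(chi_0 t) = 1.
    rho_t = np.array([vr @ (np.exp(w * t) * c) for t in times])
    return rho_t, rho_ss
```

In practice one also verifies the two physicality conditions stated above: that the eigenvalue closest to zero is indeed zero within numerical tolerance, and that all remaining eigenvalues have negative real parts.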
### Parity blockade As in previous studies on Majorana box qubits connected to quantum dots [48; 49; 50; 51] and in transport setups [52], we use the magnetic flux \(\Phi\) as a tunable parameter to establish parity blockade. In the simplest case of the Majorana box, the source connects to the MBSs \(\gamma_{0}\) and \(\gamma_{1}\) with the same strength for the tunnel couplings, \(|t_{0}|=|t_{1}|=t\), and is disconnected from \(\gamma_{2}\). We describe the system via the fermionic occupation \(n_{01}=f_{01}^{\dagger}f_{01}\) with \(f_{01}=\gamma_{0}+i\gamma_{1}\). The combined tunnel matrix elements for an electron to enter the system via \(f_{01},f_{01}^{\dagger}\) read \[T_{0\to 1}^{\text{S,MB}}=t(1+ie^{i\phi}),\hskip 14.226378ptT_{1\to 0}^{\text{S,MB}}=t(1-ie^{i\phi}), \tag{12}\] where \(\phi\) is tuned via a magnetic flux \(\Phi_{01}\) threaded between the connections from the source to \(\gamma_{0}\) and \(\gamma_{1}\). We use it to establish constructive or destructive interference between the two available paths. For example, tuning the phase to \(\phi=\frac{\pi}{2}\) results in \(T_{0\to 1}^{\text{S,MB}}=0\) and therefore prohibits the transition \(n=0\to n=1\), which we refer to as establishing parity blockade. Returning to the full Majorana box (Sec. II.1), we choose the fermionic basis by combining MBSs \(\gamma_{0}\) with \(\gamma_{1}\) and \(\gamma_{2}\) with \(\gamma_{3}\), defining the Fock states \(\ket{n_{01}n_{23}}\). Parity blockade at the source projects the system on a state with total even parity (\((-1)^{n_{01}+n_{23}}=1\)) spanned by \(\ket{00}\) and \(\ket{11}\). The exact form of the blocking state depends on the way it is established via the choices of \(t_{0},t_{1},t_{2}\). In the Andreev box (Sec. II.2), the intuitive way to understand parity blockade for zero-energy ABSs is that there always exists a unitary rotation of the Majorana basis to effectively couple only two MBSs to the lead [51], see Fig. 1(\(c\)). In our case, the ABSs and therefore also the effective MBSs are on separate sites with a magnetic flux threaded in between, such that we can use the connections and flux to establish parity blockade. ## III Stationary state protocols In this section, we focus on results obtained by solving for the stationary state current. In Sec. III.1 we conclude that observing parity blockade is insufficient to distinguish the Majorana box from the Andreev box. Afterwards, we provide a protocol allowing their distinction in Sec. III.2. In the remainder of the paper, the system parameters are chosen as follows. First of all, the temperature of the leads is assumed to be far larger than the tunneling rate to the leads, \(T=10^{2}\,\Gamma\). We set the chemical potentials \(\mu_{S/D}=\pm 10^{3}\,\Gamma\). All energies resulting from MBS overlaps are of the order of \(\varepsilon=10^{-3}\,\Gamma\). The box is tuned via electrostatic gates to a degeneracy point \(n_{g}=N+\frac{1}{2}\) between \(N\) and \(N+1\) charges. The exact values of the gate and bias voltages do not influence the general transport behavior, but to enable transport we must make sure that the system is in the conducting regime and not Coulomb blockaded. For the Majorana box, the overlaps are zero except for the combinations \(\varepsilon_{01}=1.0\,\varepsilon\), \(\varepsilon_{12}=1.5\,\varepsilon\), \(\varepsilon_{23}=2.0\,\varepsilon\). In the Andreev box, we choose the overlaps of MBSs on each site as \(\varepsilon_{0}=0.5\,\varepsilon\), \(\varepsilon_{1}=1.0\,\varepsilon\), \(\varepsilon_{2}=1.5\,\varepsilon\), \(\varepsilon_{3}=2.0\,\varepsilon\). The exact choice does not influence the general behavior, but to avoid numerical problems we need to ensure that each MBS overlaps with at least one other MBS and avoid fine-tuning two overlaps to the exact same value.
### Conditions for parity blockade Related research on Majorana box qubits connected to quantum dots [51] suggests that parity blockade from a mode \(m\) connected to several MBSs \(k\) with tunnel couplings \(t_{mk}\) is established by fulfilling \[\sum_{k}t_{mk}^{2}=0. \tag{13}\] Note that in general \(t_{mk}\in\mathbb{C}\), such that there exist non-trivial solutions of Eq. (13). Furthermore, the real and imaginary parts of Eq. (13) impose two restrictions on the four-dimensional parameter space, leaving a two-dimensional parity-blockade sub-manifold. In the following we will show how Eq. (13) is manifested in a transport measurement for both the Majorana box and the Andreev box. We start by investigating the sub-manifold of tunnel couplings \(t_{0},t_{1},t_{2}\) resulting in parity blockade. For a current to flow, at least one coupling needs to be non-zero. We use the gauge degree of freedom to choose \(t_{1}=t\in\mathbb{R}\). The remaining four variables are the absolute values \(|t_{l}|\) and the phases \(\phi_{l}\), \(t_{l}=|t_{l}|e^{i\phi_{l}}\) for \(l=0,2\). The MBS \(\gamma_{3}\) connecting to the drain does not influence the parity blockade at the source, and we choose \(t_{3}=t\). Figure 2 shows the regime where the current is suppressed due to the parity blockade. In Fig. 2\((a)\), we vary \(|t_{0}|\) and \(|t_{2}|\) and for each point \((|t_{0}|,|t_{2}|)\) tune \(\phi_{0}\) and \(\phi_{2}\) to find the minimal possible current \(I_{\text{min}}\). In Fig. 2\((b)\), we explore the opposite and vary \(\phi_{0}\) and \(\phi_{2}\) while tuning \(|t_{0}|\) and \(|t_{2}|\). For this, we introduce the average and difference of the phases, \[\phi_{\text{avg}}=\frac{\phi_{0}+\phi_{2}}{2},\hskip 28.452756pt\phi_{\text{diff}}=\frac{\phi_{0}-\phi_{2}}{2}. \tag{14}\] These results confirm that parity blockade defines a two-dimensional sub-manifold (dark, low-current regions in Fig. 2) within the four-dimensional parameter space. We emphasize that Eq. (13) holds independently of whether the MBSs are topological or just trivial zero-energy ABSs expressed via Majorana (Hermitian) operators, see Fig. 1\((b)\). As discussed in Sec. II.4, parity blockade in the Andreev box can be understood via a unitary rotation of the Majorana basis, reducing it effectively to the same mechanism that blocks the Majorana box. Accordingly, the system containing zero-energy ABSs mimics a Majorana box with additional MBSs within the wires decoupled from the lead. Consequently, the features of the parity/current blockade in Fig. 2 remain qualitatively the same for the Andreev box. The only effect of the additional ABS parameters \(\theta_{0,1,2,3}\), introduced in Eq. (5), is to stretch/narrow Fig. 2\((a)\) along \(|t_{0}|,|t_{2}|\) and to displace Fig. 2\((b)\) along \(\phi_{\text{avg}}\), \(\phi_{\text{diff}}\). We show an example of parity blockade in the Andreev box in Appendix B.

Figure 2: Minimal current \(I_{\text{min}}\) plotted on a logarithmic color scale for \((a)\) varying \(|t_{l}|\) and optimizing \(\phi_{l}\) and \((b)\) varying \(\phi_{\text{avg}},\phi_{\text{diff}}\) and optimizing \(|t_{l}|\). The dark blue patches represent the parameter ranges with suppressed current due to parity blockade. In the remaining lighter regions parity blockade is not possible. With the orange dotted lines, we show the analytical results for the boundaries of the blockade regions found via Eq. (13).
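As a concrete consistency check, Eq. (13) can be evaluated numerically. The snippet below (a minimal Python sketch; the numerical values are illustrative assumptions, not taken from the simulations) verifies that two equal-magnitude couplings with relative phase \(\phi=\pi/2\) satisfy the blockade condition, reproducing the zero of \(T_{0\to 1}^{\text{S,MB}}\) in Eq. (12), and that for ABS couplings \(\tilde{t}_{l}=e^{i\theta_{l}}t_{l}\) the same condition generically forces \(|t_{0}|\neq|t_{1}|\), as exploited in the protocol of the next subsection (the ratio formula in the last lines is our own elementary rearrangement of Eq. (13), stated here only for illustration):

```python
import numpy as np

t = 1.0
phi = np.pi / 2                               # flux-tuned relative phase
t0, t1 = t, t * np.exp(1j * phi)              # couplings to gamma_0, gamma_1

# Majorana box: Eq. (13) with two coupled MBSs, and the amplitudes of Eq. (12)
print(abs(t0**2 + t1**2))                     # ~0 -> parity blockade
print(abs(t * (1 + 1j * np.exp(1j * phi))))   # T_{0->1}, ~0 (blocked)
print(abs(t * (1 - 1j * np.exp(1j * phi))))   # T_{1->0}, ~2t (open)

# Andreev box: each ABS couples via t_l and exp(i*theta_l) t_l, so Eq. (13)
# reads t0^2 (1 + e^{2i theta0}) + t1^2 (1 + e^{2i theta1}) = 0.
theta0, theta1 = 0.3, 1.1                     # illustrative ABS parameters
w0, w1 = 1 + np.exp(2j * theta0), 1 + np.exp(2j * theta1)
r = np.sqrt(abs(w0) / abs(w1))                # |t1|/|t0|; equals 1 only if
#                                               |cos(theta0)| = |cos(theta1)|
alpha = (np.pi + np.angle(w0) - np.angle(w1)) / 2
t0A, t1A = t, r * t * np.exp(1j * alpha)
print(abs(t0A**2 * w0 + t1A**2 * w1))         # ~0: blockade, unequal |t_l|
```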
### Simple parity-blockade-based protocol to distinguish between MBSs and ABSs Here, we present a protocol that yields different measurement results for the Majorana box compared to the Andreev box. During this protocol, we turn off certain connections between the source and the box. To avoid singular kernels \(\hat{\mathcal{L}}\), we define the minimum possible pinch-off as \(\Gamma_{\min}=10^{-6}\). In this protocol, we will only need to couple the source to two wires and without loss of generality take \(t_{2}=\tilde{t}_{2}=0\) [see Fig. 3(\(a\))], but other realizations are possible as well [52]. The tunneling amplitudes \(t_{0}\), \(t_{1}\) and the flux \(\Phi_{01}\) are tuned to establish parity blockade between the source and the box. Note that \(\tilde{t}_{0}=t_{0}\) and \(\tilde{t}_{1}=t_{1}\) in the case of the Andreev box due to our previous choice of the Majorana basis (Sec. II.2). By establishing parity blockade we enforce a fixed relation between \(t_{0}\) and \(t_{1}\). After the blockade is established, we keep the tunnel coupling to wire \(l\) constant but pinch off the other one and measure the current \(I_{l}\propto|t_{l}|^{2}\). We then repeat the same procedure but switch which tunnel coupling is pinched off. In the case of the Majorana box, parity blockade enforces \(|t_{0}|=|t_{1}|\). Therefore, the current measurement yields \(I_{0}=I_{1}\). The Andreev box contains the additional degrees of freedom \(\theta_{0},\theta_{1}\). Except in the fine-tuned cases \(\theta_{1}=\theta_{0}\) or \(\theta_{1}=\pi-\theta_{0}\), parity blockade is reached for \(|t_{0}|\neq|t_{1}|\), leading to different currents \(I_{0}\neq I_{1}\). Figure 3(\(b\)) shows the difference of the currents \(\Delta I=\left|I_{1}-I_{0}\right|\) normalized by the total current \(I_{\text{tot}}=I_{1}+I_{0}\) as a function of \(\theta_{0}\) and \(\theta_{1}\) for an Andreev box, measured according to the protocol described above. The value zero indicates a perfect imitation of the Majorana box. Exact zeros occur only on the diagonals \(\theta_{1}=\theta_{0},\,\theta_{1}=\pi-\theta_{0}\). These one-dimensional diagonals represent only a set of measure zero within the two-dimensional parameter space. Therefore, only highly fine-tuned ABSs would yield the same results as MBSs. For an almost perfectly fine-tuned Andreev box, \(\theta_{1}\approx\theta_{0}\) or \(\theta_{1}\approx\pi-\theta_{0}\), \(\Delta I\) is finite but perhaps too small for detection. ## IV Fusion-rule protocol Next, we introduce the fusion-rule protocol. We will start by introducing the time evolution of a Majorana box in the fusion protocol. Afterwards follows an analysis of the charge transfer during the protocol. We finish with a comparison to an Andreev box. ### Time evolution of the Majorana box Figure 4(\(a\)) sketches the fusion-rule protocol. Two different parity blockades are established via two different choices of the tunneling amplitudes \(t_{0},t_{1},t_{2}\), where the phases are controlled by \(\Phi_{01}\) and \(\Phi_{12}\). We refer to the blockades as the z-blockade (\(|t_{0}|=|t_{1}|=t,\,t_{2}=0\)) and the x-blockade (\(t_{0}=0,\,|t_{1}|=|t_{2}|=t\)). The names z- and x-blockade are motivated by the orientation of the blocking states in terms of the basis \(|n_{01}n_{23}\rangle\).
They are \[\begin{split}\text{z-blockade:}&|\psi_{z}\rangle=\left|00\right\rangle,\\ \text{x-blockade:}&|\psi_{x}\rangle=\frac{\left|00\right\rangle+\left|11\right\rangle}{\sqrt{2}},\end{split} \tag{15}\] which are eigenstates of the \(\sigma_{z},\sigma_{x}\) operators. The protocol repeatedly switches between both blockades, thereby introducing two timescales, see Fig. 4(\(a\)). First, \(\tau\) constitutes the waiting time in a blocking configuration. Second, the changing time \(\delta\tau\) represents the time between different blocking configurations. During \(\delta\tau\) the system is completely decoupled from the source but still coupled to the drain. We assume that the ramping up and down of the tunnel amplitudes is much faster than the relevant timescales of the system, but sufficiently slow to avoid transitions to excited states. In the following, we will analyze the time evolution of the system during a full cycle, i.e., switching from z-blockade to x-blockade and then back to z-blockade. To understand the system dynamics on an intuitive level we consider the limit of large waiting times, \(\tau\gg 1/\Gamma\). The system is initialized in \(|\psi_{x}\rangle\); then we switch on the z-blockade. Under z-blockade conditions, a charge cannot tunnel onto the box if it is in the state \(|00\rangle\). Therefore, time evolution will eventually result in a projective measurement in the basis \(|00\rangle\), \(|11\rangle\), each outcome occurring with \(50\,\%\) probability since the initial state is \(|\psi_{x}\rangle\). If the measurement yields \(|00\rangle\), parity blockade prevents the electron from tunneling. But if the measurement results in \(|11\rangle\), the charge can tunnel into the box, projecting the system onto \(|01\rangle\). Afterwards, a charge tunnels out into the drain, projecting the box on the blocking state \(|\psi_{z}\rangle=|00\rangle\). We can summarize the dynamics in the following sequence: \[|\psi_{x}\rangle=\frac{|00\rangle+|11\rangle}{\sqrt{2}}\Rightarrow\begin{cases}|00\rangle\\ |11\rangle\rightarrow|01\rangle\rightarrow|00\rangle\end{cases}. \tag{16}\] The time scale for each transition of the dynamics is \(\sim 1/\Gamma\). It is important to note that at each step in the dynamics, there is only one possible tunneling event due to the large charging energy of the box. In total, either zero or one charge tunnels through the system. Both options happen with a \(50\,\%\) probability. One can check that the same holds for the opposite direction (\(|\psi_{z}\rangle\rightarrow|\psi_{x}\rangle\)), which finishes a full cycle of the protocol. Therefore, on average, the protocol transmits one charge per full cycle.

Figure 3: Protocol to distinguish between ABSs and MBSs based on different currents after establishing and breaking parity blockade. (\(a\)) We pinch off \(t_{2}\) and establish parity blockade by tuning the loop \(t_{0},t_{1}\). Afterwards, we measure two currents, \(I_{0}\) (\(i\)) and \(I_{1}\) (\(ii\)), by pinching off \(t_{1}\) or \(t_{0}\). (\(b\)) Normalized current difference \(\Delta I/I_{\text{tot}}\) on a logarithmic scale for different values of \(\theta_{0},\theta_{1}\) in the ABSs. Compare to \(\Delta I=0\) for the Majorana box.

For a finite waiting time, it is not guaranteed that the system is fully projected onto \(|\psi_{x/z}\rangle\) by the blockade. To achieve periodicity (equivalence between protocol cycles \(n\) and \(n+1\)) we need to make sure that the state at the end of one cycle is the same as at the beginning. Therefore, we run the protocol for 1000 cycles to ensure this self-consistency between states.
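The 50/50 bookkeeping above translates directly into the average transmitted charge. The toy sampling below (valid only in the \(\tau\gg 1/\Gamma\) limit — the full finite-\(\tau\) dynamics requires the QME — with illustrative values for \(\tau\) and \(\delta\tau\)) reproduces one electron per full cycle and the corresponding DC current \(I_{\rm DC}=fe\) discussed in Sec. IV.2:

```python
import numpy as np

rng = np.random.default_rng(1)

def charge_per_cycle(n_cycles=100_000):
    # tau >> 1/Gamma limit: each switch projects |psi_x> (or |psi_z>) onto the
    # blocking or non-blocking outcome with probability 1/2 each (Eq. 16);
    # the non-blocking outcome transfers exactly one electron.
    q = rng.integers(0, 2, size=(n_cycles, 2))   # two switches per full cycle
    return q.sum(axis=1).mean()

q_cycle = charge_per_cycle()
print(q_cycle)                  # ~1 electron per full cycle on average
tau, dtau = 6.0, 0.0            # in units of 1/Gamma (illustrative values)
f = 1.0 / (2 * (tau + dtau))    # frequency of a full protocol cycle
print(f * q_cycle)              # DC current in units of e*Gamma, cf. I_DC = f e
```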
Figure 4(\(b\)) shows the time evolution of the system during the 1001st protocol cycle, for a long waiting time \(\tau=6/\Gamma\) (upper panel) and a short waiting time \(\tau=2/\Gamma\) (lower panel). The self-consistency is seen in the equivalence of the states at the first and last red dotted lines. We start our discussion with \(\tau=6/\Gamma\), where the behavior follows the intuitive arguments above. Initially, the system is approximately in the state \(|\psi_{x}\rangle\). We switch on the z-blockade, starting a time evolution into \(|\psi_{z}\rangle\) that intermediately occupies the states \(|10\rangle,|01\rangle\). The moment we switch to the x-blockade, the system evolves from \(|\psi_{z}\rangle\) to \(|\psi_{x}\rangle\), again intermediately occupying \(|10\rangle,|01\rangle\). The time evolution works qualitatively the same way for the shorter waiting time \(\tau=2/\Gamma\), but in this case the waiting time is too short to complete the transition between the blocking states. Note that the time evolution is also nontrivial during the changing time \(\delta\tau\) because tunneling to the drain is still possible. ### Charge transfer in the Majorana box We now investigate how the waiting time affects the amount of transmitted charge. Figure 5 shows the numerical result for the time evolution of the current through the source (solid blue line), which decays exponentially on the timescale \(1/\Gamma\). We obtain the transmitted charge (solid red line) by numerically integrating the current. The amount of transmitted charge develops a plateau at the value \(1\,e\) for \(\tau\gg 1/\Gamma\), confirming the intuitive reasoning in Sec. IV.1. Although the current is suppressed by parity blockade, there exists a small but finite remnant current \(I_{\rm rem}\) because of the MBS overlaps \(\varepsilon\), \(I_{\rm rem}\propto\varepsilon^{2}/\Gamma\) [52]. The contribution of this remnant current reaches a magnitude on the order of \(1\,e\) if \(\tau\gtrsim\Gamma/\varepsilon^{2}\) and leads to the upward bending of the charge plateau at long times. The same happens for a small deviation from the blockade conditions, and we expect similar behavior also for a small but finite quasi-particle poisoning rate.

Figure 4: (\(a\)) Fusion protocol for a Majorana box. We establish two blockades with the connections \(t_{0},t_{1}\) and \(t_{1},t_{2}\), and repeatedly switch between them. The first pulse connects the box via \(t_{0},t_{1}\), establishing the z-blockade, and the second connects the box via \(t_{1},t_{2}\), establishing the x-blockade. Each blockade is established for a waiting time \(\tau\) with a changing time \(\delta\tau\) in between the pulses. Throughout the whole protocol, the box is connected to the drain via \(t_{3}\). (\(b\)) Time evolution of the total even occupations \(p_{00}\), \(p_{11}\) and the sum of the total odd occupations \(p_{10}+p_{01}\). The red dotted lines mark the connection in a new z/x-blockade to highlight the induced time evolution. We compare the results for a large waiting time \(\tau=6/\Gamma\) and a shorter waiting time \(\tau=2/\Gamma\). The changing time is fixed to \(\delta\tau=\tau/4\).

Until now we investigated the charge transfer by initializing the system in a perfect projection on \(|\psi_{z}\rangle\) and considering the time evolution in the x-blockade.
For a self-consistent treatment of the blocking states (as described in Sec. IV.1), the average amount of transmitted charge per cycle of the protocol is shown by the red dotted line in Fig. 5. We find a decrease in the transmitted charge already at larger \(\tau\) compared to the charge transfer starting from a perfectly projected blocking state. For large waiting times, \(\tau\gg 1/\Gamma\), the previous results are recovered. The experiment we envision aims at detecting the plateau at \(1/\Gamma\ll\tau\ll\Gamma/\varepsilon^{2}\). The measured DC current is quantized to \(I_{\rm DC}=fe\), where \(f=1/[2(\tau+\delta\tau)]\) is the frequency associated with a full cycle of the protocol. ### Fusion protocol result for the Andreev box Finally, we investigate the results of the fusion protocol for the Andreev box, demonstrating the absence of a quantized current. As explained in Sec. III.1, for ABSs too we are guaranteed to find a setting of the tunnel couplings \(|t_{l}|\) and magnetic fluxes \(\Phi_{ll^{\prime}}\) that establishes parity blockade in the previously introduced x- and z-blockade configurations. The blocking states are determined by the unitary operation \(U^{\dagger}\) rotating the two ABSs into a basis where only one MBS is coupled from each of the two wires, see Fig. 1\((c)\). A further complication arises as the MBSs uncoupled from the lead, after application of the unitary \(U^{\dagger}\), still have a small overlap with the coupled MBSs. These overlaps introduce dynamics on an additional time scale \(1/\varepsilon\). Figure 6 shows the results of the fusion protocol with an Andreev box. The gray lines represent the results of the fusion protocol for ten configurations of \(\theta_{0,1,2,3}\), randomly drawn from a uniform distribution between \(0\) and \(2\pi\), as a function of the waiting time \(\tau\). We also include the result for the pure Majorana box (red dashed line) for comparison. For short waiting times, \(\tau<1/\Gamma\), the transmitted charge for both systems tends to zero, as expected. In the other limit, \(\tau>(\varepsilon^{2}/\Gamma)^{-1}\), the transmitted charge increases linearly due to the remnant current \(I_{\rm rem}\propto\varepsilon^{2}/\Gamma\) introduced by the finite overlaps of order \(\varepsilon\). The main difference between the Majorana box and the Andreev box appears for intermediate waiting times, \(1/\Gamma<\tau<(\varepsilon^{2}/\Gamma)^{-1}\). As discussed above, the Majorana box features a plateau at the predicted value of \(1\,e\). Detecting this quantized plateau for the Majorana box is the feature that allows us to distinguish it from the Andreev box, which shows a similar but non-quantized plateau before going over to oscillations at \(\tau\approx\varepsilon^{-1}\). Therefore, we conclude that both the quantized current at the plateau and the stability of the plateau are signatures of MBSs which are very unlikely to appear in a similar system with nontopological zero-energy ABSs. Finally, we comment on why our fusion protocol is able to distinguish between MBSs and zero-energy ABSs when some other fusion protocols fail to do so [38; 42]. The difference lies in the number of involved MBSs. For an Andreev box, the number of fermionic modes increases to four, described by eight MBSs, compared to just four MBSs in the Majorana box.
Figure 5: Current \(I_{\rm trans}\) (solid blue line) and transmitted charge (solid red line) for the time evolution \(|\psi_{z}\rangle\rightarrow|\psi_{x}\rangle\) as a function of the waiting time \(\tau\) (logarithmic scale). The transmitted charge is multiplied by a factor of \(2\) to obtain the result of a full cycle of the fusion protocol. The dotted red line represents the transmitted charge during a full cycle for a self-consistent treatment of states in the fusion protocol. The changing time is fixed to instantaneous changes \(\delta\tau=0\).

Figure 6: Transmitted charge for the Andreev box in a full protocol cycle versus the waiting time \(\tau\). The changing time is fixed to instantaneous changes \(\delta\tau=0\). Each gray line represents the results for one realization of randomly drawn ABS parameters \(\theta_{l}\in[0,2\pi]\) for each ABS on the sites \(l=0,1,2,3\). Depending on these parameters we determine the tunnel couplings and fluxes to establish parity blockade while connecting the source to ABSs \(0,1\) (z-blockade) and \(1,2\) (x-blockade). Afterwards, we run the fusion protocol as we did for the Majorana box, Fig. 4\((a)\). We run this protocol 1000 times to establish the same blocking states at each cycle. We compare the results to the previously obtained charge transfer for a Majorana box (red dashed line).

## V Conclusions

In this paper, we have studied transport through a Majorana box using a QME, aiming to identify unique signatures of topological MBSs originating from the physics of parity blockade. Although parity blockade seems to be a special property of MBSs, we showed that non-topological ABSs also give rise to parity blockade that looks qualitatively similar in steady-state transport experiments. To distinguish between MBSs and ABSs, we first proposed a simple experiment based on comparing two current measurements with different configurations of lead tunnel couplings. Then we turned to a transport-based cyclic fusion protocol, where the system is alternately projected onto two different blocking states, also here by switching tunnel couplings on and off. For the Majorana box, we showed that the fusion rules result in a quantized DC current given by exactly the electron charge times the protocol frequency. We also discussed the limiting effects of MBS overlaps, quasiparticle poisoning, and deviations from the ideal blockade condition, showing that the current quantization can remain over a large frequency range. In contrast, for the Andreev box, we found a current that is not quantized and furthermore much less stable to changes in the protocol frequency. This not only provides a way to identify the presence of topological MBSs based on qualitative transport features, but also allows access to the fusion rules without the need for fast or single-shot readout.

###### Acknowledgements.

We acknowledge stimulating discussions with Jens Schulenborg and Athanasios Tsintzis and funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 856526, the Spanish CM "Talento Program" (project No. 2022-T1/IND-24070), the Swedish Research Council under Grant Agreement No. 2020-03412, and NanoLund.
## Appendix A Time evolution in the first order von Neumann approach

Here we briefly introduce the explicit form of the first order von Neumann master equation; for more details, see Ref. [57]. To obtain the dissipative part we define the tunneling rate matrix \(\Gamma^{r}_{ba,a^{\prime}b^{\prime}}\),

\[\Gamma^{r}_{ba,a^{\prime}b^{\prime}}=2\pi v_{F}T^{r,\text{XB}}_{a\to b}T^{r,\text{XB}}_{b^{\prime}\to a^{\prime}}, \tag{10}\]

where the tunnel matrix elements are defined in Eq. 7 for the Majorana box and in Eq. 8 for the Andreev box, and an integral over the Fermi distribution \(f\),

\[2\pi I^{r\pm}_{ba}=\int_{-K}^{K}\frac{f\left(\pm\frac{E-\mu_{r}}{T_{r}}\right)}{E-(E_{b}-E_{a})+i\eta}\,dE, \tag{11}\]

where \(K\) is the bandwidth and \(\eta\to 0^{+}\). We obtain the time evolution of the density matrix as

\[i\partial_{t}\rho_{bb^{\prime}}= (E_{b}-E_{b^{\prime}})\rho_{bb^{\prime}} \tag{12}\]
\[+\sum_{b^{\prime\prime}\alpha}\rho_{bb^{\prime\prime}}\left[\sum_{a}\Gamma_{b^{\prime\prime}a,ab^{\prime}}I^{\alpha-}_{ba}-\sum_{c}\Gamma_{b^{\prime\prime}c,cb^{\prime}}I^{\alpha+*}_{cb}\right]\]
\[+\sum_{b^{\prime\prime}\alpha}\rho_{b^{\prime\prime}b^{\prime}}\left[\sum_{c}\Gamma_{bc,cb^{\prime\prime}}I^{\alpha+}_{cb^{\prime}}-\sum_{a}\Gamma_{ba,ab^{\prime\prime}}I^{\alpha-*}_{b^{\prime}a}\right]\]
\[+\sum_{aa^{\prime}\alpha}\rho_{aa^{\prime}}\Gamma_{ba,a^{\prime}b^{\prime}}[I^{\alpha+*}_{b^{\prime}a}-I^{\alpha+}_{ba^{\prime}}]\]
\[+\sum_{cc^{\prime}\alpha}\rho_{cc^{\prime}}\Gamma_{bc,c^{\prime}b^{\prime}}[I^{\alpha-*}_{c^{\prime}b}-I^{\alpha-}_{cb^{\prime}}],\]

where the indices \(c\) and \(a\) run over states with fixed electron number \(N_{c}=N_{b}+1\), \(N_{a}=N_{b}-1\).

## Appendix B Conditions for parity blockade in the Andreev box

We show that the Andreev box exhibits qualitatively the same regions of parity blockade as the Majorana box analyzed in Sec. II.4. For a perfectly fine-tuned Andreev box, Fig. 7 would exactly coincide with Fig. 2 for the Majorana box. Finite values of \(\theta_{0,1,2}\) stretch or compress Fig. 7\((a)\) and shift Fig. 7\((b)\). Changing \(\theta_{3}\) does not change Fig. 7, as this site only connects to the drain and therefore has no influence on the parity blockade.
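The principal-value structure of the integral in Eq. (11) of Appendix A can be checked by direct numerical quadrature. The following sketch (with illustrative values for \(K\), \(\mu_{r}\), \(T_{r}\), and a small but finite \(\eta\)) confirms that the imaginary part approaches \(-\pi f\bigl((\Delta-\mu_{r})/T_{r}\bigr)\) with \(\Delta=E_{b}-E_{a}\), as expected from the Sokhotski-Plemelj identity, while the real part gives the principal-value integral.

```python
import numpy as np

def two_pi_I_plus(Delta, mu=0.0, T=1.0, K=20.0, eta=1e-3):
    """Direct quadrature of 2*pi*I^{r+}_{ba} in Eq. (11), for small eta."""
    E = np.linspace(-K, K, 400_001)          # grid spacing 1e-4 << eta
    f = 1.0 / (np.exp((E - mu) / T) + 1.0)   # Fermi function
    integrand = f / (E - Delta + 1j * eta)
    return np.sum(integrand) * (E[1] - E[0])

Delta = 0.5                                   # E_b - E_a (illustrative)
val = two_pi_I_plus(Delta)
f_Delta = 1.0 / (np.exp(Delta) + 1.0)
print(val.imag, -np.pi * f_Delta)             # both ~ -1.19
```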
2309.10750
Measuring Line-of-sight Distances to Haloes with Astrometric Lensing B-mode
Relative astrometric shifts between multiply lensed images provide a valuable tool to investigate haloes in the intergalactic space. In strong lens systems in which a single lens plays the primary role in producing multiple images, the gravitational force exerted by line-of-sight (LOS) haloes can slightly change the relative positions of multiply lensed images produced by the dominant lens. In such cases, a LOS halo positioned sufficiently far from the dominant lens along the LOS can create a pattern in the reduced deflection angle that corresponds to the B-mode (magnetic or divergence-free mode). By measuring both the B-mode and E-mode (electric or rotation-free mode), we can determine the LOS distance ratios, as well as the 'bare' convergence and shear perturbations in the absence of the dominant lens. However, scale variations in the distance ratio lead to mass-sheet transformations in the background lens plane, introducing some uncertainty in the distance ratio estimation. This uncertainty can be significantly reduced by measuring the time delays between the lensed images. Additionally, if we obtain the redshift values of both the dominant and perturbing haloes, along with the time delays between the multiply lensed images that are affected by the haloes, the B-mode can break the degeneracy related to mass-sheet transformations in both the foreground and background lens planes. Therefore, measuring the astrometric lensing B-mode has the potential to substantially decrease the uncertainty in determining the Hubble constant.
Kaiki Taro Inoue
2023-09-19T16:51:26Z
http://arxiv.org/abs/2309.10750v1
# Measuring Line-of-sight Distances to Haloes with Astrometric Lensing B-mode

###### Abstract

Relative astrometric shifts between multiply lensed images provide a valuable tool to investigate haloes in the intergalactic space. In strong lens systems in which a single lens plays the primary role in producing multiple images, the gravitational force exerted by line-of-sight (LOS) haloes can slightly change the relative positions of multiply lensed images produced by the dominant lens. In such cases, a LOS halo positioned sufficiently far from the dominant lens along the LOS can create a pattern in the reduced deflection angle that corresponds to the B-mode (magnetic or divergence-free mode). By measuring both the B-mode and E-mode (electric or rotation-free mode), we can determine the LOS distance ratios, as well as the 'bare' convergence and shear perturbations in the absence of the dominant lens. However, scale variations in the distance ratio lead to mass-sheet transformations in the background lens plane, introducing some uncertainty in the distance ratio estimation. This uncertainty can be significantly reduced by measuring the time delays between the lensed images. Additionally, if we obtain the redshift values of both the dominant and perturbing haloes, along with the time delays between the multiply lensed images that are affected by the haloes, the B-mode can break the degeneracy related to mass-sheet transformations in both the foreground and background lens planes. Therefore, measuring the astrometric lensing B-mode has the potential to substantially decrease the uncertainty in determining the Hubble constant.

keywords: cosmology: theory - gravitational lensing - dark matter

## 1 Introduction

Cold dark matter (CDM) models encounter challenges on scales below \(\sim 100\,\mathrm{kpc}\), particularly concerning dwarf galaxies. For instance, the observed number count of dwarf satellite galaxies in nearby galaxies in the Local Group falls significantly short of the predicted number of subhaloes capable of hosting dwarf galaxies as projected by CDM models (Kauffmann et al., 1993; Klypin et al., 1999; Moore et al., 1999). Hydrodynamic simulations that incorporate baryonic feedback and cosmic reionization have emerged as potential solutions to this discrepancy for the Milky Way (Wetzel et al., 2016; Brooks et al., 2017; Fielder et al., 2019). However, it remains uncertain whether the Milky Way represents a "typical galaxy," and the extent to which this discrepancy can be explained for other galaxies remains an open question (Nashimoto et al., 2022). It is plausible that dark matter differs from CDM, and there may be fewer low-mass haloes on scales below \(\sim 100\,\mathrm{kpc}\) capable of hosting dwarf galaxies than CDM predictions suggest. Gravitational lensing serves as a potent tool for investigating low-mass haloes with masses \(\lesssim 10^{9}\,M_{\odot}\) in the distant universe. In particular, the study of quadruply lensed quasar-galaxy and galaxy-galaxy strong lens systems has proven valuable for probing low-mass haloes. This is because the strong lensing effect induced by a foreground galaxy amplifies the weak lensing signals of low-mass haloes, which are otherwise challenging to detect without such enhancement. Typically, the relative positions of lensed images can be accurately fitted using a smooth gravitational potential within a few milliarcseconds.
However, in some of these systems, the flux ratios of the lensed images in radio or mid-infrared wavelengths deviate by 10-40% from the predictions of the theoretical models. It has been suggested that such anomalies in flux ratios, particularly in radio or mid-infrared emissions, may be attributed to subhaloes residing within the host galaxy (Mao & Schneider, 1998; Metcalf & Madau, 2001; Chiba, 2002; Dalal & Kochanek, 2002; Keeton et al., 2003; Inoue & Chiba, 2003, 2005; Xu et al., 2009, 2010). Nevertheless, it is worth noting that these anomalies might also find explanations in the complex gravitational potential of the foreground galaxy (Evans & Witt, 2003; Oguri, 2005; Gilman et al., 2017; Hsueh et al., 2017, 2018). The interpretation gains support from discrepancies observed in the relative astrometric shifts of lensed extended images (Treu & Koopmans, 2004; Koopmans, 2005; Vegetti & Koopmans, 2009; Vegetti et al., 2010; Chantry et al., 2010; Vegetti et al., 2012, 2014; Inoue et al., 2016; Hezaveh et al., 2016; Chatterjee & Koopmans, 2018). However, any small-mass haloes along the line of sight (LOS) can also influence the flux ratios and relative positions of lensed images (Metcalf, 2005; Xu et al., 2012). Based on a semi-analytical approach, Inoue and Takahashi (2012) argued that the primary cause of anomalies in flux ratios for quadruply lensed quasars with an Einstein radius of approximately \(1^{\prime\prime}\) is the presence of small-mass haloes within the intergalactic space, rather than subhaloes within the foreground galaxy. This assertion was subsequently confirmed by Takahashi and Inoue (2014), which employed \(N\)-body simulations capable of resolving dwarf galaxy-sized haloes within a cosmological context. Building on the semi-analytical approach, Inoue (2016) highlighted that for source redshifts \(z_{\rm s}>1\), the cumulative effect of line-of-sight structures, including haloes and troughs, on altering the flux ratios of quadruply lensed images ranges from 60 to 80 percent in the CDM models. A subsequent analysis arrived at a similar conclusion (Despali et al., 2018). The observational evidence supporting the significant role of LOS haloes in 'substructure lensing' is mounting. Based on \(N\)-body simulations, Takahashi and Inoue (2014) pointed out that the 'object X', previously assumed to be a satellite galaxy in the lensed quasar MG J0414+0534, may in fact be located within the intergalactic space. Additionally, Sengil et al. (2022) argued that a presumed dark perturber, assumed to be a satellite galaxy in the lensed quasar B1938+666, actually constitutes an intergalactic halo with a mass of approximately \(2\times 10^{9}\,M_{\odot}\). Furthermore, a recent assessment of the lensing power spectra at an angular wave number of \(l=1.2\times 10^{6}\), or roughly 9 kpc within the primary lens plane, aligns with CDM models in which LOS haloes exert the predominant influence on the alterations in flux and relative positions of lensed images (Inoue et al., 2023). To validate this claim, it becomes imperative to determine line-of-sight (LOS) distances to perturbing dark haloes. This necessitates the measurement of astrometric shifts between perturbed multiple images and their unperturbed counterparts. These astrometric shifts have the potential to break the degeneracy between subhalo mass and the LOS distance, provided that they are resolved at the scale of the Einstein radius of the perturbing object.
Additionally, precise determination of sky positions using a dipole structure is required (Inoue and Chiba, 2005). In practical scenarios, the presence of observable dipole structures or of a fifth image due to the strong lensing effects caused by intergalactic small-mass haloes is rare. Moreover, numerous perturbers with varying masses, residing at different redshifts and sky positions, can influence the astrometric shifts. Consequently, modeling each dark perturber individually becomes a formidable challenge. To achieve precise modeling of astrometric shifts, we must comprehend the coupling effect between the strong lensing exerted by a dominant primary lens and the weak lensing introduced by a subdominant secondary lens located _outside_ the primary lens plane. While frameworks for modeling 'LOS lensing' by intergalactic haloes have been explored in previous literature (Erdl and Schneider, 1993; Bar-Kana, 1996; McCully et al., 2014, 2017; Birrer et al., 2017; Fleury et al., 2021), none of these analyses have fully addressed the coupling effect for general perturbations encompassing all multipole moments (beyond flexions). In particular, the ambiguity of LOS distances stemming from multi-plane mass-sheet transformations (Schneider, 2014, 2019) remains poorly understood. In this paper, we investigate the property of B-modes (magnetic modes) in the two-dimensional vector field of astrometric shifts as a means to determine the distance ratio to a perturber. The concept is straightforward: if all perturbers are confined to the primary lens plane, their reduced deflection angles can be expressed as gradients of a scalar potential, resulting in no generation of B-modes. However, if some perturbers reside in a different lens plane, the reduced deflection angles of those perturbers that _are assumed to be_ in the primary lens plane cannot be represented as gradients of a scalar potential due to the coupling effect. Consequently, this generates B-modes akin to those induced by weak lensing in the cosmic microwave background (CMB) polarization (Zaldarriaga and Seljak, 1998; Lewis and Challinor, 2006). It should be noted that, in our scenario, perturbers can exist in the background of the primary lens, whereas in weak lensing of the CMB, perturbers can only reside in the foreground of the CMB. In Section 2, we present the theoretical framework of our method. In Section 3, we explore how multi-plane mass-sheet transformations impact the LOS distance ratio and time delay in a double lens system. In Section 4, we examine the astrometric lensing B-mode and evaluate the accuracy of distance ratio estimators using simple toy models. Finally, in Section 5, we offer conclusions and discuss the observational feasibility of our proposed method.

## 2 Theory

### E/B decomposition

Helmholtz' theorem states that a smooth three dimensional vector field \(\boldsymbol{\alpha}(\boldsymbol{r}),\boldsymbol{r}=(x_{1},x_{2},x_{3})\) that decays faster than \(|\boldsymbol{r}|^{-2}\) for \(|\boldsymbol{r}|\rightarrow\infty\) can be uniquely decomposed as a sum of a rotation-free 'electric' field \(\boldsymbol{\alpha}^{\rm E}\) and a divergence-free 'magnetic' field \(\boldsymbol{\alpha}^{\rm B}\), i.e., \(\boldsymbol{\alpha}=\boldsymbol{\alpha}^{\rm E}+\boldsymbol{\alpha}^{\rm B}\).
By constraining \(\alpha_{3}\) to be a constant, we can apply the theorem to two dimensional vectors: in terms of an 'electric' potential \(\psi^{\rm E}\) and a 'magnetic' potential \(\psi^{\rm B}\), a smooth two dimensional vector field \(\boldsymbol{\alpha}(\boldsymbol{x}),\boldsymbol{x}=(x_{1},x_{2})\) can be decomposed as

\[\alpha_{i} = \psi^{\rm E}_{,i}+\varepsilon_{ij}\psi^{\rm B}_{,j}, \tag{1}\]
\[= \alpha^{\rm E}_{i}+\alpha^{\rm B}_{i},\ \ (i,j=1,2), \tag{2}\]

where \(\varepsilon_{ij}\) is the Levi-Civita tensor and \({}_{,i}\) denotes a partial derivative in the direction \(x_{i}\). For brevity, we denote the two dimensional rotation as \(\nabla\times\boldsymbol{\alpha}^{\rm B}=\alpha^{\rm B}_{2,1}-\alpha^{\rm B}_{1,2}=-\psi^{\rm B}_{,11}-\psi^{\rm B}_{,22}\) and the divergence as \(\nabla\cdot\boldsymbol{\alpha}^{\rm E}=\alpha^{\rm E}_{1,1}+\alpha^{\rm E}_{2,2}=\psi^{\rm E}_{,11}+\psi^{\rm E}_{,22}\). In the following, we consider a lens system in which a source at a redshift \(z_{\rm s}\) is lensed by a primary (dominant) lens at a redshift \(z_{\rm d}\) with a reduced deflection angle \(\boldsymbol{\alpha}\) and a secondary lens (perturber) at a redshift \(z_{\rm f}(<z_{\rm d})\) or \(z_{\rm b}(>z_{\rm d})\) with a small reduced deflection angle \(\delta\boldsymbol{\alpha}\,(|\delta\boldsymbol{\alpha}|\ll|\boldsymbol{\alpha}|)\). If the perturber resides in the lens plane of the dominant lens, i.e., \(z_{\rm f}=z_{\rm d}\), \(\nabla\times\delta\boldsymbol{\alpha}\) is null. However, if the perturber does not reside in that lens plane, the rotation of the effective deflection angle \(\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}\) is not null. We denote the angular diameter distance between the dominant lens and the source as \(D_{\rm sd}\), between a foreground perturber and the dominant lens as \(D_{\rm df}(z_{\rm f}<z_{\rm d})\), between the dominant lens and a background perturber as \(D_{\rm bd}(z_{\rm b}>z_{\rm d})\), between a foreground perturber and the source as \(D_{\rm sf}\), and between a background perturber and the source as \(D_{\rm sb}\).

### Rotation and divergence

First, we consider a double lens system in which a perturber resides in front of a dominant lens, i.e., \(z_{\rm f}<z_{\rm d}\). Then, the angular position \(\mathbf{y}\) of the source can be written as a function of the angular position \(\mathbf{x}\) of the lensed image as

\[\mathbf{y}=\mathbf{x}-\delta\mathbf{\alpha}(\mathbf{x})-\mathbf{\alpha}(\mathbf{y}_{\rm d}), \tag{3}\]

where \(\mathbf{y}_{\rm d}\equiv\mathbf{x}-\beta\,\delta\mathbf{\alpha}(\mathbf{x}),(0<\beta<1)\) and

\[\beta=\frac{D_{\rm df}D_{\rm s}}{D_{\rm d}D_{\rm sf}} \tag{4}\]

is the LOS distance ratio parameter, which encodes the information of the LOS distance from an observer to a perturber in the foreground of the dominant lens. If we model the system as one in which a perturber with an effective deflection angle \(\delta\mathbf{\alpha}^{\rm f}_{\rm eff}\) resides in the dominant lens plane, equation (3) can be written as

\[\mathbf{y} = \mathbf{x}-\mathbf{\alpha}(\mathbf{x})-\delta\mathbf{\alpha}^{\rm f}_{\rm eff}(\mathbf{x}),\]
\[\delta\mathbf{\alpha}^{\rm f}_{\rm eff}(\mathbf{x}) \equiv \delta\mathbf{\alpha}(\mathbf{x})+\mathbf{\alpha}(\mathbf{y}_{\rm d}(\mathbf{x}))-\mathbf{\alpha}(\mathbf{x}).
\tag{5}\]

Because of the coupling between the strong lensing by the dominant lens and the weak lensing by the foreground perturber, the effective deflection angle \(\delta\mathbf{\alpha}^{\rm f}_{\rm eff}\) has a magnetic component if and only if \(\beta>0\). The rotation of \(\delta\mathbf{\alpha}^{\rm f}_{\rm eff}\) is

\[\nabla\times\delta\mathbf{\alpha}^{\rm f}_{\rm eff}(\mathbf{x})=2\beta(\tilde{\gamma}_{1}\delta\gamma_{2}-\tilde{\gamma}_{2}\delta\gamma_{1}), \tag{6}\]

where \(\tilde{\gamma}_{i}\) is the \(i\)-component of the shear tensor of the dominant lens at \(\mathbf{y}_{\rm d}\) and \(\delta\gamma_{i}\) is the \(i\)-component of the shear tensor of the secondary lens at \(\mathbf{x}\). Second, we consider a double lens system in which a perturber resides in the background of a dominant lens, i.e., \(z_{\rm b}>z_{\rm d}\). Then, the angular position \(\mathbf{y}\) of the source can be written as a function of the angular position \(\mathbf{x}\) of the lensed image as

\[\mathbf{y}=\mathbf{x}-\mathbf{\alpha}(\mathbf{x})-\delta\mathbf{\alpha}(\mathbf{y}_{\rm b}), \tag{7}\]

where \(\mathbf{y}_{\rm b}\equiv\mathbf{x}-\xi\,\mathbf{\alpha}(\mathbf{x}),(0<\xi<1)\) and

\[\xi=\frac{D_{\rm bd}D_{\rm s}}{D_{\rm b}D_{\rm sd}} \tag{8}\]

is the LOS distance ratio parameter, which encodes the information of the LOS distance from an observer to a perturber in the background of the dominant lens. As in the foreground case, we can model the system as one in which a secondary perturber with an effective deflection angle \(\delta\mathbf{\alpha}^{\rm b}_{\rm eff}\) resides in the dominant lens plane; equation (7) can be written as

\[\mathbf{y} = \mathbf{x}-\mathbf{\alpha}(\mathbf{x})-\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x}),\]
\[\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x}) \equiv \delta\mathbf{\alpha}(\mathbf{y}_{\rm b}). \tag{9}\]

Because of the difference between the angular position of the lensed image \(\mathbf{x}\) and that of the background perturber \(\mathbf{y}_{\rm b}\), \(\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x})\) has a magnetic component if and only if \(\xi>0\). The rotation of \(\delta\mathbf{\alpha}^{\rm b}_{\rm eff}\) is

\[\nabla\times\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x})=-2\xi(\gamma_{1}\delta\gamma_{2}-\gamma_{2}\delta\gamma_{1}), \tag{10}\]

where \(\gamma_{i}\) is the \(i\)-component of the shear tensor of the dominant lens at \(\mathbf{x}\) and \(\delta\gamma_{i}\) is the \(i\)-component of the shear tensor of the perturber at \(\mathbf{y}_{\rm b}\). Ignoring second order terms, equations (6) and (10) can be combined to yield

\[\eta=\frac{\nabla\times\delta\mathbf{\alpha}_{\rm eff}(\mathbf{x})}{2(\gamma_{1}\delta\gamma_{2}-\gamma_{2}\delta\gamma_{1})}, \tag{11}\]

where \(-1<\eta<1\), with \(\delta\mathbf{\alpha}_{\rm eff}(\mathbf{x})=\delta\mathbf{\alpha}^{\rm f}_{\rm eff}(\mathbf{x})\) and \(\eta=\beta\) for \(\eta>0\), and \(\delta\mathbf{\alpha}_{\rm eff}(\mathbf{x})=\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x})\) and \(\eta=-\xi\) for \(\eta<0\). Thus, from the measured rotation of the effective deflection angle and the shears of the dominant lens and the perturber, we can measure the dimensionless distance ratio parameter \(\eta\), which encodes the information of the LOS distance to the perturber. However, it should be noted that \(\eta\) is not a directly measured quantity, as \(\delta\gamma_{1}\) and \(\delta\gamma_{2}\) are not directly measured.
Therefore, we need to estimate \(\eta\) using a certain approximation, which we will discuss in the next section. If \(\eta\) is negative/positive, the perturber resides in the background/foreground of the dominant lens. In shear-aligned coordinates \((x^{\prime}_{1},x^{\prime}_{2})\), in which the shear of the dominant lens is aligned with the \(x^{\prime}_{1}\) or \(x^{\prime}_{2}\) axis, i.e. \(\gamma^{\prime}_{2}=0\), the rotation is proportional to \(\delta\gamma^{\prime}_{2}\). In other words, the amplitude of rotation is proportional to the non-diagonal shear component of the perturber. In a similar manner, we can derive the divergence of the effective deflection \(\delta\mathbf{\alpha}_{\rm eff}\). In the foreground and background cases, the divergences are

\[\nabla\cdot\delta\mathbf{\alpha}^{\rm f}_{\rm eff}(\mathbf{x})=2\delta\kappa-2\beta(\tilde{\kappa}\delta\kappa+\tilde{\gamma_{1}}\delta\gamma_{1}+\tilde{\gamma_{2}}\delta\gamma_{2}) \tag{12}\]

and

\[\nabla\cdot\delta\mathbf{\alpha}^{\rm b}_{\rm eff}(\mathbf{x})=2\delta\kappa-2\xi(\kappa\delta\kappa+\gamma_{1}\delta\gamma_{1}+\gamma_{2}\delta\gamma_{2}), \tag{13}\]

respectively. Ignoring second order terms, in terms of \(\eta\), equations (12) and (13) give a modified Poisson equation,

\[\nabla\cdot\delta\mathbf{\alpha}_{\rm eff}(\mathbf{x}) = 2\delta\kappa-2|\eta|(\kappa\delta\kappa+\gamma_{1}\delta\gamma_{1}+\gamma_{2}\delta\gamma_{2}) \tag{14}\]
\[= 2\delta\kappa_{\rm eff},\]

where \(\delta\kappa_{\rm eff}\) is the effective convergence perturbation, which encodes the information of the coupling between the dominant lens and the perturber. The effective deflection angle can be decomposed into magnetic and electric components,

\[\delta\mathbf{\alpha}_{\rm eff}=\delta\mathbf{\alpha}^{\rm B}_{\rm eff}+\delta\mathbf{\alpha}^{\rm E}_{\rm eff}, \tag{15}\]

where \(\nabla\cdot\delta\mathbf{\alpha}^{\rm B}_{\rm eff}=\nabla\times\delta\mathbf{\alpha}^{\rm E}_{\rm eff}=0\). Using equations (11) and (14), we can estimate the ratio of the magnetic component to the electric one in the limit \(\eta\to 0\) as

\[r^{\rm BE} \equiv \frac{|\delta\mathbf{\alpha}^{\rm B}_{\rm eff}|}{|\delta\mathbf{\alpha}^{\rm E}_{\rm eff}|} \tag{16}\]
\[\sim \frac{|\nabla\times\delta\mathbf{\alpha}^{\rm B}_{\rm eff}|}{|\nabla\cdot\delta\mathbf{\alpha}^{\rm E}_{\rm eff}|}\]
\[\sim \frac{|\eta\,\gamma^{\prime}_{1}|}{\sqrt{2}},\]

where the prime denotes the coordinates in which the magnification matrix of the dominant lens is diagonalized, i.e., \(\gamma^{\prime}_{2}=0\), and we assumed that \(|\delta\gamma^{\prime}_{2}|\sim|\delta\kappa^{\prime}|/\sqrt{2}\). Let us suppose a typical lens system in which the dominant lens is modelled by a singular isothermal sphere (SIS). In the vicinity of an Einstein ring, the shear is \(\gamma_{1}^{\prime}\sim 0.5\). Then the ratio is

\[r^{\rm BE}\sim\frac{\sqrt{2}|\eta|}{4}. \tag{17}\]

Let us suppose a lens system with a perturber at \(|\eta|=0.4\). Then we have \(r^{\rm BE}\sim 1/7\). Thus the contribution from the magnetic component is expected to be subdominant unless the perturber resides very far from the dominant lens along the LOS.
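Equation (11) suggests a simple pixel-level recipe: measure \(\delta\boldsymbol{\alpha}_{\rm eff}\) on a grid, take its rotation by finite differences, and divide by the shear combination of the dominant lens and the perturber. The sketch below is a minimal illustration on a synthetic pure-B field; the grid, the Gaussian potential, and the shear maps are assumptions for demonstration, not quantities taken from the text.

```python
import numpy as np

def curl_div(ax, ay, h):
    """Finite-difference rotation and divergence of a 2-D vector field."""
    day_dx1 = np.gradient(ay, h, axis=1)   # axis=1 runs along x1
    dax_dx2 = np.gradient(ax, h, axis=0)   # axis=0 runs along x2
    dax_dx1 = np.gradient(ax, h, axis=1)
    day_dx2 = np.gradient(ay, h, axis=0)
    return day_dx1 - dax_dx2, dax_dx1 + day_dx2

# synthetic example: pure B-mode field, alpha_i = eps_ij psi_B,j
h = 0.01
x1, x2 = np.meshgrid(np.arange(-2, 2, h), np.arange(-2, 2, h))
psi_B = np.exp(-(x1**2 + x2**2))
ax = np.gradient(psi_B, h, axis=0)         # psi_B,2
ay = -np.gradient(psi_B, h, axis=1)        # -psi_B,1
rot, div = curl_div(ax, ay, h)
print(np.abs(rot).max(), np.abs(div).max())   # rotation O(1), divergence ~ 0

def eta_hat_from_maps(rot, g1, g2, dg1, dg2):
    """Distance ratio estimate of eq. (11), given shear maps of the
    dominant lens (g1, g2) and the perturber (dg1, dg2)."""
    return rot / (2.0 * (g1 * dg2 - g2 * dg1))
```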
### Estimators of LOS distance

In order to estimate the LOS distance ratio \(\eta\), we need to assess the components \(\delta\kappa\), \(\delta\gamma_{1}\), and \(\delta\gamma_{2}\) using the spatial derivatives of the effective deflection \(\delta\mathbf{\alpha}_{\rm eff}\),

\[\delta M_{\rm eff}=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\equiv\begin{pmatrix}\delta\alpha_{\rm eff1,1}&\delta\alpha_{\rm eff1,2}\\ \delta\alpha_{\rm eff2,1}&\delta\alpha_{\rm eff2,2}\end{pmatrix}. \tag{18}\]

First, we consider the foreground perturber case. From equations (5) and (18), neglecting second order terms, we have

\[A = \delta\kappa+\delta\gamma_{1}-\beta[(\kappa+\gamma_{1})(\delta\kappa+\delta\gamma_{1})+\gamma_{2}\delta\gamma_{2}]+\tilde{\kappa}+\tilde{\gamma_{1}}-\kappa-\gamma_{1},\]
\[B = \delta\gamma_{2}-\beta(\kappa\delta\gamma_{2}+\gamma_{2}\delta\kappa+\gamma_{1}\delta\gamma_{2}-\gamma_{2}\delta\gamma_{1})+\tilde{\gamma_{2}}-\gamma_{2},\]
\[C = \delta\gamma_{2}-\beta(\kappa\delta\gamma_{2}+\gamma_{2}\delta\kappa+\gamma_{2}\delta\gamma_{1}-\gamma_{1}\delta\gamma_{2})+\tilde{\gamma_{2}}-\gamma_{2},\]
\[D = \delta\kappa-\delta\gamma_{1}-\beta[(\kappa-\gamma_{1})(\delta\kappa-\delta\gamma_{1})+\gamma_{2}\delta\gamma_{2}]+\tilde{\kappa}-\tilde{\gamma_{1}}-\kappa+\gamma_{1}. \tag{19}\]

To first order in \(\delta\mathbf{\alpha}\), the last terms, which represent small changes in the magnification matrix of the dominant lens, can be linearly approximated as

\[c_{1} \equiv \left.\frac{\partial(\kappa+\gamma_{1})}{\partial\mathbf{x}}\right|_{\mathbf{x}}\cdot\delta\mathbf{\alpha}(\mathbf{x})\approx-\beta^{-1}(\tilde{\kappa}+\tilde{\gamma_{1}}-\kappa-\gamma_{1}),\]
\[c_{2} \equiv \left.\frac{\partial(\kappa-\gamma_{1})}{\partial\mathbf{x}}\right|_{\mathbf{x}}\cdot\delta\mathbf{\alpha}(\mathbf{x})\approx-\beta^{-1}(\tilde{\kappa}-\tilde{\gamma_{1}}-\kappa+\gamma_{1}),\]
\[c_{3} \equiv \left.\frac{\partial\gamma_{2}}{\partial\mathbf{x}}\right|_{\mathbf{x}}\cdot\delta\mathbf{\alpha}(\mathbf{x})\approx-\beta^{-1}(\tilde{\gamma_{2}}-\gamma_{2}). \tag{20}\]

Let us suppose that a perturber resides in the vicinity of the dominant lens, i.e., \(\beta\ll 1\). Then the magnetic component of \(\delta\mathbf{\alpha}_{\rm eff}\) is much smaller than the electric component, and thus \(\delta\mathbf{\alpha}(\mathbf{x})\approx\delta\mathbf{\alpha}_{\rm eff}(\mathbf{x})\). Plugging this relation and equation (20) into equation (19), we obtain a quadratic equation in \(\beta\) whose positive solution

\[\hat{\beta}^{\rm A} \equiv \frac{-\hat{b}-{\rm sgn}\,\hat{c}\sqrt{\hat{b}^{2}-4\hat{a}\hat{c}}}{2\hat{a}}\approx\beta,\]
\[\hat{a} \equiv 2c_{3}\gamma_{1}+(c_{2}-c_{1})\gamma_{2},\]
\[\hat{b} \equiv (B+C)\gamma_{1}+(D-A)\gamma_{2}+(C-B)\kappa,\]
\[\hat{c} \equiv B-C=-\nabla\times\delta\mathbf{\alpha}_{\rm eff}, \tag{21}\]

is an estimator of \(\beta\). If \(|\hat{c}|=2\beta|(\gamma_{2}\delta\gamma_{1}-\gamma_{1}\delta\gamma_{2})|\ll\hat{b}^{2}/|\hat{a}|\) and \(\beta>0\), equation (21) can be further simplified as

\[\hat{\beta}^{\rm B} \equiv -\frac{\hat{c}}{\hat{b}} \tag{22}\]
\[= \frac{C-B}{(D-A)\gamma_{2}+(B+C)\gamma_{1}+(C-B)\kappa}\]
\[\approx \beta.\]

One can easily confirm that equation (22) is equivalent to equation (11), provided that the \(\beta c_{i}\)'s are sufficiently small. If not, equation (21), which includes the effects of the \(\beta c_{i}\)'s, may give a much better approximation than equation (22).
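For concreteness, here is a direct transcription of the estimators in equations (21) and (22), together with the \(\hat{\eta}\) of equation (25) below, taking the locally measured derivative-matrix entries \(A,B,C,D\), the dominant-lens fields \(\kappa,\gamma_{1},\gamma_{2}\), and the gradient terms \(c_{i}\) as inputs. It is a sketch under the same approximations as the text, not a full measurement pipeline.

```python
import numpy as np

def beta_hat_A(A, B, C, D, kappa, g1, g2, c1, c2, c3):
    """Quadratic estimator of eq. (21); all inputs are local field values."""
    a = 2.0 * c3 * g1 + (c2 - c1) * g2
    b = (B + C) * g1 + (D - A) * g2 + (C - B) * kappa
    c = B - C                      # equals -curl(delta alpha_eff)
    return (-b - np.sign(c) * np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def beta_hat_B(A, B, C, D, kappa, g1, g2):
    """Linearised estimator of eq. (22), valid for small |B - C|."""
    return (C - B) / ((D - A) * g2 + (B + C) * g1 + (C - B) * kappa)

def eta_hat(A, B, C, D, g1, g2):
    """Signed estimator of eq. (25): positive suggests a foreground perturber."""
    return (C - B) / ((D - A) * g2 + (B + C) * g1)
```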
Note that equations (21) and (22) are only valid in some limited regions of \(\mathbf{x}\) in which the amplitude of rotation \(|\nabla\times\delta\mathbf{\alpha}_{\rm eff}|\) is not 'very' small and the strong lensing effect due to the dominant lens or the perturber is not too large. For instance, if \(|C-B|\) is sufficiently smaller than the typical amplitude of the linear perturbations \(\delta\kappa,\delta\gamma_{1},\delta\gamma_{2}\), equations (21) and (22) give a bad approximation. In that case, second or higher order corrections are necessary to give an accurate estimate. If the gradient of the projected gravitational potential of either the dominant or the subdominant lens is too large, equation (21) gives a bad approximation. In the former case, the \(c_{i}\)'s become too large, and the linear approximation in equation (20) is no longer valid. In the latter case, the subdominant lens dominates the lensing effect over that of the dominant lens, and thus \(\delta\mathbf{\alpha}\) is no longer smaller than \(\mathbf{\alpha}\). Next, we consider the background perturber case. The components of the perturbed magnification matrix \(\delta M_{\rm eff}\),

\[A = \delta\kappa+\delta\gamma_{1}-\xi[(\kappa+\gamma_{1})(\delta\kappa+\delta\gamma_{1})+\gamma_{2}\delta\gamma_{2}],\]
\[B = \delta\gamma_{2}-\xi(\kappa\delta\gamma_{2}+\gamma_{2}\delta\kappa+\gamma_{1}\delta\gamma_{2}-\gamma_{2}\delta\gamma_{1}),\]
\[C = \delta\gamma_{2}-\xi(\kappa\delta\gamma_{2}+\gamma_{2}\delta\kappa+\gamma_{2}\delta\gamma_{1}-\gamma_{1}\delta\gamma_{2}),\]
\[D = \delta\kappa-\delta\gamma_{1}-\xi[(\kappa-\gamma_{1})(\delta\kappa-\delta\gamma_{1})+\gamma_{2}\delta\gamma_{2}], \tag{23}\]

yield an exact solution

\[\xi=\frac{B-C}{(D-A)\gamma_{2}+(B+C)\gamma_{1}+(B-C)\kappa}. \tag{24}\]

If the magnitude of rotation \(|C-B|=|\hat{c}|\) is sufficiently smaller than \(|(D-A)\gamma_{2}+(B+C)\gamma_{1}|\), equations (22) and (24) yield an estimator of \(\eta\),

\[\hat{\eta} \equiv \frac{C-B}{(D-A)\gamma_{2}+(B+C)\gamma_{1}} \tag{25}\]
\[\approx \eta.\]

If \(\hat{\eta}\) is positive (negative), it is likely that the perturber resides in the foreground (background).

### Estimators of LOS perturbations

In a similar manner, we can express the 'bare'\({}^{2}\) perturbations \(\delta\hat{\kappa},\delta\hat{\gamma}_{1},\delta\hat{\gamma}_{2}\) due to a perturber in the LOS in terms of the derivatives of \(\delta\mathbf{\alpha}_{\rm eff}\). In the following, we assume that \(\beta\ll 1\) and \(\xi\ll 1\).
Footnote 2: Here 'bare' means not influenced by a dominant lens.

In the foreground perturber case, if the \(\beta c_{i}\)'s are sufficiently small, we have

\[\delta\kappa^{\rm B} = F[2(AC+BD)\gamma_{1}-(A^{2}+B^{2}-C^{2}-D^{2})\gamma_{2}]/2,\]
\[\delta\gamma_{1}^{\rm B} = F[2(AC-BD)\gamma_{1}-(A+B-C-D)(A-B+C-D)\gamma_{2}]/2,\]
\[\delta\gamma_{2}^{\rm B} = F[2BC\gamma_{1}-(AB-CD)\gamma_{2}],\]
\[F \equiv (\hat{\beta}^{\rm B})^{-1}(C-B)[4BC\gamma_{1}^{2}-2(B+C)(A-D)\gamma_{1}\gamma_{2}-(A+B-C-D)(-A+B-C+D)\gamma_{2}^{2}]^{-1}. \tag{26}\]

Interchanging \(B\) and \(C\) and substituting \(\xi\) for \(\beta\) in equation (26), the exact perturbations for a background perturber are

\[\delta\kappa = G[2(AB+CD)\gamma_{1}-(A^{2}-B^{2}+C^{2}-D^{2})\gamma_{2}]/2,\]
\[\delta\gamma_{1} = G[2(AB+CD)\gamma_{1}-(A+B-C-D)(A-B+C-D)\gamma_{2}]/2, \tag{27}\]
\[\delta\gamma_{2} = G[2BC\gamma_{1}-(AC-BD)\gamma_{2}],\]
\[G \equiv \xi^{-1}(B-C)[4BC\gamma_{1}^{2}-2(B+C)(A-D)\gamma_{1}\gamma_{2}-(A+B-C-D)(-A+B-C+D)\gamma_{2}^{2}]^{-1}. \tag{28}\]

## 3 Extended multi-plane mass-sheet transformation

In this section, we study how the multi-plane mass-sheet transformation (MMST) affects the LOS distance ratio and the time delay in a double lens system with a single source at a certain redshift. A scale transformation in the distance ratio allows a non-zero mass-sheet transformation in both the foreground and background lens planes. This implies that we have two degrees of freedom in the scale transformation if the redshift of a perturber is not known. We call such a transformation (a multi-plane MST plus a scale transformation in the distance ratio of a perturber) an 'extended MMST' (eMMST).

### Distance ratio

Let us recall the mass-sheet transformation (MST) in a single lens system with a single source. The position of the source at a certain redshift is given by

\[\mathbf{y}=\mathbf{x}-\mathbf{\alpha}(\mathbf{x}). \tag{29}\]

A scale transformation by a factor of \(\lambda\),

\[\mathbf{y}\rightarrow\mathbf{y}^{\prime}=\lambda\mathbf{y}, \tag{30}\]

and a scale transformation by a factor of \(\lambda\) accompanied by the addition of a deflection by a constant convergence,

\[\mathbf{\alpha}(\mathbf{x})\rightarrow\mathbf{\alpha}^{\prime}(\mathbf{x})=\lambda\mathbf{\alpha}+(1-\lambda)\mathbf{x}, \tag{31}\]

leave the lens equation

\[\mathbf{y}^{\prime}=\mathbf{x}-\mathbf{\alpha}^{\prime}(\mathbf{x}) \tag{32}\]

invariant. The transformed lens system is equivalent to the original lens system (29) except for the physical size and the intensity of the source. If we do not know the true size or the intensity of the source, we cannot distinguish between the two systems with different deflection angles. The set of transformations (30) and (31) is called the MST for a single lens system. In what follows, we consider the MMST in a double lens system that consists of a dominant lens and a subdominant lens which acts as a perturber. First, we assume that the subdominant lens, with a distance ratio parameter \(\beta\), resides in the foreground of the dominant lens. We consider a scale transformation for the dominant lens,

\[\mathbf{y}\rightarrow\mathbf{y}^{\prime} = \lambda_{\rm d}\mathbf{y} \tag{33}\]
\[= \mathbf{x}-\delta\tilde{\mathbf{\alpha}}-(1-\lambda_{\rm d})\mathbf{x}-\lambda_{\rm d}\mathbf{\alpha}(\mathbf{x}-\beta\delta\mathbf{\alpha}),\]

and that for the distance ratio parameter,

\[\xi\rightarrow\xi^{\prime}=\lambda_{\xi}\xi. \tag{43}\]
By applying a similar argument above to equations (41), (42), and (43), we have

\[\lambda_{\xi}=\frac{1}{(1-\lambda_{\rm d})\xi+\lambda_{\rm d}}. \tag{44}\]

Thus the mass-sheet degeneracy (MSD) in the distance ratios \(\beta\) and \(\xi\), which is related to the scale transformation in the background lens plane, remains.

### Time delay

First we examine a double lens system with a foreground perturber. The time delay \(\tau\) of a source at \(\boldsymbol{y}\), defined as the arrival time difference between a light path that is lensed by a dominant lens at known redshift \(z_{\rm d}\) and a foreground perturber at unknown redshift \(z_{\rm f}\) and an unlensed path, is

\[c\tau = c\tau_{\rm f}+c\tau_{\rm d} \tag{45}\]
\[= (1+z_{\rm f})\frac{D_{\rm f}D_{\rm d}}{D_{\rm df}}\biggl[-\nabla_{\boldsymbol{x}}^{-1}\bigl(\beta\,\delta\boldsymbol{\alpha}(\boldsymbol{x})\bigr)+\frac{(\boldsymbol{x}-\boldsymbol{y}_{\rm d})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\biggl[-\nabla_{\boldsymbol{y}_{\rm d}}^{-1}\bigl(\boldsymbol{\alpha}(\boldsymbol{y}_{\rm d})\bigr)+\frac{(\boldsymbol{y}_{\rm d}-\boldsymbol{y})^{2}}{2}\biggr],\]

where \(\tau_{\rm f}\) and \(\tau_{\rm d}\) are the arrival time differences between the light paths DFO and DO, and SDO and SO, respectively (as depicted in figure 1), and \(c\) is the speed of light. We investigate how the time delay \(\tau\) behaves under the eMMST. The scalings (32), (33), and (34) yield

\[c\tau^{\prime} = c\tau_{\rm f}^{\prime}+c\tau_{\rm d}^{\prime} \tag{46}\]
\[= (1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}\biggl[-\nabla_{\boldsymbol{x}}^{-1}\bigl(\beta^{\prime}\delta\boldsymbol{\alpha}^{\prime}(\boldsymbol{x})\bigr)+\frac{(\boldsymbol{x}-\boldsymbol{y}_{\rm d}^{\prime})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\biggl[-\nabla_{\boldsymbol{y}_{\rm d}}^{-1}\bigl(\boldsymbol{\alpha}^{\prime}(\boldsymbol{y}_{\rm d}^{\prime})\bigr)+\frac{(\boldsymbol{y}_{\rm d}^{\prime}-\boldsymbol{y}^{\prime})^{2}}{2}\biggr]\]
\[= (1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}\biggl[-\lambda_{\rm f}\nabla_{\boldsymbol{x}}^{-1}\bigl(\beta\,\delta\boldsymbol{\alpha}(\boldsymbol{x})\bigr)-\frac{(1-\lambda_{\rm f})x^{2}}{2}+\frac{(\boldsymbol{x}-\lambda_{\rm f}\boldsymbol{y}_{\rm d})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\biggl[-\lambda_{\rm f}\lambda_{\rm d}\nabla_{\boldsymbol{y}_{\rm d}}^{-1}\bigl(\boldsymbol{\alpha}(\boldsymbol{y}_{\rm d})\bigr)-\frac{\lambda_{\rm f}^{2}h\,y_{\rm d}^{2}}{2}+\frac{(\lambda_{\rm d}\boldsymbol{y}_{\rm d}-\lambda_{\boldsymbol{y}}\boldsymbol{y})^{2}}{2}\biggr],\]
\[h \equiv -\frac{\lambda_{\rm d}}{\beta\lambda_{\rm f}}+\frac{1}{\lambda_{\beta}\beta}.\]

Since

\[(1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}\biggl[-\frac{(1-\lambda_{\rm f})x^{2}}{2}+\frac{(\boldsymbol{x}-\lambda_{\rm f}\boldsymbol{y}_{\rm d})^{2}}{2}\biggr] \tag{47}\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\biggl[-\frac{\lambda_{\rm f}^{2}h\,y_{\rm d}^{2}}{2}+\frac{(\lambda_{\rm d}\boldsymbol{y}_{\rm d}-\lambda_{\boldsymbol{y}}\boldsymbol{y})^{2}}{2}\biggr]\]
\[- (1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}\frac{\lambda_{\rm f}(\boldsymbol{x}-\boldsymbol{y}_{\rm d})^{2}}{2}\]
\[- (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\frac{\lambda_{\rm d}\lambda_{\rm f}(\boldsymbol{y}_{\rm d}-\boldsymbol{y})^{2}}{2}\]
\[= (1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}\frac{\lambda_{\rm d}(\lambda_{\rm d}-\lambda_{\rm f})y^{2}}{2}\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\frac{\lambda_{\rm f}(\lambda_{\rm d}-1)(\lambda_{\rm d}-1)y_{\rm d}^{2}}{2},\]
and

\[(1+z_{\rm f}^{\prime})\frac{D_{\rm f}^{\prime}D_{\rm d}}{D_{\rm df}^{\prime}}=-(1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}\lambda_{\rm d}\biggl[1-\frac{1}{\beta}\biggr], \tag{48}\]

equation (46) gives

\[c\tau^{\prime} = \lambda_{\rm f}\lambda_{\rm d}c\tau+\frac{\lambda_{\rm d}(\lambda_{\rm d}-\lambda_{\rm f})F_{\rm sd}y^{2}}{2},\]
\[F_{\rm sd} \equiv (1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm sd}}. \tag{49}\]

Since the last term in equation (49) depends only on the position of the source, it does not contribute to the time delay between lensed images. Thus the eMMST admits a scale transformation in the foreground lens plane and another one in the background lens plane. Let us suppose that quadruply lensed images consist of two images that are lensed by only the dominant lens and two other images that are perturbed by a foreground halo. Then the time delay between the unperturbed images gives \(\lambda_{\rm d}\) and that between the perturbed images gives \(\lambda_{\rm f}\), given a Hubble constant \(H_{0}\).

Figure 1: An example of a light path for a foreground perturber. A light ray emitted from a point source at S in the source plane is deflected by a dominant lens at D and a subdominant lens at F and reaches an observer at O.

Next we examine a double lens system with a background perturber. In this case, the time delay of a source at \(\boldsymbol{y}\) is

\[c\tau = c\tau_{\rm d}+c\tau_{\rm b} \tag{50}\]
\[= (1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}}{D_{\rm bd}}\biggl[-\nabla_{\boldsymbol{x}}^{-1}\bigl(\xi\boldsymbol{\alpha}(\boldsymbol{x})\bigr)+\frac{(\boldsymbol{x}-\boldsymbol{y}_{\rm b})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm b})\frac{D_{\rm b}D_{\rm s}}{D_{\rm sb}}\biggl[-\nabla_{\boldsymbol{y}_{\rm b}}^{-1}\bigl(\delta\boldsymbol{\alpha}(\boldsymbol{y}_{\rm b})\bigr)+\frac{(\boldsymbol{y}_{\rm b}-\boldsymbol{y})^{2}}{2}\biggr],\]

where \(\tau_{\rm d}\) and \(\tau_{\rm b}\) are the arrival time differences between the light paths BDO and BO, and SBO and SO, respectively (as depicted in figure 2).
By applying a similar argument above, the scalings (41), (42), and (43) yield

\[c\tau^{\prime} = c\tau_{\rm d}^{\prime}+c\tau_{\rm b}^{\prime} \tag{51}\]
\[= (1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}}\biggl[-\nabla_{\boldsymbol{x}}^{-1}\bigl(\xi^{\prime}\boldsymbol{\alpha}^{\prime}(\boldsymbol{x})\bigr)+\frac{(\boldsymbol{x}-\boldsymbol{y}_{\rm b}^{\prime})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\biggl[-\nabla_{\boldsymbol{y}_{\rm b}^{\prime}}^{-1}\bigl(\delta\boldsymbol{\alpha}^{\prime}(\boldsymbol{y}_{\rm b}^{\prime})\bigr)+\frac{(\boldsymbol{y}_{\rm b}^{\prime}-\boldsymbol{y}^{\prime})^{2}}{2}\biggr]\]
\[= (1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}}\biggl[-\lambda_{\rm d}\nabla_{\boldsymbol{x}}^{-1}\bigl(\xi\boldsymbol{\alpha}(\boldsymbol{x})\bigr)-\frac{(1-\lambda_{\rm d})x^{2}}{2}+\frac{(\boldsymbol{x}-\lambda_{\rm d}\boldsymbol{y}_{\rm b})^{2}}{2}\biggr]\]
\[+ (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\biggl[-\lambda_{\rm d}\lambda_{\rm b}\nabla_{\boldsymbol{y}_{\rm b}}^{-1}\bigl(\delta\boldsymbol{\alpha}(\boldsymbol{y}_{\rm b})\bigr)-\frac{\lambda_{\rm d}^{2}g\,y_{\rm b}^{2}}{2}+\frac{(\lambda_{\rm d}\boldsymbol{y}_{\rm b}-\lambda_{\rm b}\boldsymbol{y})^{2}}{2}\biggr],\]
\[g \equiv -\frac{\lambda_{\rm b}}{\xi\lambda_{\rm d}}+\frac{1}{\lambda_{\xi}\xi}.\]

Since

\[(1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}}\biggl[-\frac{(1-\lambda_{\rm d})x^{2}}{2}+\frac{(\boldsymbol{x}-\lambda_{\rm d}\boldsymbol{y}_{\rm b})^{2}}{2}\biggr] \tag{52}\]
\[+ (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\biggl[-\frac{\lambda_{\rm d}^{2}g\,y_{\rm b}^{2}}{2}+\frac{(\lambda_{\rm d}\boldsymbol{y}_{\rm b}-\lambda_{\rm b}\boldsymbol{y})^{2}}{2}\biggr]\]
\[- (1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}}\frac{\lambda_{\rm d}(\boldsymbol{x}-\boldsymbol{y}_{\rm b})^{2}}{2}\]
\[- (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\frac{\lambda_{\rm b}\lambda_{\rm d}(\boldsymbol{y}_{\rm b}-\boldsymbol{y})^{2}}{2}\]
\[= (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\frac{\lambda_{\rm b}(\lambda_{\rm b}-\lambda_{\rm d})y^{2}}{2}\]
\[+ (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}\frac{\lambda_{\rm b}\lambda_{\rm d}(\lambda_{\rm d}-1)(1-\xi^{-1})y_{\rm b}^{2}}{2}\]
\[+ (1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}}\frac{\lambda_{\rm d}(\lambda_{\rm d}-1)y_{\rm b}^{2}}{2}\]

and

\[(1+z_{\rm d})\frac{D_{\rm d}D_{\rm b}^{\prime}}{D_{\rm bd}^{\prime}} = (1+z_{\rm d})\frac{D_{\rm s}D_{\rm d}}{D_{\rm sd}}\frac{1}{\lambda_{\xi}\xi},\]
\[(1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}} = -(1+z_{\rm d})\frac{D_{\rm s}D_{\rm d}}{D_{\rm sd}}\frac{1}{\lambda_{\xi}\xi-1}, \tag{53}\]

equation (51) gives

\[c\tau^{\prime} = \frac{\lambda_{\rm d}}{\lambda_{\xi}}c\tau+\frac{\lambda_{\rm b}(\lambda_{\rm b}-\lambda_{\rm d})F_{\rm sb}^{\prime}y^{2}}{2},\]
\[F_{\rm sb}^{\prime} \equiv (1+z_{\rm b}^{\prime})\frac{D_{\rm b}^{\prime}D_{\rm s}}{D_{\rm sb}^{\prime}}, \tag{54}\]

where \(\lambda_{\xi}\) is given by equation (44). Since the last term in equation (54) depends solely on the position of the source, it does not contribute to the time delay between lensed images.
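The practical content of equations (44) and (54) is a one-line rescaling; a minimal sketch with illustrative numbers is given below.

```python
def lambda_xi(lambda_d, xi):
    """Eq. (44): MST factor in the background plane induced by lambda_d."""
    return 1.0 / ((1.0 - lambda_d) * xi + lambda_d)

def scaled_delay(tau, lambda_d, xi):
    """Image time delays transform as tau' = (lambda_d / lambda_xi) * tau,
    eq. (54), after dropping the source-position term."""
    return lambda_d / lambda_xi(lambda_d, xi) * tau

# example: a 10 per cent MST in the dominant-lens plane, perturber at xi = 0.5
print(scaled_delay(1.0, 0.9, 0.5))   # -> 0.9 * 0.95 = 0.855
```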
In the case of a background perturber, the eMMST admits a scale transformation in the foreground lens plane and another one in the background lens plane. However, the transformed time delay between lensed images is inversely proportional to the scale factor \(\lambda_{\xi}\). If \(\lambda_{\rm d}\) is determined by the observation of time delays between unperturbed images, it is possible to measure \(\lambda_{\xi}\) with the astrometric lensing B-mode for an assumed Hubble constant \(H_{0}\). In scenarios where the redshifts of both foreground and background perturbers, as well as the dominant lens and the source, are known, observations of the astrometric lensing B-mode can break the MSD. This leads to a reduction in systematic errors in our estimated value of \(H_{0}\). The reason is as follows. Let us consider a scenario where a quadruple lens system, with images A, B, C, and D, is perturbed by a foreground perturber affecting image A and a background perturber affecting image B. We assume that their gravitational effects are confined to the vicinity of the lensed images A and B, respectively. Additionally, we assume that the gravitational influence of all other perturbers, apart from the dominant lens, can be considered negligible. In this context, a measurement of the astrometric lensing B-mode in the de-lensed image A can provide us with \(\lambda_{\rm d}\), as we already have knowledge of \(\beta\). Similarly, a measurement of the astrometric lensing B-mode in the de-lensed image B can furnish us with either \(\lambda_{\xi}\) or \(\lambda_{\rm b}\), since \(\xi\) is a known quantity. Subsequently, by measuring the time delay between B and C or B and D, we can determine \(H_{0}\), while measuring the time delay between A and C or A and D allows us to ascertain \(\lambda_{\rm f}\).

## 4 Toy Models

We investigate the property of the astrometric lensing B-mode and the accuracy of the approximated distance ratios \(\hat{\beta}^{\rm A},\hat{\beta}^{\rm B}\), and \(\hat{\eta}\) for a foreground perturber using simple toy models. As a model of a galaxy halo, we adopt a singular isothermal sphere (SIS) whose reduced deflection angle is given by \(\boldsymbol{\alpha}(\boldsymbol{x})=\theta_{\rm E}\boldsymbol{x}/x\). One can easily show that any constant shift perturbation in a secondary lens plane can be explained by a translation in the source plane without a secondary lens. Since we do not have any information about the original position of the source, we cannot measure a constant shift. Moreover, a constant external convergence perturbation cannot be measured due to the eMMST. Therefore, in order to probe the effects of large-scale perturbations, we analyse the dominant SIS with a constant external shear at an arbitrary redshift. In order to probe the effects of small-scale perturbations, we analyse the dominant SIS with another subdominant SIS with an Einstein radius of \(\theta_{\rm Ep}\ll\theta_{\rm E}\) that acts as a perturber at an arbitrary redshift.

Figure 2: An example of a light path for a background perturber. A light ray emitted from a point source at S in the source plane is deflected by a subdominant lens at B and a dominant lens at D and reaches an observer at O.

### SIS + external shear

Here we study a lens model with a dominant SIS and a constant external shear \(\delta\gamma\). We assume that the center of the effective gravitational potential of the dominant SIS and that of the constant shear coincide.
If a perturber resides in the foreground of the dominant SIS, the effective deflection angle of the perturber is

\[\delta\boldsymbol{\alpha}_{\rm eff}^{\rm f}(\boldsymbol{x})=\delta\gamma\mathcal{I}\boldsymbol{x}+\theta_{\rm E}\frac{\boldsymbol{x}-\beta\delta\gamma\mathcal{I}\boldsymbol{x}}{\|\boldsymbol{x}-\beta\delta\gamma\mathcal{I}\boldsymbol{x}\|}-\theta_{\rm E}\frac{\boldsymbol{x}}{x}, \tag{55}\]

where \(\mathcal{I}\) is a unit matrix. If a perturber resides in the background of the dominant SIS, the effective deflection angle of the perturber is

\[\delta\boldsymbol{\alpha}_{\rm eff}^{\rm b}(\boldsymbol{x})=\delta\gamma\mathcal{I}\Bigl[\boldsymbol{x}-\xi\theta_{\rm E}\frac{\boldsymbol{x}}{x}\Bigr]. \tag{56}\]

The 'magnetic' potential \(\psi^{\rm B}\) can be obtained by solving the Poisson equation

\[-\Delta\psi^{\rm B}=\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}(\boldsymbol{x}). \tag{57}\]

To solve equation (57) numerically, we used the Finite Element Method (FEM). We discretized the inner region of a disk with a radius of \(\theta=100\,\theta_{\rm E}\) into \(\sim 2\times 10^{5}\) triangular meshes in the polar coordinates \((\theta,\phi)\). We used finer meshes in regions with \(\theta<2\,\theta_{\rm E}\) in order to resolve sudden changes in the perturbed magnetic potential. We imposed a Dirichlet boundary condition at \(\theta=100\,\theta_{\rm E}\). The amplitude of rotation can be estimated as \(|\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}^{\rm f}|\sim\beta\delta\gamma/\theta\) or \(|\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}^{\rm b}|\sim\xi\delta\gamma/\theta\). Then, the amplitude at \(\theta=100\,\theta_{\rm E}\) is expected to be \(\lesssim 10^{-3}\) for \(\delta\gamma\leqslant 0.1\). We numerically checked that the tiny error at the boundary of the disk does not significantly affect the accuracy of the obtained solution at \(\theta\lesssim\theta_{\rm E}\). In order to avoid a singularity at the center of the dominant lens, we used a cored isothermal sphere with a very small core radius \(\theta_{\rm c}\ll\theta_{\rm E}\), whose reduced deflection angle is \(\boldsymbol{\alpha}(\boldsymbol{x})=\theta_{\rm E}\bigl(\sqrt{x^{2}+\theta_{\rm c}^{2}}-\theta_{\rm c}\bigr)\boldsymbol{x}/x^{2}\). Except for the neighbourhood of singular points (centres of SISs), the absolute errors in the Poisson equation (57) for the numerically obtained solutions of the astrometric lensing B-mode \(\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}\) were typically \(\lesssim 10^{-2}\).

The effective deflection angles \(\delta\boldsymbol{\alpha}_{\rm eff}\) in an SIS with an external shear of \(\delta\gamma=0.1\) are shown in figure 3. If the objects that cause the external shear reside in the foreground (background) of the dominant lens, the constant shear produces a cross-shaped (ring-shaped) pattern in the field of \(\|\delta\boldsymbol{\alpha}_{\rm eff}\|\). Except for the central region of the SIS, the constant shear reduces the amplitude of \(\delta\boldsymbol{\alpha}_{\rm eff}\). The amplitude of the B-mode \(|\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}|\) is as large as \(\sim 0\farcs03\) in the vicinity of the coordinate axes and negligibly small in the vicinity of the diagonal lines if \(\beta=0.5\) or \(\xi=0.5\). The direction of \(\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}\) in the foreground case is opposite to that in the background case. In contrast, the amplitudes of the E-mode \(|\delta\boldsymbol{\alpha}_{\rm eff}^{\rm E}|\) are largest on the diagonal lines and smallest on the coordinate axes, except for the central region (figure 4). For a given distance from the center, the amplitudes of rotation \(\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}\) are largest on the diagonal lines (figure 5). The 'direction' of rotation in the foreground case is opposite to that in the background case. From the 'direction' of rotation, we can determine whether the perturber resides in the foreground or the background of the dominant lens.

Figure 3: Line-of-sight effect for an SIS model with an external constant shear in the shear-aligned coordinates. The dominant lens is an SIS with Einstein radius \(\theta_{\rm E}=1^{\prime\prime}\) centered at \((0,0)\). The arrows show the effective deflection angle \(\delta\boldsymbol{\alpha}_{\rm eff}\) due to an external shear with \(\delta\gamma=0.1\). The colors show the amplitudes of the shift in arcsec.

Figure 4: Astrometric lensing B and E-modes for an external shear of \(\delta\gamma=0.1\). The arrows show the B and E components of the effective astrometric shift \(\delta\boldsymbol{\alpha}_{\rm eff}\). The model parameters are the same as in figure 3. The colors show the amplitudes of the shift in arcsec.
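The effective deflections of equations (55) and (56) are easy to evaluate directly. The sketch below assumes the shear-aligned form \(\delta\boldsymbol{\alpha}=(\delta\gamma\,x_{1},-\delta\gamma\,x_{2})\) for the external-shear deflection and a small softening at the SIS centre (both assumptions for illustration), computes the numerical rotation, and illustrates the opposite 'directions' of rotation in the foreground and background cases.

```python
import numpy as np

theta_E, dgam = 1.0, 0.1          # Einstein radius (arcsec), external shear

def shear_defl(x1, x2):
    # constant external shear in shear-aligned coordinates (assumed form)
    return dgam * x1, -dgam * x2

def sis_defl(x1, x2, core=1e-4):
    r = np.hypot(x1, x2) + core   # softened to avoid the central singularity
    return theta_E * x1 / r, theta_E * x2 / r

def d_alpha_eff(x1, x2, ratio, foreground=True):
    s1, s2 = shear_defl(x1, x2)
    if foreground:                # eq. (55): SIS evaluated at x - beta*(shear term)
        a1, a2 = sis_defl(x1 - ratio * s1, x2 - ratio * s2)
        b1, b2 = sis_defl(x1, x2)
        return s1 + a1 - b1, s2 + a2 - b2
    a1, a2 = sis_defl(x1, x2)     # eq. (56): shear evaluated at x - xi*alpha_SIS
    return shear_defl(x1 - ratio * a1, x2 - ratio * a2)

h = 0.005
x1, x2 = np.meshgrid(np.arange(-2, 2, h), np.arange(-2, 2, h))
for fg in (True, False):
    a1, a2 = d_alpha_eff(x1, x2, 0.5, foreground=fg)
    rot = np.gradient(a2, h, axis=1) - np.gradient(a1, h, axis=0)
    # opposite signs expected at an off-axis point, cf. eqs. (6) and (10)
    print("foreground" if fg else "background", rot[200, 300])
```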
For a given distance from the center, the amplitudes of rotation \(\nabla\times\delta\mathbf{\alpha}_{\rm eff}^{\rm B}\) are largest in the diagonal lines (figure fig:SISES-rot). The 'direction' of rotation in the foreground case is opposite to that in the background case. From the 'direction' of rotation, we can determine whether Figure 4: Astrometric lensing B and E-modes for an external shear of \(\delta\gamma=0.1\). The arrows show the B and E components in the effective astrometric shift \(\delta\mathbf{\alpha}_{\rm eff}\). The model parameters are the same as in figure 3. The colors show the amplitudes of the shift in arcsec. Figure 3: Line-of-sight effect for an SIS model with an external constant shear in the shear-aligned coordinates. The dominant lens is an SIS with Einstein radius \(\theta_{\rm E}=1\,^{\prime\prime}\) centered at \((0,0)\). The arrows show the effective deflection angle \(\delta\mathbf{\alpha}_{\rm eff}\) due to an external shear with \(\delta\gamma=0.1\). The colors show the amplitudes of the shift in arcsec. the perturber resides in the foreground or background of the dominant lens. In figure 6, we show the ratio of the B-mode to E-mode \(r^{\rm BE}=|\delta\mathbf{\alpha}_{\rm eff}^{\rm B}|/|\delta\mathbf{\alpha}_{\rm eff}^{\rm B}|\), averaged over the Einstein ring with radius \(\theta_{\rm E}=1^{\prime\prime}\). The ratio \(r^{\rm BE}\) is a monotonically increasing function of \(\beta\) or \(\xi\), which equals \(\sim 0.5\) for \(\beta=0.5\), and \(\sim 0.3\) for \(\xi=0.5\). We found that the ratio of rotation to divergence, \(|\nabla\times\delta\mathbf{\alpha}_{\rm eff}^{\rm B}|/|\nabla\cdot \delta\mathbf{\alpha}_{\rm eff}^{\rm B}|\) averaged over the Einstein ring gives a good approximation of \(r^{\rm BE}\). The analytic formula in equation (17) gives a good estimate for small distance ratio \(\beta\lesssim 0.1\) or \(\xi\lesssim 0.1\), but the difference is conspicuous for \(\beta\gtrsim 0.1\) or \(\xi\gtrsim 0.1\). Next, we show the relative errors of the distance ratio estimators \(\hat{\beta}^{\rm A}\), \(\hat{\beta}^{\rm B}\), and \(\hat{\eta}\) in the dominant lens plane in figure 7. We found that these estimators give a worse approximation in the central region within the Einstein radius \(\theta_{\rm E}\) and a good approximation in the outer regions \(\theta>\theta_{\rm E}\), especially in the vicinity of the horizontal axis \(x_{1}\). At \(\theta=\theta_{\rm E}=1^{\prime\prime}\), for \(\beta\lesssim 0.4\), \(\hat{\beta}^{\rm A}\) gives the best approximation but for \(\beta\gtrsim 0.4\), \(\hat{\eta}\) gives the best approximation (figure 8). Thus, a hybrid use of \(\hat{\beta}^{\rm A}\) and \(\hat{\eta}\) may be a best way to estimate the distance ratio \(\beta\) in the SIS + external shear model. ### Sis + Sis Here we study lens models with a dominant SIS with an Einstein radius of \(\theta_{\rm E}=1^{\prime\prime}\) and a subdominant SIS with an Einstein radius of \(\theta_{\rm E0}\ll\theta_{\rm E}\) apparently centred at \(\mathbf{x}_{0}=(x_{1},x_{2})=(1^{\prime\prime},0)\). If a subdominant SIS resides in the foreground of the dominant SIS, the effective deflection angle of the perturber is \[\delta\mathbf{\alpha}_{\rm eff}^{\rm b}(\mathbf{x}) = \theta_{\rm E0}\frac{\mathbf{x}-\mathbf{x}_{0}}{ \|\mathbf{x}-\mathbf{x}_{0}\|}+\theta_{\rm E}\bigg{(}\frac{ \mathbf{y}_{\rm d}}{y_{\rm d}}-\frac{\mathbf{x}}{x}\bigg{)},\] \[\mathbf{y}_{4} = \mathbf{x}-\beta\theta_{\rm E0}\frac{\mathbf{x} -\mathbf{x}_{0}}{\|\mathbf{x}-\mathbf{x}_{0}\|}. 
If the subdominant SIS resides in the background of the dominant SIS, the effective deflection angle of the perturber is

\[\delta\boldsymbol{\alpha}_{\rm eff}^{\rm b}(\boldsymbol{x}) = \theta_{\rm E0}\frac{\boldsymbol{y}_{\rm b}-\boldsymbol{y}_{\rm b0}}{\|\boldsymbol{y}_{\rm b}-\boldsymbol{y}_{\rm b0}\|},\]
\[\boldsymbol{y}_{\rm b} = \boldsymbol{x}-\xi\theta_{\rm E}\frac{\boldsymbol{x}}{x},\]
\[\boldsymbol{y}_{\rm b0} = \boldsymbol{x}_{0}-\xi\theta_{\rm E}\frac{\boldsymbol{x}_{0}}{x_{0}}. \tag{59}\]

The effective deflection angles \(\delta\boldsymbol{\alpha}_{\rm eff}\) in an SIS+SIS model with \(\theta_{\rm E}=1^{\prime\prime}\) and \(\theta_{\rm E0}=0\farcs1\) are shown in figure 9. If the subdominant SIS resides in the background of the dominant lens, the amplitude of the effective deflection angle \(\delta\boldsymbol{\alpha}_{\rm eff}\) is constant but the direction is anisotropic. The directions resemble those of the electric field of a dipole. If the subdominant SIS resides in the foreground of the dominant lens, the amplitude of the effective deflection angle \(\delta\boldsymbol{\alpha}_{\rm eff}\) is not constant and the direction is anisotropic. The streamlines resemble those in an SIS model with a background SIS with the same Einstein radius, except for those in the central region of the dominant SIS (figure 10). For \(\beta=0.5\) and \(\xi=0.5\), the maximum amplitudes of the astrometric lensing B-mode \(\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}\) are \(\sim 0\farcs03\). As shown in figure 11, the contours of the amplitude of the lensing B-mode have a dumbbell-like shape with a spindle-shaped void for both the foreground and background cases. In contrast, the contours of the amplitude of the lensing E-mode \(\delta\boldsymbol{\alpha}_{\rm eff}^{\rm E}\) have a complex structure that depends on the position of the subdominant SIS. The amplitude is largest in the central region of the dominant SIS in the foreground case, whereas it is largest in the central region of the subdominant SIS in the background case. This implies that any fitting without considering the difference between the subdominant lens plane and the dominant lens plane may lead to systematic residuals in the positions of quadruple images of a point-like source. As shown in figure 12, the rotation of the effective deflection angle shows an octopole pattern that consists of a pair of quadrupoles centred at the centres of the SISs. The amplitudes of rotation are maximised on the diagonal lines and minimised in the vertical and horizontal directions of each SIS. The 'direction' of rotation for a foreground SIS is opposite to that for a background SIS. In figures 13 and 14, we show the ratio of the B-mode to the E-mode, \(r^{\rm BE}=|\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}|/|\delta\boldsymbol{\alpha}_{\rm eff}^{\rm E}|\), averaged over an arc with \(\theta=1.1\,\)arcsec that subtends an azimuthal angle of \(\phi=0.1\,\)rad. We did not use rotations in the Einstein ring of the dominant lens for averaging, because the rotations in such regions are extremely small, especially in the vicinity of the centre of the subdominant SIS. The ratio \(r^{\rm BE}\) is a monotonically increasing function of \(\beta\) or \(\xi\), which equals \(\sim 0.3\) for \(\beta=0.5\) and \(\sim 0.2\) for \(\xi=0.5\). We found that the ratio of rotation to divergence, \(|\nabla\times\delta\boldsymbol{\alpha}_{\rm eff}^{\rm B}|/|\nabla\cdot\delta\boldsymbol{\alpha}_{\rm eff}^{\rm E}|\), averaged over the arc is larger than \(r^{\rm BE}\). The analytic formula in equation (17) gives a good estimate up to moderate values of \(\beta\sim 0.5\) and \(\xi\sim 0.5\).
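A minimal sketch of the SIS+SIS effective deflections in equations (58) and (59) is given below; the evaluation points are illustrative, and the perturber Einstein radius is called theta_Ep here (the \(\theta_{\rm E0}\) of the text).

```python
import numpy as np

theta_E, theta_Ep = 1.0, 0.1              # dominant and perturber Einstein radii
x0 = np.array([1.0, 0.0])                 # apparent perturber centre (arcsec)

def unit(v):
    return v / np.linalg.norm(v)

def d_alpha_eff_sis(x, ratio, foreground=True):
    x = np.asarray(x, float)
    if foreground:                        # eq. (58)
        d = theta_Ep * unit(x - x0)
        yd = x - ratio * d
        return d + theta_E * (unit(yd) - unit(x))
    yb = x - ratio * theta_E * unit(x)    # eq. (59)
    yb0 = x0 - ratio * theta_E * unit(x0)
    return theta_Ep * unit(yb - yb0)

print(d_alpha_eff_sis([1.1, 0.05], 0.5, foreground=True))
print(d_alpha_eff_sis([1.1, 0.05], 0.5, foreground=False))
```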
The analytic formula in equation (17) gives a good estimate up to moderate values of \(\beta\sim 0.5\) and \(\xi\sim 0.5\). Next, we show the values of the distance ratio estimators \(\hat{\beta}^{\rm A}\), \(\hat{\beta}^{\rm B}\), and \(\hat{\eta}\) in the dominant lens plane in figure 15. We found that these estimators give a good approximation in the vicinity of the subdominant SIS; \(\hat{\beta}^{\rm B}\) and \(\hat{\eta}\) give a worse result around the Einstein ring of the dominant lens, where the rotation is almost zero. For \(\beta=0.5\), \(\hat{\beta}^{\rm A}\) gives the best result. Finally, in figure 16 we show the effect of the sample region used to estimate the distance ratio with \(\hat{\beta}^{\rm A}\). The centre of the subdominant SIS is fixed at \((x_{1},x_{2})=(1^{\prime\prime},0)\). We consider two types of sample region: 1) an arc with a radius of \(1^{\prime\prime}+\theta_{\rm Ep}\) that subtends an azimuthal angle of \(\phi=\theta_{\rm Ep}\); 2) an arc with a radius of \(1^{\prime\prime}\!\!.1\) that subtends an azimuthal angle of \(\phi=0.1\,\)rad. Note that we avoided the neighbourhood of the Einstein ring of the dominant lens, where the rotation is very small. The errors in case 1) are much smaller than in case 2), as the sample region is much closer to the centre of the subdominant SIS. For \(\beta<0.5\), the relative errors in both 1) and 2) were found to be \(\lesssim 0.1\). Even for large distance ratios \(\beta>0.5\), the relative errors are \(<0.1\) for \(\theta_{\rm Ep}<0^{\prime\prime}\!\!.04\) in case 1). Thus, the foreground distance ratio estimator \(\hat{\beta}^{\rm A}\) gives a relatively accurate approximation if it is used for estimating the properties of the central region of a perturbing halo. Even if the lensed arc of an extended source is not on the centre of a perturbing halo, \(\hat{\beta}^{\rm A}\) can be used as a good estimator if \(\beta\) is sufficiently small.

Figure 5: Rotation of the effective deflection angle for an external shear of \(\delta\gamma=0.1\). The colors show the amplitudes of rotation. The model parameters are the same as in figure 3.

Figure 6: The ratio of B and E-modes for an external shear of \(\delta\gamma=0.1\) and \(\theta_{\rm E}=1^{\prime\prime}\). The orange full curves denote the ratio of the averaged amplitude of the B-mode astrometric shift to that of the E-mode astrometric shift as a function of \(\beta\) (top) and \(\xi\) (bottom); both modes are averaged over a circle with \(\theta=\theta_{\rm E}\). The blue dashed curves show the ratio of the averaged rotation to the averaged convergence, with both contributions averaged over a circle with \(\theta=\theta_{\rm E}\). The blue dot-dashed curves represent the approximations of the ratios for the foreground case, \(\sqrt{2}\beta/4\) (top), and the background case, \(\sqrt{2}\xi/4\) (bottom).

Figure 8: Relative errors of the approximated distance ratio as a function of \(\beta\) and \(\delta\gamma\). The colors show the relative errors averaged over an Einstein ring with radius \(\theta_{\rm E}\) for a given set of \(\delta\gamma\) and \(\beta\).

## 5 Conclusion and Discussion

In this study, we investigated the characteristics of the astrometric lensing B-mode in strong lensing systems that consist of a dominant and a subdominant lens residing at distinct redshifts. The B-mode arises from the coupling between the strong lensing induced by the dominant lens and the weak lensing generated by the subdominant lens.
By measuring both B and E-modes, we can deduce the distance ratio, and the 'bare' convergence and shear perturbations attributed to the subdominant lens. In cases where the subdominant lens is located behind the dominant lens, we can derive an exact formula if the dominant lens is perfectly modelled. However, when the situation reverses, with the dominant lens residing behind a subdominant lens, we cannot obtain an exact formula even with a perfect model of the dominant lens. In such cases, we employ certain approximations that yield the exact value when the distance between the dominant and subdominant lenses approaches zero. We demonstrated that any scale transformation in the distance ratio of a subdominant lens corresponds to a mass-sheet transformation in the background lens plane. Consequently, determining the distance ratio necessitates assumptions about the values of the mass-sheet within the background lens plane. Nevertheless, if we measure time delays between the perturbed multiple lensed images and know the redshift of the subdominant lens, we can break the mass-sheet degeneracy for a given \(H_{0}\), enabling us to determine the distance ratio without this uncertainty. Moreover, if the redshifts of both foreground and background perturbers, as well as those of the dominant lens and the source, are known, observations of the astrometric lensing B-mode can break the mass-sheet degeneracies. This would lead to a reduction in the systematic errors in the estimated value of \(H_{0}\). Our analysis focuses on systems with a single subdominant lens whose deflection angle is significantly smaller than that of the dominant lens. In reality, the gravitational influence of multiple subdominant lenses along each photon's path must be considered. When the second and third strongest lenses are both located in the foreground, or both in the background, of the dominant lens, the rotation signal may be amplified. Conversely, if they are positioned separately in the foreground and background, the signal could weaken due to cancellation. Investigating these effects in systems comprising three or more lenses falls beyond the scope of this paper. Assuming that the impact of the model degeneracy due to the extended multi-plane mass-sheet transformation (eMMST) we discussed is negligible, the measurement of astrometric lensing B-modes holds the potential to constrain the abundance of intergalactic dark haloes with masses of \(\lesssim 10^{8}\,M_{\odot}\) along the line of sight (LOS) in quasar-galaxy or galaxy-galaxy strong lens systems. Our toy models suggest that the feasibility can be assessed as follows. As a reference lens model, we employ a dominant SIS with an Einstein radius of \(1^{\prime\prime}\). A subdominant SIS with an Einstein radius of \(\gtrsim 0^{\prime\prime}\!\!.03\) in the LOS would then produce B-modes with a shift of \(\gtrsim 0^{\prime\prime}\!\!.003\) for \(\beta>0.2\) or \(\xi>0.2\). These shifts can be observed with telescopes featuring an angular resolution of \(\lesssim 0^{\prime\prime}\!\!.03\), assuming a typical magnification of \(\mu\sim 10\) for lensed image separations of \(\lesssim 0^{\prime\prime}\!\!.5\). Hence, instruments like the Atacama Large Millimeter/submillimeter Array (ALMA) possess the capability to detect astrometric lensing B-modes resulting from less massive LOS haloes, provided that the distance between the dominant and subdominant lenses is sufficiently large and the signal-to-noise ratio of the intensity in the lens plane is suitably high.
In the near future, we plan to investigate the practicality of measuring astrometric lensing B-modes with ALMA, utilizing more sophisticated models and taking into account observational capabilities.
2309.10988
An optimal ALMA image of the Hubble Ultra Deep Field in the era of JWST: obscured star formation and the cosmic far-infrared background
We combine archival ALMA data targeting the Hubble Ultra Deep Field (HUDF) to produce the deepest currently attainable 1-mm maps of this key region. Our deepest map covers 4.2arcmin^2, with a beamsize of 1.49''x1.07'' at an effective frequency of 243GHz (1.23mm). It reaches an rms of 4.6uJy/beam, with 1.5arcmin^2 below 9.0uJy/beam, an improvement of >5% (and up to 50% in some regions) over the best previous map. We also make a wider, shallower map, covering 25.4arcmin^2. We detect 45 galaxies in the deep map down to 3.6sigma, 10 more than previously detected, and 39 of these galaxies have JWST counterparts. A stacking analysis on the positions of ALMA-undetected JWST galaxies with z<4 and stellar masses from 10^8.4 to 10^10.4 M_sun yields 10% more signal compared to previous stacking analyses, and we find that detected sources plus stacking contribute (10.0+/-0.5)Jy/deg^2 to the cosmic infrared background (CIB) at 1.23mm. Although this is short of the (uncertain) background level of about 20Jy/deg^2, we show that our measurement is consistent with the background if the HUDF is a mild (~2sigma) negative CIB fluctuation, and that the contribution from faint undetected objects is small and converging. In particular, we predict that the field contains about 60 additional 15uJy galaxies, and over 300 galaxies at the few uJy level. This suggests that JWST has detected essentially all of the galaxies that contribute to the CIB, as anticipated from the strong correlation between galaxy stellar mass and obscured star formation.
Ryley Hill, Douglas Scott, Derek J. McLeod, Ross J. McLure, Scott C. Chapman, James S. Dunlop
2023-09-20T01:05:28Z
http://arxiv.org/abs/2309.10988v3
An optimal ALMA image of the Hubble Ultra Deep Field in the era of JWST: obscured star formation and the cosmic far-infrared background

###### Abstract

We combine archival ALMA data targeting the Hubble Ultra Deep Field (HUDF) to produce the deepest currently attainable 1-mm maps of this key, deep, extragalactic survey field. Combining all existing data in Band 6, our deepest map covers 4.2 arcmin\({}^{2}\), with a beamsize of \(1.49\,\mathrm{arcsec}\times 1.07\,\mathrm{arcsec}\) at an effective frequency of 243 GHz or \(\lambda=1.23\,\mathrm{mm}\). It reaches a minimum pixel rms of \(4.6\,\mathrm{\mu Jy\,beam^{-1}}\), with an area of \(1.5\,\mathrm{arcmin^{2}}\) reaching below \(9.0\,\mathrm{\mu Jy\,beam^{-1}}\). This is an improvement of at least 5 per cent compared with the best previously published map over the same area, with as much as a 50 per cent improvement in some regions. We also make a wider, but shallower map, covering \(25.4\,\mathrm{arcmin^{2}}\). We detect 45 galaxies in the deep map down to \(3.6\,\sigma\), including 10 more 1-mm sources than previously detected within the same area. 39/45 of these galaxies have a _JWST_ identification from the JADES NIRCam imaging and we find that the new sources are typically faint and red. A stacking analysis on the positions of ALMA-undetected JADES galaxies as a function of stellar mass and redshift yields significant detections in our image for redshifts below 4 and stellar masses from \(M_{*}=10^{10.4}\,\mathrm{M_{\odot}}\) down to \(M_{*}=10^{8.4}\,\mathrm{M_{\odot}}\), enabling us to extract a modest amount (\(\simeq 10\) per cent) of additional stacked signal from our map as compared to previous analyses. Combining detected sources and statistical estimates from stacking, we have detected \((10.0\pm 0.5)\) Jy deg\({}^{-2}\) of the cosmic infrared background (CIB) at \(1.23\,\mathrm{mm}\). Although this is short of the (uncertain) average background level of about \(20\,\mathrm{Jy\,deg^{-2}}\), after taking into account intrinsic fluctuations in the CIB, we find that our measurement is consistent with the average background level if the HUDF is a mild (\(\simeq 2\,\sigma\)) negative fluctuation. This suggests that within the HUDF, _JWST_ may have detected essentially all of the galaxies that contribute to the CIB at \(1.23\,\mathrm{mm}\), although we have not yet directly detected all of these galaxies with ALMA. From our stacking analysis we predict that the field contains around 60 additional galaxies with \(1.23\,\mathrm{mm}\) flux densities averaging around \(15\,\mathrm{\mu Jy}\), and over 300 galaxies with flux densities of a few \(\mathrm{\mu Jy}\). However, the contribution of these fainter, more modestly-obscured objects to the background is small, and converging, as anticipated from the now well-established strong correlation between galaxy stellar mass and obscured star formation.

keywords: methods: data analysis - techniques: interferometric - galaxies: formation - galaxies: starburst - submillimetre: galaxies

## 1 Introduction

The Hubble Ultra Deep Field (HUDF) is probably the most well-studied extragalactic region of the sky, containing some of the deepest optical and near-IR exposures obtained to date (e.g. Beckwith et al., 2006; Illingworth et al., 2013; Koekemoer et al., 2013; Eisenstein et al., 2023). Studies of the HUDF can tell us about how galaxies have evolved from the earliest times to the present day.
We now know that a significant fraction of the cosmic star formation occurred within distant galaxies full of dust (e.g. Casey et al., 2015; Koprowski et al., 2017), making them effectively invisible at optical and near-IR wavelengths for all but the deepest images. Yet these galaxies are bright at millimetre (mm) and submillimetre (submm) wavelengths, and so in order to complete our understanding of galaxy evolution, we must also survey the HUDF in this waveband. The best telescope available today at these wavelengths is the Atacama Large Millimeter/submillimeter Array (ALMA). ALMA has been used several times to survey the HUDF at wavelengths around 1 mm (Dunlop et al., 2017; ASAGAO, Hatsukade et al., 2018; ASPECS, Gonzalez-Lopez et al., 2020; GOODS-ALMA, Gomez-Guijarro et al., 2022), and it has also been used to follow up interesting individual targets within the HUDF (e.g. Fujimoto et al., 2017; Cowie et al., 2018). Yet in contrast to the tens of thousands of galaxies detected in the optical, only a few dozen mm-bright galaxies have been found in these observations so far. Although often referred to as 'SMGs' (for 'submillimetre galaxies'), since we are working here at wavelengths slightly longer than 1 mm, we will refer to them with the more generic acronym DSFG (for 'dusty star-forming galaxy'). In order to detect more galaxies around 1 mm with ALMA, we can combine all of the data into a single image. In a similar way, two decades ago a 'super-map' of the GOODS-N field was constructed by combining SCUBA data sets at 850 \(\mu\)m (Borys et al., 2003), and multiwavelength counterparts from the _Hubble Space Telescope_ (_HST_) were identified and used to study the properties of the detected submm-luminous sources (Pope et al., 2005). With this same motivation we have undertaken an archival project to combine all of the ALMA observations of the HUDF taken around 1 mm, with the goal of finding new DSFGs, identifying their counterparts, assessing how much of the background light has been resolved and providing an image that can be used by others for stacking analyses. In Section 2 we describe how we retrieve the data from the ALMA archive and combine it into single continuum images in \(uv\) space. In Section 3 we outline an alternative approach to combining data subimages in real space, which provides a test of the reliability of our \(uv\)-space combination. In Section 4 we provide our new galaxy catalogue and compare it to previous catalogues. In Section 5 we describe how our results lead to a new estimate of the resolved fraction of the cosmic infrared background (CIB) at 1 mm. In Section 6 we discuss improvements in our data products and results compared to what was previously available at 1 mm in the HUDF, and we conclude in Section 7. Appendix A presents a wider (and shallower) map and gives a supplementary list of sources, while Appendix B provides cutouts of our ALMA sources overlaid on _JWST_ F356W imaging.

## 2 Data retrieval and processing

### Obtaining archival ALMA data

We focus on ALMA Band 6, which spans a wavelength (frequency) range of 1.1-1.4 mm (210-270 GHz). This band currently contains the most extensive ALMA observations of the HUDF, and so this is where we expect to produce the deepest archival map. To begin, we queried the ALMA archive\({}^{1}\) for all Band-6 observations centred at \(03^{\rm h}32^{\rm m}39.0^{\rm s}-27^{\circ}47^{\prime}29.1^{\prime\prime}\) and overlapping within a radius of 1.5 arcmin.
There are a total of 12 unique programmes that satisfy this criterion, and we selected all of these for our combined map. For some programmes, only a fraction of the total time was spent on the HUDF (e.g. for individual pointings of targeted galaxies or for large surveys extending beyond the HUDF), and for these cases we only selected the data that overlap with our region of interest. For more details see Table 1, where we summarize all of the data used here to produce the combined map. We also downloaded the raw data from the ALMA archive centred at the same position, but extending out to 3.5 arcmin, which we used to make a shallower but larger map; there are a total of seven additional programmes satisfying this criterion, and these are listed in Table A1.

Footnote 1: [https://almascience.nrao.edu/asax](https://almascience.nrao.edu/asax)

For each of the observations, we downloaded the raw \(uv\) data and calibrated them using the provided scriptForPI and the CASA (McMullin et al., 2007) version used by the observatory at the time of the observations. We split the science targets from the calibration targets, then time-averaged the data by 30 s and averaged the frequency channels by a factor of 4 in order to reduce the volume of data. These tasks were carried out using the Canadian Advanced Network for Astronomy Research (CANFAR) platform (Major et al., 2019), which provides easy access to all versions of CASA, as well as ample storage space and memory.

### Obtaining archival ALMA images

In addition to downloading the \(uv\) data for each programme, we also obtained the imaging products made available by the observatory. For every tuning of every target given in Table 1, the observatory provides a single image made using the multi-frequency synthesis (MFS) mode, where visibilities in each channel are mapped to a single \(uv\)-plane, and therefore represent the mean value of the sky at a characteristic frequency (usually the central frequency of the channels) weighted by a spectral function. For programmes carried out in earlier ALMA cycles, we use the Additional Representative Images for Legacy (ARI-L; Massardi et al., 2021), which are reduced in a way more similar to later cycles. These images are primary-beam-corrected, but the primary-beam image is also available for download. The final number of HUDF images downloaded from the ALMA archive is 61, along with their corresponding 61 primary-beam images.

### Data combination in the \(uv\) plane

Our main goal is to produce the deepest possible map of the central HUDF region, which essentially corresponds to the footprint of ASPECS (see Gonzalez-Lopez et al., 2020). We do so by combining all of the available data within this region. Our secondary goal is to produce the deepest possible map of the wider HUDF region, corresponding essentially to the ASAGAO footprint (see Hatsukade et al., 2018). To do this, we carried out the following procedure. All of the time-averaged and frequency-averaged \(uv\) data were imaged using the standard CASA task tclean. For the data presented in Table 1, the pixel size was set to 0.2 arcsec. We ran tclean in MFS mode, which, as discussed above, scales all of the \(uv\) points to the average frequency of the data being combined; here, the average frequency is about 243 GHz (or 1230 \(\mu\)m). We also chose natural weighting (each \(uv\) point is weighted by its instrumental noise), but with a 250 k\(\lambda\) \(uv\) taper.
We set the cutoff of the map to 0.2 times the primary beam, which is where the footprint of our map roughly matches the footprint of the ASPECS map. We also produced a map with no \(uv\) taper, setting the pixel size to 0.18 arcsec (because the synthesized beam is somewhat smaller). Lastly, we cleaned the image down to 20 \(\mu\)Jy beam\({}^{-1}\), similar to the cleaning level chosen by ASPECS. The final synthesized beamsize of the \(uv\)-tapered map is 1.49 arcsec\(\times\) 1.07 arcsec, while for the map with no \(uv\) taper the synthesized beamsize is 1.32 arcsec\(\times\) 0.92 arcsec. The pixel rms in the tapered image (which we use for further analysis in this paper) has a minimum value of 4.6 \(\mu\)Jy beam\({}^{-1}\) in the deepest region, while the rms is less than 9.0 \(\mu\)Jy beam\({}^{-1}\) over an area of 1.5 arcmin\({}^{2}\). For the shallower, larger map, we exclude all of the observations targeting the deep central region (this includes all of the ASPECS data, the data from Dunlop et al., 2017, and two deep pointings towards the eastern corner of the HUDF); this is to ensure that the synthesized beam of the larger map is not dominated by data from the central region (see Table 1 for details). This map is therefore mostly a combination of the ASAGAO survey data and the GOODS-ALMA survey data (both high and low resolution). However, we include all of the ASAGAO follow-up observations even if they lie within the deep central region (programme ID 2018.1.00567.S), since they roughly match the array configurations and frequencies of the ASAGAO survey and the GOODS-ALMA survey. We set the pixel size to 0.06 arcsec for the map with no \(uv\) tapering, and 0.12 arcsec for the map with 250 k\(\lambda\)\(uv\) tapering, and set the cutoff of the map to 0.4 times the primary beam, which produced a map with a footprint roughly similar to the ASAGAO footprint. All other tclean parameters were kept the same as described above, except for the cleaning threshold, which was increased to 100 \(\mu\)Jy beam\({}^{-1}\) (similar to the level chosen by ASAGAO). The final synthesized beamsize of the \(uv\)-tapered map is \(0.87\,\mathrm{arcsec}\times 0.64\,\mathrm{arcsec}\), while for the map with no \(uv\) taper the synthesized beamsize is \(0.36\,\mathrm{arcsec}\times 0.27\,\mathrm{arcsec}\). The pixel rms in the tapered image (which we also use for further analysis in this paper) has a typical value of \(60\,\mathrm{\mu Jy\,beam}^{-1}\) outside of individual pointings, and \(20\,\mathrm{\mu Jy\,beam}^{-1}\) within individual pointings. Lastly, we produced maps with uniform noise properties by excluding programmes with individual pointings targeting known galaxies, and instead focusing on large survey programmes. Specifically, for our deep central mosaic we combined data from the programmes 2012.1.00173.S, 2015.1.00098.S, 2015.1.00543.S, 2016.1.00324.L and 2017.1.00755.S, while for the shallower, larger map we only used data from the programmes 2015.1.00098.S, 2015.1.00543.S and 2017.1.00755.S (see Tables 1 and A1, respectively). We used the same tclean parameters as above, generating versions with no \(uv\) tapering and with 250 k\(\lambda\)\(uv\) tapering. The final synthesized beamsizes were effectively unchanged compared to the maps that included individual pointings. For the remaining analysis in this paper we focus on the tapered maps with all of the available data combined. 
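As a concrete illustration of this imaging step, a minimal CASA sketch is given below. The taper, weighting, pixel scale, primary-beam cutoff and cleaning threshold follow the values quoted in the text; the measurement-set name, image size and the mosaic gridder are our own illustrative assumptions, not the actual pipeline settings.

```python
from casatasks import tclean  # modular CASA; within a CASA session tclean is built in

# Illustrative sketch of the tclean call described in the text.
# 'combined.ms' and imsize are placeholder values.
tclean(vis='combined.ms',
       imagename='hudf_band6_tapered',
       specmode='mfs',            # map all channels onto a single uv-plane
       weighting='natural',       # weight each uv point by its noise
       uvtaper=['250klambda'],    # 250-klambda taper, as in the text
       cell='0.2arcsec',
       imsize=[1500, 1500],
       gridder='mosaic',          # assumed choice for combining pointings
       pblimit=0.2,               # image out to 0.2 x the primary beam
       threshold='20uJy',         # clean down to 20 uJy/beam
       niter=100000)
```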
The additional maps, with no tapering and uniform noise properties, were used to check the reliability of our flux densities, and we found no systematic differences between the measurements. To calculate the corresponding noise maps for the combined images, we first created a mask using catalogues of previously-detected galaxies. In particular we took the catalogues from Dunlop et al. (2017), ASAGAO (Hatsukade et al., 2018), ASPECS (Gonzalez-Lopez et al., 2020) and GOODS-ALMA (Gomez-Guijarro et al., 2022). For each galaxy we masked a region 3 times larger than the beamsize, and then multiplied this by the signal map. We next made cutouts of 5 times the beamsize around each pixel and calculated the rms. The resulting noise map was then smoothed with a Gaussian kernel of the same size as the cutout regions to remove artefacts (a minimal sketch of this estimate is given below). All of these data products are made publicly available, including the primary-beam-corrected maps, the noise maps, the primary beam maps and the synthesized beam maps.\({}^{2}\) The final signal map, noise map and signal-to-noise ratio (S/N) map for the deep central mosaic with a 250 k\(\lambda\) taper, including individual pointings, are shown in Fig. 1, while the same maps for the larger, shallower mosaic are shown in Fig. A1. The deep central mosaic covers an area of 4.2 arcmin\({}^{2}\) out to our primary beam threshold of 0.2, while the large shallow mosaic covers an area of 25.4 arcmin\({}^{2}\) out to our primary beam threshold of 0.4.

Footnote 2: [https://doi.org/10.5683/SP3/YWBWW](https://doi.org/10.5683/SP3/YWBWW)

Footnote 3: [https://archive.stsci.edu/prepds/candels](https://archive.stsci.edu/prepds/candels)

### Optical and infrared galaxy catalogues

The HUDF has been the target of extensive multiwavelength observations that we can use to complement our 1.23-mm image. The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) catalogue of galaxies in the GOODS-S field\({}^{3}\) (Guo et al., 2013) contains a summary of much of the deep optical-to-near-IR imaging in the HUDF, and it is expected that most of the ALMA-detected galaxies should have a counterpart in this catalogue. Detections in the CANDELS catalogue were obtained from _HST_'s WFC3 instrument in the F160W band, so it is a 1.5-\(\mu\)m-selected catalogue. The entire catalogue covers a much larger region than our deep map, but the deepest region of the catalogue covers roughly the same area.
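Returning briefly to the sliding-rms noise maps described above, the following is a minimal sketch (our illustration; the function name and inputs are assumed, and a production version would vectorize the loop):

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

# Illustrative sketch of the local-rms noise map: mask known sources,
# measure the rms in a box of ~5 beam FWHMs around every pixel, then
# smooth the result with a Gaussian of the cutout size.
# 'signal' and boolean 'source_mask' are assumed 2-D inputs.
def local_rms_map(signal, source_mask, beam_fwhm_pix, boxes_per_beam=5):
    masked = np.where(source_mask, np.nan, signal)
    half = int(boxes_per_beam * beam_fwhm_pix) // 2
    rms = np.full_like(signal, np.nan, dtype=float)
    ny, nx = signal.shape
    for j in range(ny):
        for i in range(nx):
            cut = masked[max(0, j - half):j + half + 1,
                         max(0, i - half):i + half + 1]
            rms[j, i] = np.sqrt(np.nanmean(cut ** 2))
    # Smooth to suppress artefacts; convolve interpolates over NaNs
    sigma = boxes_per_beam * beam_fwhm_pix / 2.355
    return convolve(rms, Gaussian2DKernel(sigma), boundary='extend')
```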
\begin{table}
\begin{tabular}{l c c c c c}
\hline
Project ID & Target name(s) & Frequency range [GHz] & Map rms [\(\mu\)Jy beam\({}^{-1}\)] & Synthesized beam [arcsec \(\times\) arcsec] & Sky coverage [arcmin\({}^{2}\)] \\
\hline
2012.1.00173.S & HUDF & 211.2–231.2 & 24 & \(0.62\times 0.52\) & 7.05 \\
2013.1.00718.S & UDF1 & 212.2–272.0 & 15 & \(1.77\times 0.90\) & 1.15 \\
2013.10217.S & UDF6462 & 223.3–244.1 & 9 & \(0.33\times 0.26\) & 0.31 \\
2015.1.00098.S & HUDF-JVLA-ALMA & 244.3–271.8 & 57 & \(0.20\times 0.16\) & 34.7 \\
2015.1.00543.S & GOODS-S & 255.1–274.7 & 130 & \(0.26\times 0.22\) & 54.2 \\
2015.10066.4 & KMOSMOSDGS4-24110 & 273.2–274.7 & 91 & \(0.15\times 0.14\) & 0.22 \\
2015.1.01096.S & UDF-640-1417 & 222.0–252.1 & 11 & \(0.99\times 0.74\) & 0.29 \\
2015.10144.7 & UDF0 & 212.0–272.0 & 6 & \(1.24\times 0.91\) & 0.29 \\
2015.A.00009.S & B015Hz27 & 226.4–245.0 & 13 & \(0.73\times 0.46\) & 0.30 \\
2016.1.00324.L & UDF\_mosaic\_1mm & 212.1–272.0 & 12 & \(1.37\times 0.93\) & 3.72 \\
2017.1.00755.S & GOODS-S & 255.1–274.9 & 120 & \(1.31\times 0.82\) & 54.1 \\
2018.1.00567.S & ASAGAO27, 35, 40, 45 & 244.3–262.9 & 24 & \(0.65\times 0.47\) & 1.03 \\
\hline
\end{tabular}
\end{table}
Table 1: ALMA projects downloaded from the ALMA archive and used to create the final combined 1-mm image.

The first public data release from the _JWST_ Advanced Deep Extragalactic Survey (JADES) NIRCam survey of the HUDF is also now available\({}^{4}\) (Rieke & the JADES Collaboration, 2023), in addition to the JWST Extragalactic Medium-band Survey (JEMS, Williams et al., 2023). While the JADES survey is still ongoing and will eventually be deeper, it already contains considerably more galaxies per unit area than the CANDELS catalogue. JADES images range from 0.9 \(\mu\)m to 4.4 \(\mu\)m, and we chose to select our catalogue at 3.5 \(\mu\)m, as a compromise between the longest possible wavelength and the depth. Our entire deep map is covered by the JADES survey at approximately uniform sensitivity. Throughout this paper we primarily make use of the JADES imaging and photometry, with the CANDELS catalogue used to demonstrate new improvements thanks to _JWST_.

Footnote 4: [https://archive.stsci.edu/hlsp/jades](https://archive.stsci.edu/hlsp/jades)

#### 2.4.1 JADES photometric redshifts and stellar masses

In order to construct photometry catalogues, we ran SourceExtractor (Bertin & Arnouts, 1996) in dual-image mode, with NIRCam F356W used as the detection image. All imaging was homogenized to match the point-spread function (PSF) of the F444W imaging in order to minimise any colour systematics arising from differences in the PSF. We performed isophotal photometry on all available JADES and JEMS NIRCam bands, as well as ancillary _HST_ ACS imaging from the Hubble Legacy Fields GOODS-South data release (see Illingworth et al., 2013; Whitaker et al., 2019) and ground-based VIMOS _U_-band imaging over GOODS-South (Nonino et al., 2009). To extract robust photometry from this lower-resolution imaging, we utilised TPHOT (Merlin et al., 2015), using positional and surface-brightness information from the higher-resolution NIRCam imaging as input. To calculate photometric redshifts and hence stellar masses, we performed spectral-energy-distribution (SED) fitting using LePhare (Arnouts & Ilbert, 2011), with a Bruzual-Charlot (Bruzual & Charlot, 2003) template set, incorporating a Chabrier (Chabrier, 2003) initial mass function.
The template set includes a Calzetti (Calzetti et al., 2000) dust-attenuation law, with reddening values in the range \(A_{V}=0.0\)-6.0 in steps of 0.2. We used two metallicities, 0.2 \(\mathrm{Z}_{\odot}\) and \(\mathrm{Z}_{\odot}\), and included exponentially declining star-formation histories with \(\tau\) ranging between 0.1 and 15 Gyr. Where an object has a known spectroscopic redshift, we fixed the fit to this redshift when obtaining the stellar mass. The median uncertainty in our photometric redshifts for all of our HUDF galaxies is \(\Delta z/(1+z)\simeq 0.06\). For stellar masses, in this paper we only care about relative uncertainties, since we only use this parameter to sort galaxies into different bins. The absolute uncertainties in stellar masses are expected to be large, but the relative uncertainties should be small, so we ignore them.

## 3 Alternative data treatment and tests

### Data combination in the image plane

In addition to our \(uv\)-combined images, we also tried combining the archival ALMA data-product images (each one produced individually using tclean in MFS mode) in real space. This provides a consistency check for how well we are able to combine in \(uv\) space data spanning a wide frequency range and a large range of array configurations. For this test we focus on the deep central mosaic (following the ASPECS footprint), where the data sets included are listed in Table 1. The first step in this process is to convolve each image to the same resolution. A common beamsize was selected as the largest beam across all of the data sets, and for each image the convolution kernel required to produce this common beam was found numerically using the create_matching_kernel function from the python photutils module (Bradley et al., 2022). Each image was then convolved with this kernel. We expect that convolution to a lower resolution will effectively decrease the sensitivity to point sources. The largest beam across all of our data comes from the ASPECS pilot programme (2013.1.00718.S, see Aravena et al., 2016), which has a beam of about \(1.5\,\mathrm{arcsec}\times 0.8\,\mathrm{arcsec}\). The most sensitive map (across a significantly larger area) comes from the full ASPECS survey (2016.1.00324.L, see Gonzalez-Lopez et al., 2020), where the beam is about \(1.2\,\mathrm{arcsec}\times 0.8\,\mathrm{arcsec}\), or the second-largest beam across all of our images. We found that degrading the resolution of the full ASPECS map to match the ASPECS pilot map ultimately produces a map with a higher rms (after combining all the data) compared to removing the ASPECS pilot map from the combination and convolving each image to match that of the full ASPECS map; our final combined image therefore does not contain data from the ASPECS pilot, although this amounts to only a small loss of data (see Table 1). We next created a pixel grid for the image. The size of the pixels in the grid was chosen as the largest pixel size within the set of 61 images (in this case 0.2 arcsec), and the grid was set to span 5 arcmin, centred at \(03^{\rm h}32^{\rm m}39.0^{\rm s}-27^{\circ}47^{\prime}29.1^{\prime\prime}\). Next, each of the 61 convolved images (minus those from the ASPECS pilot) was reprojected onto this grid using the python function reproject_interp, part of the reproject module (Robitaille et al., 2020).
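A minimal sketch of these two steps, beam homogenization with photutils and regridding with reproject, is given below (our illustration; the function name, the window choice and the inputs are assumptions rather than the actual pipeline):

```python
from astropy.convolution import convolve_fft
from photutils.psf.matching import create_matching_kernel, TopHatWindow
from reproject import reproject_interp

# Illustrative sketch: convolve one image to a common beam and
# reproject it onto a common grid. 'psf_in' and 'psf_target' are
# same-size images of the native and common beams; 'target_wcs' and
# 'shape_out' define the shared 0.2-arcsec grid. TopHatWindow(0.35)
# is an assumed low-pass window choice.
def match_and_regrid(hdu, psf_in, psf_target, target_wcs, shape_out):
    kernel = create_matching_kernel(psf_in, psf_target,
                                    window=TopHatWindow(0.35))
    matched = convolve_fft(hdu.data, kernel, boundary='fill')
    regridded, footprint = reproject_interp((matched, hdu.header),
                                            target_wcs,
                                            shape_out=shape_out)
    return regridded
```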
The next step was to create a noise map for each reprojected and convolved image. To do this, we created primary-beam-uncorrected images (simply the primary-beam-corrected image multiplied by the primary beam), and for each image we created a mask using the same method described above. We then calculated the rms of the primary-beam-uncorrected map multiplied by the mask, and divided this by the primary beam. These images should now be aligned to the same pixel grid, have the same resolution, and have associated noise maps. The final step is to combine them, weighting each pixel by its inverse variance. In order to remove boundary artefacts associated with combining images with very different noise levels, an apodizing mask was applied to the edges of each image in the combination. The apodizing function chosen was a Gaussian with a standard deviation equal to the beamsize, and the apodization was applied out to 3 times the beamsize from the edge of each map. We investigated various options for apodization (e.g. a cosine function or different apodization lengths), and found that this simple choice removed most of the obvious edge effects. Lastly, a new noise map was calculated for the combined image using the same algorithm used to calculate a noise map for the \(uv\)-combined image. For reference, the image-combined map can also be downloaded.\({}^{5}\)

Footnote 5: [https://doi.org/10.5683/SP3/YWBWW](https://doi.org/10.5683/SP3/YWBWW)

### Comparison between \(uv\)-plane combination and image-plane combination

The optimal way to combine interferometric images is to add the observations in the \(uv\) plane, then image the entire data set. However, the different frequencies and \(uv\) sampling of the various observations may lead to unwanted behaviour. Moreover, adding the individual pointings is much more straightforward in the image plane than in \(uv\) space. Therefore we would like to compare the properties of the two images (\(uv\)-space combination and real-space combination) to check that they are consistent with one another, and to see which one performs better. We focus on the region defined by our deep central map with a 250 k\(\lambda\) taper, which was made going out to 0.2 times the primary beam (see Fig. 1). We extracted all sources with S/N \(>4\) within this region from both maps using the same source-extraction procedure as described in Section 4; this threshold was simply chosen so that enough overlapping sources could be extracted for comparison. There are 36 peaks with this significance in the \(uv\)-space combined map and 29 peaks in the image-space combined map. In Fig. 2 we show all of the sources found with S/N \(>4\), from which it can be seen that all bright and obvious sources are in agreement. However, we do see more sources detected in the \(uv\) combination compared to the image combination, especially around the edges of our selected region, where the \(uv\) map does much better. To quantify the difference between the two maps, in Fig. 3 we plot the ratio of the peak S/N from the \(uv\) combination to the peak S/N from the image combination as a function of 1-mm flux density in the \(uv\)-space combined map. For the galaxies with S/N \(>4\) in the \(uv\)-combined map but not the image-combined map, we simply extract the value of the image-combined map at the location of the \(uv\) detections in order to include them on this plot. We find a relatively uniform improvement in S/N from combining the \(uv\) data as a function of source brightness. On average, the S/N of matching sources is higher in the \(uv\) combination by a factor of about 1.2.
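Returning to the image-plane coadd itself, a minimal sketch of the inverse-variance combination (our illustration; the inputs are assumed to be beam-matched, regridded arrays with matching noise and apodization maps) is:

```python
import numpy as np

# Illustrative sketch of the image-plane coadd described above: each
# pixel is weighted by its inverse variance, with a per-image apodizing
# mask tapering the edges. 'images', 'noise_maps' and 'apod_masks' are
# assumed lists of same-shape 2-D arrays.
def coadd_inverse_variance(images, noise_maps, apod_masks):
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, sig, apod in zip(images, noise_maps, apod_masks):
        w = apod / sig ** 2                 # tapered inverse-variance weight
        good = np.isfinite(img) & np.isfinite(w)
        num[good] += (w * img)[good]
        den[good] += w[good]
    den[den == 0] = np.nan                  # pixels with no coverage
    return num / den, 1.0 / np.sqrt(den)    # combined map and its noise
```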
Of course there is a balance between the increased complexity and computing resources required to combine the data in the \(uv\) plane versus combining in the image plane, but our results indicate that the improvement from combining the data in the \(uv\) plane is worthwhile.

## 4 Results for sources

### Cataloguing 1-mm sources

The central deep map shown in Fig. 1 contains all existing ALMA Band-6 data in this region and thus should be the best map currently achievable; for reference, the deepest part goes down to 4.6 \(\mu\)Jy beam\({}^{-1}\), which is 50 per cent deeper than any previous map. Over an area of 1.5 arcmin\({}^{2}\) our new map has an rms below 9.0 \(\mu\)Jy beam\({}^{-1}\), which is 5 per cent better than the deepest previously available map of this size (from ASPECS), and it has a noise level below 35 \(\mu\)Jy beam\({}^{-1}\) over the full 4.2 arcmin\({}^{2}\). It is therefore of interest to see if any new galaxies are detected in this new map. We ran the simple peak-finding algorithm find_peaks, available in the photutils python module, on both the S/N map contained within the region of interest, and on the negative of the same S/N map. One approach to setting the detection threshold is to use the most significant negative peak to set the level above which we might expect all positive peaks to be real galaxies; for reference, the most negative peak was found to have a value of \(-4.1\,\sigma\), and there are 35 positive peaks brighter than \(+4.1\,\sigma\). However, we can also lower the threshold to include more real sources at the expense of being less confident about the reality of each one. As an alternative means of setting the threshold, we looked at how the ratio of the total number of positive to negative peaks above a given S/N threshold varies. We can define the 'purity' to be \(1-N_{\rm neg}/N_{\rm pos}\), such that a value of 0 means there are as many negative as positive peaks, while 1 is reached when there are no more significant negative peaks. In Fig. 4 we plot this purity as a function of S/N threshold. The choice of threshold is of course a trade-off (a smaller threshold will result in more false positives), and to include more candidate sources (with the understanding that not all may be real) we choose a purity of 0.7, which corresponds to S/N \(\approx\) 3.6. There are 45 sources above this threshold. The catalogue from ASPECS used a 'fidelity' threshold (the differential ratio of galaxies in S/N bins) that resulted in a least significant source with S/N = 3.3, while Dunlop et al. (2017) used a S/N threshold of 3.5 but removed sources with no _HST_ counterpart. Our choice of threshold is therefore similar to (although slightly more conservative than) what was used in previous studies, and we can also use our JADES catalogue to investigate possible false positives.

Figure 1: _Left_: Signal map of the deepest region of the HUDF after combining all of the available archival ALMA Band-6 data given in Table 1. This region is defined by the contour where the primary beam reaches 0.2, covering a total of 4.2 arcmin\({}^{2}\). _Middle_: Corresponding noise map. _Right_: S/N map, from dividing the signal map by the noise map.

The positions of sources extracted from this search are shown in Fig. 6, with positions and 1-mm flux densities given in Table 2. Appendix B contains 5 arcsec \(\times\) 5 arcsec cutouts overlaid on _JWST_ F356W images.
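A minimal sketch of the purity calculation described above (our illustration; 'snr' is an assumed 2-D signal-to-noise array and the box size is an illustrative choice):

```python
import numpy as np
from photutils.detection import find_peaks

# Illustrative sketch: count positive and negative peaks above a
# sliding S/N threshold and report purity = 1 - N_neg / N_pos.
def purity_curve(snr, thresholds, box_size=5):
    purity = []
    for t in thresholds:
        # find_peaks returns None when no peaks exceed the threshold
        n_pos = len(find_peaks(snr, threshold=t, box_size=box_size) or [])
        n_neg = len(find_peaks(-snr, threshold=t, box_size=box_size) or [])
        purity.append(1.0 - n_neg / max(n_pos, 1))
    return np.array(purity)

# e.g. scan thresholds to find where the purity first reaches 0.7:
# purity_curve(snr, np.arange(3.0, 5.0, 0.1))
```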
For single-dish surveys, DSFGs are almost always unresolved, but this is not the case for ALMA data. Hence we need to decide how to quote brightness values when some DSFGs are resolved. For the flux densities, we follow a procedure similar to the ASAGAO survey: for each source we fit a 2-dimensional Gaussian profile, fixing the position to the peak pixel and the amplitude to the value of the peak pixel, but allowing the size, ellipticity and position angle to vary.

Figure 2: Results from combining all of the data in Table 1 in the \(uv\) plane before imaging, and imaging the same data individually before combining. Signal maps are shown in the two top left panels, noise maps are shown in the two top right panels and S/N maps are shown in the bottom two panels. In each case, the \(uv\) combination is on the left and the real-space image combination is on the right. Sources with S/N \(>\) 4 are marked with green squares in the S/N maps.

Figure 3: Ratio of the peak S/N from the \(uv\) combination to the peak S/N from the image combination, as a function of 1230-\(\mu\)m flux density in the image combination. The peak S/N values from the \(uv\) combination are generally higher than for the image combination (the mean ratio is 1.2).

Figure 4: Purity, i.e. 1 minus the ratio of the number of negative to positive peaks above a given S/N threshold, plotted as a function of S/N. A purity of 0.7 corresponds to a S/N threshold of 3.6, which we use to extract sources from our deep map (Fig. 1).

We then calculate the number of beams contained within each source as
\[\mathcal{N}_{\mathrm{b}}=\frac{ab}{\theta_{\mathrm{maj}}\theta_{\mathrm{min}}}, \tag{1}\]
where \(a\) and \(b\) are the best-fit major and minor FWHM values, and \(\theta_{\mathrm{maj}}\) and \(\theta_{\mathrm{min}}\) are the synthesized beam major and minor FWHM, respectively. If the number of beams is greater than 1, we calculate the integrated flux density at 243 GHz, or 1230 \(\mu\)m, as
\[S_{1230}=S_{\mathrm{peak}}\mathcal{N}_{\mathrm{b}}, \tag{2}\]
otherwise the integrated flux density is simply the peak pixel value. We also flag fits where the minor/major axis ratio is less than 0.5 as bad fits, and use peak pixel values for these sources. Uncertainties are taken from the noise map, and uncertainties from the fits are propagated to the integrated flux densities. All of our sources and 1-mm flux densities are listed in Table 2. As a check, we compare the 1-mm flux densities extracted here to the flux densities given in the four previously published surveys. For simplicity, in this comparison we simply match published sources to our new catalogue using a search radius of 1 arcsec. The mean frequency of the map from Dunlop et al. (2017) is 221 GHz (see Table 1), so we correct their flux densities to the mean frequency of our map (243 GHz) assuming a modified blackbody SED with spectral index \(\beta=1.5\), a dust temperature of 30 K, and a redshift of 1.5; the correction factor is 1.3. Similarly, the mean frequency of the map from GOODS-ALMA (Gomez-Guijarro et al., 2022) is 265 GHz, so we follow the same procedure and apply a correction factor of 0.8. The mean frequencies of the remaining maps are effectively identical to ours, so we do not apply any further corrections. The flux densities of matched peaks are shown in Fig. 5, and the cross-matched IDs are given in Table 3. We find generally good agreement with all of the previously-published flux densities after applying the above corrections.
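A minimal sketch of equations (1) and (2), including the bad-fit flag, is given below (our illustration; the inputs are the fitted and beam FWHM values in consistent units):

```python
# Illustrative sketch of equations (1) and (2): estimate the number of
# beams per source from the 2-D Gaussian fit and convert the peak flux
# density into an integrated one.
def integrated_flux(s_peak, a_fwhm, b_fwhm, bmaj, bmin):
    n_beams = (a_fwhm * b_fwhm) / (bmaj * bmin)        # equation (1)
    if b_fwhm / a_fwhm < 0.5:                           # flagged as a bad fit
        return s_peak
    return s_peak * n_beams if n_beams > 1 else s_peak  # equation (2)

# e.g. integrated_flux(100.0, 2.0, 1.5, 1.49, 1.07) -> 100 * ~1.88
```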
In Fig. 6 we show our detected sources compared to those found by Dunlop et al. (2017), the ASAGAO survey (Hatsukade et al., 2018) and the ASPECS survey (Gonzalez-Lopez et al., 2020), as well as a few sources from the wider GOODS-ALMA survey (Gomez-Guijarro et al., 2022). It appears that most of the detections in our combined map coincide with published sources, but there are 13 new DSFGs. There are also a few published detections that do not appear here; typically, these are low-significance sources that could have been positive noise excursions. In Appendix A we perform the same source-extraction procedure on the larger but shallower map covering the ASAGAO footprint (excluding the central deep region, where we have already detected all of the sources in this shallow map), now fixing the detection threshold to 4.5 \(\sigma\) (as was done to make the ASAGAO catalogue). We find that the purity at this threshold is 0.9 (which is therefore fairly conservative) and we find a total of 27 additional galaxies, nine of which are new. We use a similar algorithm to measure flux densities, and find similar flux densities compared to those published by ASAGAO (Hatsukade et al., 2018) and GOODS-ALMA (Gomez-Guijarro et al., 2022). As a test, we also extracted peaks from the central region of the shallow map and measured their flux densities, and find good agreement with the flux densities measured in our deep map.

### Matching mm-selected sources to near-IR-selected sources

The expected uncertainty in our ALMA positions is \(\delta\mathrm{RA}=\delta\mathrm{Dec}=0.6\times\mathrm{FWHM}/(\mathrm{S/N})\) (Ivison et al., 2007), so we expect the 1 \(\sigma\) radial uncertainty to be \(\delta r\approx\) 0.76 arcsec/(S/N) (using the geometric mean of the elliptical ALMA beam). Since the probability density of finding a source a distance \(r\) from its true position is proportional to \(r\mathrm{e}^{-r^{2}/2\delta r^{2}}\), one must go out to a distance of 2.5\(\delta r\) in order to find a correct match with 95 per cent certainty. For a given ALMA detection, we thus search the JADES catalogue out to a distance of 1.9 arcsec/(S/N), where S/N here is the S/N of each source found above our detection threshold. For the most significant ALMA sources this search radius is unphysically small (much less than 1 pixel), so we apply a minimal search radius of 0.3 arcsec. We perform a similar counterpart search with the CANDELS catalogue, including the deep infrared data from the HUDF09 survey (Bouwens et al., 2011). Here, we simply apply a uniform search radius of 0.6 arcsec, as we found that the _HST_ F160W morphologies of our DSFGs were often clumpy, leading to unrealistic offsets with respect to our ALMA centroids. We checked by eye that all identified CANDELS counterparts were indeed the same galaxy as the JADES counterparts. It should be noted that Dunlop et al. (2017) found a systematic offset between the CANDELS and ALMA astrometry, which was resolved by applying an offset of about 0.25 arcsec south to the CANDELS positions; the same offset was applied here. In Table 3 we provide the JADES and CANDELS IDs for these matches, and in Appendix B we show the positions of matched sources in our ALMA cutouts overlaid on _JWST_ F356W imaging. At the position of our ALMA source ID 45 we found that there were two blended galaxies in the longer-wavelength _JWST_ images, yet at shorter _JWST_ wavelengths one of these galaxies was undetected. In the JADES catalogue there was only one ID for these two sources (ID 207277), which had a spectroscopic redshift of 0.332.
This spectroscopic redshift likely corresponds to the galaxy that is brighter at short wavelengths, while our ALMA source is probably the second galaxy, which is only detected at longer wavelengths. We therefore use the longer-wavelength photometry (where this galaxy is detected) to fit an SED, which results in a photometric redshift of 2.42. Throughout the rest of the paper we report results derived from this fit. Lastly, our nominal search radius did not return a counterpart to ALMA ID 38, yet this source is located slightly less than 0.5 arcsec from a \(z=1.998\), \(M_{*}\simeq 10^{10}\) M\({}_{\odot}\) JADES galaxy (the JADES ID is 207221). Moreover, there is another ALMA source (ID 44) located less than 2 arcsec south of source 38, whose closest _JWST_ counterpart is in the JADES catalogue and not in our F356W-selected catalogue (the JADES ID is 267661), since it is very close to JADES ID 207221 and suffered blending issues. The reported JADES photometric redshift for ID 44 is 2.02, very close to the spectroscopic redshift of the JADES galaxy found near ID 38; thus it is likely that they are at the same redshift and are undergoing a merger, resulting in complicated morphologies that could easily lead to large offsets between our ALMA centroids and the corresponding JADES positions. We therefore classify these two ALMA sources as having JADES counterparts, although we do not fit an SED to ID 44. Sources found to have counterparts within their given search radii in both the CANDELS catalogue and the JADES catalogue are marked in Fig. 6 with blue crosses. There are 37 ALMA-detected galaxies with counterparts in both catalogues, seven of which are ALMA sources that have not been previously reported.

Figure 5: Comparison of the flux densities measured in this work with flux densities from Dunlop et al. (2017), ASPECS (González-López et al., 2020), ASAGAO (Hatsukade et al., 2018) and GOODS-ALMA (Gómez-Guijarro et al., 2022).

Figure 6: Signal-to-noise ratio map of the deep central region, covering 4.2 arcmin\({}^{2}\), with peaks \(>3.6\,\sigma\) indicated as green boxes. Galaxies found by Dunlop et al. (2017) are shown as yellow circles, galaxies from the ASPECS survey (González-López et al., 2020) are orange circles and galaxies from the ASAGAO survey (Hatsukade et al., 2018) are red circles. A few of the brightest galaxies are also detected in the GOODS-ALMA survey (Gómez-Guijarro et al., 2022) and shown as brown circles. The gold contour shows the footprint of the deepest _HST_ data from the CANDELS/HUDF09 survey, while the entire region is covered by the JADES survey. Galaxies with counterparts in both the CANDELS catalogue and the JADES catalogue are indicated with a blue cross and galaxies with only a JADES counterpart are indicated with a red cross. There are no galaxies with a CANDELS/HUDF09 counterpart but no JADES counterpart. Numbers refer to the labels in Tables 2 and 3.
There are an additional two ALMA-detected galaxies with a counterpart only in the JADES catalogue, both of which are ALMA sources that have not been previously reported.

\begin{table}
\begin{tabular}{l c c c c}
\hline
Name & RA Dec [J2000] & \(S_{1230}\) [\(\mu\)Jy] & \(z\) & \(\log\left(M_{*}/M_{\odot}\right)\) \\
\hline
ALMA-HUDF-1 & 3:32:43.53 \(-\)27:46:39.2 & 850\(\pm\)23 & 2.85 & 11.0 \\
ALMA-HUDF-2 & 3:32:38.54 \(-\)27:46:34.6 & 749\(\pm\)10 & 2.543\({}^{*}\) & 9.8 \\
ALMA-HUDF-3 & 3:32:36.96 \(-\)27:47:27.5 & 507\(\pm\)12 & 2.47 & 10.7 \\
ALMA-HUDF-4 & 3:32:39.75 \(-\)27:46:11.6 & 488\(\pm\)21 & 1.546\({}^{*}\) & 11.1 \\
ALMA-HUDF-5 & 3:32:34.44 \(-\)27:46:59.8 & 477\(\pm\)17 & 1.413\({}^{*}\) & 10.7 \\
ALMA-HUDF-6 & 3:32:40.08 \(-\)27:47:55.6 & 381\(\pm\)25 & 1.998\({}^{*}\) & 10.7 \\
ALMA-HUDF-7 & 3:32:41.01 \(-\)27:46:31.6 & 342\(\pm\)13 & 2.40 & 10.4 \\
ALMA-HUDF-8 & 3:32:42.33 \(-\)27:46:47.0 & 258\(\pm\)12 & 2.41 & 10.6 \\
ALMA-HUDF-9 & 3:32:35.08 \(-\)27:46:47.8 & 231\(\pm\)12 & 2.497\({}^{*}\) & 10.8 \\
ALMA-HUDF-10 & 3:32:35.56 \(-\)27:47:04.2 & 189\(\pm\)12 & 3.45 & 9.9 \\
ALMA-HUDF-11 & 3:32:39.88 \(-\)27:47:15.2 & 181\(\pm\)19 & 1.095\({}^{*}\) & 10.5 \\
ALMA-HUDF-12 & 3:32:38.08 \(-\)27:46:26.6 & 168\(\pm\)9 & 1.159\({}^{*}\) & 9.6 \\
ALMA-HUDF-13 & 3:32:37.35 \(-\)27:46:45.8 & 161\(\pm\)20 & 1.846\({}^{*}\) & 9.9 \\
ALMA-HUDF-14 & 3:32:35.51 \(-\)27:46:26.8 & 154\(\pm\)36 & 1.07 & 10.1 \\
ALMA-HUDF-15 & 3:32:42.99 \(-\)27:46:50.2 & 134\(\pm\)9 & 1.036\({}^{*}\) & 10.7 \\
ALMA-HUDF-16 & 3:32:36.48 \(-\)27:46:31.8 & 133\(\pm\)14 & 1.12 & 9.5 \\
ALMA-HUDF-17 & 3:32:38.08 \(-\)27:47:14.8 & 130\(\pm\)16 & 1.850\({}^{*}\) & 10.5 \\
ALMA-HUDF-18 & 3:32:42.37 \(-\)27:46:07.8 & 126\(\pm\)13 & 1.317\({}^{*}\) & 11.0 \\
ALMA-HUDF-19 & 3:32:36.19 \(-\)27:46:28.0 & 119\(\pm\)14 & 2.307\({}^{*}\) & 10.6 \\
ALMA-HUDF-20 & 3:32:38.75 \(-\)27:48:10.4 & 114\(\pm\)19 & 2.94 & 10.1 \\
ALMA-HUDF-21 & 3:32:41.69 \(-\)27:46:55.6 & 98\(\pm\)8 & 1.776\({}^{*}\) & 10.3 \\
ALMA-HUDF-22 & 3:32:48.26 \(-\)27:46:40.8 & 96\(\pm\)19 & 1.098\({}^{*}\) & 10.2 \\
ALMA-HUDF-23 & 3:32:41.04 \(-\)27:47:48.0 & 89\(\pm\)22 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-24 & 3:32:35.78 \(-\)27:46:27.6 & 86\(\pm\)16 & 1.094\({}^{*}\) & 10.4 \\
ALMA-HUDF-25 & 3:32:41.24 \(-\)27:46:16.6 & 81\(\pm\)22 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-26 & 3:32:41.71 \(-\)27:46:45.0 & 78\(\pm\)22 & 1.45 & 10.1 \\
ALMA-HUDF-27 & 3:32:59.97 \(-\)27:47:55.8 & 77\(\pm\)15 & 2.58 & 10.3 \\
ALMA-HUDF-28 & 3:32:34.82 \(-\)27:46:31.2 & 76\(\pm\)21 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-29 & 3:32:38.50 \(-\)27:47:02.6 & 66\(\pm\)15 & 0.954\({}^{*}\) & 10.9 \\
ALMA-HUDF-30 & 3:32:37.61 \(-\)27:47:40.6 & 66\(\pm\)10 & 1.24 & 9.5 \\
ALMA-HUDF-31 & 3:32:41.45 \(-\)27:47:29.2 & 85\(\pm\)14 & 0.621\({}^{*}\) & 9.0 \\
ALMA-HUDF-32 & 3:32:37.73 \(-\)27:47:07.2 & 57\(\pm\)14 & 0.668\({}^{*}\) & 9.8 \\
ALMA-HUDF-33 & 3:32:38.59 \(-\)27:47:30.4 & 54\(\pm\)14 & 2.642\({}^{*}\) & 9.6 \\
ALMA-HUDF-34 & 3:32:40.23 \(-\)27:47:38.2 & 50\(\pm\)11 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-35 & 3:32:35.75 \(-\)27:46:39.4 & 46\(\pm\)14 & 2.07 & 9.8 \\
ALMA-HUDF-36 & 3:32:35.77 \(-\)27:46:55.4 & 42\(\pm\)12 & 1.721\({}^{*}\) & 9.8 \\
ALMA-HUDF-37 & 3:32:38.56 \(-\)27:47:05.4 & 41\(\pm\)9 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-38 & 3:32:41.83 \(-\)27:46:56.8 & 38\(\pm\)8 & 1.998\({}^{*}\) & 10.1 \\
ALMA-HUDF-39 & 3:32:39.78 \(-\)27:46:29.4 & 38\(\pm\)11 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-40 & 3:32:37.08 \(-\)27:46:17.4 & 38\(\pm\)7 & 2.226\({}^{*}\) & 9.0 \\
ALMA-HUDF-41 & 3:32:38.69 \(-\)27:46:30.8 & 34\(\pm\)9 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-42 & 3:32:43.62 \(-\)27:46:59.0 & 33\(\pm\)8 & 1.569\({}^{*}\) & 9.7 \\
ALMA-HUDF-43 & 3:32:36.37 \(-\)27:46:50.2 & 32\(\pm\)8 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-44 & 3:32:41.90 \(-\)27:46:58.6 & 32\(\pm\)8 & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-45 & 3:32:42.35 \(-\)27:46:57.4 & 30\(\pm\)7 & 2.42 & 9.9 \\
\hline
\end{tabular}
\end{table}
Table 2: Positions and flux densities (\(S_{1230}\)) for the 45 sources found in the ALMA 1-mm combined image at \(\rm S/N>3.6\). Here flux densities are measured by fitting Gaussian profiles to the sources in order to estimate the number of beams per source.

We do not find any CANDELS counterparts with no corresponding JADES counterpart, which highlights one of the benefits of the deeper catalogue of the HUDF made possible with _JWST_ (although the improvement is not dramatic, since CANDELS gets close to identifying all of our mm sources). This leaves six ALMA-detected sources with no optical or near-infrared counterpart; one of these has previously-published ALMA identifications, the remaining five do not. It is worth pointing out that two of the sources with no counterparts are close to the edge of the map where the noise is the largest (IDs 23 and 25), but the other four are found near the centre of the map where the data are much deeper (IDs 34, 37, 41 and 43). The number of beams in our ALMA map can be estimated as the area of the map (4.2 arcmin\({}^{2}\)) divided by the beam area (\(2\pi\theta_{\rm maj}\theta_{\rm min}/8\ln 2\)), which results in about 8500. From Gaussian statistics, we would expect around 1.5 beams to randomly have a significance greater than 3.6 \(\sigma\); however, there are really twice as many statistically independent noise samples due to correlations with the beam (e.g., Condon, 1997; Condon et al., 1998; Dunlop et al., 2017), so really we would expect three false-positive detections. The measured number of negative peaks more significant than 3.6 \(\sigma\) is 14, which is larger than the expectation from a Gaussian distribution; the negative peaks might not be perfectly Gaussian distributed, but this is not surprising given the non-uniform \(uv\) coverage.
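The arithmetic behind this false-positive estimate can be checked with a short sketch (our illustration, using the quoted map area and tapered beamsize):

```python
import numpy as np
from scipy.stats import norm

# Illustrative check of the estimate above: the map area divided by the
# beam area gives ~8500 beams; doubling this to account for
# beam-correlated noise samples, times the one-sided Gaussian tail
# beyond 3.6 sigma, gives ~3 spurious peaks.
area_arcsec2 = 4.2 * 3600.0                           # 4.2 arcmin^2
beam_area = 2.0 * np.pi * 1.49 * 1.07 / (8.0 * np.log(2.0))
n_beams = area_arcsec2 / beam_area                    # ~8500 beams
n_false = 2.0 * n_beams * norm.sf(3.6)                # ~3 false positives
print(round(n_beams), round(n_false, 1))
```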
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Name & JADES ID & CANDELS ID & Dunlop et al. ID & ASAGAO ID & GOODS-ALMA ID & ASPECS ID \\
\hline
ALMA-HUDF-1 & 208820 & J033243.52\(-\)274639.0 & UDF2 & 4 & A2GS9 & C06 \\
ALMA-HUDF-2 & 209117 & J033238.54\(-\)274634.0 & UDF3 & 5 & A2GS25 & C01 \\
ALMA-HUDF-3 & 204232 & J033236.96\(-\)274727.2 & UDF5 & 12 & A2GS41 & C02 \\
ALMA-HUDF-4 & 211273 & J033239.73\(-\)274611.2 & UDF8 & 16 & \(\cdots\) & C05 \\
ALMA-HUDF-5 & 207012 & J033234.43\(-\)274659.5 & UDF6 & 13 & \(\cdots\) & C03 \\
ALMA-HUDF-6 & 202563 & J033240.05\(-\)274755.4 & UDF11 & 15 & \(\cdots\) & C10 \\
ALMA-HUDF-7 & 209357 & J033241.02\(-\)274631.4 & UDF4 & 10 & \(\cdots\) & C04 \\
ALMA-HUDF-8 & 208030 & J033243.32\(-\)274647.5 & UDF7 & 14 & \(\cdots\) & C11 \\
ALMA-HUDF-9 & 208000 & J033235.07\(-\)274647.5 & UDF13 & 23 & \(\cdots\) & C07 \\
ALMA-HUDF-10 & 206834 & J033235.55\(-\)274703.8 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C09 \\
ALMA-HUDF-11 & 205449 & J033239.88\(-\)274715.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C16 \\
ALMA-HUDF-12 & 209777 & J033238.02\(-\)274626.2 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C08 \\
ALMA-HUDF-13 & 208134 & J033237.35\(-\)274645.4 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C18 \\
ALMA-HUDF-14 & 209492 & J033235.48\(-\)274626.6 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C23 \\
ALMA-HUDF-15 & 207739 & J033242.98\(-\)274649.9 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C13 \\
ALMA-HUDF-16 & 209285 & J033236.44\(-\)274631.5 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C12 \\
ALMA-HUDF-17 & 205379 & J033238.79\(-\)274714.7 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C17 \\
ALMA-HUDF-18 & 206183 & J033242.37\(-\)274707.6 & UDF16 & \(\cdots\) & \(\cdots\) & C15 \\
ALMA-HUDF-19 & 209617 & J033236.17\(-\)274627.6 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C19 \\
ALMA-HUDF-20 & 201501 & J033238.72\(-\)274810.3 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C24 \\
ALMA-HUDF-21 & 2072727 & J033241.68\(-\)274655.4 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C14a \\
ALMA-HUDF-22 & 208277 & J033234.85\(-\)274640.4 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C25 \\
ALMA-HUDF-23 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & C20 \\
ALMA-HUDF-24 & 209480 & J033235.77\(-\)274627.4 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C20 \\
ALMA-HUDF-25 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & C26 \\
ALMA-HUDF-26 & 208267 & J033234.67\(-\)274644.5 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C26 \\
ALMA-HUDF-27 & 204579 & J033235.98\(-\)274725.6 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C21 \\
ALMA-HUDF-28 & 129574 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & C33 \\
ALMA-HUDF-29 & 206703 & J033238.48\(-\)274702.4 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C33 \\
ALMA-HUDF-30 & 203384 & J033237.61\(-\)274744.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C22 \\
ALMA-HUDF-31 & 204483 & J033241.45\(-\)274729.3 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C32 \\
ALMA-HUDF-32 & 206205 & J033237.73\(-\)274706.9 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C32 \\
ALMA-HUDF-33 & 204449 & J033238.55\(-\)274730.2 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C27 \\
ALMA-HUDF-34 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & C27 \\
ALMA-HUDF-35 & 208812 & J033235.73\(-\)274639.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C27 \\
ALMA-HUDF-36 & 124908 & J033235.76\(-\)274655.0 & UDF15 & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-37 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-38 & 207221 & J033241.83\(-\)274657.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C14b \\
ALMA-HUDF-39 & 265959 & J033239.76\(-\)274629.3 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C14b \\
ALMA-HUDF-40 & 210730 & J033237.07\(-\)274617.1 & \(\cdots\) & \(\cdots\) & \(\cdots\) & C31 \\
ALMA-HUDF-41 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-42 & 207079 & J033243.61\(-\)274658.7 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-43 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-44 & 267661 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
ALMA-HUDF-45 & 207277 & J033242.35\(-\)274657.0 & \(\cdots\) & \(\cdots\) & \(\cdots\) & \(\cdots\) \\
\hline \end{tabular} \end{table} Table 3: Catalogue counterparts of our 45 ALMA sources: JADES and CANDELS IDs, together with matches in the Dunlop et al. (2017), ASAGAO, GOODS-ALMA and ASPECS source lists.

This is larger than the expectation from a Gaussian distribution; the negative peaks might not be perfectly Gaussian distributed, but this is not surprising given the non-uniform \(uv\) coverage. On the other hand, the number of ALMA-detected sources with no optical or near-infrared counterpart (six) is more comparable to the number of negative peaks, and so we cannot rule out the possibility that some of them are false positives. In Fig. 7 we show the distributions of redshifts, magnitudes, and colours from our JADES catalogue. For the redshifts and colours, we filter out two galaxies which have a S/N \(<\) 5 in the F356W band as the photometry is not reliable. For the remaining galaxies, the median redshift uncertainty is \(\Delta z/(1+z)\simeq 0.03\), so we expect them to be accurate. We display the distributions of all 39/45 ALMA galaxies with _JWST_ detections and also highlight the distributions of the eight ALMA galaxies that were not found in previous surveys. The first (top left) panel in Fig. 7 shows the distribution of redshifts (spectroscopic where available, otherwise photometric). We see that the redshift distribution is flat around \(z\simeq 2\), and our new galaxies appear to follow this trend. Next we show the distributions of the magnitudes of the sources in the NIRCam images (F277W, F335M, F356W, F410M and F444W). We see that most ALMA sources have magnitudes ranging from 19 to 25, with the two faintest sources extending out to \(>30\) (these are the two sources for which we do not fit SEDs). Lastly we show distributions for three NIRCam colours; most ALMA galaxies are fairly red (i.e. brighter at longer wavelengths), and our new, fainter ALMA galaxies tend to follow the same distribution.

### Stellar mass-redshift distribution

As described in Section 2.4.1, the photometric redshifts and stellar masses for most JADES galaxies have been estimated using the extensive multiwavelength imaging available for the HUDF. It is therefore of interest to see how the photometric redshifts and stellar masses of our mm-selected galaxies compare to the typical galaxies found in this field. For our list of sources with a JADES counterpart, we filter out all sources with S/N \(<\) 5 in the F356W band, since these sources will not have reliable SED fits. For the remaining sources, we check to see if there is a spectroscopic redshift, and otherwise use the photometric redshift. In Fig. 8 we plot stellar mass versus redshift for all JADES galaxies (McLeod et al. in prep.), and highlight our ALMA Band-6-selected sources with good SED fits in red. We find that most of our ALMA-selected sources are high-stellar-mass galaxies between \(z=\) 1 and 3, similar to what was found in earlier ALMA surveys of the HUDF (e.g., Dunlop et al., 2017; McLure et al., 2018; Aravena et al., 2020). In particular, there are 53 galaxies with \(M_{*}>10^{10}\) M\({}_{\odot}\), 22 of which have been selected in our ALMA image.
Also of note is that at 1.23 mm we are sensitive primarily to \(z>1\) objects, since in our sample of 36 galaxies with spectroscopic or photometric redshifts, only three are at \(z<1\). In a similar vein, it may also be worth noting that all of the \(\log(M_{*}/{\rm M}_{\odot})>10.3\) galaxies at \(2<z<3\) are detected in our ALMA image. In order to investigate the difference between galaxies with \(M_{*}>10^{10}\) M\({}_{\odot}\) that we have detected with ALMA versus those that we have not detected with ALMA, in Fig. 9 we show the distributions of three select NIRCam magnitudes and three NIRCam colours for the subsamples of 22 ALMA-detected galaxies and 31 ALMA-undetected galaxies. We find that there is no discernible difference in magnitudes between the two samples; however, mm-bright ALMA sources tend to have redder NIRCam colours than the galaxies with similarly high stellar masses that we have _not_ detected with ALMA.

### Stacking on near-IR-selected positions

One major benefit of the deep ancillary catalogues available in the HUDF is the ability to stack on the positions of undetected galaxies in different stellar mass and redshift bins, thus estimating the statistical properties of fainter mm-emitting sources that are not individually detectable in our image. To do this, we follow a procedure similar to Simstack (Viero et al., 2013). However, since we adapt this for a non-circular beam and non-uniform noise distribution, it is worth describing what we do in a little detail. First, we mask out all galaxies that we have detected in our ALMA map (see Table 2), with a source mask set to be \(3\times\) FWHM in diameter. We also restrict this stacking analysis to the deep central map shown in Fig. 6. Then we subtract the weighted mean of the image - this is crucial, since the 'stack' is really the covariance between a map and catalogue (see section 3 of Marsden et al., 2009) and will give a biased result unless the map has zero mean. Next, we define a grid of four stellar mass bins, logarithmically spaced between \(\log\left(M_{*}/{\rm M}_{\odot}\right)=7.4\) and \(\log\left(M_{*}/{\rm M}_{\odot}\right)=11.4\), with a width of \(\Delta\log\left(M_{*}/{\rm M}_{\odot}\right)=1\). We also define a grid of four redshift bins between 0 and 8, with a width of 2. For the stacking catalogue, we again turn to JADES, using stellar masses and redshifts given by McLeod et al. (in prep.); here redshifts are spectroscopic if available, otherwise photometric. These bins are chosen because they are expected to contain the galaxies comprising the vast majority of the CIB (see Section 5.3 for more details). For each redshift and stellar mass bin we produce a 'hits' map, which is simply a copy of our ALMA map with pixel values set to the number of JADES galaxies within the bin contained within each pixel. We convolve each hits map with the ALMA beam, thereby producing a model image of the sky where the only free parameter of each map is its amplitude, which in this case can be interpreted as the best-fit flux density of the galaxies within the given stellar mass and redshift bin (a short code sketch of this construction is given below). As was done with the data, we also subtract the weighted mean of the convolved maps. As described in Viero et al. (2013), in general there are correlations between redshift bins simply due to the presence of large-scale structure, and to take these into account one would have to fit for all of the amplitudes in the defined bins simultaneously.
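As a concrete illustration, the hits-map construction just described can be sketched in a few lines; the amplitude fit it feeds into is formalized in Eq. 3 below. This is not the pipeline used here: it assumes an axis-aligned Gaussian beam (the real beam is slightly rotated), takes beam widths in pixel units, and assumes that `data` already has its weighted mean subtracted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stack_bin(data, weights, x_pix, y_pix, sigma_x, sigma_y):
    """Best-fit flux density for one (z, M*) bin.

    data    : ALMA map (Jy/beam) with the weighted mean already subtracted
    weights : inverse-variance map, w_j = 1/sigma_j^2
    x_pix, y_pix : integer pixel positions of the catalogue galaxies in the bin
    sigma_x, sigma_y : Gaussian beam widths in pixels (FWHM / 2.355)
    """
    hits = np.zeros_like(data)
    np.add.at(hits, (y_pix, x_pix), 1.0)          # number of galaxies per pixel
    model = gaussian_filter(hits, sigma=(sigma_y, sigma_x))
    model *= 2.0 * np.pi * sigma_x * sigma_y      # unit peak per galaxy, so the
                                                  # fitted amplitude is in Jy/beam
    model -= np.average(model, weights=weights)   # zero weighted mean, as for data
    # Weighted linear regression of the data against the model (Eq. 3 below):
    return np.sum(weights * data * model) / np.sum(weights * model**2)
```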
In our case, redshift bins have a width of 2, so these correlations are expected to be negligible. We therefore fit for the amplitude in each bin independently, which reduces to solving a simple weighted linear regression between the ALMA map and maps of 'hits' in the JADES catalogue for each bin. The solution is \[\hat{S}_{\alpha}=\frac{\sum_{j}w_{j}D_{j}N_{\alpha}^{j}}{\sum_{j}w_{j}(N_{ \alpha}^{j})^{2}}, \tag{3}\] where \(j\) labels each map pixel, \(\alpha\) denotes the redshift/stellar mass bin, \(N_{\alpha}\) is the hits map of bin \(\alpha\) convolved with the ALMA beam with the weighted mean subtracted, \(D\) is the data map with the weighted mean subtracted and \(w_{j}=1/\sigma_{j}^{2}\) are the inverse-variance weights. The sum is performed over all the pixels in the map. Equation 3 is effectively the covariance between the map and the catalogue, but with the ALMA beam taken into account. To estimate the uncertainties in \(\hat{S}_{\alpha}\), for each redshift and stellar mass bin we generate 1000 catalogues with the same number of sources in each bin but with random positions, and evaluate Eq. 3 for each one. We find the distribution of values to be well-described by a Gaussian, so we take the standard deviation of the random catalogue flux densities to be the 1 \(\sigma\) error in \(\hat{S}_{\alpha}\). In order to visually show our results, we also evaluate Eq. 3 after shifting the hits map \(N_{\alpha}\) relative to the data map \(D\) within boxes of 10 arcsec; this is effectively the cross-correlation between the two maps, where the central pixel is the zero-lag cross-correlation (or the covariance), which is the value we are most interested in. The top panel of Figure 10 shows our results from the cross-correlation, while in the bottom panel we list the central pixel values, which are the best-fit flux densities of the galaxies within each bin. For discussing the CIB contributions we want to know the pixel sum of the best-fit average surface brightness for each bin (weighted by the inverse-variance), which we calculate as \[\hat{I}_{\alpha} =\frac{\hat{S}_{\alpha}}{2\pi\sigma_{\rm maj}\sigma_{\rm min}} \frac{\sum_{j}w_{j}N_{\alpha}^{j}}{\sum_{j}w_{j}} \tag{4}\] \[=\frac{\hat{S}_{\alpha}N_{\alpha,\rm eff}}{A}.\] Here \(\sigma_{\rm maj}\) and \(\sigma_{\rm min}\) are the beam major and minor axes (in standard deviation units), respectively, the quantity \(A\) is the solid angle of the map and \(N_{\alpha,\rm eff}\) is the effective total number of sources from catalogue \(\alpha\) in the map, weighted by the noise, \[N_{\alpha,\rm eff}=\frac{\sum_{j}w_{j}N_{\alpha}^{j}}{\sum_{j}w_{j}}\frac{A}{ 2\pi\sigma_{\rm maj}\sigma_{\rm min}}. \tag{5}\] This is just the average number of sources per beam (weighted by the noise), times the number of beams in the map. In Fig. 10 we provide the values of \(N_{\alpha,\rm eff}\) along with \(\hat{S}_{\alpha}\). Note that when we calculate the weighted mean surface brightness of the sky from our model, the weighted mean should not be subtracted from \(N_{\alpha}^{j}\). It is worth noting that a simpler stacking technique (just summing pixel values at the positions of _JWST_ galaxies; see Marsden et al., 2009) produces similar results, merely with slightly less significance. As a completely separate null test, we also stacked our ALMA map at the positions of 21 sources from the JADES catalogue flagged as stars that happen to fall within our ALMA map. This stack is shown in Fig. 11 and we can see that it is consistent with noise. In Fig. 
10 we see that many high stellar mass bins are blank; this is because we have either detected all of the galaxies within these bins, or there were no galaxies within these bins to begin with. We also see that there are stacked peaks \(\gtrsim 3\,\sigma\) in all bins with stellar masses between \(10^{8.4}\,\rm M_{\odot}\) and \(10^{10.4}\,\rm M_{\odot}\) and with redshifts between 0 and 4. Across the lowest stellar mass bin the stacks tend to be positive but with error bars overlapping 0, meaning that galaxies with stellar masses \(<10^{8.4}\,\rm M_{\odot}\) barely contribute to our ALMA map (a topic that is explored further in Section 5.3). In terms of stellar mass, the main conclusions of these stacking results are that: (1) there are about 60 galaxies with stellar masses around \(10^{10}\,\rm M_{\odot}\) lying just below our \(3.6\,\sigma\) ALMA threshold, which have flux densities around \(15\,\mu\)Jy; (2) there are more than 300 galaxies with stellar masses around \(10^{9}\,\rm M_{\odot}\) that can also be statistically detected in the ALMA map, with individual flux densities of a few \(\mu\)Jy; and (3) galaxies with stellar masses around \(10^{8}\,\mathrm{M}_{\odot}\) or below have flux densities that are too low to be detected in the ALMA map, even statistically.

Figure 7: Distribution of ALMA source properties matched to our JADES catalogue (Rieke & the JADES Collaboration, 2023, McLeod et al. in prep.). The first (top left) panel shows the distribution of redshifts (spectroscopic when available, otherwise photometric). The next five panels show distributions of NIRCam magnitudes for the ALMA sources in the JADES catalogue. The final three panels show distributions of three selected NIRCam colours. The distributions from all 39/45 ALMA galaxies with _JWST_ detections are shown in blue, while the distributions from the subset of eight new ALMA galaxies with _JWST_ detections are shown in red. In the redshift panel (top left) there are only 37 ALMA galaxies with available redshifts, and six new ALMA galaxies with redshifts. In the bottom row we also only plot the six galaxies with available redshifts because the other two have quite uncertain colours.

## 5 The cosmic infrared background in the HUDF

The absolute intensity of extragalactic light has been studied at all wavelengths from \(\gamma\)-rays to the radio (Hill et al., 2018). Peaking at around \(200\,\mu\)m, the CIB is the brightness of the extragalactic sky at infrared wavelengths, averaged over the whole sky, after subtracting all contributions originating from the Solar System and the Milky Way. This absolute value tells us about the history of star formation and has been measured using the FIRAS instrument onboard the _COBE_ satellite (Fixsen et al., 1998). The 1-mm background lies on the long-wavelength side of the CIB and we can try to use our new map to estimate what fraction can be accounted for in DSFGs.

### Resolved source contribution to the cosmic infrared background

An important question is whether the intensity of the CIB can be recovered by summing the contribution from known galaxies, or if there exists an additional population of sources or a genuinely diffuse component of the CIB. This question has been addressed by many previous studies at wavelengths around 1 mm (e.g.
Penner et al., 2011; Viero et al., 2013; Dunlop et al., 2017; Hatsukade et al., 2018; Gonzalez-Lopez et al., 2020; Gomez-Guijarro et al., 2022; Chen et al., 2023), with results typically in the tens of per cent range, depending on the precise wavelength. Here we explore this question with our new ALMA map at \(1.23\,\mathrm{mm}\) and our new JADES catalogue. To start with, the sum of our detected source flux densities (\(>3.6\,\sigma\)) is \((7.33\pm 0.10)\,\mathrm{mJy}\), and these sources are detected within an area of \(1.15\times 10^{-3}\,\mathrm{deg}^{2}\). Therefore we have resolved a total intensity of \((6.35\pm 0.09)\,\mathrm{Jy}\,\mathrm{deg}^{-2}\) in individually-detected DSFGs. In addition to this, our stacking analysis (Fig. 10) demonstrates that our map is also statistically sensitive to fainter galaxies. Multiplying \(\hat{S}_{\alpha}\) by \(N_{\mathrm{eff}}\) in each bin shown in Fig. 10, and summing, yields a flux density of \((2.6\pm 0.5)\,\mathrm{mJy}\), or an intensity of \((2.4\pm 0.5)\,\mathrm{Jy}\,\mathrm{deg}^{-2}\) (where the area used in our stack is \(1.11\times 10^{-3}\,\mathrm{deg}^{2}\), slightly less than the full map due to our source mask); this stacking result is larger than has previously been possible, due to the combination of a deeper ALMA map and larger catalogue from _JWST_.

Figure 8: Stellar mass versus redshift for JADES galaxies (McLeod et al. in prep.), with JADES galaxies detected in our \(S_{1230}\) image highlighted in red (circles indicate photometric redshifts, stars indicate spectroscopic redshifts). The apparent vertical features are a result of the grid used for the photometric redshift fitting.

Lastly, noting that there are no galaxies with \(S_{1230}>0.85\) mJy present in our map, we could also add a contribution from brighter sources. The GOODS-ALMA survey (Gomez-Guijarro et al., 2022) covered a much larger area with ALMA at 1 mm (about 72 arcmin\({}^{2}\)) and found 22 galaxies with \(S_{1230}>0.85\) mJy, corresponding to an intensity of \((1.31\pm 0.02)\) Jy \(\mathrm{deg}^{-2}\) (including the factor of 0.8 to convert their flux densities from 268 GHz to 243 GHz), and so we can add this to our resolved CIB contribution as well. In total, we directly or statistically detect a total CIB intensity of \((8.7\pm 0.5)\) Jy \(\mathrm{deg}^{-2}\) in our map, and by adding in a contribution from brighter sources, we estimate that the true value of the CIB is \((10.0\pm 0.5)\) Jy \(\mathrm{deg}^{-2}\). We note that if we instead perform our stacking analysis on the full map (without masking bright detected sources) we obtain a CIB estimate of \((8.7\pm 0.8)\) Jy \(\mathrm{deg}^{-2}\), consistent with our approach of detecting bright objects and then masking them to add the stacking result. Now we have to determine what fraction of the background we have accounted for.

### The absolute value of the cosmic infrared background

Estimating the absolute level of the CIB at 1 mm is subject to larger uncertainties than our estimate of the amount contributed by sources. The spectrum of the CIB was measured by _COBE_-FIRAS (Fixsen et al., 1998), but with fairly large systematics-dominated uncertainties. More recently Odegard et al. (2019) used _Planck_ in combination with FIRAS to estimate the total intensity of the CIB in the _Planck_ High-Frequency Instrument channels, including at 217 and 353 GHz (the closest _Planck_ frequencies to the mean frequency of our ALMA image, 243 GHz). The best-fit CIB spectral shape from Fixsen et al.
(1998) is a modified blackbody function of the form \[I(\nu)=A\left(\frac{\nu}{\nu_{0}}\right)^{\beta}B_{\nu}(T_{\rm d}), \tag{6}\] where \(A\) is a constant, \(\nu_{0}\) is a fiducial frequency, \(\beta=0.65\), \(B_{\nu}\) is the Planck function and \(T_{\rm d}=18.5\) K. The uncertainties in the three fit parameters \(A\), \(\beta\) and \(T_{\rm d}\) are significantly correlated (the correlation coefficients reported in the paper are larger than 95 per cent), meaning that there is effectively only one free parameter in the fit. Lacking the data needed to do the fit ourselves, we simply fix \(\beta\) and \(T_{\rm d}\) (thus fixing the shape of the CIB spectrum), but take into account the uncertainties in the amplitude throughout our calculations.

Figure 9: _Top row:_ Distributions of three select NIRCam magnitudes for all ALMA-detected galaxies with _JWST_ counterparts and with stellar masses above \(10^{10}\,\mathrm{M}_{\odot}\), compared to all _JWST_ galaxies with stellar masses above \(10^{10}\,\mathrm{M}_{\odot}\) that are not detected by ALMA. _Bottom row:_ Same as top row but for three select NIRCam colours. While we do not see a significant difference between the two samples in terms of magnitude, the mm-bright galaxies detected by ALMA with high stellar masses tend to have redder NIRCam colours compared to ALMA-undetected galaxies with equally large stellar masses.

Figure 10: _Top:_ Stacks at the positions of JADES galaxies in redshift and stellar mass (from McLeod et al. in prep.) bins, after masking all of our detected ALMA sources. The cutouts are \(10\,\mathrm{arcsec}\times 10\,\mathrm{arcsec}\). The colour represents S/N, where the signal is the variance-weighted mean and the noise is the uncertainty in the variance-weighted mean. _Bottom:_ Central pixel values and uncertainties of the top panel (i.e. our best estimate of the average 1230-\(\mu\)m flux density of galaxies within each bin), with the number of JADES galaxies contributing to each bin also given. For a definition of the quantity \(N_{\mathrm{eff}}\), see Eq. 5.

To interpolate the CIB intensity to the frequency of our ALMA map, we first estimate what the transmission function of our ALMA map is, and use that to estimate the effective frequency of the image (which may be different from the mean frequency of 243 GHz). Each ALMA observation consists of two side bands 4 GHz wide, whose central frequencies are separated by 12 GHz, and for each observation we have already computed the central rms of the observatory-produced MFS image (\(\sigma_{i}\); see Section 3). Assuming each observation's individual transmission function is flat (in \(\nu\)) across the two sidebands, we can calculate the mean transmission function of all of the relevant observations used to produce our final map, weighted by each observation's inverse variance.
To each weight we must also include a factor of the ratio of the given observation's beam area (set by \(\theta_{{\rm maj},i}\) and \(\theta_{{\rm min},i}\)) to the beam area of our combined image. Integrating the fixed CIB spectral shape through this transmission function then gives the CIB intensity at the effective frequency of our map; propagating the amplitude uncertainty with 10,000 random realizations yields \(19^{+6}_{-5}\) Jy \(\mathrm{deg}^{-2}\) based on Fixsen et al. (1998) and \(23^{+6}_{-8}\) Jy \(\mathrm{deg}^{-2}\) based on Odegard et al. (2019), where the error bars are the 68 per cent confidence intervals of the posterior distributions and the central values are the means of the distributions. We show these estimates graphically in Fig. 12. Specifically we plot with a blue band the 68 per cent range of the absolute value of the CIB estimated using FIRAS data alone (Fixsen et al., 1998) and with a pink band we plot the estimate using FIRAS data in combination with _Planck_ for foreground removal (Odegard et al., 2019). Although the Odegard et al. (2019) results substantially shrink some uncertainties compared to the Fixsen et al. (1998) results, that is really only the case at shorter wavelengths; at 1.2 mm the uncertainties are similar, but the background is actually a little higher in the more recent paper. The difference between the two estimates indicates that the background at these wavelengths is still quite uncertain. In Fig. 12 we also present the contribution of sources to the CIB, by plotting the cumulative intensity of resolved galaxies from this work as a function of their flux density, including the estimated contribution from \(S_{1230}>0.85\) mJy galaxies and also the stacking-estimate contribution from galaxies in the JADES catalogue. For reference we show the same analysis using the results from Dunlop et al. (2017) and ASPECS (Gonzalez-Lopez et al., 2020), where in both works an estimate of the contribution to the CIB from stacking was performed. Our new results are similar to those from previous studies, with our new analysis detecting the faintest sources and finding the highest total background value in the HUDF region.

### Consistency between resolved sources and the absolute value of the cosmic infrared background

Clearly the absolute value measurements of the CIB are larger than what we find by summing the flux densities of known galaxies (detected by both ALMA and _JWST_). It is therefore important to consider whether or not the two kinds of measurement are consistent. There are fluctuations in the CIB (e.g., Planck Collaboration XXX, 2014) whose amplitude depends on the area observed. Studies of the CIB at submm wavelengths found that \(\delta I/I=15\) per cent on scales around 10 arcmin (Viero et al., 2009, 2013a), which is larger than the area of our deep ALMA map.
To find a better estimate of the expected amplitude of these fluctuations for maps the size of our HUDF image, we use the Simulated Infrared Dusty Extragalactic Sky (SIDES) mock catalogue (Bethermin et al., 2017; Gkogkou et al., 2023). Briefly, SIDES uses dark matter halos in a simulated light cone to obtain clustered positions on the projected sky, then attaches stellar masses and far-infrared SEDs to the halos using a two star-formation-mode model of galaxy evolution. The total simulated area of the SIDES simulation is \(117\,\mathrm{deg}^{2}\), but here we only use a \(1\,\mathrm{deg}^{2}\) tile from the full simulation. Since each simulated galaxy has an SED, we first estimate their 243-GHz flux densities by integrating their SEDs through the transmission function derived above. We next take 100 random patches from the \(1\,\mathrm{deg}^{2}\) tile, each with the area of our image of the HUDF (here \(4.2\,\mathrm{arcmin}^{2}\)). The CIB intensity of each random patch is then just the sum of the flux densities of the galaxies in the patch divided by the area. However, since we find no galaxies with \(S_{1230}>0.85\) mJy in our real ALMA map, we subtract these from our random patches as well; this assumes that there are no additional fluctuations from the population of \(S_{1230}>0.85\) mJy galaxies (i.e. we simply add a constant for the bright part of the background). Additionally, we neglect the contribution from galaxies at stellar masses where we were unable to detect any statistical signal in the real data. Following this procedure, we find that the mean CIB value within \(4.2\,\mathrm{arcmin}^{2}\) patches of the SIDES simulation (after subtracting the contribution from \(S_{1230}>0.85\) mJy galaxies) is \(11.1\,\mathrm{Jy}\,\mathrm{deg}^{-2}\), comparable to the value of \((8.6\pm 0.5)\) Jy \(\mathrm{deg}^{-2}\) that we have measured. The standard deviation of the SIDES simulation patches is \(1.9\,\mathrm{Jy}\,\mathrm{deg}^{-2}\), thus fluctuations are \(\delta I/I=17\) per cent. We must therefore take this into account when comparing the mean CIB value from FIRAS averaged over nearly the whole sky to the single \(4.2\,\mathrm{arcmin}^{2}\) patch we have observed. We now estimate the probability of measuring a CIB value of \((10.0\pm 0.5)\) Jy \(\mathrm{deg}^{-2}\) assuming that the true value is either \(19^{+6}_{-5}\) Jy \(\mathrm{deg}^{-2}\) (Fixsen et al., 1998) or \(23^{+6}_{-8}\) Jy \(\mathrm{deg}^{-2}\) (Odegard et al., 2019) and that fluctuations are 17 per cent. We take our 10,000 CIB absolute value realizations discussed above and additionally draw 10,000 Gaussian-distributed values for a fluctuation amplitude, then multiply these together. We draw 10,000 Gaussian-distributed numbers for our measured CIB value, and take the difference between these and the possible absolute values. Finally, our statistic is the fraction of the area of the resulting posterior distribution that is less than zero, which can be interpreted as the probability of obtaining our actual measured CIB value or less while taking into account both the measurement uncertainties and intrinsic CIB fluctuations. We find that the probability of measuring a CIB value of \((10.0\pm 0.5)\) Jy \(\mathrm{deg}^{-2}\), given the absolute CIB measurement from Fixsen et al. (1998) is \(8.9\) per cent, or \(5.3\) per cent given the absolute CIB measurement from Odegard et al. (2019).
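This Monte Carlo calculation is simple to reproduce. The sketch below approximates the quoted asymmetric uncertainties by symmetric Gaussians (assumed widths of 5.5 and 7 Jy deg\({}^{-2}\)), which is why it returns values close to, but not exactly, the 8.9 and 5.3 per cent quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

fluct = rng.normal(1.0, 0.17, n)        # 17 per cent intrinsic CIB fluctuations
measured = rng.normal(10.0, 0.5, n)     # our resolved CIB estimate (Jy/deg^2)

for label, mu, sig in [("Fixsen et al. (1998)", 19.0, 5.5),
                       ("Odegard et al. (2019)", 23.0, 7.0)]:
    absolute = rng.normal(mu, sig, n)   # absolute CIB value realizations
    # Chance that the fluctuated true sky falls at or below our measured value:
    p = np.mean(absolute * fluct - measured < 0.0)
    print(f"{label}: P = {p:.3f}")      # roughly 0.08-0.09 and 0.05-0.06
```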
Assuming that the absolute value measurements from FIRAS are correct and that our measurement is also correct, we can calculate the required level of statistical excursion (in units of \(\sigma=0.17\)) corresponding to the HUDF. Using the same 10,000 random values for the absolute FIRAS values and the measured values, we find that the CIB fluctuation at the position of the HUDF must be \(-2.4^{+0.4}_{-1.4}\,\sigma\) using the CIB value from Fixsen et al. (1998), or \(-2.9^{+0.3}_{-1.2}\,\sigma\) from Odegard et al. (2019) (here the central values are the means of the posterior distributions and the error bars are 68 per cent confidence intervals). What this means is that the variance in HUDF fields is large enough that our results can explain the whole of the CIB provided that the HUDF happens to be a relatively mild (\(\simeq 2\,\sigma\)) underdense direction on the sky.6 Footnote 6: The HUDF was not selected entirely randomly; however, there is no particular reason to believe that the criteria used for its selection would make it likely to have a lower than average background (Beckwith et al., 2006). As a final check, we use the simulated catalogue of galaxies from SIDES to estimate the fraction of the CIB emitted by galaxies with stellar masses between \(10^{7.4}\,\mathrm{M}_{\odot}\) and \(10^{11.4}\,\mathrm{M}_{\odot}\) and with redshifts between 0 and 8 (namely the parameter space over which we stacked on undetected JADES galaxies). For each of the 100 random patches described above, we also calculate the sum of the flux densities from all galaxies within our stacking range divided by the sum of the flux densities of all of the galaxies in the HUDF-sized region. We find that the average ratio is 97 per cent, with a standard deviation of 1 per cent. Thus if we accept that SIDES provides a reasonable model for counts at these wavelengths, we do not expect that we are missing a significant contribution to our ALMA measurement of the CIB from even fainter and lower stellar mass galaxies. Turning back to the data, if we include sources detected by ALMA, the contribution to the CIB from \(10^{10.4}\)-\(10^{11.4}\,\mathrm{M}_{\odot}\) galaxies is \((3.5\pm 0.1)\) Jy \(\mathrm{deg}^{-2}\), from \(10^{9.4}\)-\(10^{10.4}\,\mathrm{M}_{\odot}\) galaxies is \((3.2\pm 0.1)\) Jy \(\mathrm{deg}^{-2}\), from \(10^{8.4}\)-\(10^{9.4}\,\mathrm{M}_{\odot}\) galaxies is \((1.1\pm 0.2)\) Jy \(\mathrm{deg}^{-2}\) and from \(10^{7.4}\)-\(10^{8.4}\,\mathrm{M}_{\odot}\) galaxies is \((0.5\pm 0.4)\) Jy \(\mathrm{deg}^{-2}\). If we stack on the next lowest mass bin (\(10^{6.4}\)-\(10^{7.4}\,\mathrm{M}_{\odot}\)) over the full redshift range we obtain a CIB contribution of \((-0.5\pm 0.5)\) Jy deg\({}^{-2}\). This is consistent with the CIB having essentially converged over the stellar mass range that we have probed. A potential issue here is the completeness of the JADES catalogue within our stacking range. For our 100 random SIDES patches we also keep track of the total number of galaxies with stellar masses and redshifts within our stacking region. We find that the average number of galaxies is 1890 (with a standard deviation of 150). In the JADES catalogue there are 1856 galaxies in our ALMA image with stellar masses and redshifts within our stacking range, which is consistent with the total number of galaxies expected from the SIDES simulation. For reference, there are 1561 galaxies from the CANDELS catalogue that are in our ALMA image footprint within the same stacking range.
Adding JADES has helped us to find more of the CIB within our ALMA map, and it seems that going even deeper in the optical/near-IR will not add significantly to the source-derived CIB estimate. What these numbers indicate is that our estimate of the 1.23-mm CIB (from individually-detected galaxies, together with statistical stacking results) appears to contain essentially all of the possible galaxies that would contribute to the CIB, and that it is genuinely lower than what the mean value is estimated to be. However, the chances that our small patch of the extragalactic sky falls on a negative fluctuation of the CIB are not small enough to rule out the hypothesis that we have indeed recovered essentially the entire CIB from these known galaxies.

## 6 Improvement over previous studies

The deepest previously-published ALMA survey of the HUDF at 1 mm is ASPECS and so a question of interest is how much of an improvement we have achieved by including all of the additional data. Qualitatively, looking at Table 1 we can see that ASPECS is by far the deepest of the individual surveys. The maps from Dunlop et al. (2017), ASAGAO and GOODS-ALMA overlap with the entire ASPECS map and so by combining them with ASPECS the result must be deeper. Additionally, by including more individual pointings, we have been able to make some regions even deeper still. To quantify the difference between ASPECS and our combined image, we downloaded the ASPECS map produced using CASA in the MFS mode and made public by the ASPECS team,7 then ran the primary-beam-corrected image through our algorithm for generating the noise map (see Section 2.3). The pixel size of the ASPECS map is 0.2 arcsec, the same as our \(uv\)-combined map, and the beamsizes are very similar (\(1.49\,\mathrm{arcsec}\times 1.07\,\mathrm{arcsec}\) versus \(1.53\,\mathrm{arcsec}\times 1.08\,\mathrm{arcsec}\)), so we simply computed the ratio of the two noise maps to assess the amount by which the noise improves with the additional data (a minimal version of this comparison is sketched below). Unsurprisingly we find that across the entire ASPECS region the noise (meaning here the rms after masking sources) in our map is smaller, ranging from about 5 per cent smaller in the central region to about 50 per cent smaller near the edges and around the deepest individual pointings. The ASPECS map was made going out to a primary beam value of 0.1 and covers a total area of 4.2 arcmin\({}^{2}\), 3.3 arcmin\({}^{2}\) of which has a noise level less than 35 \(\mu\)Jy beam\({}^{-1}\), whereas our map was made going out to a primary beam value of 0.2 and all 4.2 arcmin\({}^{2}\) has a noise level less than 35 \(\mu\)Jy beam\({}^{-1}\). Thus by combining the different data sets we not only reduce the noise across the majority of the map by 5 per cent, but we also expand the area where the map is at its most sensitive. Footnote 7: [https://almascience.nrao.edu/alma-data/lp/ASPECTS](https://almascience.nrao.edu/alma-data/lp/ASPECTS) Gonzalez-Lopez et al. (2020) searched the ASPECS map for sources down to a fidelity of 0.5, and their lowest significance source had a S/N of 3.3. Their catalogue contains a total of 35 sources (four of which are not detected in our map), whereas we find 45 sources; it is worth noting that all 45 of our sources are contained within the area mapped by ASPECS. Thus the extra depth we have been able to achieve by combining archival ALMA data with ASPECS has led to a significant increase in detected sources.
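For readers wishing to repeat this kind of comparison, the pixelwise ratio of two noise maps on the same grid is a one-liner; the filenames below are placeholders, not the actual data products.

```python
import numpy as np
from astropy.io import fits

# Placeholder filenames; both maps are assumed to share the same 0.2-arcsec grid.
combined = fits.getdata("combined_1mm_noise.fits")
aspecs = fits.getdata("aspecs_1mm_noise.fits")

ratio = combined / aspecs   # < 1 wherever the combined map is deeper
print("5th/50th/95th percentiles:", np.nanpercentile(ratio, [5, 50, 95]))
```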
In addition to detecting more sources, we are able to recover more of the CIB through stacking thanks to our deeper JADES catalogue. If we stack on the 1661 galaxies from the CANDELS catalogue with stellar masses between \(10^{7.4}\,\mathrm{M}_{\odot}\) and \(10^{11.4}\,\mathrm{M}_{\odot}\) and with redshifts between 0 and 8 (after masking detected ALMA sources) we obtain (\(1.5\pm 0.4\)) Jy deg\({}^{-2}\), compared to (\(2.4\pm 0.5\)) Jy deg\({}^{-2}\) using the JADES catalogue. Now turning to the wider HUDF region, Hatsukade et al. (2018) also combined the ASAGAO survey with the survey from Dunlop et al. (2017) and the first GOODS-ALMA survey (Franco et al., 2018); their final map has a mean rms of about 75 \(\mu\)Jy beam\({}^{-1}\), with a beamsize of 0.59 arcsec \(\times\) 0.53 arcsec and a pixel size of 0.1 arcsec after applying a 250 k\(\lambda\) taper. Our shallow and wide combined map (presented in Appendix A) contains additional GOODS-ALMA data that were not available when the ASAGAO map was constructed, as well as more individual pointings, while maintaining a similar beamsize (0.87 arcsec \(\times\) 0.64 arcsec) and pixel size (0.12 arcsec). We thus also downloaded the ASAGAO 250 k\(\lambda\)-tapered map8 to quantitatively check the improvement. After running the ASAGAO map through our noise algorithm, we find that the new GOODS-ALMA data reduces the noise by 10-20 per cent throughout, while some individual pointings go about 50 per cent deeper. Footnote 8: [https://sites.google.com/view/assgao26/alma-data?authuser=0](https://sites.google.com/view/assgao26/alma-data?authuser=0)

## 7 Conclusions

We have produced a series of 1-mm maps of the HUDF by combining all of the previously-published survey data in the \(uv\) plane in various ways, reducing the noise compared to previous studies. We specifically constructed a deep map covering 4.2 arcmin\({}^{2}\) and a shallower map covering 25.4 arcmin\({}^{2}\). Our deep map has a pixel rms that ranges from 5 to 50 per cent lower than in the best previous study, with an area of about 1.5 arcmin\({}^{2}\) reaching below 9 \(\mu\)Jy beam\({}^{-1}\) and a minimum of about 4.6 \(\mu\)Jy beam\({}^{-1}\) reached in some regions. Our shallow map has a pixel rms that ranges from 10 to 50 per cent lower than in the best previous wider and shallower study. We make all of our maps publicly available.10 Footnote 10: [https://doi.org/10.5683/SP3/VWBWWH](https://doi.org/10.5683/SP3/VWBWWH) We searched our deep map for sources down to a signal-to-noise threshold of 3.6, finding a total of 45 peaks in the S/N map, 13 of which are new. Nearly all (39/45) of these ALMA sources have near-IR counterparts detected by _JWST_ and _HST_. We additionally find 27 sources in our wider map, nine of which are new. The JADES data enable stellar masses and photometric redshifts to be estimated for the ALMA source counterparts, and we find that they are all relatively high \(M_{*}\) galaxies. Compared with ALMA-undetected galaxies at similar \(M_{*}\), the ALMA-detected galaxies typically have redder _JWST_ colours. With our larger sample of mm-selected sources in the HUDF, other studies investigating the statistical properties of the faintest star-forming galaxies could be carried out. For example, Aravena et al. (2020) studied the SFR-stellar mass relation of ASPECS galaxies and Boogaard et al.
(2023) looked at the morphologies of ASPECS galaxies in _JWST_'s MIRI filters; expanding to a larger sample size and using improved SED fits from _JWST_ photometry could lead to more robust conclusions. Since the vast majority of near-IR-selected galaxies are not directly detected in our 1-mm map, we performed a stacking analysis on their positions. We found significant average signals for all galaxies in the range \(z\) = 0 to 3 and with stellar masses between \(10^{9.4}\) M\({}_{\odot}\) and \(10^{10.4}\) M\({}_{\odot}\), as well as a roughly 3 \(\sigma\) signal from \(10^{8.4}\) M\({}_{\odot}\) to \(10^{9.4}\) M\({}_{\odot}\) stellar mass galaxies between \(z\) = 0 and 2. The evidence is that \(M_{*}\sim 10^{10}\) M\({}_{\odot}\) ALMA-_undetected_ galaxies have a 1-mm flux density around \(15\,\mu\)Jy and would be individually detected in even deeper integrations. The stacking results also show the value of our new data products for performing similar statistical analyses on sets of galaxies detected in other wavebands. We used our galaxy detections, as well as our stacking analysis, to estimate the level of the CIB at 1.23 mm that we have resolved. We thus account for a background level of \((10.0\pm 0.5)\) Jy deg\({}^{-2}\), with an expectation that even fainter galaxies will hardly change this number. There is still a large uncertainty in the total background estimate, and we also stress that the variance expected in a region as small as the HUDF is around 20 per cent. We do not recover all of the background, and there are a number of possible resolutions: (1) the HUDF is a 2-3 \(\sigma\) negative fluctuation in the CIB; (2) the absolute level of the CIB has been overestimated; (3) there is a new population of galaxies that contribute at 1 mm, but are not detected by _JWST_; or (4) a genuinely diffuse component of the background exists. Given that the simple explanation (1) cannot be excluded, the suggestion is that deep ALMA+_JWST_ observations may account for all of the mm-wave background. The HUDF is probably the best-studied extragalactic field, and it is crucial to probe this field at longer (\(>500\,\mu\)m) wavelengths in order to understand how the earliest galaxies began forming their stars, complementing the data obtained by _HST_, _JWST_ and other telescopes at shorter wavelengths. The 4.2 arcmin\({}^{2}\) map presented here is the deepest such image of the HUDF available to date, yet it is clear that there are still many more galaxies left to detect. Deeper ALMA observations of the HUDF will inevitably uncover these galaxies, providing a more complete understanding of galaxy evolution at early times.

## Acknowledgements

This research used the Canadian Advanced Network For Astronomy Research (CANFAR) operated in partnership by the Canadian Astronomy Data Centre and The Digital Research Alliance of Canada, with support from the National Research Council of Canada, the Canadian Space Agency, CANARIE and the Canadian Foundation for Innovation. RH and DS acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This paper makes use of the ALMA data ADS/JAO.ALMA#2012.1.00173.S, 2013.1.00718.S, 2013.1.01271.S, 2015.1.00098.S, 2015.1.00543.S, 2015.1.00664.S, 2015.1.00821.S, 2015.1.08070.S, 2015.11016.S, 2015.1.01379.S, 2015.1.01447.S, 2015.A.00009.S, 2016.1.00324.L, 2016.1.00721.S, 2016.1.00967.S, 2017.1.00190.S, 2017.1.00755.S, 2018.1.00567.S and 2018.1.01044.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources. This work is based in part on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA _HST_, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This work is based in part on observations made with the NASA/ESA/CSA _James Webb Space Telescope_.

## Data Availability

All of the data products described in this paper are publicly available at [https://doi.org/10.5683/SP3/VWBVWH](https://doi.org/10.5683/SP3/VWBVWH). The raw ALMA data used to produce these data products are publicly available at the ALMA archive. The _JWST_ images from the JADES Collaboration used in this paper are also publicly available, and the catalogue derived using JADES images is described and made available in a separate paper.
2309.06132
Measuring vagueness and subjectivity in texts: from symbolic to neural VAGO
We present a hybrid approach to the automated measurement of vagueness and subjectivity in texts. We first introduce the expert system VAGO, we illustrate it on a small benchmark of fact vs. opinion sentences, and then test it on the larger French press corpus FreSaDa to confirm the higher prevalence of subjective markers in satirical vs. regular texts. We then build a neural clone of VAGO, based on a BERT-like architecture, trained on the symbolic VAGO scores obtained on FreSaDa. Using explainability tools (LIME), we show the interest of this neural version for the enrichment of the lexicons of the symbolic version, and for the production of versions in other languages.
Benjamin Icard, Vincent Claveau, Ghislain Atemezing, Paul Égré
2023-09-12T11:18:29Z
http://arxiv.org/abs/2309.06132v2
# Measuring vagueness and subjectivity in texts: from symbolic to neural VAGO

###### Abstract

We present a hybrid approach to the automated measurement of vagueness and subjectivity in texts. We first introduce the expert system Vago, we illustrate it on a small benchmark of fact vs. opinion sentences, and then test it on the larger French press corpus FreSaDa to confirm the higher prevalence of subjective markers in satirical vs. regular texts. We then build a neural clone of Vago, based on a BERT-like architecture, trained on the symbolic Vago scores obtained on FreSaDa. Using explainability tools (LIME), we show the interest of this neural version for the enrichment of the lexicons of the symbolic version, and for the production of versions in other languages.

Vagueness, Subjectivity, Precision, Detail, Hybridization, Explainability, Multidimensionality

## I Introduction

How do we decide whether a statement is factual or whether it reports a mere opinion when we lack access to first-hand knowledge? This question is of central importance in order to assess and enhance information quality on the web and in other media. In 2018, the Pew Research Center (henceforth PRC) conducted a survey among North-Americans, intended to test the public's ability to recognize statements either as expressing mere opinion or as reporting facts in the news. While each statement in their sample was correctly classified by a majority of participants, only 26% were able to correctly identify all factual statements as such in the test sample, and only 35% to categorize all opinion statements as opinion [21]. Despite asking participants to issue judgments "regardless of [their] knowledge of the topic", PRC did not give an account of the linguistic cues that a competent language-user ought to rationally identify as conveying objective or subjective information. As argued by Kaiser and Wang in [14], however, whether a statement is fact-like or opinion-like depends on "linguistic packaging". In order to clarify this issue, this paper presents a symbolic tool called Vago, designed to assess informational quality in textual documents, by providing measures of vagueness vs. precision in discourse [26, 7], but also of subjectivity vs. objectivity [15]. So far, automatic vagueness detection has been considered in the context of privacy policies, for scoring vagueness at the level of words and sentences [4], for predicting whether a word is vague or not based on the vector representation of vague expressions in recurrent neural networks [19], or for generating different degrees of vague sentences using adversarial networks [17], with results further exploited in [18] to evaluate the support of individual sentences to the vagueness prediction of whole documents. Concerning the detection of opinion, measures of subjectivity have been considered at the level of sentences and documents [1]. Following [25], [9] observed that in French articles some syntactic markers (e.g. exclamation, interrogation and suspension points) are significantly more prevalent in opinion articles than in factual articles, as are semicolons and colons. Conversely, like [16] for English newspapers, [9] also observed that factual articles in French contained more named entities (e.g. dates, values, percentages) than opinion articles. Our own approach relies on the observation that a subclass of vague expressions, comprised in particular of multi-dimensional vague adjectives, constitutes a reliable marker of subjectivity [23, 14], and therefore of opinion.
To leverage it, we combine two distinct methods. On the one hand, we use an expert system called Vago [10, 11, 12], which relies on symbolic rules for detecting and measuring lexical vagueness and subjectivity in textual documents. On the other hand, we create a neural counterpart of symbolic Vago, called Vago-N, in order to test and enhance the expert system's performance. One of the goals of this method is to extend the results of the expert system to languages other than French and English. But we also present the potential use of Vago-N, initially trained for predicting scores through regression, for detecting false information, or fake news.

## II The Symbolic Tool Vago

### _Typology of vagueness and measures of detail_

Vago measures vagueness and subjectivity in textual documents based on a lexical database for French and English [3], with rules for vagueness scoring and vagueness cancellation depending on the syntactic context (see below). Built on a typology derived from [8], this database provides an inventory of vague terms in four categories: approximation vagueness (\(V_{A}\)), generality vagueness (\(V_{G}\)), degree vagueness (\(V_{D}\)), and combinatorial vagueness (\(V_{C}\)). Approximation vagueness primarily concerns modifiers like "environ" ("approximately"), which relax the truth conditions of the modified expression. Generality vagueness includes determiners like "certains" ("certain") and modifiers like "au plus" ("at most"). Unlike expressions of approximation, the latter have precise truth conditions. The category of expressions related to degree vagueness and combinatorial vagueness [2] mainly consists of one-dimensional gradable adjectives (such as "grand" - "big", "vieux" - "old") and multidimensional gradable adjectives (like "beau" - "beautiful", "intelligent" - "intelligent", "bon" - "good", "qualifié" - "qualified"). Expressions of type \(V_{A}\) and \(V_{G}\) are treated as _factual_, while expressions of type \(V_{D}\) and \(V_{C}\) are treated as _subjective_ [15, 27, 23]. In the original version of VAGO [11], a sentence is considered to be vague if it contains at least one marker of vagueness, and subjective if it contains at least one marker of subjectivity. However, the precision of sentences and texts is evaluated only _negatively_: a sentence is precise exactly if it does not contain any vague markers. As an example, VAGO would assign identical vagueness and subjectivity scores of 1 to the following two sentences, because both contain at least one marker of vagueness/subjectivity ("important" in (a), "quickly" and "excellent" in (b)):1 Footnote 1: Sentence (a) is an English translation of a sentence taken from the Wikipedia article on Joseph Bonaparte, while sentence (b) is inspired by a false news or “fake news” story.

* (a) _King of Naples from 1806 to 1808, then of Spain from 1808 to 1813, he is an_ **important** _figure in the plan implemented by Napoleon to establish the sovereignty of France over continental Europe._
* (b) _To_ **quickly** _cure Covid-19, one must take an_ **excellent** _herbal decoction._

Intuitively, however, sentence (a), which contains nine named entities (underlined terms in the sentence), is more informative than sentence (b), which only contains one named entity ("Covid-19"), and therefore provides fewer details than (a). To address this limitation, the current version of VAGO is enriched with a relative _detail score_, based on the proportion of named entities compared to vague expressions.
### _Scores: Vagueness, Subjectivity, Detail_

The current version of VAGO is able to measure scores of vagueness and subjectivity, but also of detail, in English and French documents. The detection of vagueness and subjectivity relies on a lexical database, which consisted of 1,640 terms in both languages at the time of the experiments [3], distributed as follows by vagueness category: \(|V_{A}|\) = 9, \(|V_{G}|\) = 18, \(|V_{D}|\) = 43, and \(|V_{C}|\) = 1570. Regarding the level of detail, the detection is based on identifying named entities (such as people, locations, temporal indications, institutions, and numbers) using the open-source library for Natural Language Processing spaCy.2 Footnote 2: [https://spacy.io/](https://spacy.io/) For a given sentence \(\phi\), its _vagueness score_ is defined as the ratio between the number of vague words in \(\phi\) and the total number of words in the sentence, written \(N_{\phi}\): \[R_{vagueness}(\phi)=\frac{\overbrace{|V_{D}|_{\phi}+|V_{C}|_{\phi}}^{\text{subjective}}+\overbrace{|V_{A}|_{\phi}+|V_{G}|_{\phi}}^{\text{factual}}}{N_{\phi}} \tag{1}\] where \(|V_{A}|_{\phi}\), \(|V_{G}|_{\phi}\), \(|V_{D}|_{\phi}\), and \(|V_{C}|_{\phi}\) represent the number of terms in \(\phi\) belonging to each of the four vagueness categories (approximation, generality, degree vagueness, and combinatorial vagueness). More specifically, the _subjectivity score_ of a sentence \(\phi\) is calculated as the ratio between the subjective vague expressions in \(\phi\) and the total number of words in \(\phi\). Similarly, a factual vagueness score can be calculated with the expressions of generality and approximation (see sections III and IV): \[R_{subjectivity}(\phi)=\frac{|V_{D}|_{\phi}+|V_{C}|_{\phi}}{N_{\phi}} \tag{2}\] \[R_{factual\ vagueness}(\phi)=\frac{|V_{A}|_{\phi}+|V_{G}|_{\phi}}{N_{\phi}} \tag{3}\] The _detail score_ of a sentence can be defined as the ratio \(R_{detail}(\phi)=\frac{|P|_{\phi}}{N_{\phi}}\), where \(|P|_{\phi}\) denotes the number of named entities in the sentence (referential terms). By extension, if \(|V|_{\phi}\) denotes the number of vague terms in a sentence (across all categories), we define the _detail/vagueness score_ of a sentence as the relative proportion of named entities, given by: \[R_{detail/vagueness}(\phi)=\frac{|P|_{\phi}}{|P|_{\phi}+|V|_{\phi}} \tag{4}\] In the previous example, we can verify that \(R_{detail/vagueness}(\text{a})=9/10\), while \(R_{detail/vagueness}(\text{b})=1/3\), indicating a higher measure of informativeness for (a). For a text \(T\) (set of sentences), the vagueness scores (including subjective vagueness and factual vagueness) are defined as the proportion of sentences of \(T\) whose vagueness score (subjective or factual) is non-zero. The detail/vagueness score of a text \(T\) is defined as the average of the \(R_{detail/vagueness}\) ratios for each sentence in \(T\). The online version of VAGO, available on the Mondeca website,3 showcases the functionalities of the original version of VAGO. The website offers a graphical interface to measure the vagueness and subjectivity scores of texts using two barometers. The first barometer represents the degree of vagueness in a text (defined as \(R_{vagueness}(T)\)), while the second barometer indicates the extent to which the text expresses an opinion rather than a fact, in other words, the proportion of subjective vocabulary within the text (defined as \(R_{subjectivity}(T)\)). Footnote 3: [https://research.mondeca.com/demo/vago/](https://research.mondeca.com/demo/vago/). See [11, 12] for the implementation details.
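To make the definitions above concrete, here is a minimal sketch of the sentence-level scoring, using toy lexicons in place of the full 1,640-term VAGO database and spaCy's named entity recognizer for \(|P|_{\phi}\). The category assignments below are illustrative, not the official database entries, and the syntactic cancellation rules mentioned above are omitted.

```python
import spacy

# Toy stand-ins for the four vagueness categories of the VAGO lexicon:
V_A = {"approximately"}                       # approximation
V_G = {"some", "certain", "most"}             # generality
V_D = {"very", "quickly"}                     # degree
V_C = {"important", "excellent", "good"}      # combinatorial

nlp = spacy.load("en_core_web_sm")

def vago_scores(sentence):
    doc = nlp(sentence)
    words = [t.text.lower() for t in doc if not t.is_punct]
    n = len(words)
    factual = sum(w in V_A or w in V_G for w in words)      # |V_A| + |V_G|
    subjective = sum(w in V_D or w in V_C for w in words)   # |V_D| + |V_C|
    entities = len(doc.ents)                                # |P|, named entities
    vague = factual + subjective                            # |V|, all vague terms
    return {
        "vagueness": vague / n,                                       # Eq. (1)
        "subjectivity": subjective / n,                               # Eq. (2)
        "factual_vagueness": factual / n,                             # Eq. (3)
        "detail_vagueness": entities / (entities + vague)
                            if entities + vague else 0.0,             # Eq. (4)
    }

s_b = "To quickly cure Covid-19, one must take an excellent herbal decoction."
print(vago_scores(s_b))   # detail/vagueness near 1/3 if 'Covid-19' is recognized
```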
### _English Vago on factual versus opinion statements_

In [10], Vago was applied to a large set of more than 28,000 documents from four different corpora. A positive correlation was found between the classification of texts as biased by a CNN based classifier and the vagueness scores computed by Vago. In this section, we provide a more analytic perspective by showing the predictions of VAGO on the test set used by PRC to evaluate lay people's judgments of fact vs. opinion. Although the set is very limited (10 sentences), it provides a useful benchmark for comparison, in particular because PRC tested the statements on a large sample of participants (\(N=5,035\)). Figure 1 presents PRC's 10 test sentences, comprised of 5 statements labelled as "opinion" and 5 statements labelled as "factual" (based on PRC's prescriptive classification). The participants recruited by PRC were instructed that a statement should be considered as "factual" in case it can "be proved or disproved based on _objective evidence_" (our emphasis), and _regardless of "whether you think it is accurate or not"_. By contrast, a statement counted as an "opinion" if "they thought that it was based on the _values and beliefs_ of the journalist or the source making the statement, and could not definitively be proved or disproved based on objective evidence", and _regardless of "whether you agree with it or not"_ [21] (our emphasis).

1. _ISIS lost a_ **significant** _portion of its territory in Iraq and Syria in 2017._ [**F/O**]
2. _Immigrants who are in the U.S. illegally have_ **some** _rights under the Constitution._ [**F/F**]
3. _Health care costs per person in the U.S. are the_ **highest** _in the developed world._ [**F/O**]
4. _Spending on Social Security, Medicare, and Medicaid make up the_ **largest** _portion of the U.S. federal budget._ [**F/O**]
5. _President Barack Obama was born in the United States._ [**F/F**]
6. _Democracy is the_ **greatest** _form of government._ [**O/O**]
7. _Government is_ **almost always wasteful** _and_ **inefficient**_._ [**O/O**]
8. _Increasing the federal minimum wage to $15 an hour is_ **essential** _for the health of the U.S. economy._ [**O/O**]
9. _Immigrants who are in the U.S. illegally are a_ **very big** _problem for the country today._ [**O/O**]
10. _Abortion_ **should** _be legal in_ **most** _cases._ [**O/O**]

While the proportion of fact vs. opinion answers varied depending on the sentence, for each statement the majority of participants agreed with the classification made by PRC (mode ranging from 54% to 77% for factual statements, and between 68% and 80% for opinion statements), giving support to the PRC labels. Because participants were asked to decide whether a statement is factual or opinion regardless of world knowledge and personal beliefs, this justifies looking at the linguistic cues they should be aware of to solve the task. The category ascribed by Vago to each particular statement is reported in Figure 1 (right label). The statements for which Vago's classification differs from the PRC classification are highlighted in blue. In Figure 2 are displayed the results of Vago's classification for statement 7, "_Government is almost always wasteful and inefficient_" (a toy version of this rule-based labelling is sketched below). Note that for a single sentence, the barometers necessarily take categorical values; intermediate values are only obtained over larger texts.
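The right-hand labels above follow directly from VAGO's rule: a sentence is tagged as opinion as soon as it contains at least one subjectivity marker (type \(V_{D}\) or \(V_{C}\)), and as factual otherwise. A toy version of that rule, with an illustrative partial marker list rather than the real database:

```python
# Illustrative subjectivity markers (V_D and V_C entries); the real VAGO
# lexicon is far larger and also applies cancellation rules, e.g. for
# comparatives and measure phrases.
SUBJECTIVE = {"significant", "highest", "largest", "greatest", "wasteful",
              "inefficient", "essential", "very", "big", "should"}

def prc_label(sentence):
    words = {w.strip(".,").lower() for w in sentence.split()}
    return "O" if words & SUBJECTIVE else "F"

print(prc_label("President Barack Obama was born in the United States."))  # F
print(prc_label("Government is almost always wasteful and inefficient."))  # O
```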
As shown in Figure 1, VAGO classifies eight statements as opinion, and only two as factual. More specifically, the two sentences classified as factual by VAGO (2 and 5) are indeed factual according to PRC's criterion. Conversely, the five sentences labelled "opinion" by PRC (6-10) are classified as opinions by VAGO. This means that, relative to this sample, VAGO uses a stricter criterion for what counts as factual, and a laxer one for what counts as opinion. According to VAGO, sentences 1, 3, 4, and 6-10 are all opinion statements, since they contain at least one marker of subjectivity (see Figure 1). But, as with the sentences (a) and (b) given above, some of those statements are more informative than others, since they contain more named entities. The measure of detail/vagueness we introduced in subsection II-B helps distinguish those sentences in terms of informativeness. For instance, although sentences 4 and 9 both receive a score of subjectivity equal to 1, \(R_{detail/vagueness}(4)=4/5\) while \(R_{detail/vagueness}(9)=1/3\).

More specifically, four of the five statements marked as opinion by PRC are identified as such by VAGO based on the occurrence of an expression of type \(V_{C}\) ("great", "wasteful", "inefficient", "essential", "big") as well as \(V_{D}\) ("very"). Our hypothesis is that participants relied on those items to decide that the statements convey subjective values or beliefs. Looking more specifically at the statements for which the classifications diverge, we can see that in statements 3 and 4, VAGO classifies the superlatives "highest" and "largest" as elements of the category \(V_{D}\). While VAGO has rules of vagueness cancellation for comparatives in the category \(V_{D}\) ("taller") and for measure phrases ("5 feet tall"), it does not currently cancel vagueness in superlatives. For "greatest" in 6, this is as it should be, since even the superlative leaves room for subjective disagreement, but for "highest" and "largest", the interpretation seems objective and factual. In the case of 1, "significant" is the pivotal element behind VAGO's classification of the sentence as opinion. We note that 30% of participants rated the sentence as opinion, possibly relying on the fact that what counts as "significant" is a matter of interpersonal disagreement. Finally, of sentences 2 and 5, classified as fact by VAGO and by PRC alike, 5 is the only sentence categorized as precise by VAGO; 2 is vague, but "some" belongs to \(V_{G}\) and is not counted as a marker of subjectivity.

Fig. 1: PRC's sentences, grouped by category (F=fact; O=opinion). Within brackets: PRC's classification (left) vs. VAGO's (right). In blue: cases in which VAGO differs. Bold-faced expressions: VAGO entries detected.

Fig. 2: VAGO's results for the sentence "_Government is almost always wasteful and inefficient_". VAGO scores are binarized for sentences, but yield intermediate values for larger texts.

### _French VAGO on regular versus satirical press articles_

To scale up these analytic intuitions, VAGO was tested on the French corpus "FreSaDa"4 [13], consisting of 11,570 press articles divided into two supposedly homogeneous classes: 5,648 "regular" articles from the general French press, not presumed to be false, versus 5,922 "satirical" articles explicitly including false or made-up content. Within the total corpus, VAGO processed 10,969 out of the initial 11,570 articles, the remaining 601 articles being excluded due to a format inappropriate for analysis (isolated words, keywords, incomplete sentences, etc.).
The results provided by VAGO are reported in Figure 3.

Footnote 4: [https://github.com/adrianchifu/FreSaDa](https://github.com/adrianchifu/FreSaDa)

According to VAGO, the articles in the satirical corpus are significantly _more vague_ (\(p=4.99\times 10^{-11}\)), _more subjective_ (\(p=1.69\times 10^{-9}\)), and _less detailed_ (\(p=3.36\times 10^{-22}\)) than the articles in the regular press corpus (scores calculated per text; two-tailed t-tests, \(\alpha=0.05\), with Bonferroni correction). These results align with expectations and support the findings previously obtained with VAGO on English texts [10].

To further exemplify the interest of these ratios, we conducted a simple classification experiment: the goal was to distinguish between the two types of documents in the FreSaDa corpus solely on the basis of the vagueness, subjectivity and detail/vagueness scores. For a given text of the corpus, the ratios of each of its sentences were obtained with VAGO; the minimum, median, average and maximum of each ratio over all the sentences were computed and used as input to an XGBoost classifier [5] (300 rounds, max_depth=8). The accuracy obtained with different sizes of the training set showed that the ratio scores are effective cues for classifying such documents as being either _regular_ or _satirical_. Moreover, very few examples were necessary to yield high accuracy, the plateau being reached at 500 documents.

## III The neural version VAGO-N

### _Training of the VAGO clone_

We built a neural version of the symbolic VAGO called VAGO-N, based on combining a BERT [6] architecture with a regression layer and an MSE loss function to predict a score of vagueness for sentences. In the present experiment, we tested both subjective and factual vagueness; for the sake of completeness, we also tested the prediction of the \(R_{detail/vagueness}\) score, but since this score can be computed more simply from a named-entity recognition system, we do not return to it in the experiments below. As in a distillation task, VAGO was used to associate a vagueness score to each sentence in the FreSaDa corpus and thus to train a neural system.

In the experiments reported in the following sections, 106,000 sentences out of 141,137 were randomly selected within the 10,969 articles of the FreSaDa corpus processed by VAGO, and divided into a training set (85,000 sentences) and a test set (21,000 sentences). We used a RoBERTa Large model (_Batch Size_=30; _Learning Rate_=1e-6; _Epochs_=20); experiments not detailed here with a CamemBERT model [20] provided slightly lower results. Performance is reported in Table I with standard regression measures: root-mean-square error (RMSE), coefficient of determination (\(R^{2}\)), mean absolute error (MAE) and median absolute error (MedAE). All these measures show that VAGO-N replicates the symbolic VAGO scores with high accuracy. Detecting subjective vagueness appears slightly harder than detecting factual vagueness.

\begin{table} \begin{tabular}{l|c|c|c|c} & RMSE & \(R^{2}\) & MAE & MedAE \\ \hline subjective vagueness & 0.022063 & 0.859897 & 0.014518 & 0.009488 \\ factual vagueness & 0.008745 & 0.949339 & 0.004124 & 0.001730 \\ detail/vague & 0.097008 & 0.882543 & 0.051396 & 0.012367 \\ \end{tabular} \end{table} TABLE I: Regression results of VAGO-N for scores of subjective vagueness, factual vagueness and detail/vagueness concerning the sentences of the FreSaDa corpus (French).

Fig. 3: Average ratio scores per article of the FreSaDa corpus (French) according to VAGO.
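For concreteness, here is a minimal sketch of the distillation setup described in this subsection, using the Hugging Face transformers library: an encoder with a one-dimensional regression head trained with an MSE loss on the scores produced by the symbolic VAGO. The checkpoint name, the `train_pairs` variable and the training loop are illustrative assumptions on our part (the text only specifies batch size 30, learning rate 1e-6, 20 epochs, and the MSE objective); this is a sketch, not the authors' implementation.

```python
# Sketch of the VAGO-N distillation: a transformer encoder with a
# 1-dimensional regression head trained with MSE on VAGO's scores.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "roberta-large"  # placeholder for "a RoBERTa Large model"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=1, problem_type="regression"  # MSE loss
)

def collate(batch):
    texts, scores = zip(*batch)  # (sentence, VAGO score) pairs
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    return_tensors="pt")
    enc["labels"] = torch.tensor(scores, dtype=torch.float)
    return enc

# `train_pairs` is assumed: a list of (sentence, score) pairs produced
# by running symbolic VAGO over the 85,000 training sentences.
loader = DataLoader(train_pairs, batch_size=30, shuffle=True,
                    collate_fn=collate)
optim = torch.optim.AdamW(model.parameters(), lr=1e-6)

model.train()
for epoch in range(20):
    for batch in loader:
        out = model(**batch)  # computes the MSE loss for regression
        out.loss.backward()
        optim.step()
        optim.zero_grad()
```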
### _Comparison of the versions of VAGO_

The previous quantitative evaluation indicates that VAGO-N replicates the general behavior of VAGO quite faithfully. From a qualitative perspective, we aim to verify here that this neural version relies on the same lexical cues as the symbolic version. For this purpose, we use the explainability tool LIME [22]. Applied to the outputs of VAGO-N, LIME identifies the tokens that contribute the most (or the least) to the vagueness score of a given text. In Figure 4, we provide an example of LIME output on a French sentence for subjective vagueness.

Using this tool, we examine the cases where the predictions of vagueness scores by VAGO-N diverge the most from those of VAGO. The study of these error cases highlights several points. Regarding the differences between VAGO and VAGO-N for the prediction of factual vagueness, the vast majority of terms identified as contributing the most to the prediction of VAGO-N are already present in the French lexicon of VAGO, both in the case of generality vagueness (e.g., _"tout/tous/toutes", "jamais", "ou", "général", "quelques", "certains/certaines"_) and in the case of approximation vagueness (_"environ", "presque"_). Those indicators of factual vagueness are correct, but their weight in the final score of VAGO-N differs from the calculation performed by the original VAGO, which does not assign differential weights to lexical items. This difference in weight may result from words of morpho-syntactic categories not considered by VAGO, which may amplify or diminish the resulting factual vagueness score, according to the neural network. Similarly, in the case of subjective vagueness, LIME applied to VAGO-N identifies adjectives and terms of excess that were already present in the extant VAGO lexicon, either within the category of combinatorial vagueness (e.g., _"négatif", "affirmatif", "intéressant", "fortement", "chiant", "difficile", "probablement", "vrai", "stupide", "vraiment"_), or within the category of degree vagueness (_"petit"_). But LIME also identifies other adjectives carrying combinatorial vagueness that are not yet in the lexicon and should be included (e.g., _"durable", "particulièrement", "ringard", "actuel"_), with some exceptions (_"sabbatique"_).

## IV Extensions of the VAGO Approaches

### _Validation and enrichment of the symbolic VAGO_

For each token \(t\) in a text, LIME provides a score for the contribution of \(t\) to the vagueness prediction (subjective or factual) in the text, which we denote \(c_{occ}(t)\). In the case of a sentence, the higher the \(c_{occ}(t)\) of a term \(t\) occurring in the sentence, the more positively \(t\) contributes to the vagueness score of that sentence. By applying LIME to the 141,137 sentences of the FreSaDa corpus processed by VAGO and exploited in VAGO-N, we collected the contribution scores \(c_{occ}\) of all occurrences of all the tokens within these sentences. To obtain a global score \(c_{tok}(t)\) per token \(t\), we summed the \(c_{occ}\) over the set \(occ_{t}\) of occurrences of \(t\), and normalized by the total number of occurrences \(|occ_{t}|\): \(c_{tok}(t)=\frac{1}{|occ_{t}|}\sum_{o\in occ_{t}}c_{occ}(o)\). Our hypothesis was that terms from the VAGO lexicon should be prioritized among tokens receiving the highest \(c_{tok}\).
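Before turning to the test, here is a minimal sketch of this aggregation, assuming the per-occurrence contributions \(c_{occ}\) have already been extracted from LIME's explanations (that extraction step, and the tokenization, are elided):

```python
# Sketch: aggregate per-occurrence LIME contributions c_occ into a
# per-token score c_tok, then rank tokens by decreasing c_tok.
# Input format (a stream of (token, contribution) pairs over all
# explained sentences) is an assumption for illustration.
from collections import defaultdict

def c_tok(occurrence_scores):
    """occurrence_scores: iterable of (token, c_occ) pairs, one entry
    per occurrence of each token in the explained sentences."""
    total = defaultdict(float)
    count = defaultdict(int)
    for token, c_occ in occurrence_scores:
        total[token] += c_occ
        count[token] += 1
    # c_tok(t) = (1/|occ_t|) * sum of c_occ over the occurrences of t
    return {t: total[t] / count[t] for t in total}

def ranked_tokens(occurrence_scores):
    scores = c_tok(occurrence_scores)
    return sorted(scores, key=scores.get, reverse=True)
```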
To test it, we calculated the statistical precision (the proportion of tokens belonging to the VAGO lexicon) over the list of tokens ordered by decreasing \(c_{tok}\). Note that a token is taken into account if it is an inflection of a term in the VAGO lexicon. The results are listed in Table II, where P@\(k\) represents the precision over the first \(k\) tokens in the list. Figure 5 shows the ROC curve relating the \(c_{tok}\) score to presence in the French VAGO lexicon. These results support our hypothesis. Although VAGO-N has only been trained on sentences and their scores, it is able to reconstruct the lexicon at the core of the symbolic version.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} vagueness type & P@5 & P@10 & P@20 & P@30 & P@100 & P@200 \\ \hline subjective & 1.00 & 1.00 & 0.95 & 0.93 & 0.81 & 0.79 \\ factual & 1.00 & 1.00 & 1.00 & 0.93 & 0.31 & 0.16 \\ \end{tabular} \end{table} TABLE II: Comparison of the precision at different thresholds for the list of French tokens ordered by \(c_{tok}\), as a function of the VAGO lexicon.

Fig. 4: Example of a LIME output on a French sentence from the FreSaDa corpus processed by VAGO-N. The category labeled as "neutre" corresponds to the inverse category of vagueness, contributing negatively to the vagueness score.

Fig. 5: ROC curve of \(c_{tok}\) as an indicator of its presence in the French lexicon of VAGO.

Besides, we examined the 100 tokens with the highest \(c_{tok}\) for subjective vagueness: in addition to the 81 already present in the lexicon, a few verbal forms are listed. Although considered false positives (current VAGO lexicons only list adjectives and adverbs), their relevance can be debated. Among the remaining tokens, seven words absent from the lexicon were validated as relevant and worthy of inclusion in the lexicons. This list contains four adverbs (_"également", "seulement", "particulièrement", "clairement"_), for which the VAGO database contains the root adjectives in two cases (_"particulière", "clair"_), an action verb (_"faire"_), an adjective that can also be a noun (_"droit/droite"_), and a noun (_"nombre"_). We also noted the detection of non-standard forms of terms present in the lexicon (_"difficille"_, _"pauv"_), illustrating the robustness of the neural approach on noisy text (typos, abbreviations, etc.). The results validate the neural clone VAGO-N, which retrieves the lexical cues of the expert system VAGO, while identifying new or non-standard forms.

### _Developing multilingual versions of VAGO-N_

Developing symbolic versions of VAGO for other languages requires vague lexicons to be available in the target languages. However, automatic translation of these lexicons is not possible due to the idiomatic and out-of-context nature of the lists of expressions involved. That being said, it is possible to translate the VAGO-N training set by relying on the following assumption: vagueness scores, in particular subjective vagueness and factual vagueness scores, are preserved from the source language to the target language. To begin with, the FreSaDa corpus was translated from French into English using the Helsinki-NLP/opus-mt-fr-en5 model [24]. Next, VAGO-N was trained to predict subjective vagueness and factual vagueness scores on this English corpus (using the same hyper-parameters as for training VAGO-N on French). The regression results are similar to those obtained for French, and are presented in Table III.
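A minimal sketch of this translation step, using the checkpoint named above through the transformers pipeline API; the wrapper function and the data format are our own illustrative assumptions:

```python
# Sketch: translate the French training sentences to English so that
# the VAGO scores computed on the French originals can be reused as
# labels, assuming vagueness scores survive translation.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-en")

def translate_pairs(french_pairs, batch_size=32):
    """french_pairs: list of (french_sentence, vago_score) pairs."""
    sentences = [s for s, _ in french_pairs]
    scores = [y for _, y in french_pairs]
    outputs = translator(sentences, batch_size=batch_size)
    english = [o["translation_text"] for o in outputs]
    return list(zip(english, scores))  # English sentences, same labels
```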
Footnote 5: [https://huggingface.co/Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en)

Applying the same approach as in subsection IV-A, we isolated the list of tokens ordered by decreasing \(c_{tok}\), then compared it to the English lexicon of VAGO, which serves as ground truth. The precision of this list measured at different thresholds is reported in Table IV. The corresponding ROC curves are presented in Figure 6.

We also collected the 100 most vagueness-prone English terms according to VAGO-N. These terms were compared with those in the extant English lexicon of the symbolic version: 90 terms were already included in the VAGO English lexicon. Among the highest-ranking terms from the top 100 that do not appear in VAGO, we found five adjectives or adverbs that could appear in the combinatorial vagueness category (_"likely"_, _"full"_, _"complicated"_, _"frankly"_, _"enough"_), and one modal verb (_"must"_), which also makes sense to include given the presence of _"should"_ in the VAGO lexicon. Four terms, on the other hand, are not clearly vague (_"course"_, _"lost"_, _"lose"_ and _"finally"_), with the possible exception of "course" (occurrences in "of course", the use of which is subjective). Of the next 100 terms, all those not included in the VAGO lexicon are adjectives that can be included in the \(V_{C}\) category (_"worse"_, _"complex"_, etc.).

## V Conclusion

In this article, we first presented VAGO, a structured lexicon and rule-based system which calculates scores of vagueness, subjectivity, and detail/vagueness for texts. We then created a neural clone, VAGO-N, based on BERT. Unlike VAGO, VAGO-N is trained solely on VAGO scores, without knowledge of the lexicon underlying the symbolic version. Using LIME, the terms with the highest contribution to VAGO-N scores turn out to be either terms that already appear in VAGO, or terms that are strong candidates for inclusion in it. This suggests that the decisions of VAGO-N are largely explained by the lexical items identified by the symbolic VAGO. Once trained, VAGO-N can be used to complete the extant lexicons of the symbolic VAGO. But it can also be used to produce neural versions of VAGO in other languages and to generate lexicons for symbolic versions in these languages.

More work remains to be done. We plan to measure the genericity of our approach by masking the named entities in a text to see whether the VAGO-N scores remain stable before and after this operation, in order to determine the proportion of the VAGO lexicon (which does not contain named entities) actually influencing the decisions of VAGO-N. On a more applied level, we are now extending VAGO with additional markers of subjectivity besides adjectives, in particular with explicit markers (first-person pronouns, exclamation marks), but also with more contextual features (direct vs. reported discourse). Ultimately, our goal is to update its interface to help users get a reliable grasp of the levels of objectivity and subjectivity in the texts they read in the media.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c} vagueness type & P@5 & P@10 & P@20 & P@30 & P@100 & P@200 \\ \hline subjective & 0.80 & 0.90 & 0.90 & 0.93 & 0.90 & 0.84 \\ factual & 1.00 & 0.80 & 0.55 & 0.50 & 0.26 & 0.14 \\ \end{tabular} \end{table} TABLE IV: Comparison of the precision at different thresholds for the list of English tokens ordered by \(c_{tok}\) as a function of the VAGO lexicon.
\begin{table} \begin{tabular}{l|c|c|c|c} vagueness type & RMSE & \(R^{2}\) & MAE & MedAE \\ \hline subjective & 0.031801 & 0.708915 & 0.022865 & 0.016807 \\ factual & 0.016990 & 0.808772 & 0.009582 & 0.004172 \\ \end{tabular} \end{table} TABLE III: Regression results of VAGO-N for scores of subjective and factual vagueness concerning the English version of the FreSaDa sentences. Fig. 6: ROC curve of \(c_{tok}\) as an indicator of its presence in the English lexicon of VAGO. ## Acknowledgements We thank four anonymous reviewers for helpful comments and several colleagues for feedback. This work was carried out with the support of the programs HYBRINFOX (ANR-21-ASIA-0003), FRONTCOG (ANR-17-EURE-0017 program), and PLEXUS (Marie Sklodowska-Curie Action, Horizon Europe Research and Innovation Programme, grant agreement n\({}^{\circ}\)101086295).
2309.12147
Integrable measure equivalence rigidity of right-angled Artin groups via quasi-isometry
Let $G$ be a right-angled Artin group with $|\mathrm{Out}(G)|<+\infty$. We prove that if a countable group $H$ with bounded torsion is measure equivalent to $G$, with an $L^1$-integrable measure equivalence cocycle towards $G$, then $H$ is finitely generated and quasi-isometric to $G$. In particular, through work of Kleiner and the second-named author, $H$ acts properly and cocompactly on a $\mathrm{CAT}(0)$ cube complex which is quasi-isometric to $G$ and equivariantly projects to the right-angled building of $G$. As a consequence of work of the second-named author, we derive a superrigidity theorem in integrable measure equivalence for an infinite class of right-angled Artin groups, including those whose defining graph is an $n$-gon with $n\ge 5$. In contrast, we also prove that if a right-angled Artin group $G$ with $|\mathrm{Out}(G)|<+\infty$ splits non-trivially as a product, then there does not exist any locally compact group which contains all groups $H$ that are $L^1$-measure equivalent to $G$ as lattices, even up to replacing $H$ by a finite-index subgroup and taking the quotient by a finite normal subgroup.
Camille Horbez, Jingyin Huang
2023-09-21T15:04:36Z
http://arxiv.org/abs/2309.12147v1
# Integrable measure equivalence rigidity of right-angled Artin groups via quasi-isometry

###### Abstract

Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\). We prove that if a countable group \(H\) with bounded torsion is measure equivalent to \(G\), with an \(L^{1}\)-integrable measure equivalence cocycle towards \(G\), then \(H\) is finitely generated and quasi-isometric to \(G\). In particular, through work of Kleiner and the second-named author, \(H\) acts properly and cocompactly on a CAT(0) cube complex which is quasi-isometric to \(G\) and equivariantly projects to the right-angled building of \(G\). As a consequence of work of the second-named author, we derive a superrigidity theorem in integrable measure equivalence for an infinite class of right-angled Artin groups, including those whose defining graph is an \(n\)-gon with \(n\geq 5\). In contrast, we also prove that if a right-angled Artin group \(G\) with \(|\operatorname{Out}(G)|<+\infty\) splits non-trivially as a product, then there does not exist any locally compact group which contains all groups \(H\) that are \(L^{1}\)-measure equivalent to \(G\) as lattices, even up to replacing \(H\) by a finite-index subgroup and taking the quotient by a finite normal subgroup.

###### Contents

* 1 Introduction
* 2 Preliminaries on the geometry of right-angled Artin groups
* 3 Blow-up buildings and a quasi-isometry criterion
* 4 Measure equivalence couplings
* 5 Proximal dynamics and strong ICC property for \(\operatorname{Aut}(\mathbb{B})\)
* 6 Action on the right-angled building with amenable stabilizers
* 7 Exploiting integrability
* 8 Controlling the factor actions
* 9 Proof of the main theorem
* 10 Lattice embeddings of right-angled Artin groups
* 11 Lack of virtual locally compact model for products

## 1 Introduction

### Background, history and motivation

Measure equivalence was introduced by Gromov [14] as a measure-theoretic analogue of quasi-isometry. Two countable groups \(G_{1},G_{2}\) are _measure equivalent_ if there exists a standard (non-null) measure space \(\Omega\) (called a _coupling_), equipped with a measure-preserving action of \(G_{1}\times G_{2}\), such that for every \(i\in\{1,2\}\), the \(G_{i}\)-action on \(\Omega\) is free and has a finite measure fundamental domain. Quasi-isometry between finitely generated groups has an analogous characterization, with \(\Omega\) a (non-empty) locally compact topological space on which \(G_{1}\) and \(G_{2}\) have commuting actions, both properly discontinuous and cocompact. As a motivating example, lattices in the same locally compact second countable group are always measure equivalent - if cocompact, they are quasi-isometric.

Despite the analogy in definitions, there is no implication in either direction. By a celebrated theorem of Ornstein-Weiss [13], building on earlier work of Dye [11, 12], all countably infinite amenable groups are measure equivalent - but they are far from all being quasi-isometric. Conversely, measure equivalence preserves Property (T) [10] and ratios of \(\ell^{2}\)-Betti numbers [1], which are not quasi-isometry invariants.

In contrast to the Ornstein-Weiss theorem, there has been a lot of effort in proving the quasi-isometric and measure equivalence rigidity of many important classes of groups. These include lattices in higher rank simple Lie groups [10, 11, 12, 13], where Zimmer's cocycle superrigidity theorem [15] played a central role on the side of measure equivalence, and surface mapping class groups [16, 1, 17].
By using a notion of _uniform measure equivalence_ to study the large-scale geometry of amenable groups [18], Shalom strengthened the bridge between measure equivalence and quasi-isometry. In the same spirit of imposing a quantitative control on the word length of a cocycle naturally associated to the measure equivalence, Bader-Furman-Sauer coined the notion of _integrable measure equivalence_, reviewed at the beginning of Section 1.2. For this notion, they established new rigidity theorems for some rank \(1\) lattices, including all lattices in \(\mathrm{Isom}(\mathbb{H}_{\mathbb{R}}^{n})\) with \(n\geq 3\), and all cocompact lattices in \(\mathrm{Isom}(\mathbb{H}_{\mathbb{R}}^{2})\) - for the latter rigidity fails for (standard) measure equivalence [13]. The idea behind integrable measure equivalence finds its roots in the work of Margulis [14], where integrability conditions on lattices appear to be crucial in induction arguments, see also [18]. Integrable measure equivalence retains more geometric information about the group, like growth [19] or the isoperimetric profile [15]. On the ergodic side, integrability conditions on orbit equivalence cocycles (closely related to measure equivalence cocycles) already appeared in Belinskaya's theorem [1] regarding actions of \(\mathbb{Z}\), and have also been studied in connection to ergodic notions like entropy [19, 14, 15]. Outside the realm of Lie groups, there has been growing interest in understanding lattices in totally disconnected locally compact groups and their rigidity properties. These turn out to be very mysterious compared to the more classical lattices acting on symmetric spaces and buildings. A general classification theorem of lattice embeddings due to Bader-Furman-Sauer [13, Theorem A] highlights the importance of the totally disconnected case. Of particular relevance are lattices acting on CAT(0) cube complexes, among which right-angled Artin groups form a prototypical example. Right-angled Artin groups are also important for many other reasons. We mention in particular their connections to buildings [20], and the deep combinatorial tools from the work of Haglund-Wise [17, 18] that famously led to Agol's solution to the virtual Haken conjecture [1]. These also turn out to be crucial ingredients in quasi-isometry, measure equivalence and other forms of rigidity, as will be further demonstrated in this paper. Given a finite simplicial graph \(\Gamma\), the right-angled Artin group \(G_{\Gamma}\) has a finite presentation with one generator per vertex of \(\Gamma\), where two generators commute whenever the associated vertices are adjacent. There has been a lot of work regarding the quasi-isometry classification/rigidity of these groups, e.g. [1, 2, 1, 1, 2, 3, 4]; some of them are quasi-isometrically rigid [4]. In previous work [1], we initiated a study of right-angled Artin groups in measure equivalence. Contrary to the situation in quasi-isometry, and in contrast to the behaviour of certain other classes of Artin groups [1], they demonstrate a lack of rigidity, in the sense that the class of groups that are measure equivalent to \(G_{\Gamma}\) is huge, for instance it contains all graph products of countably infinite amenable groups over \(\Gamma\). In fact the line between rigidity and flexibility is quite subtle, see e.g. [1] where we recover rigidity by imposing extra ergodicity assumptions on the coupling. 
In the present paper, we relate integrable measure equivalence and quasi-isometry for right-angled Artin groups with finite outer automorphism group, and derive a superrigidity theorem in some cases. The finiteness condition on the outer automorphism group naturally appears in rigidity questions; it is easily readable on the defining graph [10, 11] and is generic in a sense [10, 1]. ### Integrable measure equivalence versus quasi-isometry for RAAGs Let \(G\) and \(H\) be two countable groups, with \(G\) finitely generated. Let \(|\cdot|_{G}\) be a word length on \(G\) with respect to some finite generating set. An _\((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\)_ is a measure equivalence coupling \((\Omega,\mu)\) between \(G\) and \(H\) such that there exists a Borel fundamental domain \(X_{G}\) for the \(G\)-action on \(\Omega\), for which the measure equivalence cocycle \(c:H\times X_{G}\to G\) (defined by letting \(c(h,x)\) be the unique element \(g\in G\) such that \(ghx\in X_{G}\)) satisfies \[\forall h\in H,\ \ \int_{X_{G}}|c(h,x)|_{G}\;d\mu(x)<+\infty.\] The terminology \((L^{1},L^{0})\)-measure equivalence coupling comes from the fact that we are only imposing an \(L^{1}\)-integrability condition from \(H\) to \(G\), not from \(G\) to \(H\) (this would not make sense as \(H\) is not assumed finitely generated). In this respect, this notion is weaker than \(L^{1}\)-measure equivalence in the sense of Bader-Furman-Sauer [1], who imposed integrability in both directions. We say that a group \(H\) has _bounded torsion_ if there is a bound on the cardinality of its finite subgroups. Our main theorem is the following. **Theorem 1**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(H\) be a countable group with bounded torsion._ _If there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), then \(H\) is finitely generated and quasi-isometric to \(G\)._ As such, the theorem fails if \(H\) is allowed to have unbounded torsion. One example comes from infinitely generated non-uniform lattices in the automorphism group of the universal cover of the Salvetti complex of \(G\) (see [1, Section 4.2]). Another example comes from graph products over the defining graph of \(G\), with vertex groups isomorphic to \(\oplus_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}\) - the odometer gives a measure equivalence (in fact an orbit equivalence) between \(\oplus_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}\) and \(\mathbb{Z}\) with an \(L^{\infty}\)-integrability condition, and the integrability passes to graph products. We do not know any finitely generated examples, however: this is related to the deep and important question regarding the existence of finitely generated non-uniform lattices acting on polyhedral complexes, see [11, Question 33] - we mention that finitely generated non-uniform lattices in products of trees were constructed by Remy [14] using Kac-Moody groups. The integrability condition in Theorem 1 is also crucial, as it excludes examples coming from graph products of amenable groups. We mention that Theorem 1 is already new even for \(G=\mathbb{Z}\), when \(H\) is not assumed finitely generated. Finitely generated groups \(H\) with an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(\mathbb{Z}\) grow linearly by a theorem of Bowen [1], and are therefore virtually cyclic. But excluding the possibility that \(H\) be infinitely generated (e.g. 
\(H=\mathbb{Q}\), whose finitely generated subgroups are all isomorphic to \(\mathbb{Z}\)) requires a new argument. In the present work, this generalization to infinitely generated groups is not just for the sake of the greatest generality. Indeed, when \(G\) is an arbitrary right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), even if we start with a finitely generated group \(H\), in the course of the proof, we will have to work with subgroups of \(H\) that arise as point stabilizers for some \(H\)-action on a \(\operatorname{CAT}(0)\) cube complex, and we will not know _a priori_ that these are finitely generated. In fact, proving finite generation will be an important task in the proof. For certain groups (like surface mapping class groups), there are separate rigidity statements in measure equivalence and quasi-isometry, which imply the conclusion of Theorem 1. This is not the case however for the class of groups in Theorem 1. ### Consequences to superrigidity Groups that are quasi-isometric to a right-angled Artin group with finite outer automorphism group have been extensively studied [13, 14]. The following corollary follows from the combination of Theorem 1 and [13, Corollary 6.4]. **Corollary 2**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(H\) be a countable group with bounded torsion._ _If there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), then \(H\) acts properly discontinuously, cocompactly, by cubical automorphisms on a \(\operatorname{CAT}(0)\) cube complex which is quasi-isometric to \(G\)._ In a sense, this is analogous to Furman's theorem on lattices \(G\) in higher-rank simple Lie groups [12], stating that any countable group that is measure equivalent to \(G\), acts as a lattice (up to a finite kernel) on the corresponding symmetric space (or [1] for \(\operatorname{Isom}(\mathbb{H}_{\mathbb{R}}^{n})\) under an integrability assumption). But there is a crucial difference, in that we need to allow the cube complex to depend on \(H\), see Theorem 4. Nevertheless, all cube complexes that arise in Corollary 2 are simple deformations of the universal cover of the Salvetti complex of \(G\), and they have a very explicit description given in Section 3 (in particular, they all collapse onto the right-angled building of \(G\)). We also mention that if the bounded torsion group \(H\) is only assumed to be measure equivalent to \(G\), with no integrability condition, then we can still derive that \(H\) acts cocompactly on a \(\operatorname{CAT}(0)\) cube complex (in fact on the right-angled building of \(G\)), with amenable stabilizers, see Theorem 5. Under further assumptions on the defining graph of \(G\), it is proved in [11, Theorem 1.2], using deep combinatorial insights on special cube complexes by Haglund-Wise [12], that all uniform lattices acting on all cube complexes arising in Corollary 2 are virtually special, which is the key for proving that they are commensurable - see also [20] for recent progress on the relationship between virtual specialness and commensurability. Thus we obtain a superrigidity theorem in integrable measure equivalence for a class of right-angled Artin groups. More precisely, we say that a finite simplicial graph \(\Gamma\) is _star-rigid_ if for every vertex \(v\in V\Gamma\), the only automorphism of \(\Gamma\) that fixes the star of \(v\) pointwise is the identity. 
Examples of such graphs include the \(n\)-gon, for any \(n\geq 3\), and any asymmetric graph. We recall that an _induced square_ in a simplicial graph \(\Gamma\) is an embedded \(4\)-cycle \(C\) such that no two opposite vertices in \(C\) are adjacent in \(\Gamma\).

**Corollary 3**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), whose defining graph is star-rigid and does not contain any induced square. Let \(H\) be any countable group with bounded torsion._

_If there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), then \(G\) and \(H\) are commensurable up to a finite kernel._

This is perhaps the first instance where passing through quasi-isometry is key for obtaining rigidity results on the side of (integrable) measure equivalence.

_Remark 1.1_.: The conclusion of Corollary 3 is false whenever the defining graph of \(G\) contains an induced square, in view of [11, Theorem 1.8]. In this case, the universal cover \(X\) of the Salvetti complex of \(G\) contains a subcomplex which is a product of two trees. An appropriate irreducible lattice acting on this product of trees can then be extended to a cocompact lattice in \(\operatorname{Aut}(X)\) that is not commensurable to \(G\), even up to a finite kernel.

### Lattice envelopes

Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), and let \(\mathcal{C}_{G}\) be the class of all bounded torsion countable groups \(H\) having an \((L^{1},L^{0})\)-measure equivalence coupling towards \(G\). Motivated by results in Lie groups as discussed before, we ask the following questions. Does there exist a locally compact second countable group \(\mathfrak{G}\)

1. such that every \(H\in\mathcal{C}_{G}\) has a lattice representation into \(\mathfrak{G}\) with finite kernel?
2. such that every \(H\in\mathcal{C}_{G}\) has a finite-index subgroup \(H^{0}\subseteq H\) that has a lattice representation into \(\mathfrak{G}\) with finite kernel?

It turns out that the answer to the first question is negative whenever \(|\operatorname{Out}(G)|<+\infty\), by [10, Theorem 6.11] (whose proof constructs a group \(H\) that is commensurable to \(G\) and has no lattice representation with finite kernel in the same locally compact group as \(G\)). On the other hand, the second question has a positive answer for the groups appearing in Corollary 3. But our next theorem shows that even the (much more subtle) second question has a negative answer in general, in fact as soon as \(G\) splits as a product of two non-cyclic groups. In fact, it even has a negative answer if \(\mathcal{C}_{G}\) is replaced with the (possibly smaller) class of all groups \(H\) that are _strongly commable_ with \(G\), in the sense that there exist finitely generated groups \(G=G_{1},\dots,G_{k}=H\) such that for every \(i\in\{1,\dots,k-1\}\), the groups \(G_{i}\) and \(G_{i+1}\) are uniform lattices in a common locally compact second countable group - this definition is a variation on Cornulier's notion of _commability_ [12].
**Theorem 4**.: _Let \(G\) be the product of two non-cyclic right-angled Artin groups with finite outer automorphism groups (hence \(|\operatorname{Out}(G)|<+\infty\))._ _Then there does not exist any locally compact group \(\mathfrak{G}\) such that any torsion-free countable group \(H\) which is strongly commable with \(G\), has a finite-index subgroup \(H^{0}\) with a lattice representation with finite kernel into \(\mathfrak{G}\)._ Our proof of Theorem 4 crucially relies on the celebrated construction by Burger-Mozes of simple groups which are uniform lattices in products of trees [1]. This forms a sharp contrast with Corollary 3 which relies on virtual specialness of \(H\) - the examples leading to Theorem 4 are very far from being virtually special. Theorem 4 can also be viewed as a much stronger version of Remark 1.1, where one goes from lack of commensurability to lack of (virtual) common locally compact model. Theorem 4 also contrasts with Kida's proof of the measure equivalence superrigidity of products of mapping class groups [16], which he derives from the superrigidity of mapping class groups together with the work of Monod-Shalom [17] on rigidity for products of negatively curved (\(\mathcal{C}_{\mathrm{reg}}\)) groups. The difference with our work is that the form of rigidity established by Kida for mapping class groups is even stronger in that he proves that every self measure equivalence coupling factors through the tautological one by left/right multiplication on the (extended) mapping class group. This stronger form of rigidity fails in our context - we will come back to this while discussing the proof of our main theorem. Nevertheless, we can still get information on the possible lattice envelopes of a non-cyclic right-angled Artin group \(G\) with \(|\operatorname{Out}(G)|<+\infty\). More generally, if \(H\) is a countable group with bounded torsion which is measure equivalent to \(G\), then any lattice embedding of \(H\) is cocompact, and in fact every lattice envelope \(\mathfrak{H}\) of \(H\) is totally disconnected up to a compact kernel (Theorem 10.1). And if there is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), then more can be said: in this case, there exists a uniformly locally finite \(\operatorname{CAT}(0)\) cube complex \(Y\) quasi-isometric to \(G\) (having the same description as in Corollary 2), and a continuous homomorphism \(\mathfrak{H}\to\operatorname{Aut}(Y)\) with compact kernel and cocompact image (Theorem 10.5). We mention that some of these results follow alternatively from the general work of Bader-Furman-Sauer on lattice envelopes [1]. We take a different route, in closer relation to Furman's ideas for exploiting measure equivalence rigidity towards classification of lattice embeddings [13]. ### Discussion of the proof of the main theorem A common strategy towards proving the measure equivalence rigidity of a group \(G\), initiated by Furman [13] and further developed by Monod-Shalom [17], Kida [16], Bader-Furman-Sauer [1], consists in showing that every self-coupling of \(G\) factors through \(G\) itself, or more generally through a locally compact group \(\mathfrak{G}\) that contains \(G\) as a lattice. However, this cannot hold in our setting even for integrable self-couplings, as this would imply that every countable group that is integrably measure equivalent to \(G\) has a lattice representation with finite kernel into \(\mathfrak{G}\). This would contradict our discussion in Section 1.4. 
More specifically, Kida's strategy for measure equivalence rigidity of mapping class groups [16] relies on having a graph \(\mathcal{C}\) on which \(G\) acts (in his case, the curve graph of the surface), with two properties. First, vertex stabilizers of \(\mathcal{C}\) are "recognized" in an appropriate sense by any self-coupling \(\Omega\) of \(G\), which allows to build an equivariant map \(\Omega\to\operatorname{Aut}(\mathcal{C})\). Second, the graph \(\mathcal{C}\) is combinatorially rigid in the sense that \(\operatorname{Aut}(\mathcal{C})\) is virtually isomorphic to \(G\) - this is ensured by a theorem of Ivanov for the curve graph [13]. For right-angled Artin groups, there cannot exist a graph \(\mathcal{C}\) that has both properties because of the discussion in Section 1.4. However the first half of Kida's strategy still works, and was carried in our previous work [12]. More precisely, Kim and Koberda introduced in [16] an analogue of the curve graph, which they called the _extension graph_\(\Gamma^{e}\) - but the Polish group \(\operatorname{Aut}(\Gamma^{e})\) is much bigger than \(G_{\Gamma}\) itself and not locally compact, in fact it contains the permutation group of countably many elements \(\mathfrak{S}_{\infty}\) as a subgroup [14, Corollary 4.20]. In earlier work [12], we proved the following fact (see also Lemma 6.5), under our standing assumption that \(|\operatorname{Out}(G_{\Gamma})|<+\infty\). **Fact.** For every self-coupling \(\Sigma\) of \(G_{\Gamma}\), there is a \((G_{\Gamma}\times G_{\Gamma})\)-equivariant measurable map \(\Sigma\to\operatorname{Aut}(\Gamma^{e})\) - where the action on \(\operatorname{Aut}(\Gamma^{e})\) is by left/right multiplication. This is the starting point of the present paper. From there our proof has three steps. **Step 1: An action of \(H\) on the right-angled building of \(G_{\Gamma}\).** This first step does not use any integrability assumption. In addition to the extension graph, another important geometric object attached to \(G_{\Gamma}\) is its right-angled building \(\mathbb{B}_{\Gamma}\), a \(\operatorname{CAT}(0)\) cube complex introduced by Davis in [11] which encodes the arrangement of flats in \(G_{\Gamma}\). Its vertices are exactly the _standard flats_ in \(G_{\Gamma}\), i.e. left cosets of free abelian subgroups coming from complete subgraphs of \(\Gamma\) - a vertex corresponding to a left coset of \(\mathbb{Z}^{k}\) is said to have _rank_\(k\). And there is an edge between two vertices representing cosets \(C,C^{\prime}\) if \(C\subseteq C^{\prime}\) and this is a codimension \(1\) inclusion. There is a natural way of filling in higher-dimensional cubes - we refer to Section 2.2 for more details and Remark 2.4 on how the right-angled building is related to the classical buildings. We prove the following theorem, which answers, at least partly, the first question in [12, p. 1028]. **Theorem 5**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(\mathbb{B}\) be the right-angled building of \(G\), and let \(H\) be a group which is measure equivalent to \(G\)._ _Then \(H\) acts on \(\mathbb{B}\) with amenable vertex stabilizers, and cocompactly provided that \(H\) has bounded torsion._ We now say a word about its proof. It turns out that \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) is naturally isomorphic to \(\operatorname{Aut}(\Gamma^{e})\), see Section 2.3. 
So the above fact ensures that every self-coupling of \(G_{\Gamma}\) factors through \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\). By a general argument from [10, 1], given a measure equivalence coupling \(\Omega\) between \(G_{\Gamma}\) and \(H\), we obtain a representation of \(H\) in \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) (i.e. an action of \(H\) on \(\mathbb{B}_{\Gamma}\)) and a \((G_{\Gamma}\times H)\)-equivariant measurable map \(\theta:\Omega\to\operatorname{Aut}(\mathbb{B}_{\Gamma})\). We then elaborate on an argument of Kida [10, Section 5.2] and formulate a general framework that enables, given a vertex \(v\in V\mathbb{B}_{\Gamma}\), to induce a measure equivalence coupling \(\Omega_{v}\) between the stabilizers \(G_{v}\) (for the original action of \(G_{\Gamma}\)) and \(H_{v}\) (for the \(H\)-action obtained as above). This coupling \(\Omega_{v}\) is (up to a small technicality) the preimage, under \(\theta\), of the full stabilizer of \(v\) in \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\). This general framework is established in Section 4. We mention that there are several subtleties for implementing the above strategy. In particular, it is important for all our arguments to know that the inclusion of \(G_{\Gamma}\) in \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) is _strongly ICC_, i.e. that the Dirac measure at \(\operatorname{id}\) is the only probability measure on \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) which is invariant under the conjugation by every element of \(G_{\Gamma}\). The proof of this fact relies heavily on the proximal dynamics (in the sense of Furstenberg [12]) of the action of \(G_{\Gamma}\) on the Roller compactification of \(\mathbb{B}_{\Gamma}\), relying on tools established by Fernos [12] and Kar-Sageev [13]. **Step 2: From amenable to virtually cyclic stabilizers of rank \(1\) vertices.** This is the only place where we use our integrability assumption. We use it to show that every vertex of \(\mathbb{B}_{\Gamma}\) whose stabilizer for the action of \(G_{\Gamma}\) is cyclic, also has a virtually cyclic stabilizer for the action of \(H\). For this, using an argument of Escalier and the first-named author [EH], we observe that if the measure equivalence coupling \(\Omega\) is \(L^{1}\)-integrable from \(H\) to \(G\), then the induced measure equivalence coupling \(\Omega_{v}\) between the vertex stabilizers \(G_{v},H_{v}\) is also \(L^{1}\)-integrable from \(H_{v}\) to \(G_{v}\approx\mathbb{Z}\). At this point, if we knew that \(H_{v}\) were finitely generated, then we could apply Bowen's theorem stating that growth is preserved by \(L^{1}\)-integrable measure equivalence [11, Appendix B], and deduce that \(H_{v}\) is virtually cyclic. The main difficulty is that we do not know _a priori_ that vertex stabilizers for the \(H\)-action are finitely generated, even if we had assumed \(H\) to be finitely generated to start with. It is therefore crucial for us to extend Bowen's theorem and prove the following, which specializes Theorem 1 to the case where \(G=\mathbb{Z}\) (in fact Theorem 7.10 gives a slightly more precise version phrased in terms of integrable embeddings). 
**Theorem 6** (see Theorem 7.10).: _Let \(H\) be a countable group with bounded torsion, and assume that there is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(\mathbb{Z}\)._

_Then \(H\) is virtually cyclic._

**Step 3: Control of factor actions and conclusion.** At this point we have \(H\) and \(G_{\Gamma}\) acting on the same complex \(\mathbb{B}_{\Gamma}\) by cubical automorphisms, with all the cell stabilizers being virtually isomorphic.1 However, this is far from sufficient to conclude that \(H\) and \(G_{\Gamma}\) are virtually isomorphic, or even quasi-isometric.

Footnote 1: In reality, we will only prove that stabilizers of rank \(1\) vertices in \(H\) and \(G_{\Gamma}\) are virtually isomorphic, which is enough for our purposes, but this could be extended to higher-rank vertices.

By restricting the action \(H\curvearrowright\mathbb{B}_{\Gamma}\) to the rank \(0\) vertices of \(\mathbb{B}_{\Gamma}\), we obtain an action of \(H\) on \(G_{\Gamma}\) by _flat-preserving bijections_, which means that every element \(h\in H\) acts on \(G_{\Gamma}\) by sending any standard flat \(F\) bijectively onto another standard flat \(F^{\prime}\). A flat-preserving bijection is in general not an isometry (or even a quasi-isometry) - in fact, given a standard line \(\ell\) (i.e. a \(1\)-dimensional standard flat) in \(G_{\Gamma}\), any permutation of the elements of \(\ell\) extends to a flat-preserving bijection of \(G_{\Gamma}\). On the other hand, a flat-preserving bijection is a quasi-isometry if its restriction to each standard line is a quasi-isometry, with uniform constants. The upshot of Step 3 is to prove that the action \(H\curvearrowright G_{\Gamma}\) coarsely preserves the order along each standard line, up to a carefully chosen conjugation. As the permutations of parallel standard lines would interfere with each other, to make this precise we use the notion of a _factor action_ introduced in [13], which we now recall. Let \(\mathsf{v}\in V\Gamma^{e}\) be a vertex, corresponding to a cyclic parabolic subgroup \(gG_{v}g^{-1}\), with \(g\in G_{\Gamma}\) and \(v\in V\Gamma\). The stabilizer \(H_{\mathsf{v}}\) (for the \(H\)-action on \(\Gamma^{e}\) through \(\operatorname{Aut}(\Gamma^{e})\approx\operatorname{Aut}(\mathbb{B}_{\Gamma})\)) preserves the product region \(P_{\mathsf{v}}=g(G_{v}\times G_{\operatorname{lk}(v)})\), which is also the union of all standard lines in \(G_{\Gamma}\) with stabilizer \(gG_{v}g^{-1}\) (here \(G_{\operatorname{lk}(v)}\) is the subgroup generated by the elements of \(V\Gamma\setminus\{v\}\) that commute with \(v\)). Geometrically, we think of \(P_{\mathsf{v}}\) as the union of all standard lines that are parallel to \(gG_{v}\). The action of \(H_{\mathsf{v}}\) on \(P_{\mathsf{v}}\) preserves its product decomposition, thereby inducing a _factor action_ of \(H_{\mathsf{v}}\) on \(Z_{\mathsf{v}}\approx gG_{v}\).

The main ingredient of Step 3 is the following result connecting measure equivalence to quasi-isometry in our setting, established in Section 8.

**Key property.** Let \(\alpha:H\curvearrowright\mathbb{B}_{\Gamma}\) be an action obtained from an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G_{\Gamma}\), and let \(\mathsf{v}\in V\Gamma^{e}\). Then the factor action \(H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) is conjugate to an action of \(H_{\mathsf{v}}\) on \(\mathbb{Z}\) by uniform quasi-isometries.
Once the key property is established, it follows from [10] that the action \(H\curvearrowright G_{\Gamma}\) is conjugate to an action by uniform quasi-isometries, from which it is not hard to deduce that \(H\) and \(G_{\Gamma}\) are quasi-isometric.

### Structure of the paper

In Section 2, we provide background on right-angled Artin groups and associated geometric objects; in particular, we review Kim and Koberda's extension graph, and the right-angled building, and compare their automorphism groups. In Section 3, we review the work of Kleiner and the second-named author, and use it to establish a sufficient criterion ensuring that a group is quasi-isometric to a right-angled Artin group with finite outer automorphism group. Section 4 contains general constructions and lemmas regarding measure equivalence couplings. Section 5 uses proximal dynamics to establish the strong ICC property for \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\). In Section 6, we combine the measure equivalence framework and our previous work [11] to prove Theorem 5: every group that is measure equivalent to a right-angled Artin group \(G\) with \(|\operatorname{Out}(G)|<+\infty\) acts on the right-angled building of \(G\) with amenable stabilizers. In Section 7, we exploit our integrability assumption; we prove Theorem 6 and use it to get control on the stabilizers for the \(H\)-action on the right-angled building of \(G\). The required control on factor actions, described in Step 3 of the above sketch, is established in Section 8, and we complete the proof of our main theorem (Theorem 1) in Section 9. Section 10 contains our theorems regarding lattice envelopes of groups that are measure equivalent to a right-angled Artin group. Finally, in Section 11 we prove Theorem 4, regarding the lack of a virtual common locally compact model for groups that are strongly commable with \(G\), when \(G\) splits as a product.

Acknowledgments. We thank Uri Bader, Amandine Escalier, Alex Furman, Matthieu Joseph, François Le Maître and Romain Tessera for fruitful related conversations. The first-named author was funded by the European Union (ERC, Artin-Out-ME-OA, 101040507). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The second-named author was funded by a Sloan fellowship. This project was started at the Institut Henri Poincaré (UAR 839 CNRS-Sorbonne Université) during the trimester program _Groups acting on fractals, Hyperbolicity and Self-similarity_. Both authors thank the IHP for its hospitality and support (through LabEx CARMIN, ANR-10-LABX-59-01).

## 2 Preliminaries on the geometry of right-angled Artin groups

### Right-angled Artin groups

Let \(\Gamma\) be a finite simplicial graph, i.e. \(\Gamma\) has no loop-edge and no multiple edges between vertices. The _right-angled Artin group_ with defining graph \(\Gamma\), denoted by \(G_{\Gamma}\), is the group defined by the following presentation:

\[\langle V\Gamma\mid[v,w]=1\text{ if }v\text{ and }w\text{ are joined by an edge}\rangle.\]

The reader is referred to [1] for an introduction to right-angled Artin groups. Recall that \(\Lambda\subset\Gamma\) is an _induced subgraph_ if vertices of \(\Lambda\) are adjacent in \(\Lambda\) if and only if they are adjacent in \(\Gamma\).
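To fix ideas, here are standard examples of the above presentation (classical facts recalled for the reader's convenience, not specific to this paper): if \(\Gamma\) is a complete graph on \(n\) vertices, then \(G_{\Gamma}\cong\mathbb{Z}^{n}\), and if \(\Gamma\) has no edges, then \(G_{\Gamma}\) is the free group \(F_{n}\). For an intermediate example, if \(\Gamma\) is a \(4\)-cycle with vertices \(a,b,c,d\) in cyclic order, then

\[G_{\Gamma}=\langle a,c\rangle\times\langle b,d\rangle\cong F_{2}\times F_{2},\]

since the non-adjacent pairs \(a,c\) and \(b,d\) generate free subgroups, while every generator of the first factor commutes with every generator of the second.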
Each induced subgraph \(\Lambda\subset\Gamma\) yields an injective homomorphism \(G_{\Lambda}\to G_{\Gamma}\) whose image is called a _standard parabolic subgroup_ of type \(\Lambda\). A _parabolic subgroup_ of \(G_{\Gamma}\) is a conjugate of some standard parabolic subgroup. A _standard coset_ of type \(\Lambda\) is a left coset of the form \(gG_{\Lambda}\). A _standard abelian subgroup_ of \(G_{\Gamma}\) is a standard parabolic subgroup whose type is a complete subgraph. In particular, the trivial subgroup is a standard abelian subgroup, whose type is the empty set. A _standard flat_ of type \(\Lambda\) in \(G_{\Gamma}\) is a left coset of a standard abelian subgroup of type \(\Lambda\), and its _dimension_ is the rank of this abelian subgroup. One-dimensional standard flats are also called _standard lines_. Zero-dimensional standard flats are exactly the elements of \(G_{\Gamma}\).

The _star_ of a vertex \(v\) in \(\Gamma\), denoted by \(\operatorname{st}(v)\), is the induced subgraph spanned by \(v\) and all its adjacent vertices. Its _link_ \(\operatorname{lk}(v)\) is the induced subgraph spanned by all vertices that are adjacent to \(v\). The _orthogonal_ \(\Lambda^{\perp}\) of an induced subgraph \(\Lambda\subseteq\Gamma\) is the induced subgraph of \(\Gamma\) spanned by all vertices in \(V\Gamma\setminus V\Lambda\) which are adjacent to every vertex of \(\Lambda\). For example, \(\operatorname{lk}(v)=\{v\}^{\perp}\). The following proposition was proved by Charney, Crisp and Vogtmann, building on work of Godelle [1].

**Proposition 2.1** ([1, Proposition 2.2]).: _Let \(\Gamma\) be a finite simplicial graph._

1. _For every induced subgraph_ \(\Lambda\subseteq\Gamma\)_, the normalizer of_ \(G_{\Lambda}\) _in_ \(G_{\Gamma}\) _is equal to_ \(G_{\Lambda\cup\Lambda^{\perp}}\)_._
2. _Let_ \(\Lambda_{1},\Lambda_{2}\subseteq\Gamma\) _be induced subgraphs. If_ \(G_{\Lambda_{1}}\) _and_ \(G_{\Lambda_{2}}\) _are conjugate, then_ \(\Lambda_{1}=\Lambda_{2}\)_._
3. _Given two induced subgraphs_ \(\Lambda_{1}\) _and_ \(\Lambda_{2}\) _in_ \(\Gamma\)_, if_ \(gG_{\Lambda_{1}}g^{-1}\subset G_{\Lambda_{2}}\) _for some_ \(g\in G_{\Gamma}\)_, then there exists_ \(h\in G_{\Lambda_{2}}\) _such that_ \(gG_{\Lambda_{1}}g^{-1}=hG_{\Lambda_{1}}h^{-1}\)_._

Two standard flats \(F_{1}=g_{1}G_{\Lambda_{1}}\) and \(F_{2}=g_{2}G_{\Lambda_{2}}\) in \(G_{\Gamma}\) are _parallel_ if \(\Lambda_{1}=\Lambda_{2}\) and \(g_{1}^{-1}g_{2}\) belongs to the normalizer of \(G_{\Lambda_{1}}\) in \(G_{\Gamma}\). Note that this definition does not depend on the choice of coset representatives \(g_{1}\) and \(g_{2}\). The _parallel set_ of a standard flat \(F\), denoted \(P_{F}\), is the union of all standard flats that are parallel to \(F\). By Proposition 2.1(1), if \(F\) has type \(\Lambda\), then \(P_{F}=gG_{\Lambda\cup\Lambda^{\perp}}\) for some \(g\in G_{\Gamma}\). The splitting \(G_{\Lambda\cup\Lambda^{\perp}}=G_{\Lambda}\times G_{\Lambda^{\perp}}\) gives a splitting of the parallel set \(P_{F}\cong F\times F^{\perp}\); moreover, this splitting gives a _parallelism map_ \(p:F_{1}\to F_{2}\) between any two standard flats that are parallel.

The universal cover of the Salvetti complex. The group \(G_{\Gamma}\) acts geometrically, i.e. properly discontinuously and cocompactly, on a CAT(0) cube complex \(X_{\Gamma}\), defined as follows. The \(1\)-skeleton of \(X_{\Gamma}\) is the Cayley graph \(C_{\Gamma}\) of \(G_{\Gamma}\) for its standard generating set \(S\).
It is equipped with the usual orientation and labeling of edges in a Cayley graph by elements of \(S\). We then glue a square to each \(4\)-cycle in the Cayley graph to obtain the 2-skeleton of \(X_{\Gamma}\), then attach a 3-cube to each copy of the boundary of a 3-cube (with the obvious attaching maps), and more generally, by induction on \(k\), we glue a \(k\)-cube on each copy of the boundary of a \(k\)-cube. This process terminates after finitely many steps and results in a finite-dimensional CAT(0) cube complex [1, Section 3], on which \(G_{\Gamma}\) acts geometrically. We will identify \(G_{\Gamma}\) with the 0-skeleton of \(X_{\Gamma}\).

The _Salvetti complex_ of \(G_{\Gamma}\), introduced in [10] and denoted \(S_{\Gamma}\), is defined to be the quotient \(G_{\Gamma}\backslash X_{\Gamma}\) - and \(X_{\Gamma}\) is the universal cover of \(S_{\Gamma}\). Note that the 2-skeleton of \(S_{\Gamma}\) is exactly the presentation complex of \(G_{\Gamma}\). Hence we label each edge of \(S_{\Gamma}\) by a generator of \(G_{\Gamma}\) and orient each edge. Each complete subgraph \(\Delta\) of \(\Gamma\) with \(n\) vertices gives a copy of the \(n\)-dimensional torus in \(S_{\Gamma}\) as a subcomplex, and \(S_{\Gamma}\) is a union of these torus subcomplexes.

### Extension graphs and right-angled buildings

Extension graphs. The following notion was introduced by Kim and Koberda.

**Definition 2.2** (Extension graph [11]).: _Let \(\Gamma\) be a finite simplicial graph. The extension graph \(\Gamma^{e}\) of \(\Gamma\) is the simplicial graph whose vertices are the infinite cyclic parabolic subgroups of \(G_{\Gamma}\), where two vertices are adjacent if the corresponding parabolic subgroups commute._

We emphasize that, unless \(\Gamma\) is a complete graph, the extension graph \(\Gamma^{e}\) is infinite and not locally finite. The conjugation action of \(G_{\Gamma}\) on itself induces an action of \(G_{\Gamma}\) on \(\Gamma^{e}\) by graph automorphisms.

Every standard flat \(F\subseteq G_{\Gamma}\) determines a (finite) complete subgraph \(\Delta(F)\subseteq\Gamma^{e}\), in the following way: \(\operatorname{Stab}_{G_{\Gamma}}(F)\) is a parabolic subgroup, generated by a finite set of pairwise commuting infinite cyclic parabolic subgroups \(\{Z_{1},\ldots,Z_{k}\}\), and we let \(\Delta(F)\) be the complete subgraph of \(\Gamma^{e}\) spanned by the vertices corresponding to \(Z_{1},\ldots,Z_{k}\). The number \(k\) of vertices in \(\Delta(F)\) is equal to the dimension of \(F\). Given \(\mathsf{v}\in V\Gamma^{e}\), a _\(\mathsf{v}\)-line_ in \(G_{\Gamma}\) is a standard line \(\ell\) with \(\Delta(\ell)=\{\mathsf{v}\}\).

Note that pairwise distinct cyclic parabolic subgroups \(P_{i}=g_{i}\langle v_{i}\rangle g_{i}^{-1}\), for \(1\leq i\leq n\), mutually commute if and only if \(v_{i}\) and \(v_{j}\) are adjacent in \(\Gamma\) for \(i\neq j\) and there exists \(g\in G_{\Gamma}\) such that for every \(i\in\{1,\ldots,n\}\), one has \(P_{i}=g\langle v_{i}\rangle g^{-1}\). This is a consequence of Proposition 2.1 and an induction on \(n\). Thus, if \(K\) is a complete subgraph of \(\Gamma^{e}\), then there exists a standard flat \(F=g\langle v_{1},\ldots,v_{n}\rangle\) such that \(\Delta(F)=K\). If two standard flats \(F_{1},F_{2}\) satisfy \(\Delta(F_{1})=\Delta(F_{2})\), then they are parallel. Indeed, if \(F_{i}=g_{i}G_{\Lambda_{i}}\) for every \(i\in\{1,2\}\), then \(\Delta(F_{1})=\Delta(F_{2})\) implies \(g_{1}G_{\Lambda_{1}}g_{1}^{-1}=g_{2}G_{\Lambda_{2}}g_{2}^{-1}\).
Then Proposition 2.1 implies that \(F_{1}\) and \(F_{2}\) are parallel. As a consequence, the map \(\Delta\) from the collection of standard flats to the collection of cliques of \(\Gamma^{e}\) induces a bijection between maximal standard flats in \(G_{\Gamma}\) and maximal cliques in \(\Gamma^{e}\).

Thus we can define \(\Gamma^{e}\) alternatively as follows. Vertices of \(\Gamma^{e}\) are in one-to-one correspondence with parallelism classes of standard lines in \(G_{\Gamma}\). Two vertices of \(\Gamma^{e}\) are adjacent if and only if there are representatives of the associated parallelism classes that together span a 2-dimensional standard flat.

Right-angled buildings. Recall that an _interval_ in a partially ordered set \((\mathcal{P},\leq)\) is a subset of the form \(I_{a,b}=\{x\in\mathcal{P}\mid a\leq x\leq b\}\) for some \(a,b\in\mathcal{P}\) with \(a\leq b\). If every interval in \(\mathcal{P}\) is a Boolean lattice of finite rank, then there exists a unique (up to isomorphism) cube complex \(|\mathcal{P}|\) whose poset of cubes is isomorphic to the poset of intervals of \(\mathcal{P}\), see e.g. [1, Proposition A.38]. We call \(|\mathcal{P}|\) the _cubical realization_ of \(\mathcal{P}\).

We will be particularly interested in the case when \(\mathcal{P}\) is the poset of standard flats in \(G_{\Gamma}\), ordered by inclusion. Note that if we take \(g_{1}G_{\Lambda_{1}}\leq g_{2}G_{\Lambda_{2}}\) in \(\mathcal{P}\), then we can assume that \(g_{2}=g_{1}\) up to changing coset representatives, and the interval between \(g_{1}G_{\Lambda_{1}}\) and \(g_{2}G_{\Lambda_{2}}\) consists of all cosets of the form \(g_{1}G_{\Lambda}\) with \(\Lambda_{1}\subset\Lambda\subset\Lambda_{2}\). In particular it is a Boolean lattice of finite rank. The following notion was introduced by Davis in [1].

**Definition 2.3** (Right-angled building).: _Let \(\Gamma\) be a finite simplicial graph. The right-angled building of \(G_{\Gamma}\), denoted by \(\mathbb{B}_{\Gamma}\), is the cubical realization of the poset of standard flats in \(G_{\Gamma}\)._

_Remark 2.4_.: We comment on the terminology "right-angled building", even though we will not explicitly need the connection to buildings in the rest of the paper. Davis [1] explained that we can view the right-angled Artin group \(G_{\Gamma}\) as a building modeled on a reflection group \(W_{\Gamma}\) which is the right-angled Coxeter group with the same defining graph. Moreover, \(\mathbb{B}_{\Gamma}\) is a geometric model witnessing some classical geometric properties of buildings, for example, any two points in \(\mathbb{B}_{\Gamma}\) are contained in a common "apartment" which is a convex subcomplex isomorphic to the canonical CAT(0) cube complex (called the _Davis complex_) associated with \(W_{\Gamma}\). We refer to [1, Section 5] for a more detailed discussion.

By [1, Corollary 11.7] (attributed to Meier), the cube complex \(\mathbb{B}_{\Gamma}\) is CAT(0). There is a one-to-one correspondence between \(k\)-cubes in \(\mathbb{B}_{\Gamma}\) and intervals of the form \(I_{F_{1},F_{2}}\) where \(F_{1}\subseteq F_{2}\) are two standard flats in \(G_{\Gamma}\) with \(\dim(F_{2})-\dim(F_{1})=k\). In particular, vertices of \(\mathbb{B}_{\Gamma}\) correspond to standard flats in \(G_{\Gamma}\). The _rank_ of a vertex \(v\in V\mathbb{B}_{\Gamma}\) is the dimension of the corresponding standard flat. We label every vertex of \(\mathbb{B}_{\Gamma}\) by the type of the corresponding standard flat.
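To get a concrete feel for this poset, the following minimal sketch (plain Python; the example graph and helper names are ours, not notation from the text) enumerates the cliques of a small defining graph \(\Gamma\). These cliques are exactly the types of standard abelian subgroups, and a clique \(\Lambda\) with \(k\) vertices labels, at each rank \(0\) vertex \(\{g\}\) of \(\mathbb{B}_{\Gamma}\), the \(k\)-cube realizing the interval \(I_{\{g\},gG_{\Lambda}}\).

```python
from itertools import combinations

# Gamma = the path a-b-c, encoded as an adjacency dictionary (our choice of
# example).  Cliques of Gamma are the types of standard abelian subgroups.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}

def is_clique(vertices):
    # A set of vertices is a clique if all pairs are adjacent.
    return all(v in adj[u] for u, v in combinations(vertices, 2))

cliques = [set(s) for r in range(len(adj) + 1)
           for s in combinations(sorted(adj), r) if is_clique(s)]

# Group the cliques by size: a clique of size k gives, at every rank 0
# vertex of the building, an incident k-cube.
by_dim = {}
for c in cliques:
    by_dim.setdefault(len(c), []).append(sorted(c))
for dim in sorted(by_dim):
    print(dim, by_dim[dim])
# Output:
# 0 [[]]                     (the rank 0 vertex itself)
# 1 [['a'], ['b'], ['c']]    (edges toward the three rank 1 vertices)
# 2 [['a', 'b'], ['b', 'c']] (squares coming from the two maximal flats)
```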
The vertex set of \(\mathbb{B}_{\Gamma}\) inherits a partial order from the poset of standard flats. There is an induced action of \(G_{\Gamma}\) on \(\mathbb{B}_{\Gamma}\) by cubical automorphisms, which preserves the labellings of vertices. This action is cocompact, but not proper - the stabilizer of a cube is isomorphic to \(\mathbb{Z}^{n}\) where \(n\) is the rank of the minimal vertex in this cube.

Recall that the _join_ of two simplicial graphs \(\Gamma_{1}\) and \(\Gamma_{2}\), denoted \(\Gamma_{1}\circ\Gamma_{2}\), is the simplicial graph obtained from the disjoint union \(\Gamma_{1}\sqcup\Gamma_{2}\) by adding an edge between any vertex in \(\Gamma_{1}\) and any vertex in \(\Gamma_{2}\). It follows from the definition that if \(\Gamma=\Gamma_{1}\circ\Gamma_{2}\), then \(\mathbb{B}_{\Gamma}\cong\mathbb{B}_{\Gamma_{1}}\times\mathbb{B}_{\Gamma_{2}}\).

**Lemma 2.5**.: _Every cubical automorphism of \(\mathbb{B}_{\Gamma}\) preserves ranks of vertices._

Proof.: Let \(x\in V\mathbb{B}_{\Gamma}\) be a vertex. Recall that the _link_ of \(x\) in \(\mathbb{B}_{\Gamma}\), denoted by \(\operatorname{lk}(x,\mathbb{B}_{\Gamma})\), is the simplicial complex formed by intersecting an \(\varepsilon\)-sphere around \(x\) with \(\mathbb{B}_{\Gamma}\), for \(\varepsilon>0\) small enough. In particular, there is a bijection between vertices of \(\mathbb{B}_{\Gamma}\) which are adjacent to \(x\) and vertices in \(\operatorname{lk}(x,\mathbb{B}_{\Gamma})\). Let \(\operatorname{lk}^{+}(x,\mathbb{B}_{\Gamma})\) (resp. \(\operatorname{lk}^{-}(x,\mathbb{B}_{\Gamma})\)) be the full subcomplex of \(\operatorname{lk}(x,\mathbb{B}_{\Gamma})\) spanned by vertices which are larger than \(x\) (resp. smaller than \(x\)). Then \(\operatorname{lk}(x,\mathbb{B}_{\Gamma})\) is the join of \(\operatorname{lk}^{+}(x,\mathbb{B}_{\Gamma})\) and \(\operatorname{lk}^{-}(x,\mathbb{B}_{\Gamma})\). Note that \(\operatorname{lk}^{+}(x,\mathbb{B}_{\Gamma})\) is a finite complex.

Suppose \(x\) has rank \(k\) and corresponds to a standard flat \(F\) of dimension \(k\). Vertices in \(\operatorname{lk}^{-}(x,\mathbb{B}_{\Gamma})\) are in one-to-one correspondence with co-dimension \(1\) standard flats in \(F\). Thus these vertices can be divided into \(k\) different classes \(V_{1},\ldots,V_{k}\), corresponding to the \(k\) parallelism classes of co-dimension \(1\) standard flats in \(F\). Each \(V_{i}\) is an infinite set. Moreover, \(\operatorname{lk}^{-}(x,\mathbb{B}_{\Gamma})\) is a join of \(k\) discrete sets \(V_{1}*V_{2}*\cdots*V_{k}\). Thus two vertices of \(\mathbb{B}_{\Gamma}\) with different ranks must have non-isomorphic links in \(\mathbb{B}_{\Gamma}\). Now the lemma follows.

### Automorphisms of the extension graph and the right-angled building

We endow \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) and \(\operatorname{Aut}(\Gamma^{e})\) with the compact-open topology, making them Polish groups. A bijection \(f:G_{\Gamma}\to G_{\Gamma}\) is _flat-preserving_ if both \(f\) and \(f^{-1}\) send any standard flat bijectively onto another standard flat. Let \(\operatorname{Bij}_{\operatorname{FP}}(G_{\Gamma})\) be the group of flat-preserving bijections of \(G_{\Gamma}\), again equipped with the compact-open topology, or equivalently the topology of pointwise convergence, which makes it a Polish group. Lemma 2.5 implies that for every \(f\in\operatorname{Aut}(\mathbb{B}_{\Gamma})\), the restriction of \(f\) to the set of rank \(0\) vertices is a flat-preserving bijection of \(G_{\Gamma}\).
Conversely, any flat-preserving bijection of \(G_{\Gamma}\) induces an automorphism of the poset of standard flats, hence an automorphism of \(\mathbb{B}_{\Gamma}\). This yields an isomorphism of Polish groups \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\simeq\operatorname{Bij}_{\operatorname{FP}}(G_{\Gamma})\). We now explain how to identify \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) with \(\operatorname{Aut}(\Gamma^{e})\) when \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), as will be recorded in Lemma 2.6 below.

From \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) to \(\operatorname{Aut}(\Gamma^{e})\). We define a map \(\Phi:\operatorname{Aut}(\mathbb{B}_{\Gamma})\to\operatorname{Aut}(\Gamma^{e})\) as follows. Take \(\alpha\in\operatorname{Aut}(\mathbb{B}_{\Gamma})\). The restriction of \(\alpha\) to the set of rank \(0\) vertices of \(\mathbb{B}_{\Gamma}\) is a flat-preserving bijection \(g:G_{\Gamma}\to G_{\Gamma}\). We claim that \(g\) sends parallel standard lines to parallel standard lines. Indeed, let \(\ell_{1}\) and \(\ell_{2}\) be parallel standard lines. If \(\ell_{1}\) and \(\ell_{2}\) are contained in a common \(2\)-dimensional standard flat, then the claim is obvious. In general, it follows from Proposition 2.1 that there is a finite chain of standard lines starting from \(\ell_{1}\) and ending at \(\ell_{2}\) such that consecutive members in the chain are parallel and contained in a common \(2\)-dimensional standard flat. The claim thus follows.

The above claim implies that \(g\) induces a bijection of the set of parallelism classes of standard lines of \(G_{\Gamma}\). Recall that vertices of \(\Gamma^{e}\) correspond to parallelism classes of standard lines. Thus \(g\) induces a bijection of \(V\Gamma^{e}\), and by construction this bijection preserves adjacency. We let \(\Phi(\alpha)\) be this automorphism of \(\Gamma^{e}\).

From \(\operatorname{Aut}(\Gamma^{e})\) to \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\). Conversely, assuming that \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), we now build a map \(\Theta:\operatorname{Aut}(\Gamma^{e})\to\operatorname{Aut}(\mathbb{B}_{\Gamma})\), which will be an inverse to \(\Phi\). Let \(\alpha\in\operatorname{Aut}(\Gamma^{e})\). Let \(p\in G_{\Gamma}\), and let \(F_{1},\dots,F_{n}\) be the maximal standard flats that contain \(p\). Each standard flat \(F_{i}\) corresponds to a maximal clique \(C_{i}\) in \(\Gamma^{e}\), and \(\alpha(C_{i})\) in turn corresponds to a unique maximal standard flat \(F^{\prime}_{i}\) of \(G_{\Gamma}\). As \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), it follows from [10, Lemmas 4.12 and 4.17] that \(F^{\prime}_{1}\cap\dots\cap F^{\prime}_{n}\) is a singleton \(\{p^{\prime}\}\). Letting \(\alpha_{*}(p)=p^{\prime}\) defines a map \(\alpha_{*}:G_{\Gamma}\to G_{\Gamma}\).

By construction, the \(\alpha_{*}\)-image of any maximal standard flat is contained in a maximal standard flat. By considering \((\alpha^{-1})_{*}\), we see that in fact \(\alpha_{*}\) sends every maximal standard flat bijectively onto another maximal standard flat. We claim that in fact \(\alpha_{*}\) is flat-preserving. For this it is enough to prove that it sends standard lines to standard lines - an inductive argument then shows it for standard flats of higher dimension. As \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), every vertex \(v\in V\Gamma\) is the intersection of the maximal cliques in \(\Gamma\) containing \(v\) (otherwise the link of \(v\) would be contained in the star of another vertex).
As standard flats containing a standard line of type \(v\) are in one-to-one correspondence with cliques in \(\Gamma\) that contain \(v\), every standard line \(\ell\) is the intersection of all the maximal standard flats containing \(\ell\). Hence \(\alpha_{*}\) and \((\alpha^{-1})_{*}\) send standard lines bijectively onto standard lines, as claimed. We now let \(\Theta(\alpha)=\alpha_{*}\), viewed as an automorphism of \(\mathbb{B}_{\Gamma}\).

One readily verifies that \(\Phi\) and \(\Theta\) are continuous group homomorphisms, and that they are inverses of each other, which we record in the following statement.

**Lemma 2.6**.: _Suppose that \(|\operatorname{Out}(G_{\Gamma})|<+\infty\). Then \(\Phi:\operatorname{Aut}(\mathbb{B}_{\Gamma})\to\operatorname{Aut}(\Gamma^{e})\) is an isomorphism of Polish groups with inverse \(\Theta\), and \(\Phi(G_{\Gamma})=G_{\Gamma}\) under the natural embeddings of \(G_{\Gamma}\) in \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) and in \(\operatorname{Aut}(\Gamma^{e})\)._

_Remark 2.7_.: In the sequel, the last part of the lemma will be used in the following way. Letting \(G_{\Gamma}\times G_{\Gamma}\) act by left/right multiplication on both \(\operatorname{Aut}(\mathbb{B}_{\Gamma})\) and \(\operatorname{Aut}(\Gamma^{e})\), the map \(\Phi\) is \((G_{\Gamma}\times G_{\Gamma})\)-equivariant.

## 3 Blow-up buildings and a quasi-isometry criterion

In this section, we review work of Kleiner and the second named author [10] and use it to provide a criterion ensuring that a group is quasi-isometric to a right-angled Artin group \(G=G_{\Gamma}\) with \(|\operatorname{Out}(G)|<+\infty\), see Theorem 3.2.

Factor actions. Take a vertex \(\mathsf{v}\in V\Gamma^{e}\), and let \(P_{\mathsf{v}}\) be the union of all \(\mathsf{v}\)-lines in \(G_{\Gamma}\). Proposition 2.1 implies that \(P_{\mathsf{v}}\) is a left coset of the form \(gG_{\operatorname{st}(v)}\) for some \(v\in V\Gamma\). Let \(Z_{\mathsf{v}}\) be the collection of left cosets of \(G_{\operatorname{lk}(v)}\) in \(P_{\mathsf{v}}\) and \(\mathcal{L}_{\mathsf{v}}\) be the collection of \(\mathsf{v}\)-lines (i.e. left cosets of \(G_{\mathsf{v}}\) in \(P_{\mathsf{v}}\)). There are natural projections \(\pi_{1}:P_{\mathsf{v}}\to Z_{\mathsf{v}}\) and \(\pi_{2}:P_{\mathsf{v}}\to\mathcal{L}_{\mathsf{v}}\), and we can identify \(P_{\mathsf{v}}\) and \(Z_{\mathsf{v}}\times\mathcal{L}_{\mathsf{v}}\) via the bijection \((\pi_{1},\pi_{2})\).

Let \(H\) be a group. Any action \(\alpha:H\to\operatorname{Aut}(\mathbb{B}_{\Gamma})\) induces an \(H\)-action by flat-preserving bijections on \(G_{\Gamma}\), as well as an \(H\)-action on \(\Gamma^{e}\) by graph automorphisms, as explained in Section 2.3. Let \(\mathsf{v}\in V\Gamma^{e}\) and let \(H_{\mathsf{v}}\) be its \(H\)-stabilizer. Then \(H_{\mathsf{v}}\) preserves \(P_{\mathsf{v}}\), and acts on it by flat-preserving bijections sending \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines. As a consequence, the \(H_{\mathsf{v}}\)-action on \(P_{\mathsf{v}}\) preserves the product decomposition \(Z_{\mathsf{v}}\times\mathcal{L}_{\mathsf{v}}\) described above. In particular, there is an induced action \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\to\operatorname{Bij}(Z_{\mathsf{v}})\), where \(\operatorname{Bij}(Z_{\mathsf{v}})\) is the group of all bijections of \(Z_{\mathsf{v}}\). The following notion, from [10, Definition 5.32], will be crucial in the present work.
**Definition 3.1** (Factor action).: _Given an action \(H\curvearrowright\mathbb{B}_{\Gamma}\) of a group \(H\) by cubical automorphisms, and \(\mathsf{v}\in V\Gamma^{e}\), the induced action \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\to\operatorname{Bij}(Z_{\mathsf{v}})\) is called the factor action of \(\alpha\) associated to \(\mathsf{v}\)._

The goal of the present section is to derive the following theorem from the work of Kleiner and the second named author [10].

**Theorem 3.2**.: _Let \(G=G_{\Gamma}\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\). Let \(H\) be a group. Assume that \(H\) has an action \(\alpha:H\curvearrowright G\) by flat-preserving bijections satisfying the following conditions:_

1. _the action has finitely many orbits and finite stabilizers;_
2. _for every_ \(\mathsf{v}\in V\Gamma^{e}\)_, the factor action_ \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) _is conjugate to an action on_ \(\mathbb{Z}\) _by uniform quasi-isometries._

_Then \(H\) is finitely generated and quasi-isometric to \(G\)._

_Moreover, \(H\) acts geometrically (i.e. properly and cocompactly by automorphisms) on a CAT(0) cube complex \(Y\) with an \(H\)-equivariant surjective cubical map \(Y\to\mathbb{B}_{\Gamma}\) (for the \(H\)-action on \(\mathbb{B}_{\Gamma}\) induced by \(\alpha\)) such that the preimage of any rank \(k\) vertex is isomorphic to the Euclidean space \(\mathbb{E}^{k}\) with its usual cubulation._

The cube complex \(Y\) has a more explicit description, see Section 3.1. It is closely related to the canonical cube complex \(X_{\Gamma}\) associated with \(G_{\Gamma}\) (the universal cover of its Salvetti complex), in the sense that \(Y\) is obtained from \(X_{\Gamma}\) by replacing each standard flat in \(X_{\Gamma}\) by what we call a branched flat, and gluing these branched flats in a pattern similar to the way standard flats in \(X_{\Gamma}\) are glued together. In the rest of this section, we will first give a more precise description of blow-up buildings, then prove Theorem 3.2.

### Blow-up buildings

**Definition 3.3** (Branched lines and flats).: _A metric simplicial graph \(\beta\) is a branched line if there exists \(C>0\) such that \(\beta\) is obtained from \(\mathbb{R}\) (equipped with its simplicial structure given by subdividing at integer points) by gluing, at every integer \(n\in\mathbb{Z}\), at most \(C\) edges of length \(1\), denoted \(e_{n,1},\ldots,e_{n,k}\), attaching the origin of each \(e_{n,i}\) to \(n\)._

_Valence one vertices of \(\beta\) are called the tips of \(\beta\), and their set is denoted by \(t(\beta)\). The copy of \(\mathbb{R}\) is called the core of \(\beta\)._

_A branched flat \(F\) is a product of finitely many branched lines \(\beta_{1},\ldots,\beta_{k}\). A tip of \(F\) is a tuple \((t_{1},\ldots,t_{k})\), where each \(t_{i}\) is a tip of \(\beta_{i}\); we denote their set by \(t(F)\). The core of \(F\) is the product of the cores of the \(\beta_{i}\)._

The following construction is a special case of [11, Sections 5.2 and 5.3], see also [17, Section 3.1].

**Definition 3.4** (Blow-up datum).: _A blow-up datum is a family of surjections \((g_{\ell}:\ell\to\mathbb{Z})_{\ell}\), where \(\ell\) varies over the set of standard lines of \(G_{\Gamma}\), such that_

1. _whenever_ \(\ell_{1},\ell_{2}\) _are parallel, with parallelism map_ \(p:\ell_{2}\to\ell_{1}\)_, then_ \(g_{\ell_{2}}=g_{\ell_{1}}\circ p\)_;_
2.
_for every_ \(\ell\)_, there exists_ \(C_{\ell}>0\) _such that_ \(|g_{\ell}^{-1}(n)|\leq C_{\ell}\) _for every_ \(n\in\mathbb{Z}\)_._

_We say that the blow-up datum \((g_{\ell})_{\ell}\) is uniformly locally finite if the constant \(C_{\ell}\) can be chosen independent of \(\ell\)._

Recall that in our convention, a standard line is defined as a subset of \(G_{\Gamma}\) - in particular it is discrete. Associated to any blow-up datum \((g_{\ell})_{\ell}\) is a family \((\beta_{\mathsf{v}})_{\mathsf{v}\in V\Gamma^{e}}\) of branched lines, and a family of maps \((f_{\ell})_{\ell}\), defined as follows. For every \(\mathsf{v}\in V\Gamma^{e}\), we first choose a \(\mathsf{v}\)-line \(\ell_{\mathsf{v}}\). We then let \(\beta_{\mathsf{v}}\) be the simplicial graph \(((\ell_{\mathsf{v}}\times[0,1])\sqcup\mathbb{R})/\!\sim\), where \((x,1)\sim g_{\ell_{\mathsf{v}}}(x)\) for any \(x\in\ell_{\mathsf{v}}\). The inclusion map \(\ell_{\mathsf{v}}\to\beta_{\mathsf{v}}\) yields a bijection \(f_{\ell_{\mathsf{v}}}:\ell_{\mathsf{v}}\to t(\beta_{\mathsf{v}})\). Now, for every standard line \(\ell\), denoting by \(\mathsf{v}\) the type of \(\ell\), there is a parallelism map \(p:\ell\to\ell_{\mathsf{v}}\), and we let \(f_{\ell}=f_{\ell_{\mathsf{v}}}\circ p\). We say that the family \((f_{\ell})_{\ell}\) of bijections constructed in this way is _adapted_ to the blow-up datum \((g_{\ell})_{\ell}\).

Blow-up buildings. Let \((g_{\ell})_{\ell}\) be a blow-up datum, and let \((f_{\ell})_{\ell}\) be an adapted family of bijections. We now associate to \((g_{\ell}),(f_{\ell})\) a cube complex \(Y\), as follows. First, to every standard flat \(F\subseteq G_{\Gamma}\), we associate a space \(\beta_{F}\) as follows:

1. if \(\Delta(F)=\emptyset\) (i.e. \(F\) is a \(0\)-dimensional standard flat), we let \(\beta_{F}\) be a point;
2. if \(\Delta(F)\neq\emptyset\), writing \(F=\prod_{\mathsf{v}\in V(\Delta(F))}\ell_{\mathsf{v}}\), where each \(\ell_{\mathsf{v}}\subset F\) is a standard \(\mathsf{v}\)-line, we let \(\beta_{F}=\prod_{\mathsf{v}\in V(\Delta(F))}\beta_{\mathsf{v}}\).

Whenever \(F^{\prime}\subseteq F\) are two standard flats, we can write

\[F^{\prime}=\prod_{\mathsf{v}\in V(\Delta(F^{\prime}))}\ell_{\mathsf{v}}\times\prod_{\mathsf{v}\in V(\Delta(F))\setminus V(\Delta(F^{\prime}))}\{x_{\mathsf{v}}\},\]

where each \(x_{\mathsf{v}}\) is a vertex in \(\ell_{\mathsf{v}}\). Then we define an isometric embedding \(\beta_{F^{\prime}}\hookrightarrow\beta_{F}\) as follows:

\[\beta_{F^{\prime}}=\prod_{\mathsf{v}\in V(\Delta(F^{\prime}))}\beta_{\mathsf{v}}\cong\prod_{\mathsf{v}\in V(\Delta(F^{\prime}))}\beta_{\mathsf{v}}\times\prod_{\mathsf{v}\in V(\Delta(F))\setminus V(\Delta(F^{\prime}))}\{f_{\ell_{\mathsf{v}}}(x_{\mathsf{v}})\}\hookrightarrow\prod_{\mathsf{v}\in V(\Delta(F))}\beta_{\mathsf{v}}=\beta_{F}.\]

**Definition 3.5** (Blow-up building).: _The space \(Y\) obtained from the disjoint union of the spaces \(\beta_{F}\) by identifying \(\beta_{F^{\prime}}\) as a subset of \(\beta_{F}\) whenever \(F^{\prime}\subseteq F\), according to the above isometric embeddings, is called the blow-up building associated to \((g_{\ell})_{\ell}\), \((f_{\ell})_{\ell}\)._

### Properties of blow-up buildings

We now define a projection map \(\pi:Y\to\mathbb{B}_{\Gamma}\).
Note that for each standard line \(\ell\subset G\), we can define a map \(\pi:\beta_{\ell}\to\mathbb{B}_{\Gamma}\) by sending the core of \(\beta_{\ell}\) to the rank \(1\) vertex in \(\mathbb{B}_{\Gamma}\) associated with \(\ell\), sending each vertex in \(t(\beta_{\ell})\) to the associated rank \(0\) vertex in \(\mathbb{B}_{\Gamma}\), and extending linearly. More generally, let \(F=\prod_{\mathsf{v}\in V(\Delta(F))}\ell_{\mathsf{v}}\) be a standard flat, where each \(\ell_{\mathsf{v}}\subset F\) is a standard \(\mathsf{v}\)-line, and let \(\beta_{F}=\prod_{\mathsf{v}\in V(\Delta(F))}\beta_{\mathsf{v}}\) be the associated branched flat. We define \(\pi:\beta_{F}\to\mathbb{B}_{\Gamma}\) as follows. Every vertex \(x\in\beta_{F}\) lies in the core of a unique subcomplex of the form \(\beta_{F^{\prime}}\), with \(F^{\prime}\subseteq F\), and we let \(\pi(x)\) be the vertex of \(\mathbb{B}_{\Gamma}\) associated to \(F^{\prime}\). These maps defined on each \(\beta_{F}\) are compatible with the gluing pattern, hence induce a map \(\pi:Y\to\mathbb{B}_{\Gamma}\). Note that the restriction of \(\pi:Y\to\mathbb{B}_{\Gamma}\) to each cube is either an isometry or collapses the cube to a cube of smaller dimension (by collapsing some of the interval factors). We say that a vertex \(y\in Y\) has _rank_ \(k\) if \(\pi(y)\) has rank \(k\). We record the following properties of \(Y\).

1. The natural map \(\beta_{F}\to Y\) is injective for each standard flat \(F\subset G\), see [11, Lemma 5.16]. The image of this embedding is called a _standard branched flat_. From now on we slightly abuse notation and again denote by \(\beta_{F}\) its image in \(Y\). The core of a standard branched flat is called a _standard flat_. The map sending a standard flat \(F\) to the core of \(\beta_{F}\) is a one-to-one correspondence between standard flats in \(G\) and standard flats in \(Y\).
2. Given any two standard flats \(F_{1},F_{2}\) of \(G\), one has \(\beta_{F_{1}}\cap\beta_{F_{2}}=\beta_{F_{1}\cap F_{2}}\), see [11, Lemma 8.1]. Thus if the cores of \(\beta_{F_{1}}\) and \(\beta_{F_{2}}\) have nontrivial intersection, then \(\beta_{F_{1}}=\beta_{F_{2}}\): indeed, if \(F^{\prime}\subsetneq F\), then \(\beta_{F^{\prime}}\) does not intersect the core of \(\beta_{F}\); so we must have \(F_{1}\cap F_{2}=F_{1}=F_{2}\). In particular, different standard flats in \(Y\) are disjoint.
3. There exists a unique injective map \(f:G\to Y\) whose restriction to any standard line \(\ell\) coincides with \(f_{\ell}\). This map \(f\) sends the vertex set of each standard flat of \(G\) bijectively to the tips of a standard branched flat. The image of \(f\) is exactly the set of \(0\)-dimensional standard flats in \(Y\).

In fact, the space \(Y\) is a CAT(0) cube complex by [11].

**Lemma 3.6** ([11, Corollary 5.30]).: _Let \((g_{\ell})_{\ell}\) be a uniformly locally finite blow-up datum, let \((f_{\ell})_{\ell}\) be an adapted family of bijections, and let \(Y\) be the associated blow-up building._

_Then \(G\) and \(Y\) are quasi-isometric._

Each \(\mathsf{v}\)-line \(\ell\subset G_{\Gamma}\) has a canonical identification with \(Z_{\mathsf{v}}\). So the factor action \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) can also be viewed as an action \(\alpha_{\mathsf{v},\ell}:H_{\mathsf{v}}\curvearrowright\ell\).

**Definition 3.7**.: _Let \((g_{\ell})_{\ell}\) be a blow-up datum, and \((f_{\ell})_{\ell}\) be an adapted family of bijections._
_Let \(\alpha:H\curvearrowright G_{\Gamma}\) be an action of a group \(H\) by flat-preserving bijections. We say that \(\alpha\) and \((g_{\ell}),(f_{\ell})\) are compatible if there exists a family of isometric actions \((\alpha^{\prime}_{\mathsf{v}}:H_{\mathsf{v}}\to\operatorname{Isom}(\beta_{\mathsf{v}}))_{\mathsf{v}\in V\Gamma^{e}}\), such that_

1. _for each_ \(\mathsf{v}\)_-line_ \(\ell\)_, the map_ \(f_{\ell}:\ell\to t(\beta_{\mathsf{v}})\subset\beta_{\mathsf{v}}\) _is_ \((\alpha_{\mathsf{v},\ell},\alpha^{\prime}_{\mathsf{v}})\)_-equivariant;_
2. _if_ \(h\in H\) _sends a_ \(\mathsf{v}\)_-line_ \(\ell\) _to a_ \(\mathsf{w}\)_-line_ \(\ell^{\prime}\)_, then the map_ \(f_{\ell^{\prime}}\circ h\circ f_{\ell}^{-1}:t(\beta_{\mathsf{v}})\to t(\beta_{\mathsf{w}})\) _extends to an_ \((\alpha^{\prime}_{\mathsf{v}},\alpha^{\prime}_{\mathsf{w}})\)_-equivariant isometry between_ \(\beta_{\mathsf{v}}\) _and_ \(\beta_{\mathsf{w}}\)_._

Since flat-preserving bijections of \(G_{\Gamma}\) are naturally in one-to-one correspondence with cubical automorphisms of \(\mathbb{B}_{\Gamma}\), we will also say that an action \(\alpha:H\to\operatorname{Aut}(\mathbb{B}_{\Gamma})\) is _compatible_ with \((g_{\ell}),(f_{\ell})\) if the corresponding \(H\)-action on \(G_{\Gamma}\) by flat-preserving bijections is. The following lemma is a consequence of [10, Lemma 5.25].

**Lemma 3.8**.: _Let \(\alpha:H\curvearrowright\mathbb{B}_{\Gamma}\) be an action by cubical automorphisms. Let \((g_{\ell})_{\ell}\) be a blow-up datum, let \((f_{\ell})_{\ell}\) be an adapted family of bijections, and let \(Y\) be the associated blow-up building._

_If \(\alpha\) is compatible with \((g_{\ell}),(f_{\ell})\), then there exists an action \(\alpha^{\prime}:H\curvearrowright Y\) by cellular isometries such that the map \(\pi:Y\to\mathbb{B}_{\Gamma}\) is \((\alpha^{\prime},\alpha)\)-equivariant._

### Proof of the quasi-isometry criterion

Proof of Theorem 3.2.: The first step is to choose a blow-up datum which is compatible with the \(H\)-action. We claim that the action \(H\curvearrowright V\Gamma^{e}\) has finitely many orbits. Indeed, as each point of \(G\) is contained in finitely many standard lines, and the action \(\alpha\) is flat-preserving, the action \(\alpha:H\curvearrowright G\) has finitely many orbits of standard lines. Since vertices of \(\Gamma^{e}\) correspond to parallelism classes of standard lines, the claim follows.

Let \(\{\mathsf{v}_{1},\ldots,\mathsf{v}_{n}\}\) be a (finite) set of representatives of the orbits of vertices for the action \(H\curvearrowright\Gamma^{e}\). By assumption, the action \(\alpha_{\mathsf{v}_{i}}\) is conjugate to an action on \(\mathbb{Z}\) by uniform quasi-isometries. Therefore, by [10, Proposition 6.3], it is semi-conjugate to an action on \(\mathbb{Z}\) by isometries. More precisely, there exist an isometric action \(\gamma_{\mathsf{v}_{i}}:H_{\mathsf{v}_{i}}\curvearrowright\mathbb{Z}\), an \((\alpha_{\mathsf{v}_{i}},\gamma_{\mathsf{v}_{i}})\)-equivariant surjection \(g_{i}:Z_{\mathsf{v}_{i}}\to\mathbb{Z}\), and \(C_{i}>0\) satisfying \(|g_{i}^{-1}(n)|\leq C_{i}\) for every \(n\in\mathbb{Z}\).

For every standard line \(\ell\), we now define a map \(g_{\ell}:\ell\to\mathbb{Z}\), following the construction in [10, Section 5.6]. First, if \(\ell\) is a \(\mathsf{v}_{i}\)-line for some \(i\in\{1,\ldots,n\}\), we let \(p_{i}:P_{\mathsf{v}_{i}}\to Z_{\mathsf{v}_{i}}\) be the projection map, and let \(g_{\ell}=(g_{i}\circ p_{i})_{|\ell}\). In general \(\ell\) is a \(\mathsf{w}\)-line for some \(\mathsf{w}\in V\Gamma^{e}\).
For every \(\mathsf{w}\in V\Gamma^{e}\), we choose an element \(h_{\mathsf{w}}\in H\) such that \(h_{\mathsf{w}}\mathsf{w}=\mathsf{v}_{i}\) for some \(i\in\{1,\ldots,n\}\). Now if \(\ell\) is a \(\mathsf{w}\)-line, then \(h_{\mathsf{w}}(\ell)\) is a \(\mathsf{v}_{i}\)-line, and we let \(g_{\ell}=g_{h_{\mathsf{w}}(\ell)}\circ h_{\mathsf{w}}\). As each map \(g_{i}\) has fibers of cardinality at most \(C_{i}\), and there are only finitely many indices \(i\), the family \((g_{\ell})_{\ell}\) is a uniformly locally finite blow-up datum.

Let \((\beta_{\mathsf{v}})_{\mathsf{v}\in V\Gamma^{e}}\) be the family of branched lines associated to the blow-up datum \((g_{\ell})_{\ell}\), and let \((f_{\ell}:\ell\to t(\beta_{\mathsf{v}}))_{\ell}\) be an adapted family of bijections. For every \(i\in\{1,\ldots,n\}\), the equivariance property of the map \(g_{i}\) gives an isometric action \(\alpha^{\prime}_{\mathsf{v}_{i}}:H_{\mathsf{v}_{i}}\curvearrowright\beta_{\mathsf{v}_{i}}\). More generally, for every \(\mathsf{w}\in V\Gamma^{e}\), there is an isometric action \(\gamma_{\mathsf{w}}:H_{\mathsf{w}}\curvearrowright\mathbb{Z}\) given by \(\gamma_{\mathsf{w}}(h)(z)=\gamma_{\mathsf{v}_{i}}(h_{\mathsf{w}}hh_{\mathsf{w}}^{-1})(z)\) (where \(i\) is such that \(h_{\mathsf{w}}\mathsf{w}=\mathsf{v}_{i}\)). Then for every \(\mathsf{w}\)-line \(\ell\), the map \(g_{\ell}:\ell\to\mathbb{Z}\) constructed above is \((\alpha_{\mathsf{w},\ell},\gamma_{\mathsf{w}})\)-equivariant. So it yields an isometric action \(\alpha^{\prime}_{\mathsf{w}}:H_{\mathsf{w}}\curvearrowright\beta_{\mathsf{w}}\), such that the map \(f_{\ell}:\ell\to t(\beta_{\mathsf{w}})\) is \((\alpha_{\mathsf{w},\ell},\alpha^{\prime}_{\mathsf{w}})\)-equivariant. One also checks that the second compatibility condition in Definition 3.7 is satisfied.

Let \(Y\) be the blow-up building associated to \((g_{\ell})_{\ell},(f_{\ell})_{\ell}\). By Lemma 3.6, the space \(Y\) is quasi-isometric to \(G\). By Lemma 3.8, there is an action \(\alpha^{\prime}:H\curvearrowright Y\) by cellular isometries, such that the map \(\pi:Y\to\mathbb{B}_{\Gamma}\) is \((\alpha^{\prime},\alpha)\)-equivariant. We now prove that the action \(\alpha^{\prime}\) is proper and cocompact. This will imply that \(H\) is finitely generated and quasi-isometric to \(G\), as desired.

By definition of \(\pi\), every rank \(0\) vertex of \(\mathbb{B}_{\Gamma}\) has a unique preimage under \(\pi\). As the map \(\pi:Y\to\mathbb{B}_{\Gamma}\) is \((\alpha^{\prime},\alpha)\)-equivariant, it follows that the action \(\alpha^{\prime}\) has finitely many orbits of rank \(0\) vertices. Note that every vertex \(y\) of \(Y\) of rank at least \(1\) is adjacent to at least one vertex of lower rank (this follows by considering the standard branched flat whose core contains \(y\)). Thus there exists \(C>0\) such that any vertex in \(Y\) is at most distance \(C\) from a rank \(0\) vertex. As \(Y\) is uniformly locally finite, it follows that \(\alpha^{\prime}\) has finitely many orbits of vertices. Using again that \(Y\) is uniformly locally finite, this is enough to ensure that there are only finitely many orbits of cells, so \(\alpha^{\prime}\) is cocompact.

As \(H\) acts cocompactly on a uniformly locally finite complex, to show that the action is proper, it suffices to show that the stabilizer of each vertex is finite. The case of rank \(0\) vertices follows from the first assumption of the theorem, the equivariance of \(\pi\), and the fact that \(\pi\) is a bijection between the rank \(0\) vertices of \(Y\) and those of \(\mathbb{B}_{\Gamma}\).
The equivariance of \(\pi\) ensures that the \(H\)-action on \(Y\) preserves the rank of vertices. Therefore, the stabilizer of each vertex \(y\in Y\) of rank at least \(1\) permutes the non-empty finite set of vertices of lower rank which are adjacent to \(y\). By induction on the rank, we thus deduce that the stabilizer of every vertex is finite.

The moreover part of the theorem follows from [12, Corollary 6.5]. In fact, the cube complex in question is exactly \(Y\).

## 4 Measure equivalence couplings

In this section, we first review the definition and framework of measure equivalence and couplings. We then establish a few general statements that will be specialized to the context of right-angled Artin groups in later sections.

### Review on measure equivalence

Recall from the introduction that a _measure equivalence coupling_ between two countable groups \(\mathsf{G}\) and \(\mathsf{H}\) is a standard measure space \(\Omega\) (of positive measure) equipped with a measure-preserving action of \(\mathsf{G}\times\mathsf{H}\) such that both factor actions \(\mathsf{G}\curvearrowright\Omega\) and \(\mathsf{H}\curvearrowright\Omega\) are free and have a finite measure fundamental domain. Here, a _fundamental domain_ for the action of \(\mathsf{G}\) on \(\Omega\) is a Borel subset \(X_{\mathsf{G}}\subseteq\Omega\) such that \(\mathsf{G}\cdot X_{\mathsf{G}}=\Omega\) up to null sets, and for every nontrivial element \(g\in\mathsf{G}\), the intersection \(X_{\mathsf{G}}\cap gX_{\mathsf{G}}\) is a null set. Two countable groups \(\mathsf{G},\mathsf{H}\) are _measure equivalent_ if there exists a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\). This turns out to be an equivalence relation on the set of countable groups, see [10, Section 2].

There is a dictionary between measure equivalence and stable orbit equivalence, which was established by Furman [10]. Let us briefly mention what we will need from this dictionary. Let \(\Omega\) be a measure equivalence coupling between two countable groups \(\mathsf{G}\) and \(\mathsf{H}\), and let \(X_{\mathsf{G}},X_{\mathsf{H}}\) be respective fundamental domains for the actions of \(\mathsf{G},\mathsf{H}\) on \(\Omega\) whose intersection \(U\) has positive measure (such domains always exist, as one sees by translating \(X_{\mathsf{H}}\) if needed). There are induced actions \(\mathsf{G}\curvearrowright X_{\mathsf{H}}\) and \(\mathsf{H}\curvearrowright X_{\mathsf{G}}\), defined (on conull subsets) through the identifications \(X_{\mathsf{H}}\approx\mathsf{H}\backslash\Omega\) and \(X_{\mathsf{G}}\approx\mathsf{G}\backslash\Omega\). To distinguish these actions, when \(g\in\mathsf{G}\) and \(x\in X_{\mathsf{H}}\), we will write \(gx\in\Omega\) for the image of \(x\) under the action of \(g\) on \(\Omega\), and \(g\cdot x\in X_{\mathsf{H}}\) for its image under the induced action of \(g\) on \(X_{\mathsf{H}}\). More concretely \(g\cdot x\) is the unique element of \(X_{\mathsf{H}}\) in the same \(\mathsf{H}\)-orbit as \(gx\) (uniqueness is ensured almost everywhere using that \(X_{\mathsf{H}}\) is a fundamental domain for the \(\mathsf{H}\)-action on \(\Omega\)).
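As a sanity check on these definitions, here is a toy coupling worked out in code (our own example, not taken from the text): \(\mathsf{G}=\mathbb{Z}\) and \(\mathsf{H}=2\mathbb{Z}\) act on \(\Omega=\mathbb{R}\) with Lebesgue measure by \((g,h)\cdot\omega=\omega+g-h\), with fundamental domains \(X_{\mathsf{G}}=[0,1)\) and \(X_{\mathsf{H}}=[0,2)\). The script verifies numerically that the induced action of \(g\in\mathsf{G}\) on \(X_{\mathsf{H}}\) is reduction modulo \(2\).

```python
import math

# Toy measure equivalence coupling: G = Z and H = 2Z act on Omega = R by
# (g, h) . w = w + g - h.  Both factor actions are free and measure
# preserving, with fundamental domains X_G = [0, 1) and X_H = [0, 2).

def act(g, h, w):
    return w + g - h

def induced(g, x):
    # Induced action of g on X_H: the unique point of X_H in the same
    # H-orbit as gx, i.e. (x + g) reduced modulo 2.
    return (x + g) % 2.0

def h_of(g, x):
    # The unique h in H = 2Z with hgx landing in X_H = [0, 2).
    return 2 * math.floor((x + g) / 2)

for g in range(-3, 4):
    for x in (0.25, 1.7):
        h = h_of(g, x)
        assert h % 2 == 0                     # h lies in H = 2Z
        assert 0 <= act(g, h, x) < 2          # hgx lands in X_H
        assert act(g, h, x) == induced(g, x)  # and equals g . x
print("induced-action check passed")
```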
The orbits of the two induced actions \(\mathsf{G}\curvearrowright X_{\mathsf{H}}\) and \(\mathsf{H}\curvearrowright X_{\mathsf{G}}\) have the same intersection with \(U\) (up to a null set): indeed if \(x,g\cdot x\in U\) for some \(g\in\mathsf{G}\), then there exists \(h\in\mathsf{H}\) such that \(hgx\in U\); as the actions of \(\mathsf{G}\) and \(\mathsf{H}\) on \(\Omega\) commute, we have \(ghx\in U\), showing that \(h\cdot x=g\cdot x\).

There is a natural cocycle \(c:\mathsf{G}\times X_{\mathsf{H}}\to\mathsf{H}\), defined by letting \(c(g,x)\) be the unique element \(h\in\mathsf{H}\) such that \(hgx\in X_{\mathsf{H}}\). Likewise there is a cocycle \(\mathsf{H}\times X_{\mathsf{G}}\to\mathsf{G}\). These are called the _measure equivalence cocycles_ associated to \(\Omega\) and to the fundamental domains \(X_{\mathsf{H}}\), \(X_{\mathsf{G}}\). Here the cocycle relation means that \(c(g_{1}g_{2},x)=c(g_{1},g_{2}\cdot x)c(g_{2},x)\) for every \(g_{1},g_{2}\in\mathsf{G}\) and almost every \(x\in X_{\mathsf{H}}\). We also mention that changing the fundamental domain \(X_{\mathsf{H}}\) to another one \(X^{\prime}_{\mathsf{H}}\) changes \(c\) to a cocycle \(c^{\prime}\) which is _cohomologous_, i.e. such that there exists a measurable map \(\varphi:X_{\mathsf{H}}\to\mathsf{H}\) such that for every \(g\in\mathsf{G}\) and almost every \(x\in X_{\mathsf{H}}\), if we denote by \(x^{\prime}\in X^{\prime}_{\mathsf{H}}\) the unique element in the same \(\mathsf{H}\)-orbit as \(x\), then \(c^{\prime}(g,x^{\prime})=\varphi(g\cdot x)c(g,x)\varphi(x)^{-1}\).

The above can also be reformulated in the language of measured groupoids, see e.g. [10, Section 2.2] or [11, Section 3] for an introduction. Every measure-preserving action of a countable group \(\mathsf{G}\) on a standard probability space \(X\) gives rise to a measured groupoid \(\mathsf{G}\ltimes X\) over \(X\): as a Borel set this is \(\mathsf{G}\times X\), and the composition law is given by \((h,gx)(g,x)=(hg,x)\), see e.g. [10, Example 2.20] for more details. Every element \(\gamma\) in a measured groupoid \(\mathcal{G}\) over \(X\) has a source \(s(\gamma)\) and a range \(r(\gamma)\) in \(X\): in the above example \(s(g,x)=x\) and \(r(g,x)=g\cdot x\). The measured groupoid \(\mathsf{G}\ltimes X\) is naturally equipped with a measurable cocycle (i.e. a homomorphism of measured groupoids) \(\rho_{\mathsf{G}}:\mathsf{G}\ltimes X\to\mathsf{G}\), defined by letting \(\rho_{\mathsf{G}}(g,x)=g\). Also, for every measured groupoid \(\mathcal{G}\) over a standard probability space \(X\) and every positive measure Borel subset \(U\subseteq X\), we can consider the restricted measured groupoid \(\mathcal{G}_{|U}\), consisting of all elements \(\gamma\in\mathcal{G}\) with \(s(\gamma),r(\gamma)\in U\).

Coming back to the above situation of a measure equivalence coupling \(\Omega\) between \(\mathsf{G}\) and \(\mathsf{H}\), the measured groupoids \(\mathcal{G}_{1},\mathcal{G}_{2}\) coming from the respective actions \(\mathsf{G}\curvearrowright X_{\mathsf{H}},\mathsf{H}\curvearrowright X_{\mathsf{G}}\) have isomorphic restrictions to \(U=X_{\mathsf{G}}\cap X_{\mathsf{H}}\) (where isomorphism is understood up to restricting to a conull Borel subset).

### Reduction of couplings

In this section, we review work of Kida [10, 11] and Bader-Furman-Sauer [1]. Let \(L\) be a Polish group, and let \(\mathsf{G}\) be a countable subgroup of \(L\). Then \(L\) is equipped with an action of \(\mathsf{G}\times\mathsf{G}\) by left-right multiplication, namely \((g_{1},g_{2})\cdot\ell=g_{1}\ell g_{2}^{-1}\).
We say that the inclusion \(\mathsf{G}\subseteq L\) is _strongly ICC_ if the Dirac mass at \(\mathrm{id}\) is the unique probability measure on \(L\) which is invariant under the conjugation by every element of \(\mathsf{G}\).

**Theorem 4.1** (Kida [10, Theorem 3.5], Bader-Furman-Sauer [1, Theorem 2.6]).: _Let \(L\) be a Polish group, and let \(\mathsf{G}\) be a countable subgroup of \(L\), such that the inclusion \(\mathsf{G}\subseteq L\) is strongly ICC. Assume that for every self measure equivalence coupling \(\Sigma\) of \(\mathsf{G}\), there exists a measurable almost \((\mathsf{G}\times\mathsf{G})\)-equivariant\({}^{2}\) map \(\Sigma\to L\)._

Footnote 2: Whenever we say that a map is _almost equivariant_, we mean that the equivariance relation holds almost everywhere.

_Let \(\mathsf{H}\) be a countable group that is measure equivalent to \(\mathsf{G}\), and let \(\Omega\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\)._

_Then there exist a homomorphism \(\iota:\mathsf{H}\to L\) with finite kernel, and a measurable almost \((\mathsf{G}\times\mathsf{H})\)-equivariant map \(\theta:\Omega\to L\), i.e. for a.e. \(\omega\in\Omega\) and any \((g,h)\in\mathsf{G}\times\mathsf{H}\) one has \(\theta((g,h)\cdot\omega)=g\theta(\omega)\iota(h)^{-1}\)._

Our assumption on \(\mathsf{G}\) is _coupling rigidity_ in the sense of Kida [11, Definition 3.3], or _tautness_ in the sense of Bader-Furman-Sauer [1, Definition 1.3]. Notice that the latter notion of tautness of the self-coupling \(\Sigma\) requires the equivariant map \(\Sigma\to L\) to be essentially unique. However, uniqueness is automatically ensured by the strong ICC assumption, see [1, Lemma A.8(1)].

**Lemma 4.2** (Kida [11, Lemma 5.8]).: _Let \(L\) be a Polish group, and let \(\mathsf{G},\hat{\mathsf{G}}\) be countable subgroups of \(L\), with \(\mathsf{G}\) normal in \(\hat{\mathsf{G}}\) and of finite index in \(\hat{\mathsf{G}}\). Assume that the inclusion \(\mathsf{G}\subseteq L\) is strongly ICC._

_Let \(\Sigma\) be a self measure equivalence coupling of \(\hat{\mathsf{G}}\), and let \(\Phi:\Sigma\to L\) be a measurable map which is almost \((\mathsf{G}\times\mathsf{G})\)-equivariant._

_Then \(\Phi\) is almost \((\hat{\mathsf{G}}\times\hat{\mathsf{G}})\)-equivariant._

Proof.: This is almost exactly [11, Lemma 5.8], except that the group \(L\) is not supposed to be discrete - however the proof is exactly the same, upon replacing the ICC condition in Kida's lemma by the strong ICC property.

### Restricting couplings to stabilizers

When \(K\) is a polyhedral complex with countably many cells, the group \(\operatorname{Aut}(K)\), equipped with the pointwise convergence topology, is a Polish group. A faithful action of a countable group \(\mathsf{G}\) on \(K\) enables one to view \(\mathsf{G}\) as a subgroup of \(L=\operatorname{Aut}(K)\). In the present section, we will formulate two general statements about measure equivalence couplings that involve a Polish group \(L\), and specialize them to the context where \(L=\operatorname{Aut}(K)\). They will enable us, starting with two countable subgroups \(\mathsf{G},\mathsf{H}\) of \(\operatorname{Aut}(K)\), and a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\) that factors through the natural one on \(\operatorname{Aut}(K)\), to induce measure equivalence couplings at the level of stabilizers of vertices of \(K\), and also to obtain control on orbits.
In later sections, this will often be applied to the action of the right-angled Artin group on its right-angled building. The results appearing in the present section are inspired by work of Kida [11, Section 5].

**Proposition 4.3**.: _Let \(L\) be a Polish group and \(\mathsf{G}\) be a countable subgroup of \(L\). Let \(L^{\prime}\subseteq L\) be a Borel subgroup such that \(\mathsf{G}\cdot L^{\prime}=L\)._

_Let \(\mathsf{H}\) be a countable group that is measure equivalent to \(\mathsf{G}\), let \((\Omega,\mu)\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\), and assume we are given a homomorphism \(\iota:\mathsf{H}\to L\) and a measurable almost \((\mathsf{G}\times\mathsf{H})\)-equivariant map \(\theta:\Omega\to L\), where the \((\mathsf{G}\times\mathsf{H})\)-action on \(L\) is via \((g,h)\cdot\ell=g\ell\iota(h)^{-1}\)._

_Then the groups \(\mathsf{G}^{\prime}=\mathsf{G}\cap L^{\prime}\) and \(\mathsf{H}^{\prime}=\iota^{-1}(L^{\prime})\) are measure equivalent. More precisely \(\Omega^{\prime}=\theta^{-1}(L^{\prime})\) is a measure equivalence coupling between \(\mathsf{G}^{\prime}\) and \(\mathsf{H}^{\prime}\). In addition, for every subgroup \(\mathsf{K}\) of either \(\mathsf{G}\) or \(\mathsf{H}\), every Borel fundamental domain for the action of \(\mathsf{K}\cap\mathsf{G}^{\prime}\) (or \(\mathsf{K}\cap\mathsf{H}^{\prime}\)) on \(\Omega^{\prime}\) is contained in a Borel fundamental domain for the action of \(\mathsf{K}\) on \(\Omega\)._

Proof.: By definition \(\Omega^{\prime}\) is a \((\mathsf{G}^{\prime}\times\mathsf{H}^{\prime})\)-invariant Borel subset of \(\Omega\). Let \(\mathsf{K}\subseteq\mathsf{H}\) be a subgroup, let \(\mathsf{K}^{\prime}=\mathsf{K}\cap\mathsf{H}^{\prime}\), and let \(Y^{\prime}\) be a Borel fundamental domain for the action of \(\mathsf{K}^{\prime}\) on \(\Omega^{\prime}\): this exists because the \(\mathsf{H}\)-action on \(\Omega\) has one. We claim that for any \(k\in\mathsf{K}\setminus\{1\}\), one has \(\mu(kY^{\prime}\cap Y^{\prime})=0\). Indeed, for a.e. \(x,y\in Y^{\prime}\), if \(y=kx\) for some \(k\in\mathsf{K}\), then \(\theta(y)=\theta(x)\iota(k)^{-1}\). As \(\theta(x),\theta(y)\in L^{\prime}\), we have \(k\in\mathsf{K}^{\prime}\). The claim thus follows from the fact that \(Y^{\prime}\) is a fundamental domain for the action of \(\mathsf{K}^{\prime}\) on \(\Omega^{\prime}\). The same argument applies to subgroups of \(\mathsf{G}\).

When \(\mathsf{K}=\mathsf{H}\), the above claim implies in particular that every Borel fundamental domain for the action of \(\mathsf{H}^{\prime}\) on \(\Omega^{\prime}\) has finite measure. Likewise, any Borel fundamental domain for the action of \(\mathsf{G}^{\prime}\) on \(\Omega^{\prime}\) has finite measure. In order to conclude that \(\mathsf{G}^{\prime}\) and \(\mathsf{H}^{\prime}\) are measure equivalent, it remains to prove that \(\mu(\Omega^{\prime})>0\). Since \(L=\mathsf{G}\cdot L^{\prime}\), the space \(\Omega\) is covered by the countably many subsets \(\theta^{-1}(gL^{\prime})=g\theta^{-1}(L^{\prime})\), with \(g\) varying in \(\mathsf{G}\). As the \(\mathsf{G}\)-action on \(\Omega\) is measure-preserving, the subsets \(\theta^{-1}(gL^{\prime})\) all have the same measure, and therefore this measure is positive. In particular \(\Omega^{\prime}=\theta^{-1}(L^{\prime})\) has positive measure, as desired.

In the sequel, Proposition 4.3 will be applied in the form of the following corollary (and often with \(L=\operatorname{Aut}(K)\)).

**Corollary 4.4**.: _Let \(K\) be a polyhedral complex with countably many cells._
_Let \(L\) be a Polish group acting on \(K\) through a measurable homomorphism \(L\to\operatorname{Aut}(K)\). Let \(\mathsf{G}\) be a countable subgroup of \(L\), and assume that the actions of \(\mathsf{G}\) and \(L\) on \(VK\) have the same orbits._

_Let \(\mathsf{H}\) be a countable group that is measure equivalent to \(\mathsf{G}\), let \((\Omega,\mu)\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\), and assume we are given a homomorphism \(\iota:\mathsf{H}\to L\) and a measurable almost \((\mathsf{G}\times\mathsf{H})\)-equivariant map \(\theta:\Omega\to L\)._

_Then for every \(v\in VK\), the groups \(\mathsf{G}_{v}=\operatorname{Stab}_{\mathsf{G}}(v)\) and \(\mathsf{H}_{v}=\iota^{-1}(\operatorname{Stab}_{L}(v))\) are measure equivalent. More precisely \(\Omega_{v}=\theta^{-1}(\operatorname{Stab}_{L}(v))\) is a measure equivalence coupling between \(\mathsf{G}_{v}\) and \(\mathsf{H}_{v}\), and every Borel fundamental domain for the action of \(\mathsf{G}_{v}\) (resp. \(\mathsf{H}_{v}\)) on \(\Omega_{v}\) is contained in a Borel fundamental domain for the action of \(\mathsf{G}\) (resp. \(\mathsf{H}\)) on \(\Omega\)._

Proof.: This follows from Proposition 4.3, applied with \(L^{\prime}=\operatorname{Stab}_{L}(v)\). The fact that \(L=\mathsf{G}\cdot L^{\prime}\) follows from our assumption that \(\mathsf{G}\) and \(L\) have the same orbits on \(VK\).

_Remark 4.5_.: In view of Proposition 4.3, more generally, for every subgroup \(\mathsf{K}\subseteq\mathsf{G}\), every Borel fundamental domain for the action of \(\mathsf{K}\cap\mathsf{G}_{v}\) on \(\Omega_{v}\) is contained in a Borel fundamental domain for the action of \(\mathsf{K}\) on \(\Omega\).

Given a countable set \(\mathbb{D}\), we denote by \(\operatorname{Bij}(\mathbb{D})\) the group of all bijections of \(\mathbb{D}\); we equip it with the topology of pointwise convergence, for which it is a Polish group. Recall from the introduction that a group \(\mathsf{H}\) has _bounded torsion_ if there is a bound on the cardinality of a finite subgroup of \(\mathsf{H}\).

**Proposition 4.6**.: _Let \(L^{0}\) be a Polish group with a countable subgroup \(\mathsf{G}\). Assume that \(\mathsf{G}\) acts transitively on a countable set \(\mathbb{D}\) with finite stabilizers through a measurable homomorphism \(L^{0}\to\operatorname{Bij}(\mathbb{D})\)._

_Let \(\mathsf{H}\) be a countable group with bounded torsion that is measure equivalent to \(\mathsf{G}\), and let \(\Omega\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\). Assume that we are given a homomorphism \(\iota:\mathsf{H}\to L^{0}\) and a measurable almost \((\mathsf{G}\times\mathsf{H})\)-equivariant map \(\theta:\Omega\to L^{0}\)._

_Then \(\mathsf{H}\) acts on \(\mathbb{D}\) (through \(\iota\)) with only finitely many orbits and finite stabilizers._

Proof.: We denote by \(\kappa\) the common cardinality of all \(\mathsf{G}\)-stabilizers on \(\mathbb{D}\). Given two elements \(s,u\in\mathbb{D}\), we let \(L^{0}_{s\to u}\) be the Borel subset of \(L^{0}\) consisting of all elements that send \(s\) to \(u\), and we let \(\Omega_{s\to u}=\theta^{-1}(L^{0}_{s\to u})\). Notice that \(\Omega_{s\to u}\) is invariant under \(\mathsf{G}_{u}\) - and on the other hand \(\mu(g\Omega_{s\to u}\cap\Omega_{s\to u})=0\) for every \(g\in\mathsf{G}\setminus\mathsf{G}_{u}\). In addition, the \(\mathsf{G}\)-translates of \(\Omega_{s\to u}\) cover \(\Omega\) because the \(\mathsf{G}\)-action on \(\mathbb{D}\) is transitive. Thus, any Borel fundamental domain for the action of \(\mathsf{G}_{u}\) on \(\Omega_{s\to u}\) is also a fundamental domain for the action of \(\mathsf{G}\) on \(\Omega\). The measure of any such fundamental domain is equal to \(\mu(\Omega_{s\to u})/|\mathsf{G}_{u}|=\mu(\Omega_{s\to u})/\kappa\).
In particular, as \(s,u\) vary in \(\mathbb{D}\), the sets \(\Omega_{s\to u}\) all have the same (positive) measure, which we denote by \(m\). Corollary 4.4 (applied with \(L=L^{0}\) and \(K=\mathbb{D}\), so that \(\operatorname{Aut}(K)=\operatorname{Bij}(\mathbb{D})\)) ensures that the \(\mathsf{H}\)-stabilizer of any element \(u\in\mathbb{D}\) is measure equivalent to \(\mathsf{G}_{u}\), whence finite.

Let \(k\) be a bound on the cardinality of a finite subgroup of \(\mathsf{H}\). For every \(v\in\mathsf{G}\cdot u\), the set \(\Omega_{u\to v}\) is \(\mathsf{H}_{u}\)-invariant; we let \(\Omega^{\prime}_{u\to v}\subseteq\Omega_{u\to v}\) be a fundamental domain for the action of the finite group \(\mathsf{H}_{u}\). Then \(\Omega^{\prime}_{u\to v}\) has measure at least \(m/k\). One has \(\Omega=\coprod_{u\in\mathbb{D}}\Omega_{u\to v}\). Therefore, if \(T\subseteq\mathbb{D}\) is a set of representatives of the \(\mathsf{H}\)-orbits, then \(\coprod_{u\in T}\Omega^{\prime}_{u\to v}\) is a fundamental domain for the \(\mathsf{H}\)-action on \(\Omega\). Since \(\mathsf{H}\) has a finite measure fundamental domain, and the measure of the above fundamental domain is at least equal to \(m|T|/k\), it follows that \(|T|<+\infty\). This concludes our proof.

**Corollary 4.7**.: _Let \(\mathsf{G}\) be a countable group, acting faithfully by automorphisms on a polyhedral complex \(K\) with countably many cells, in such a way that \(\mathsf{G}\) and \(\operatorname{Aut}(K)\) have the same orbits of vertices. Let \(\mathsf{H}\) be a countable group with bounded torsion that is measure equivalent to \(\mathsf{G}\), let \((\Omega,\mu)\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\), and assume we are given a homomorphism \(\iota:\mathsf{H}\to\operatorname{Aut}(K)\) and a measurable almost \((\mathsf{G}\times\mathsf{H})\)-equivariant map \(\theta:\Omega\to\operatorname{Aut}(K)\)._

1. _If_ \(v\in VK\) _is a vertex with finite_ \(\mathsf{G}\)_-stabilizer, then the_ \(\mathsf{G}\)_-orbit of_ \(v\) _is the union of finitely many_ \(\mathsf{H}\)_-orbits._
2. _If_ \(v\in VK\) _is a vertex, and if_ \(V\subseteq VK\) _is a set that is invariant under_ \(\operatorname{Stab}_{\operatorname{Aut}(K)}(v)\)_, consisting of vertices having finite_ \(\mathsf{G}\)_-stabilizer, and on which_ \(\mathsf{G}_{v}\) _acts transitively, then the action of_ \(\mathsf{H}_{v}=\iota^{-1}(\operatorname{Stab}_{\operatorname{Aut}(K)}(v))\) _on_ \(V\) _has finitely many orbits._

Proof.: The first assertion follows from Proposition 4.6, applied with \(L^{0}=\operatorname{Aut}(K)\) and with \(\mathbb{D}=\mathsf{G}\cdot v\), on which \(L^{0}\) acts because \(\mathsf{G}\cdot v=\operatorname{Aut}(K)\cdot v\).

The second assertion follows from Proposition 4.6, applied with \(L^{0}=\operatorname{Stab}_{\operatorname{Aut}(K)}(v)\), with \(\mathsf{G}_{v}\) in place of \(\mathsf{G}\) and \(\mathsf{H}_{v}\) in place of \(\mathsf{H}\), and with \(\mathbb{D}=V\). The required measure equivalence coupling \(\Omega_{v}\) between \(\mathsf{G}_{v}\) and \(\mathsf{H}_{v}\), coming with maps \(\iota_{v}:\mathsf{H}_{v}\to L^{0}\) and \(\theta_{v}:\Omega_{v}\to L^{0}\), is provided by Corollary 4.4, applied with \(L=\operatorname{Aut}(K)\).

_Remark 4.8_.: Proposition 4.6 and Corollary 4.7 fail if one does not assume that \(\mathsf{H}\) has bounded torsion.
Indeed, it is possible to find a countable group \(\mathsf{G}\) acting properly and cocompactly on a polyhedral complex \(K\), satisfying the assumptions of Corollary 4.7, and a non-uniform lattice \(\mathsf{H}\) in \(\operatorname{Aut}(K)\), acting with infinitely many orbits of vertices. In this case \(\mathsf{G}\) and \(\mathsf{H}\) are measure equivalent.

### Measure equivalence couplings and index

The following lemma will be used in Section 8 of the paper.

**Lemma 4.9**.: _Let \(\mathsf{G}\) and \(\mathsf{H}\) be two countable groups, and let \(\Sigma\) be a measure equivalence coupling between \(\mathsf{G}\) and \(\mathsf{H}\). Let \(\mathsf{H}^{\prime}\subset\mathsf{H}\) be a subgroup. Assume that there exists a positive measure \((\mathsf{G}\times\mathsf{H}^{\prime})\)-invariant Borel subset \(\Sigma^{\prime}\subset\Sigma\) such that for any \(h\in\mathsf{H}\setminus\mathsf{H}^{\prime}\) we have \(\mu(h\Sigma^{\prime}\cap\Sigma^{\prime})=0\)._

_Then \(\mathsf{H}^{\prime}\) is of finite index in \(\mathsf{H}\)._

Proof.: Let \(E\) and \(E^{\prime}\) be respective Borel fundamental domains for the actions of \(\mathsf{G}\) and \(\mathsf{H}^{\prime}\) on \(\Sigma^{\prime}\), chosen such that \(U=E\cap E^{\prime}\) has positive measure (these exist because \(\mathsf{G}\) and \(\mathsf{H}\) admit Borel fundamental domains on \(\Sigma\)). We claim that \(E\) is contained in a Borel fundamental domain \(X_{\mathsf{G}}\) for the \(\mathsf{G}\)-action on \(\Sigma\), and \(E^{\prime}\) is contained in a Borel fundamental domain \(X_{\mathsf{H}}\) for the \(\mathsf{H}\)-action on \(\Sigma\). For the first part of the claim, we take any Borel fundamental domain \(X_{\mathsf{G}}^{\prime}\) for the \(\mathsf{G}\)-action on \(\Sigma\) and take \(X_{\mathsf{G}}=(X_{\mathsf{G}}^{\prime}\setminus\Sigma^{\prime})\cup E\). For the second part of the claim, it suffices to prove that for any \(h\in\mathsf{H}\setminus\{1\}\), the intersection \(hE^{\prime}\cap E^{\prime}\) has measure zero. If \(h\in\mathsf{H}^{\prime}\), then this follows from the fact that \(E^{\prime}\) is a fundamental domain for the \(\mathsf{H}^{\prime}\)-action on \(\Sigma^{\prime}\). If \(h\notin\mathsf{H}^{\prime}\), then our assumption implies that \(\mu(h\Sigma^{\prime}\cap\Sigma^{\prime})=0\), and the claim follows.

As recalled in Section 4.1, there is a natural measure-preserving action \(\mathsf{H}\curvearrowright X_{\mathsf{G}}\), obtained from the identification \(X_{\mathsf{G}}\approx\mathsf{G}\backslash\Sigma\). We let \(\mathcal{G}_{1}\) be the associated measured groupoid over \(X_{\mathsf{G}}\); it is equipped with a natural cocycle \(\mathcal{G}_{1}\to\mathsf{H}\). Similarly, we have a natural measure-preserving action \(\mathsf{G}\curvearrowright X_{\mathsf{H}}\), and we let \(\mathcal{G}_{2}\) be the associated measured groupoid over \(X_{\mathsf{H}}\). It is equipped with a natural cocycle \(\mathcal{G}_{2}\to\mathsf{G}\).

Let now \(U_{1}=X_{\mathsf{G}}\cap X_{\mathsf{H}}\): this is a positive measure Borel subset (because it contains \(U=E\cap E^{\prime}\)). We have \((\mathcal{G}_{1})_{|U_{1}}=(\mathcal{G}_{2})_{|U_{1}}\). We denote by \(\mathcal{G}\) this measured groupoid over \(U_{1}\); up to restricting \(U_{1}\) to a conull Borel subset, \(\mathcal{G}\) is naturally equipped with two cocycles \(\rho_{\mathsf{G}}:\mathcal{G}\to\mathsf{G}\) and \(\rho_{\mathsf{H}}:\mathcal{G}\to\mathsf{H}\). We now claim that \(\mathcal{G}_{|U}=\rho_{\mathsf{H}}^{-1}(\mathsf{H}^{\prime})_{|U}\). The lemma follows from this claim by using e.g.
[11, Lemma B.3]. To prove the claim, it suffices to show that \(\mathcal{G}_{|U}\subseteq\rho_{\mathsf{H}}^{-1}(\mathsf{H}^{\prime})_{|U}\). Let \(\gamma\in\mathcal{G}_{|U}\), let \(u=s(\gamma)\) be the source of \(\gamma\), and let \(g=\rho_{\mathsf{G}}(\gamma)\). As \(u\in U\subseteq\Sigma^{\prime}\) and \(\Sigma^{\prime}\) is \(\mathsf{G}\)-invariant, we have \(gu\in\Sigma^{\prime}\). Since \(E^{\prime}\) is a fundamental domain for the action of \(\mathsf{H}^{\prime}\) on \(\Sigma^{\prime}\), there exists a unique \(h\in\mathsf{H}^{\prime}\) such that \(hgu\in E^{\prime}\). So \(h\) is also the unique element of \(\mathsf{H}\) satisfying \(hgu\in X_{\mathsf{H}}\). Therefore \(\rho_{\mathsf{H}}(\gamma)=h\), showing that \(\gamma\in\rho_{\mathsf{H}}^{-1}(\mathsf{H}^{\prime})\), as desired.

## 5 Proximal dynamics and strong ICC property for \(\operatorname{Aut}(\mathbb{B})\)

The goal of this section is to prove the following proposition. We refer to the first paragraph of Section 4.2 for the definition of the strong ICC property.

**Proposition 5.1**.: _Let \(G\) be a right-angled Artin group with trivial center, and let \(\mathbb{B}\) be its right-angled building._

_Then the inclusion \(G\subseteq\operatorname{Aut}(\mathbb{B})\) is strongly ICC._

### A strong ICC lemma, after Bader-Furman-Sauer

Recall that an action of a countable group \(G\) by homeomorphisms on a compact metrizable space \(K\) is _strongly proximal_ in the sense of Furstenberg [10] if the \(G\)-orbit of every \(\nu\in\operatorname{Prob}(K)\) contains a Dirac mass in its weak-\(*\) closure. The action is _minimal_ if every \(G\)-orbit is dense in \(K\). Given a compact metrizable space \(K\), the group \(\operatorname{Homeo}(K)\) is equipped with the topology of uniform convergence, which turns it into a Polish group. The following lemma is a small variation on an argument of Bader-Furman-Sauer [1, Lemma 2.4].

**Lemma 5.2**.: _Let \(G_{1},\dots,G_{k}\) be countable groups, let \(K_{1},\dots,K_{k}\) be compact metrizable spaces, and assume that for every \(i\in\{1,\dots,k\}\), the group \(G_{i}\) acts faithfully, minimally and strongly proximally on \(K_{i}\)._

_Then the inclusion \(G_{1}\times\cdots\times G_{k}\subseteq\mathrm{Homeo}(K_{1}\times\cdots\times K_{k})\) is strongly ICC._

Proof.: We write \(G=G_{1}\times\cdots\times G_{k}\), and \(K=K_{1}\times\cdots\times K_{k}\). Let \(\mu\) be a Borel probability measure on \(\mathrm{Homeo}(K)\) which is invariant under conjugation by any element of \(G\). Let

\[\mathrm{Prob}_{\mu}(K)=\{\nu\in\mathrm{Prob}(K)\mid\mu*\nu=\nu\}\]

be the space of Borel \(\mu\)-stationary probability measures on \(K\), which is a nonempty compact subset of \(\mathrm{Prob}(K)\), equipped with the weak-\(*\) topology. Note that \(\mathrm{Prob}_{\mu}(K)\) is \(G\)-invariant, using that \(\mu\) is conjugation-invariant.

We claim that for every \(x=(x_{1},\ldots,x_{k})\in K\), the Dirac mass \(\delta_{x}\) belongs to \(\mathrm{Prob}_{\mu}(K)\). This claim will imply that for every \(x\in K\), we have \(\mu\{f\in\mathrm{Homeo}(K)\mid f(x)=x\}=1\), so \(\mu=\delta_{\mathrm{id}}\), and the lemma will follow.

We are thus left with proving the above claim. Let \(\nu\in\mathrm{Prob}_{\mu}(K)\). For every \(i\in\{1,\ldots,k\}\), let \(\nu_{i}\in\mathrm{Prob}(K_{i})\) be the pushforward of \(\nu\) under the projection map \(K\to K_{i}\).
Since the \(G_{i}\)-action on \(K_{i}\) is minimal and strongly proximal, we can find a sequence \((g_{i,n})_{n\in\mathbb{N}}\in G_{i}^{\mathbb{N}}\) such that \((g_{i,n})_{*}\nu_{i}\) converges to \(\delta_{x_{i}}\), as \(n\) goes to \(+\infty\), in the weak-\(*\) topology. For every \(n\in\mathbb{N}\), let \(g_{n}=(g_{1,n},\ldots,g_{k,n})\). We will now prove that the probability measures \((g_{n})_{*}\nu\) converge to \(\delta_{x}\), as \(n\) goes to \(+\infty\), in the weak-\(*\) topology. This will conclude our proof.

Let \(U\subseteq K\) be an open set that contains \(x\). Then there exist open neighborhoods \(U_{i}\) of \(x_{i}\), for every \(i\in\{1,\ldots,k\}\), such that \(U_{1}\times\cdots\times U_{k}\subseteq U\). For every \(i\in\{1,\ldots,k\}\), we have \((g_{i,n})_{*}\nu_{i}(U_{i})\to 1\) by the Portmanteau Theorem (see [12, Theorem 17.20]). This means that

\[\nu(K_{1}\times\cdots\times K_{i-1}\times g_{i,n}^{-1}(U_{i})\times K_{i+1}\times\cdots\times K_{k})\to 1\]

as \(n\) goes to \(+\infty\). Intersecting these sets as \(i\) varies in \(\{1,\ldots,k\}\), we obtain that \(\nu(g_{n}^{-1}(U_{1}\times\cdots\times U_{k}))\to 1\) as \(n\) goes to \(+\infty\). In particular \((g_{n})_{*}\nu(U)\to 1\). Since \(U\) was an arbitrary open set containing \(x\), by the Portmanteau Theorem, this is enough to prove that \((g_{n})_{*}\nu\) converges to \(\delta_{x}\), as desired.

### Dynamics on the regular boundary

We refer to [20] for background on hyperplanes and halfspaces in CAT(0) cube complexes. Let \(X\) be a CAT(0) cube complex. The Roller compactification of \(X\) is defined in [13, BCG\({}^{+}\)09] as follows. Let HS be the set of all halfspaces in \(X\) (i.e. complementary components of hyperplanes). Let \(\varphi:VX\to\{0,1\}^{\mathrm{HS}}\) be the map such that \(\varphi(v)(h)=1\) if \(v\in h\), and \(\varphi(v)(h)=0\) otherwise. The _Roller compactification_, denoted \(\overline{X}^{R}\), is the closure of \(\varphi(VX)\) in \(\{0,1\}^{\mathrm{HS}}\), in the topology of pointwise convergence. Thus, every point \(\xi\in\overline{X}^{R}\) is a function from HS to \(\{0,1\}\); we denote by \(\langle\xi,h\rangle\) the value \(\xi(h)\), and say that \(h\) _points towards_ \(\xi\) if \(\langle\xi,h\rangle=1\). The _Roller boundary_ is \(\partial_{R}X=\overline{X}^{R}\setminus VX\). If \(X\) has countably many cubes (and therefore countably many halfspaces), the compactification \(\overline{X}^{R}\) is metrized as follows: fix an enumeration \(\mathrm{HS}=\{h_{n}\}_{n\in\mathbb{N}}\), and for \(\xi_{1}\neq\xi_{2}\), let \(d(\xi_{1},\xi_{2})=2^{-n}\), where \(n\) is the smallest integer such that \(\langle\xi_{1},h_{n}\rangle\neq\langle\xi_{2},h_{n}\rangle\).

Two hyperplanes \(\mathfrak{h}_{1}\) and \(\mathfrak{h}_{2}\) in \(X\) are _strongly separated_ [1, Definition 2.1] if no hyperplane \(\mathfrak{h}\) has a non-empty intersection with both \(\mathfrak{h}_{1}\) and \(\mathfrak{h}_{2}\) (this implies in particular that \(\mathfrak{h}_{1}\) and \(\mathfrak{h}_{2}\) are disjoint). Following Fernós [14, Definition 7.3 and Proposition 7.4], we say that an element \(\xi\in\overline{X}^{R}\) is _regular_ (also called _strongly separated_ in [13, Definition 11]) if there exists an infinite nested sequence of halfspaces \(h_{1}\supseteq h_{2}\supseteq\dots\) pointing towards \(\xi\), whose boundary hyperplanes are pairwise strongly separated. Following [11, Definition 5.9], if \(X\) is irreducible, i.e.
does not split as a product of two non-trivial convex subcomplexes, we define the _regular boundary_ \(R(X)\subseteq\overline{X}^{R}\) (denoted by \(S(X)\) in [10]) as the closure of the set of regular elements, which is compact. More generally, when \(X=X_{1}\times\dots\times X_{n}\), with each \(X_{i}\) irreducible, the Roller compactification splits naturally as \(\overline{X}^{R}=\overline{X}^{R}_{1}\times\dots\times\overline{X}^{R}_{n}\), and we let \(R(X)=R(X_{1})\times\dots\times R(X_{n})\), a compact subspace of \(\overline{X}^{R}\).

Following [10], we say that an action of a group \(G\) on a \(\operatorname{CAT}(0)\) cube complex \(X\) is _essential_ if no \(G\)-orbit remains in a bounded neighborhood of a halfspace of \(X\). The action is _non-elementary_ if it has no global fixed point, and no finite orbit in the visual boundary \(\partial_{\infty}X\).

**Lemma 5.3**.: _Let \(G=G_{\Gamma}\) be a non-cyclic irreducible right-angled Artin group, and let \(\mathbb{B}\) be its right-angled building. Then \(\mathbb{B}\) is irreducible and non-Euclidean, and \(G\) acts essentially and non-elementarily on \(\mathbb{B}\)._

Proof.: It was proved in [11, Theorem 1.3 and Section 6.2] that \(G\) acts on \(\mathbb{B}\) with two independent strongly contracting isometries, which implies that \(\mathbb{B}\) is irreducible, non-Euclidean, and that \(G\) acts non-elementarily on \(\mathbb{B}\).

For essentiality, notice that every hyperplane crosses an edge \(e\) joining a rank \(0\) vertex to a rank \(1\) vertex. It is enough to prove that \(e\) is contained in a periodic bi-infinite geodesic (for the \(\operatorname{CAT}(0)\) metric on \(\mathbb{B}\)). Let \(\Gamma^{c}\) be the complement graph of \(\Gamma\), i.e. \(V\Gamma^{c}=V\Gamma\), and two vertices of \(\Gamma^{c}\) are adjacent if and only if they are non-adjacent in \(\Gamma\). As \(\Gamma\) is not a join, \(\Gamma^{c}\) is connected. Let \(a_{1}\) be the label of \(e\), and let \(\gamma\) be an immersed loop in \(\Gamma^{c}\) based at \(a_{1}\). Let \(a_{1},\dots,a_{n},a_{n+1}=a_{1}\) be the vertices encountered along \(\gamma\), and let \(w=a_{1}\dots a_{n}\), an element of \(G\). Recall that each vertex of \(\mathbb{B}\) is represented by a standard flat in \(G\), i.e. a left coset of the form \(gG_{\Lambda}\) with \(\Lambda\subseteq\Gamma\) a complete subgraph. We consider the following sequence of vertices in \(\mathbb{B}\), alternating between rank \(0\) and rank \(1\) vertices:

\[\{\text{id}\},\ \langle a_{1}\rangle,\ a_{1},\ a_{1}\langle a_{2}\rangle,\ a_{1}a_{2},\ a_{1}a_{2}\langle a_{3}\rangle,\ \dots,\ a_{1}a_{2}\cdots a_{n}.\]

Consecutive members in this sequence are adjacent vertices in \(\mathbb{B}\), so the above gives an edge path \(Q\). We claim that \(Q\) is a geodesic segment: indeed, the angle (in the sense of e.g. [1, Chapter II.3]) between two adjacent edges at a rank \(1\) vertex is \(\pi\) (using that the link of any rank \(1\) vertex has a bipartition into rank \(0\) and higher-rank vertices). And the angle between two adjacent edges at a rank \(0\) vertex is \(\pi\) because \(a_{i}\) and \(a_{i+1}\) are not adjacent in \(\Gamma\). This is enough to ensure that \(Q\) is a geodesic segment, as local geodesics in a \(\operatorname{CAT}(0)\) space are global geodesics [1, Proposition II.1.4(2)]. Likewise, as \(a_{1}\) and \(a_{n}\) are not adjacent in \(\Gamma\), the concatenation \(\cup_{i\in\mathbb{Z}}w^{i}Q\) is a geodesic line \(\ell_{w}\) in \(\mathbb{B}\), which is an axis for \(w\). It has a translate passing through \(e\), which is the desired bi-infinite geodesic.
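To illustrate the construction in the above proof, here is a worked instance; the choice of graph is ours and only serves as an illustration. Let \(\Gamma\) be a \(5\)-cycle with vertices \(a_{1},\dots,a_{5}\) in cyclic order, so that \(G_{\Gamma}\) is irreducible and non-cyclic. The complement graph \(\Gamma^{c}\) is again a \(5\)-cycle, now with cyclic order \(a_{1},a_{3},a_{5},a_{2},a_{4}\), and \(\gamma=(a_{1},a_{3},a_{5},a_{2},a_{4},a_{1})\) is an immersed loop based at \(a_{1}\). The associated element is \(w=a_{1}a_{3}a_{5}a_{2}a_{4}\), and the geodesic segment \(Q\) visits the vertices

\[\{\mathrm{id}\},\ \langle a_{1}\rangle,\ a_{1},\ a_{1}\langle a_{3}\rangle,\ a_{1}a_{3},\ a_{1}a_{3}\langle a_{5}\rangle,\ \dots,\ a_{1}a_{3}a_{5}a_{2}a_{4}.\]

Since consecutive letters of \(w\) (read cyclically) are non-adjacent in \(\Gamma\), the concatenation \(\ell_{w}=\cup_{i\in\mathbb{Z}}w^{i}Q\) is a \(w\)-periodic bi-infinite geodesic through the edge joining \(\{\mathrm{id}\}\) to \(\langle a_{1}\rangle\).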
The following corollary is then an immediate application of [10, Proposition 1].

**Corollary 5.4**.: _Let \(G\) be a non-cyclic irreducible right-angled Artin group, and let \(\mathbb{B}\) be its right-angled building. Then the \(G\)-action on \(R(\mathbb{B})\) is minimal and strongly proximal._

**Lemma 5.5**.: _Let \(G\) be a group acting on a \(\operatorname{CAT}(0)\) cube complex \(X\) which is irreducible and non-Euclidean. We assume that_

1. _the action of \(G\) on \(X\) is essential and non-elementary;_
2. _if an element of \(G\) fixes a cube setwise, then it fixes the cube pointwise._

_Then the homomorphism \(\operatorname{Aut}(X)\to\operatorname{Homeo}(R(X))\) is injective._

Proof.: Let \(f\in\operatorname{Aut}(X)\) act trivially on \(R(X)\). By [11, Corollary 7.7] or [12, Lemma 8], \(R(X)\) is non-empty, so let \(\xi\in R(X)\) be a regular point. By [10, Corollary 6.2] (which relies on work of Caprace-Lytchak [13, Theorem 1.1]), there is an \(\operatorname{Aut}(X)\)-equivariant map \(\pi:\partial_{R}X\to\partial_{\infty}X\). Since the action of \(G\) on \(X\) is non-elementary, the \(G\)-orbit of \(\pi(\xi)\) in \(\partial_{\infty}X\) is infinite, which implies that the \(G\)-orbit of \(\xi\) is infinite. In particular \(R(X)\) contains at least \(3\) regular points, and by assumption they are all fixed by \(f\). We can therefore use a barycenter argument, provided by [12, Lemma 13] or [10, Lemma 5.14], and deduce that \(f\) fixes a point \(x\in X\).

Let \(Z\) be the fixed-point set of \(f\). Assumption 2 ensures that \(Z\) is a subcomplex. As \(X\) is \(\operatorname{CAT}(0)\), \(Z\) is convex. Thus \(Z\) is a convex subcomplex of \(X\). By \(G\)-invariance of \(R(X)\), we know that \(Z\) contains the \(G\)-orbit of \(x\). Assumption 1 implies that each hyperplane of \(X\) separates at least two points in \(Z\). Thus \(Z=X\) by [13, Lemma 13.8].

**Lemma 5.6**.: _Let \(G\) be a right-angled Artin group with trivial center, and let \(\mathbb{B}\) be its right-angled building. Then the homomorphism \(\operatorname{Aut}(\mathbb{B})\to\operatorname{Homeo}(R(\mathbb{B}))\) is measurable and injective._

Proof.: Injectivity in the case where \(G\) is irreducible follows from Lemma 5.5, using Lemma 5.3 to check the first assumption (essentiality and non-elementarity), and Lemma 2.5 to check the second (setwise and pointwise stabilizers of cubes coincide).

We now prove injectivity in general, without assuming irreducibility. By [12, Proposition 2.6], every \(f\in\operatorname{Aut}(\mathbb{B})\) preserves the product structure \(\mathbb{B}=\mathbb{B}_{1}\times\cdots\times\mathbb{B}_{k}\) (but could potentially permute the factors). Since \(|R(\mathbb{B}_{i})|\geq 2\) for every \(i\in\{1,\ldots,k\}\), and since \(f\) acts trivially on \(R(\mathbb{B})\), the automorphism \(f\) does not permute the factors (as shown by comparing the images of two points of \(R(\mathbb{B})\) that differ on only one coordinate). By using the above for each factor independently, we see that \(f=\operatorname{id}\).

We finally prove the measurability of the natural map \(\operatorname{Aut}(\mathbb{B})\to\operatorname{Homeo}(R(\mathbb{B}))\). Given \(f\in\operatorname{Aut}(\mathbb{B})\), we will denote by \(f_{\infty}\) its image by this map. The Polish group \(\operatorname{Homeo}(R(\mathbb{B}))\) is metrized with the uniform metric, i.e. \(d(f,g)=\sup_{\xi\in R(\mathbb{B})}d(f(\xi),g(\xi))\).
It is enough to prove that for every \(n\in\mathbb{N}\), the set of all automorphisms \(f\in\operatorname{Aut}(\mathbb{B})\) such that \(d(f_{\infty},\operatorname{id})<2^{-n}\) is a Borel subset of \(\operatorname{Aut}(\mathbb{B})\). Say that two halfspaces \(h,h^{\prime}\) are _\(R\)-indistinguishable_ if for every \(\xi\in R(\mathbb{B})\), one has \(\langle\xi,h\rangle=\langle\xi,h^{\prime}\rangle\). Then \(d(f_{\infty},\operatorname{id})<2^{-n}\) if and only if for every \(k\leq n\), the halfspaces \(h_{k}\) and \(f^{-1}(h_{k})\) are \(R\)-indistinguishable, where we recall our enumeration \(\operatorname{HS}=\{h_{k}\}_{k\in\mathbb{N}}\). This is a Borel condition, as it can be expressed by saying that for every \(k\leq n\), there exists a halfspace \(h\) that is \(R\)-indistinguishable from \(h_{k}\), such that for every vertex \(v\in h\), one has \(f(v)\in h_{k}\).

We are now ready to complete our proof of Proposition 5.1.

Proof.: Write \(G=G_{1}\times\cdots\times G_{k}\), where no \(G_{i}\) splits as a direct product; since \(G\) has trivial center, each \(G_{i}\) is moreover non-cyclic. For every \(i\in\{1,\ldots,k\}\), let \(\mathbb{B}_{i}\) be the right-angled building of \(G_{i}\). By Corollary 5.4, for every \(i\in\{1,\ldots,k\}\), the action of \(G_{i}\) on the compact metrizable space \(R(\mathbb{B}_{i})\) is minimal and strongly proximal, and it is faithful by Lemma 5.6. Therefore, by Lemma 5.2, the inclusion \(G\subseteq\operatorname{Homeo}(R(\mathbb{B}))\) is strongly ICC. Now, if \(\mu\) is a conjugation-invariant probability measure on \(\operatorname{Aut}(\mathbb{B})\), Lemma 5.6 enables us to push \(\mu\) forward to a conjugation-invariant probability measure on \(\mathrm{Homeo}(R(\mathbb{B}))\). It follows that \(\mu\) is the Dirac mass at \(\mathrm{id}\), as desired.

## 6 Action on the right-angled building with amenable stabilizers

The goal of the present section is to prove the following statement.

**Proposition 6.1**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(\mathbb{B}\) be its right-angled building, and let \(H\) be a countable group. Assume that \(G\) and \(H\) are measure equivalent._

_Then \(H\) acts on \(\mathbb{B}\) with amenable vertex stabilizers. If in addition \(H\) has bounded torsion, then this action can be chosen to be cocompact._

Proposition 6.1 will be proved by applying the general statements established in Section 4 to the action of an appropriate finite-index extension \(\hat{G}\) of \(G\) on \(\mathbb{B}\), viewed inside \(\operatorname{Aut}(\mathbb{B})\).

### A finite-index extension of \(G\) with the same transitivity as \(\operatorname{Aut}(\mathbb{B})\)

Throughout the section, we let \(G=G_{\Gamma}\) be a right-angled Artin group. The (finite) automorphism group \(\operatorname{Aut}(\Gamma)\) naturally acts on \(G\) by (outer) automorphisms. We let \(\hat{G}=G\rtimes\operatorname{Aut}(\Gamma)\). The action of \(\operatorname{Aut}(\Gamma)\) on \(G\) sends standard flats to standard flats. Therefore, the \(G\)-action on its right-angled building \(\mathbb{B}\) extends to an action of \(\hat{G}\) by cubical automorphisms, where an element \((g,\theta)\in\hat{G}\) sends a vertex representing a coset \(hG_{\Lambda}\), where \(\Lambda\subseteq\Gamma\) is a complete subgraph, to the vertex representing \(g\theta(hG_{\Lambda})\).

The importance of the group \(\hat{G}\) for us is that it acts on \(\mathbb{B}\) with the same transitivity as the full automorphism group \(\operatorname{Aut}(\mathbb{B})\), as shown by the following lemma. This property will be crucial in order to apply Corollary 4.7.
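Before stating it, here is a concrete example to keep in mind (our illustration, not needed in the sequel): if \(\Gamma\) is a \(5\)-cycle, then \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), the group \(\operatorname{Aut}(\Gamma)\) is the dihedral group of order \(10\) generated by a rotation and a reflection of the cycle, and

\[\hat{G}=G_{\Gamma}\rtimes\operatorname{Aut}(\Gamma)\]

contains \(G_{\Gamma}\) as a normal subgroup of index \(10\).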
**Lemma 6.2**.: _For every edge \(e\in E\mathbb{B}\), the orbits of \(e\) under \(\hat{G}\) and under \(\operatorname{Aut}(\mathbb{B})\) coincide._

Proof.: It suffices to show that if \(he=e^{\prime}\) for some \(h\in\operatorname{Aut}(\mathbb{B})\), then there exists \(g\in\hat{G}\) such that \(ge=e^{\prime}\). Recall that vertices of \(\mathbb{B}\) are in one-to-one correspondence with standard flats in \(G\), and elements in \(\operatorname{Aut}(\mathbb{B})\) can be viewed as flat-preserving bijections of \(G\). Edges of \(\mathbb{B}\) are in one-to-one correspondence with codimension one inclusions of standard flats \(F_{1}\subseteq F_{2}\). Let \(F_{1}\subseteq F_{2}\) and \(F_{1}^{\prime}\subseteq F_{2}^{\prime}\) be the inclusions of standard flats associated to the edges \(e\) and \(e^{\prime}\), respectively. Since every automorphism of \(\mathbb{B}\) preserves the ranks of vertices (Lemma 2.5), we have \(h(F_{1},F_{2})=(F_{1}^{\prime},F_{2}^{\prime})\).

Let \(x\in F_{1}^{\prime}\) be a vertex. Then there is a unique standard flat \(\tilde{F}_{1}\) containing \(x\) such that \(F_{1}\) and \(\tilde{F}_{1}\) have the same type. Likewise there exists a unique standard flat \(\tilde{F}_{2}\) containing \(\tilde{F}_{1}\) which has the same type as \(F_{2}\). Then \((F_{1},F_{2})\) and \((\tilde{F}_{1},\tilde{F}_{2})\) differ by a left translation, i.e. there exists \(g_{1}\in G\) such that \(g_{1}(F_{1},F_{2})=(\tilde{F}_{1},\tilde{F}_{2})\). Thus \(g_{1}h^{-1}(F_{1}^{\prime},F_{2}^{\prime})=(\tilde{F}_{1},\tilde{F}_{2})\). Let \(x^{\prime}=g_{1}h^{-1}(x)\in\tilde{F}_{1}\). As \(x\) and \(x^{\prime}\) both belong to \(\tilde{F}_{1}\), there exists \(g_{2}\in G\) such that \(g_{2}(x^{\prime})=x\) and \(g_{2}(\tilde{F}_{1},\tilde{F}_{2})=(\tilde{F}_{1},\tilde{F}_{2})\). Thus \(g_{2}g_{1}h^{-1}(F_{1}^{\prime},F_{2}^{\prime})=(\tilde{F}_{1},\tilde{F}_{2})\). Also \(g_{2}g_{1}h^{-1}(x)=x\) belongs to \(F_{1}^{\prime}\cap\tilde{F}_{1}\).

Let \(v_{x}\) be the vertex in \(\mathbb{B}\) associated to \(x\), and let \(\tilde{e}\) be the edge associated to \((\tilde{F}_{1},\tilde{F}_{2})\). Let \(K_{x}\) be the union of all cubes in \(\mathbb{B}\) containing \(v_{x}\). Note that \(\tilde{e},e^{\prime}\subset K_{x}\), and \(g_{2}g_{1}h^{-1}\) stabilizes \(K_{x}\), sending \(e^{\prime}\) to \(\tilde{e}\). Note that \(\mathrm{lk}(v_{x},K_{x})\) is isomorphic to the _flag completion_ of the defining graph \(\Gamma\) of \(G\), i.e. vertices of \(\mathrm{lk}(v_{x},K_{x})\) are in one-to-one correspondence with vertices of \(\Gamma\), and a collection of vertices in \(\mathrm{lk}(v_{x},K_{x})\) span a simplex whenever the associated vertices in \(\Gamma\) span a complete subgraph. So the map \((g_{2}g_{1}h^{-1})_{|K_{x}}:K_{x}\to K_{x}\), and its conjugate \((x^{-1}g_{2}g_{1}h^{-1}x)_{|K_{\mathrm{id}}}:K_{\mathrm{id}}\to K_{\mathrm{id}}\), are induced by an automorphism of \(\Gamma\). This means that \(x^{-1}g_{2}g_{1}h^{-1}x\) has the same action on \(K_{\mathrm{id}}\) as an element of \(\operatorname{Aut}(\Gamma)\subset\hat{G}\). Thus there exists \(g_{3}\in\hat{G}\) such that \(g_{3}(e^{\prime})=\tilde{e}\), and hence \(g_{3}^{-1}g_{1}(e)=e^{\prime}\), as desired.

**Corollary 6.3**.: _For every vertex \(v\in V\mathbb{B}\), the orbits of \(v\) under \(\hat{G}\) and under \(\operatorname{Aut}(\mathbb{B})\) coincide._

Proof.: Let \(v^{\prime}\in V\mathbb{B}\), and assume that there exists \(h\in\operatorname{Aut}(\mathbb{B})\) such that \(hv=v^{\prime}\).
Let \(e\) be an edge that contains \(v\), and let \(e^{\prime}=he\). By Lemma 6.2, there exists \(g\in\hat{G}\) such that \(e^{\prime}=ge\). Since every automorphism of \(\mathbb{B}\) preserves the ranks of vertices (Lemma 2.5), we deduce that \(v^{\prime}=gv\), and the corollary follows.

We will also need to know that \(\hat{G}\) and \(\operatorname{Aut}(\mathbb{B})\simeq\operatorname{Aut}(\Gamma^{e})\) have the same orbits of vertices when acting on the extension graph, as shown by the following lemma.

**Lemma 6.4**.: _Assume that \(|\operatorname{Out}(G)|<+\infty\). Then for every \(\mathsf{v}\in V\Gamma^{e}\), the orbits of \(\mathsf{v}\) under \(\hat{G}\) and \(\operatorname{Aut}(\Gamma^{e})\) coincide._

Proof.: Let \(h\in\operatorname{Aut}(\Gamma^{e})\) and let \(\mathsf{w}=h\mathsf{v}\). We aim to prove that there exists \(g\in\hat{G}\) such that \(\mathsf{w}=g\mathsf{v}\). When viewed as an automorphism of \(\mathbb{B}\) through the isomorphism \(\operatorname{Aut}(\Gamma^{e})\to\operatorname{Aut}(\mathbb{B})\) provided by Lemma 2.6, the element \(h\) sends a \(\mathsf{v}\)-line \(\ell\) (associated to a rank \(1\) vertex \(v\in V\mathbb{B}\)) to some \(\mathsf{w}\)-line \(\ell^{\prime}\) (associated to a rank \(1\) vertex \(v^{\prime}\in V\mathbb{B}\)). By Corollary 6.3, there exists \(g\in\hat{G}\) with \(gv=v^{\prime}\). Since \(g\) sends the \(\mathsf{v}\)-line \(\ell\) to the \(\mathsf{w}\)-line \(\ell^{\prime}\), we have \(g\mathsf{v}=\mathsf{w}\), which concludes our proof.

### Reduction of self-couplings to the right-angled building

The following lemma establishes the crucial reduction property of self-couplings from Theorem 4.1, for the action of \(\hat{G}\) on \(\mathbb{B}\).

**Lemma 6.5**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), and let \(\Sigma\) be a self measure equivalence coupling of \(\hat{G}\)._

_Then there exists a measurable almost \((\hat{G}\times\hat{G})\)-equivariant map \(\Sigma\to\operatorname{Aut}(\mathbb{B})\)._

Our proof of Lemma 6.5 is essentially a translation of the main results of our earlier paper [11] from the language of measured groupoids to the language of self-couplings, using the dictionary between measure equivalence and stable orbit equivalence developed by Furman [12] and recalled in Section 4.1, and some arguments from the work of Kida [10].

Proof.: Since \(G\) has finite index in \(\hat{G}\), the space \(\Sigma\) is also a self measure equivalence coupling of \(G\). Let \(X_{\ell},X_{r}\subseteq\Sigma\) be respective fundamental domains for the actions of \(G_{\ell}=G\times\{1\}\) and \(G_{r}=\{1\}\times G\) on \(\Sigma\). In view of [10, Lemma 2.27], we can (and will) choose \(X_{\ell},X_{r}\) so that \((G\times G)\cdot(X_{\ell}\cap X_{r})=\Sigma\) up to null sets. Let \(U=X_{\ell}\cap X_{r}\).

As recalled in Section 4.1, there are natural measure-preserving actions \(G_{\ell}\curvearrowright X_{r}\) and \(G_{r}\curvearrowright X_{\ell}\), obtained through the identifications \(X_{r}\approx G_{r}\backslash\Sigma\) and \(X_{\ell}\approx G_{\ell}\backslash\Sigma\). Their orbits coincide on \(U\), so the two corresponding measured groupoids \(G_{\ell}\ltimes X_{r}\) and \(G_{r}\ltimes X_{\ell}\) restrict to isomorphic measured groupoids on \(U\). We denote by \(\mathcal{G}\) this common measured groupoid over \(U\), which is naturally equipped (up to restricting to a conull Borel subset of \(U\)) with two cocycles \(\rho_{\ell}:\mathcal{G}\to G_{\ell}\) and \(\rho_{r}:\mathcal{G}\to G_{r}\).
The first two paragraphs of the proof of [11, Theorem 3.19] yield a Borel map \(\theta:V\Gamma^{e}\times U\to V\Gamma^{e}\) such that for every \(\mathsf{v}\in V\Gamma^{e}\), there exists a partition \(U^{*}=\sqcup_{i\in I}U_{i}\) of a conull Borel subset \(U^{*}\subseteq U\) into at most countably many Borel subsets satisfying the following properties:

1. for every \(i\in I\), the restriction \(\theta|_{\{\mathsf{v}\}\times U_{i}}\) takes a constant value, denoted \(\mathsf{w}_{i}\in V\Gamma^{e}\);
2. for every \(i\in I\), we have \(\rho_{\ell}^{-1}(\operatorname{Stab}_{G_{\ell}}(\mathsf{v}))_{|U_{i}}=\rho_{r}^{-1}(\operatorname{Stab}_{G_{r}}(\mathsf{w}_{i}))_{|U_{i}}\).

Moreover, it is shown in the proof of [11, Theorem 3.19] that for almost every \(u\in U\), the map \(\theta(\cdot,u)\) gives an element in \(\operatorname{Aut}(\Gamma^{e})\), and this gives a measurable map \(\bar{\theta}:U\to\operatorname{Aut}(\Gamma^{e})\) (up to replacing \(U\) by a conull Borel subset).

We claim that the map \(\bar{\theta}\) satisfies the following equivariance: up to restricting \(U\) to a conull Borel subset, for every \(g\in\mathcal{G}\), one has \(\bar{\theta}(r(g))=\rho_{r}(g)\bar{\theta}(s(g))\rho_{\ell}(g)^{-1}\). The argument comes from [10, Lemma 5.5], and is the following. It is enough to prove this equivariance on a Borel subset \(B\subseteq\mathcal{G}\) where the values of \(\rho_{\ell}\) and \(\rho_{r}\) are constant, and which induces a Borel isomorphism between \(s(B)\) and \(r(B)\); indeed \(\mathcal{G}\) is covered by countably many such Borel subsets. We denote by \(h_{\ell},h_{r}\in G\) the respective values of \(\rho_{\ell},\rho_{r}\) on \(B\). Let \(\mathsf{v}\in V\Gamma^{e}\). Up to a countable Borel partition of \(B\), we can further assume that \(\bar{\theta}(\cdot)(\mathsf{v})\) is constant on \(s(B)\) (with value denoted by \(\mathsf{w}\)), and that \(\bar{\theta}(\cdot)(h_{\ell}\mathsf{v})\) is constant on \(r(B)\) (with value denoted by \(\mathsf{w}^{\prime}\)). Then \(\rho_{\ell}^{-1}(\operatorname{Stab}_{G}(\mathsf{v}))_{|s(B)}=\rho_{r}^{-1}(\operatorname{Stab}_{G}(\mathsf{w}))_{|s(B)}\) by the definition of \(\bar{\theta}\). Thus \(\rho_{\ell}^{-1}(\operatorname{Stab}_{G}(h_{\ell}\mathsf{v}))_{|r(B)}\) is both equal to \(\rho_{r}^{-1}(\operatorname{Stab}_{G}(h_{r}\mathsf{w}))_{|r(B)}\) (by the choice of \(B\)) and to \(\rho_{r}^{-1}(\operatorname{Stab}_{G}(\mathsf{w}^{\prime}))_{|r(B)}\) (by the definition of \(\bar{\theta}\)). Hence \(h_{r}\mathsf{w}=\mathsf{w}^{\prime}\), using the uniqueness statement [11, Lemma 3.14]. In other words, for every \(g\in B\), we have proved that \(\bar{\theta}(r(g))(\rho_{\ell}(g)\mathsf{v})=\rho_{r}(g)(\bar{\theta}(s(g))(\mathsf{v}))\). As \(V\Gamma^{e}\) is countable, this is exactly the desired equivariance.

Under the natural isomorphism between \(\operatorname{Aut}(\Gamma^{e})\) and \(\operatorname{Aut}(\mathbb{B})\) recalled in Section 2.2, we view \(\bar{\theta}\) as a map \(U\to\operatorname{Aut}(\mathbb{B})\), which satisfies the same equivariance (see Remark 2.7). Recall that by our choice of \(U\), there exists a conull Borel subset \(\Sigma^{*}\subseteq\Sigma\) such that \((G\times G)\cdot U=\Sigma^{*}\). We now claim, following [10, Theorem 5.6], that the assignment \((g_{1},g_{2})x\mapsto g_{1}\bar{\theta}(x)^{-1}g_{2}^{-1}\), with \(x\in U\), is a well-defined \((G\times G)\)-equivariant Borel map \(\Sigma^{*}\to\operatorname{Aut}(\mathbb{B})\).
The only point we need to check is that if \((g_{1},g_{2})x=y\) with \(x,y\in U\), then \(g_{1}\bar{\theta}(x)^{-1}g_{2}^{-1}=\bar{\theta}(y)^{-1}\). This is precisely the content of the equivariance proved at the level of groupoids, so our claim is proved. Finally, using that \(G\) is normal in \(\hat{G}\), Lemma 4.2 and Proposition 5.1 ensure that \(\bar{\theta}\) is in fact \((\hat{G}\times\hat{G})\)-equivariant, which completes our proof.

### Proof of Proposition 6.1

Proposition 6.1 follows from the combination of Lemmas 6.6, 6.7 and Corollary 6.9 below.

**Lemma 6.6**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(H\) be a countable group, and let \(\Omega\) be a measure equivalence coupling between \(\hat{G}\) and \(H\)._

_Then there exist a group homomorphism \(\iota:H\to\operatorname{Aut}(\mathbb{B})\) with finite kernel and a measurable almost \((\hat{G}\times H)\)-equivariant map \(\theta:\Omega\to\operatorname{Aut}(\mathbb{B})\), i.e. such that for all \((g,h)\in\hat{G}\times H\) and a.e. \(\omega\in\Omega\), one has \(\theta((g,h)\omega)=g\theta(\omega)\iota(h)^{-1}\)._

Proof.: This is a consequence of Theorem 4.1, applied with \(\mathsf{G}=\hat{G}\) and \(L=\operatorname{Aut}(\mathbb{B})\), using that the \(\hat{G}\)-action on \(\mathbb{B}\) is faithful. Indeed the inclusion \(\hat{G}\subseteq\operatorname{Aut}(\mathbb{B})\) is strongly ICC by Proposition 5.1, and self-couplings of \(\hat{G}\) map to \(\operatorname{Aut}(\mathbb{B})\) by Lemma 6.5.

Let \((\Omega,\mu)\) be a measure equivalence coupling between \(\hat{G}\) and \(H\). Lemma 6.6 gives a measurable group homomorphism \(\iota:H\to\operatorname{Aut}(\mathbb{B})\) and a measurable almost \((\hat{G}\times H)\)-equivariant map \(\theta:\Omega\to\operatorname{Aut}(\mathbb{B})\). In particular \(\iota\) provides an action of \(H\) on \(\mathbb{B}\). Let \(v\in V\mathbb{B}\), let \(\Omega_{v}=\theta^{-1}(\operatorname{Stab}_{\operatorname{Aut}(\mathbb{B})}(v))\), let \(H_{v}=\iota^{-1}(\operatorname{Stab}_{\operatorname{Aut}(\mathbb{B})}(v))\), and let \(G_{v}\) (resp. \(\hat{G}_{v}\)) be the stabilizer of \(v\) in \(G\) (resp. \(\hat{G}\)). The equivariance of \(\theta\) ensures that \(\Omega_{v}\) is invariant under the action of \(\hat{G}_{v}\times H_{v}\).

**Lemma 6.7**.: _The space \(\Omega_{v}\) is a measure equivalence coupling between \(G_{v}\) and \(H_{v}\), in particular \(H_{v}\) is amenable._

_In addition, every fundamental domain for the action of \(\hat{G}_{v}\) (resp. \(G_{v}\), resp. \(H_{v}\)) on \(\Omega_{v}\) is contained in a fundamental domain for the action of \(\hat{G}\) (resp. \(G\), resp. \(H\)) on \(\Omega\)._

Proof.: The fact that \(\Omega_{v}\) is a measure equivalence coupling between \(G_{v}\) and \(H_{v}\) follows from Corollary 4.4, applied (with \(L=\operatorname{Aut}(\mathbb{B})\)) to the action of \(\mathsf{G}=\hat{G}\) on \(\mathbb{B}\). Indeed \(\hat{G}\) and \(\operatorname{Aut}(\mathbb{B})\) have the same orbits of vertices by Corollary 6.3. The amenability of \(H_{v}\) follows from the fact that a countable group which is measure equivalent to an amenable group is itself amenable, see e.g. [10, Corollary 1.3]. The statement about fundamental domains for the actions of \(\hat{G}_{v}\) and \(H_{v}\) also follows from Corollary 4.4. Finally, the statement about fundamental domains for the action of \(G_{v}\) follows from Remark 4.5, applied with \(\mathsf{K}=G\).
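Let us record the special case that will be relevant in Section 7; the computation below is standard. If \(v\in V\mathbb{B}\) is a rank \(1\) vertex, corresponding to a standard line \(g\langle a\rangle\) with \(a\in V\Gamma\), then

\[G_{v}=\operatorname{Stab}_{G}(g\langle a\rangle)=g\langle a\rangle g^{-1}\cong\mathbb{Z},\]

so Lemma 6.7 shows that \(H_{v}\) is measure equivalent to \(\mathbb{Z}\). At this stage this only yields amenability of \(H_{v}\); upgrading this to virtual cyclicity is precisely what the integrability assumption of Section 7 will allow.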
Given a vertex \(v\in V\mathbb{B}\), we denote by \(V^{0}(\mathbb{B})_{\leq v}\) the set of all rank 0 vertices of \(\mathbb{B}\) that are smaller than \(v\) (for the partial order on \(V\mathbb{B}\) introduced in Section 2.2).

**Lemma 6.8**.: _Assume that \(H\) has bounded torsion. Then the \(H\)-action on the set of rank 0 vertices of \(\mathbb{B}\) has finite stabilizers and finitely many orbits. In addition, for every vertex \(v\in V\mathbb{B}\), the \(H_{v}\)-action on \(V^{0}(\mathbb{B})_{\leq v}\) has finitely many orbits._

Proof.: The first part follows from the first conclusion of Corollary 4.7, using that \(H\) has bounded torsion and that rank 0 vertices have finite \(\hat{G}\)-stabilizers and all belong to the same \(\hat{G}\)-orbit. The second part follows from the second conclusion of Corollary 4.7, applied to \(V=V^{0}(\mathbb{B})_{\leq v}\): indeed \(V\) is invariant under \(\operatorname{Stab}_{\operatorname{Aut}(\mathbb{B})}(v)\) because \(\operatorname{Aut}(\mathbb{B})\) preserves ranks of vertices (Lemma 2.5), and the action of \(G_{v}\) on \(V\) is transitive.

**Corollary 6.9**.: _Assume that \(H\) has bounded torsion. Then the \(H\)-action on \(\mathbb{B}\) is cocompact._

Proof.: By Lemma 6.8, \(H\) acts on the set of rank 0 vertices of \(\mathbb{B}\) with finitely many orbits. Recall that the action of \(\operatorname{Aut}(\mathbb{B})\) on \(\mathbb{B}\) preserves ranks of vertices (Lemma 2.5), hence respects the partial order on \(V\mathbb{B}\). As each vertex of rank at least 1 is bounded below by at least one rank 0 vertex, and there is a bound \(C\) such that each rank 0 vertex is smaller than at most \(C\) vertices of higher rank in \(\mathbb{B}\), it follows that the action of \(H\) on \(\mathbb{B}\) has finitely many orbits of vertices. More generally, given any vertex \(s\) of \(\mathbb{B}\), there are only finitely many \(k\)-cells that contain \(s\) as their vertex of minimal rank. Thus the action of \(H\) on \(\mathbb{B}\) has only finitely many orbits of cells, and is therefore cocompact.

## 7 Exploiting integrability

In this section, we exploit the integrability condition on the measure equivalence coupling between \(G\) and \(H\) in order to prove that the stabilizers of rank \(1\) vertices for the \(H\)-action on the right-angled building of \(G\) are virtually cyclic. Recall from the introduction that an _\((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\)_ is a measure equivalence coupling \((\Omega,\mu)\) between \(H\) and \(G\) for which there exists a fundamental domain \(X_{G}\) for the \(G\)-action on \(\Omega\) such that for each \(h\in H\),

\[\int_{X_{G}}|c(h,x)|_{G}\;d\mu(x)<+\infty,\]

where \(c:H\times X_{G}\to G\) is the associated measure equivalence cocycle, and \(|\cdot|_{G}\) is a word length on \(G\) associated to some finite generating set.

**Proposition 7.1**.: _Let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), let \(\mathbb{B}\) be its right-angled building, and let \(H\) be a countable group with bounded torsion. Assume that there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\)._

_Then \(H\) acts on \(\mathbb{B}\) with virtually infinite cyclic stabilizers of rank \(1\) vertices._

In the whole section, when \(G\) is a finitely generated group, with a finite generating set \(S\), we will write \(|g|_{S}\) to denote the word length of an element \(g\in G\) with respect to the generating set \(S\).
When \(S\) is clear from the context (or irrelevant to the statement), we will sometimes simply write \(|g|_{G}\).

### Integrable coupling between vertex stabilizers

We start with the following easy observation.

**Lemma 7.2**.: _Let \(G,H\) be countable groups, with \(G\) finitely generated, and let \(\hat{G}\) be a finite-index extension of \(G\). Let \(\hat{\Omega}\) be an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(\hat{G}\)._

_Then \(\hat{\Omega}\) is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\)._

Proof.: By definition, there exists a fundamental domain \(X_{\hat{G}}\) for the action of \(\hat{G}\) on \(\hat{\Omega}\) such that the cocycle \(\hat{c}:H\times X_{\hat{G}}\to\hat{G}\) is \(L^{1}\)-integrable. Let \(S=\{g_{1},\ldots,g_{k}\}\) be a (finite) set of representatives of the right cosets of \(G\) in \(\hat{G}\). Then \(X_{G}=g_{1}X_{\hat{G}}\cup\cdots\cup g_{k}X_{\hat{G}}\) is a fundamental domain for the \(G\)-action on \(\hat{\Omega}\). We claim that the associated cocycle \(c:H\times X_{G}\to G\) is \(L^{1}\)-integrable.

Indeed, let \(h\in H\). For \(x\in X_{G}\), there exist a unique \(\hat{x}\in X_{\hat{G}}\) and a unique \(j\in\{1,\ldots,k\}\) such that \(x=g_{j}\hat{x}\). Also by definition of \(X_{\hat{G}}\), there exists a unique element \(\hat{g}\in\hat{G}\) such that \(\hat{g}hx\in X_{\hat{G}}\). Using the fact that the actions of \(\hat{G}\) and \(H\) on \(\hat{\Omega}\) commute, we see that \((\hat{g}g_{j})h\hat{x}\in X_{\hat{G}}\), showing that \(\hat{c}(h,\hat{x})=\hat{g}g_{j}\). On the other hand, the set \(S^{-1}=\{g_{1}^{-1},\ldots,g_{k}^{-1}\}\) is a set of representatives of the left cosets of \(G\) in \(\hat{G}\), so there exist \(i\in\{1,\ldots,k\}\) and \(g\in G\) such that \(\hat{g}=g_{i}^{-1}g\). We then have \(ghx\in g_{i}X_{\hat{G}}\), showing that \(c(h,x)=g\). It follows that \(c(h,x)\in S\hat{c}(h,\hat{x})S^{-1}\). As this is true for every \(h\in H\) and \(x\in X_{G}\), and \(S\) is a finite set (so that, \(G\) being undistorted in \(\hat{G}\), there is a constant \(A>0\) with \(|c(h,x)|_{G}\leq A|\hat{c}(h,\hat{x})|_{\hat{G}}+A\)), the lemma follows.

Let now \(G,H\) be as in the statement of Proposition 7.1. Let \(\Omega\) be an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), and let \(X_{G}\) be a Borel fundamental domain for the \(G\)-action on \(\Omega\) such that the measure equivalence cocycle \(c:H\times X_{G}\to G\) is \(L^{1}\)-integrable. Let \(\hat{G}\) be the finite-index extension of \(G\) introduced in Section 6.1. Let \(\hat{\Omega}\) be the induced coupling between \(\hat{G}\) and \(H\), namely \(\hat{\Omega}=G\backslash(\hat{G}\times\Omega)\); here \(\hat{G}\times G\) acts on \(\hat{G}\) via \((\hat{g},g)\cdot\hat{g}^{\prime}=\hat{g}\hat{g}^{\prime}g^{-1}\), while \(\hat{G}\) acts trivially on \(\Omega\) and \(H\) acts trivially on \(\hat{G}\), and we are taking the quotient by the diagonal action of \(G\) on \(\hat{G}\times\Omega\). Notice that \(X_{\hat{G}}=\{\operatorname{id}\}\times X_{G}\), identified to a subset of \(\hat{\Omega}\), is a Borel fundamental domain for the action of \(\hat{G}\) on \(\hat{\Omega}\). The associated measure equivalence cocycle \(H\times X_{\hat{G}}\to\hat{G}\) takes its values in \(G\) and coincides with \(c\). In particular \(\hat{\Omega}\) is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(\hat{G}\). We can therefore apply Lemma 7.2 and obtain a Borel fundamental domain \(\hat{X}_{G}\) for the \(G\)-action on \(\hat{\Omega}\), such that the measure equivalence cocycle \(\hat{c}:H\times\hat{X}_{G}\to G\) is \(L^{1}\)-integrable.
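For the reader's convenience, here is the short verification (not spelled out above) that \(X_{\hat{G}}=\{\operatorname{id}\}\times X_{G}\) is indeed a fundamental domain. For \(g\in G\), one computes in \(\hat{\Omega}=G\backslash(\hat{G}\times\Omega)\):

\[g\cdot[(\operatorname{id},\omega)]=[(g,\omega)]=[(gg^{-1},g\omega)]=[(\operatorname{id},g\omega)],\]

and an element \(\hat{g}\cdot[(\operatorname{id},\omega)]=[(\hat{g},\omega)]\) lies in \(\{\operatorname{id}\}\times\Omega\) if and only if \(\hat{g}\in G\). Hence the \(\hat{G}\)-orbit of \([(\operatorname{id},\omega)]\) meets \(\{\operatorname{id}\}\times\Omega\) exactly in \(\{[(\operatorname{id},g\omega)]\mid g\in G\}\), which meets \(\{\operatorname{id}\}\times X_{G}\) exactly once because \(X_{G}\) is a fundamental domain for \(G\curvearrowright\Omega\).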
Lemma 6.6 gives us a homomorphism \(\iota:H\to\operatorname{Aut}(\mathbb{B})\) and an almost equivariant map \(\theta:\hat{\Omega}\to\operatorname{Aut}(\mathbb{B})\). In particular we have an action of \(H\) on \(\mathbb{B}\). Given \(v\in V\mathbb{B}\), we denote by \(G_{v}\) and \(H_{v}\) its stabilizers for the actions of \(G\) and \(H\), respectively.

**Lemma 7.3**.: _For every \(v\in V\mathbb{B}\), there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H_{v}\) to \(G_{v}\)._

Our proof of Lemma 7.3 is inspired by an argument coming from ongoing work of Escalier and the first-named author [EH].

Proof.: As above, let \(\hat{X}_{G}\) be a fundamental domain for the action of \(G\) on \(\hat{\Omega}\), such that the cocycle \(\hat{c}:H\times\hat{X}_{G}\to G\) is \(L^{1}\)-integrable. By Lemma 6.7, the space \(\hat{\Omega}_{v}=\theta^{-1}(\operatorname{Stab}_{\operatorname{Aut}(\mathbb{B})}(v))\) is a measure equivalence coupling between \(G_{v}\) and \(H_{v}\). Let \(Y_{G_{v}},Y_{H_{v}}\) be respective Borel fundamental domains for these actions on \(\hat{\Omega}_{v}\). Lemma 6.7 also shows that \(Y_{G_{v}},Y_{H_{v}}\) extend to Borel fundamental domains \(Y_{G},Y_{H}\) for the actions of \(G\) and \(H\) on \(\hat{\Omega}\).

Let \(c_{v}:H_{v}\times Y_{G_{v}}\to G_{v}\) be the measure equivalence cocycle. Then \(c_{v}\) extends to a measure equivalence cocycle \(c^{\prime}:H\times Y_{G}\to G\). The cocycle \(c^{\prime}\) is cohomologous to \(\hat{c}\), i.e. there exists a Borel map \(\tau:Y_{G}\to G\) such that \(c^{\prime}(h,x)=\tau(h\cdot x)^{-1}\hat{c}(h,\tilde{x})\tau(x)\), where \(\tilde{x}\) is the unique element of \(\hat{X}_{G}\) in the same \(G\)-orbit as \(x\). As usual \(G\) is equipped with its standard generating set. Let \(\psi:G\to G\) be defined by letting \(\psi(g)\) be the unique element with smallest word length such that \(\psi(g)G_{v}=gG_{v}\) (uniqueness comes from the normal form in the right-angled Artin group, in the sense of graph products [10]). Let \(\tau^{\prime}=\psi\circ\tau\), and let \(c^{\prime\prime}:H\times Y_{G}\to G\) be defined by \(c^{\prime\prime}(h,x)=\tau^{\prime}(h\cdot x)^{-1}\hat{c}(h,\tilde{x})\tau^{\prime}(x)\).

By definition of \(\psi\), for every \(x\in Y_{G}\), there exists \(g_{x}\in G_{v}\) such that \(\tau^{\prime}(x)=\tau(x)g_{x}\). Using this and the fact that \(c^{\prime}(H_{v}\times Y_{G_{v}})\subseteq G_{v}\), we deduce that \(c^{\prime\prime}(H_{v}\times Y_{G_{v}})\subseteq G_{v}\). In addition, a normal form argument shows that \(|c^{\prime\prime}(h,x)|_{G_{v}}\leq|\hat{c}(h,\tilde{x})|_{G}\) for every \(h\in H_{v}\) and \(x\in Y_{G_{v}}\) (indeed, one writes \(\tau^{\prime}(h\cdot x)c^{\prime\prime}(h,x)\tau^{\prime}(x)^{-1}=\hat{c}(h,\tilde{x})\), and observes that the subword \(c^{\prime\prime}(h,x)\) on the left cannot be shortened by cancellations with \(\tau^{\prime}(x)\) and \(\tau^{\prime}(h\cdot x)\) in view of the choice of \(\tau^{\prime}\)). Therefore the cocycle \(c^{\prime\prime}:H_{v}\times Y_{G_{v}}\to G_{v}\) is \(L^{1}\)-integrable. It is \(G_{v}\)-cohomologous to \(c^{\prime}\), and therefore it is a measure equivalence cocycle from \(H_{v}\) to \(G_{v}\), for the coupling \(\hat{\Omega}_{v}\). This completes our proof.

### Integrable embeddings

Let \(G,H\) be countable groups, with \(G\) finitely generated. Let \((X,\mu)\) be a standard probability space equipped with a measure-preserving action of \(H\) by Borel automorphisms.
We say that a cocycle \(c:H\times X\to G\) is _\(L^{1}\)-integrable_ if for every \(h\in H\), one has

\[\int_{X}|c(h,x)|_{G}\;d\mu(x)<+\infty.\]

In the rest of this section, we will work with the notion of integrable embeddings, as defined by Bowen in the appendix of [11].

**Definition 7.4** (Integrable embedding, Bowen [11, Definition B.8]).: _Let \(G,H\) be countable groups, with \(G\) finitely generated. Let \((X,\mu)\) be a standard probability space equipped with a measure-preserving action of \(H\) by Borel automorphisms._

_A cocycle \(c:H\times X\to G\) is an \(L^{1}\)-integrable embedding if for every \(\varepsilon>0\), there exist an \(L^{1}\)-integrable cocycle \(c^{\prime}:H\times X\to G\) which is cohomologous to \(c\), a Borel subset \(X_{0}\subseteq X\) with \(\mu(X_{0})\geq 1-\varepsilon\), and a constant \(C=C(\varepsilon)\), such that for every \(g\in G\) and almost every \(x\in X_{0}\), one has \(|\{h\in H\mid hx\in X_{0}\text{ and }c^{\prime}(h,x)=g\}|<C.\)_

The following elementary lemma, analogous to Bowen's [11, Theorem B.9], enables one to check the second property from the above definition for measure equivalence cocycles.

**Lemma 7.5**.: _Let \(\Omega\) be a measure equivalence coupling between two countable groups \(G\) and \(H\), let \(X_{G}\subseteq\Omega\) be a fundamental domain for the \(G\)-action, and let \(c:H\times X_{G}\to G\) be the associated measure equivalence cocycle. Let \(\varepsilon>0\)._

_Then there exist \(C>0\) and a measurable subset \(X_{\varepsilon}\) of \(X_{G}\) with \(\mu(X_{\varepsilon})\geq\mu(X_{G})-\varepsilon\) such that for every \(x\in X_{\varepsilon}\) and every \(g\in G\), one has_

\[|\{h\in H\mid h\cdot x\in X_{\varepsilon}\text{ and }c(h,x)=g\}|<C.\]

Proof.: Let \(X_{H}\) be a fundamental domain for the \(H\)-action on \(\Omega\). Then \(\Omega\) is covered by countably many pairwise disjoint \(H\)-translates of \(X_{H}\). For every \(h\in H\), let \(X_{G,h}=X_{G}\cap hX_{H}\). Then the subsets \(X_{G,h}\) form a countable partition of \(X_{G}\). Choose a finite subset \(F\subseteq H\) such that \(\mu(\cup_{h\in F}X_{G,h})>\mu(X_{G})-\varepsilon\). Let \(C=|F|\), and let \(X_{\varepsilon}=\cup_{h\in F}X_{G,h}\).

Let now \(g\in G\) and \(x\in X_{\varepsilon}\). We claim that if two distinct elements \(h_{1},h_{2}\in H\) satisfy \(c(h_{1},x)=c(h_{2},x)=g\), then \(h_{1}x\) and \(h_{2}x\) belong to different subsets of the partition \(g^{-1}X_{G}=\sqcup g^{-1}X_{G,h}\). Indeed, notice first that as \(c(h_{1},x)=c(h_{2},x)=g\), we have \(h_{1}x,h_{2}x\in g^{-1}X_{G}\). Assume by contradiction that the claim fails: then there exists \(h\in H\) such that \(h_{1}x,h_{2}x\in g^{-1}(X_{G}\cap hX_{H})\). In particular \(gh_{1}x,gh_{2}x\in hX_{H}\). As the actions of \(G\) and \(H\) on \(\Omega\) commute, it follows that \(h_{1}\) and \(h_{2}\) both send \(gx\) to a common fundamental domain (namely \(hX_{H}\)) of the \(H\)-action on \(\Omega\). This is a contradiction, which proves the claim.

When considering the \(H\)-action on \(X_{G}\), the above claim translates as follows: if two distinct elements \(h_{1},h_{2}\in H\) are such that \(c(h_{1},x)=c(h_{2},x)=g\), then \(h_{1}\cdot x\) and \(h_{2}\cdot x\) belong to different subsets of the partition \(X_{G}=\sqcup_{h\in H}X_{G,h}\). Since only \(C\) of these subsets are contained in \(X_{\varepsilon}\), the lemma follows.

An immediate corollary of Lemma 7.5 is the following, see also [11, Theorem B.9].
**Corollary 7.6**.: _Let \(G\) and \(H\) be countable groups, with \(G\) finitely generated. If there is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), then there is an \(L^{1}\)-integrable embedding from \(H\) to \(G\)._

### Stabilizers of rank \(1\) vertices are virtually cyclic

In this section, we prove Proposition 7.1. We already know that there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(H_{v}\) to \(G_{v}\) (Lemma 7.3). If we additionally knew that \(H_{v}\) were finitely generated, then we could use a theorem of Bowen [14, Theorem B.10] to deduce that the growth of \(H_{v}\) is at most equal to the growth of \(G_{v}\), whence at most linear. Being infinite, \(H_{v}\) would therefore be virtually isomorphic to \(\mathbb{Z}\). However, we do not know _a priori_ that \(H_{v}\) is finitely generated. In this section, we extend Bowen's theorem in order to deal with this issue, see Theorem 7.10.

Given a finitely generated group \(G\), with a finite generating set \(S\), we let \(B_{G,S}(e,n)=\{g\in G\mid|g|_{S}\leq n\}\). The _growth function_ \(\operatorname{gr}_{G,S}:\mathbb{N}\to\mathbb{N}\) is defined as

\[\operatorname{gr}_{G,S}(n)=|B_{G,S}(e,n)|.\]

Given two functions \(f,g:\mathbb{N}\to\mathbb{N}\), write \(f\preceq g\) if there exists \(A\) such that \(f(n)\leq Ag(An+A)+A\). Write \(f\sim g\) if \(f\preceq g\) and \(g\preceq f\): this is an equivalence relation on maps \(\mathbb{N}\to\mathbb{N}\). The growth functions associated to two finite generating sets of the same group are always equivalent; we denote by \(\operatorname{gr}_{G}\) their equivalence class. When \(S\) is clear from context, we will also simply write \(B_{G}(e,n)\) instead of \(B_{G,S}(e,n)\).

**Lemma 7.7**.: _Let \(L\) be a countable group, let \(G\) be a finitely generated group, and assume that there exists an \((L^{1},L^{0})\)-measure equivalence coupling from \(L\) to \(G\)._

_Then for every finitely generated subgroup \(L^{\prime}\) of \(L\), one has \(\operatorname{gr}_{L^{\prime}}\preceq\operatorname{gr}_{G}\)._

Proof.: Lemma 7.5 ensures that any \(L^{1}\)-integrable measure equivalence cocycle \(L\times X\to G\) is an \(L^{1}\)-integrable embedding. In particular, the same holds true for the restriction \(L^{\prime}\times X\to G\). The lemma is therefore a consequence of Bowen's theorem regarding the behavior of growth with respect to \(L^{1}\)-integrable embeddings [14, Theorem B.10].

Given a probability measure-preserving action \(G\curvearrowright(X,\mu)\), a Borel subset \(Z\subset X\) and \(x\in X\), the _return time set_ of \(x\) with respect to \(Z\) is \(R_{Z}(x)=\{g\in G\mid gx\in Z\}\).

**Lemma 7.8**.: _Let \(L\) and \(J\) be finitely generated groups, with \(L\) infinite, equipped with word metrics \(|\cdot|_{L}\) and \(|\cdot|_{J}\) (associated to finite generating sets). Let \(L\curvearrowright(X,\mu)\) be a measure-preserving action on a standard probability space, and \(c:L\times X\to J\) an \(L^{1}\)-integrable cocycle. Let \(M>0\). Assume that there exists \(M^{\prime}>0\) such that for every \(g\in L\) satisfying \(|g|_{L}\geq M^{\prime}\), one has \(\int_{X}|c(g,x)|_{J}\;d\mu(x)\leq M|g|_{L}\).
Let \(X_{0}\subseteq X\) be a measurable subset with \(\mu(X_{0})\geq 0.9\)._

_Then there exists \(n_{0}>0\) such that for every \(n\geq n_{0}\), there exists a measurable subset \(Y_{n}\subset X_{0}\) with \(\mu(Y_{n})\geq 0.3\), such that for every \(x\in Y_{n}\), one has_

\[\frac{|\{g\in B_{L}(e,n):g\in R_{X_{0}}(x)\text{ and }|c(g,x)|_{J}\leq 120M|g|_{L}\}|}{|B_{L}(e,n)|}\geq 0.1.\]

Proof.: We follow Bowen's proof of [14, Theorem B.10] very closely. For \(g\in L\), we write \(\kappa(g)=\int_{X}|c(g,x)|_{J}\;d\mu(x)\). As \(L\) is infinite and there are only finitely many elements \(g\in L\) such that \(\kappa(g)>M|g|_{L}\), there exists \(n_{0}\) such that for every \(n\geq n_{0}\), one has

\[\frac{1}{|B_{L}(e,n)|}\sum_{g\in B_{L}(e,n)}\frac{\kappa(g)}{|g|_{L}}\leq 2M.\]

We now fix \(n\geq n_{0}\), and find \(Y_{n}\) as in the lemma. As

\[\int_{X}\left(\frac{1}{|B_{L}(e,n)|}\sum_{g\in B_{L}(e,n)}\frac{|c(g,x)|_{J}}{|g|_{L}}\right)d\mu(x)\leq 2M,\]

we deduce from the Markov inequality that there exists a measurable subset \(X_{1}\subset X\) with \(\mu(X_{1})\geq 0.9\) such that

\[\frac{1}{|B_{L}(e,n)|}\sum_{g\in B_{L}(e,n)}\frac{|c(g,x)|_{J}}{|g|_{L}}\leq 20M\]

for any \(x\in X_{1}\). For every \(x\in X_{1}\), let

\[L_{x}=\{g\in B_{L}(e,n)\mid|c(g,x)|_{J}\leq 120M|g|_{L}\};\]

then \(|L_{x}|\geq(5/6)|B_{L}(e,n)|\). Let \(X_{2}=X_{0}\cap X_{1}\). Then \(\mu(X_{2})\geq 0.8\). By [1, Lemma B.11],

\[\int_{X_{2}}\frac{|R_{X_{2}}(x)\cap B_{L}(e,n)|}{|B_{L}(e,n)|}d\mu(x)\geq 2\mu(X_{2})-\mu(X)\geq 0.6.\]

Thus

\[\int_{X_{2}}\frac{|R_{X_{0}}(x)\cap L_{x}|}{|B_{L}(e,n)|}d\mu(x)\geq\int_{X_{2}}\frac{|R_{X_{2}}(x)\cap L_{x}|}{|B_{L}(e,n)|}d\mu(x)\geq\int_{X_{2}}\frac{|R_{X_{2}}(x)\cap B_{L}(e,n)|+|L_{x}|-|B_{L}(e,n)|}{|B_{L}(e,n)|}d\mu(x)\geq 0.6+\mu(X_{2})(5/6-1)\geq 0.4.\]

Then the measurable subset

\[X_{3}:=\left\{x\in X_{2}\ \Big{|}\ \frac{|R_{X_{0}}(x)\cap L_{x}|}{|B_{L}(e,n)|}\geq 0.1\right\}\]

satisfies \(\mu(X_{3})\geq 0.3\); otherwise

\[\int_{X_{2}}\frac{|R_{X_{0}}(x)\cap L_{x}|}{|B_{L}(e,n)|}d\mu(x)=\int_{X_{3}}\frac{|R_{X_{0}}(x)\cap L_{x}|}{|B_{L}(e,n)|}d\mu(x)+\int_{X_{2}\setminus X_{3}}\frac{|R_{X_{0}}(x)\cap L_{x}|}{|B_{L}(e,n)|}d\mu(x)\leq\mu(X_{3})+0.1\mu(X_{2}\setminus X_{3})\leq\mu(X_{3})+0.1\mu(X_{2})<0.4,\]

which is a contradiction. Now the lemma follows with \(Y_{n}=X_{3}\).

Recall that a group \(L\) is _locally virtually cyclic_ if every finitely generated subgroup of \(L\) is virtually cyclic (possibly finite).

**Lemma 7.9**.: _Let \(L\) be a locally virtually cyclic countable group with bounded torsion. Then either \(L\) is virtually cyclic, or else there exists an infinite cyclic subgroup \(L_{1}\subseteq L\) such that for every \(k\in\mathbb{N}\), there exists an infinite cyclic subgroup \(L_{k}\subseteq L\) that contains \(L_{1}\) as a subgroup of index at least \(k\)._

Proof.: Suppose first that \(L\) is locally finite, i.e. every finitely generated subgroup is finite. Then \(L\) must be finite because of the bounded torsion assumption. Now assume \(L\) is not locally finite. The conclusion is obvious if \(L\) is finitely generated, so we assume otherwise. The group \(L\) has an infinite finitely generated subgroup, which is virtually cyclic by our assumption. Thus there exists an infinite cyclic subgroup \(L^{\prime}_{1}\subseteq L\).
By adding extra elements successively, we can find an infinite increasing sequence of finitely generated, whence virtually cyclic, subgroups \(L^{\prime}_{1}\subsetneq L^{\prime}_{2}\subsetneq\dots\) In particular, as \(k\) goes to \(+\infty\), the index \([L^{\prime}_{k}:L^{\prime}_{1}]\) goes to \(+\infty\).

Let \(K\) be a bound on the cardinality of a finite subgroup of \(L\). Observe that every infinite virtually cyclic group with no finite subgroup of cardinality larger than \(K\) contains an infinite cyclic subgroup of index at most \(2K\): this follows, for instance, from the fact that every infinite virtually cyclic group is either finite-by-\(\mathbb{Z}\) or finite-by-dihedral. For every \(k\in\mathbb{N}\), let \(L^{\prime\prime}_{k}\subseteq L^{\prime}_{k}\) be an infinite cyclic subgroup of index at most \(2K\). Letting \(L_{1}(k)=L^{\prime}_{1}\cap L^{\prime\prime}_{k}\), there is a bound on the index \([L^{\prime}_{1}:L_{1}(k)]\). Therefore, up to a subsequence, we can assume that all groups \(L_{1}(k)\) are equal to the same group \(L_{1}\). As \(L_{1}\subseteq L^{\prime\prime}_{k}\) and the index \([L^{\prime\prime}_{k}:L_{1}]\) goes to \(+\infty\), the lemma follows (by letting \(L_{k}=L^{\prime\prime}_{\sigma(k)}\) for some increasing map \(\sigma:\mathbb{N}\to\mathbb{N}\) chosen so that \([L^{\prime\prime}_{\sigma(k)}:L_{1}]\geq k\)).

**Theorem 7.10**.: _Let \(L\) be a countable group with bounded torsion, and assume that there exists an \(L^{1}\)-integrable embedding from \(L\) to \(\mathbb{Z}\)._

_Then \(L\) is virtually cyclic (possibly finite)._

Proof.: We write \(J=\mathbb{Z}\). We choose once and for all a generator for \(J\), and consider the associated word length on \(J\) for the rest of the argument. For \(C>0\), we say that a map between two sets \(f:E_{1}\to E_{2}\) is _at most \(C\)-to-1_ if the cardinality of each point preimage of \(f\) is at most \(C\).

Without loss of generality, we can assume that \(L\) is infinite. By Lemma 7.7, every finitely generated subgroup of \(L\) has growth at most linear, and therefore is virtually cyclic (possibly finite), see e.g. [11]. Assuming towards a contradiction that \(L\) is not virtually cyclic, let \(L_{1}\subseteq L\) be an infinite cyclic subgroup of \(L\) given by Lemma 7.9. We will prove that there exists \(K>0\) such that for every infinite cyclic subgroup \(L_{2}\) of \(L\) containing \(L_{1}\), one has \([L_{2}:L_{1}]\leq K\). This will contradict Lemma 7.9 and conclude our proof.

By assumption, there exists a measure-preserving action of \(L\) on a standard probability space \((X,\mu)\), a cocycle \(c:L\times X\to J\) which is \(L^{1}\)-integrable, a Borel subset \(X_{0}\subseteq X\) with \(\mu(X_{0})\geq 0.9\), and a constant \(C>0\), such that for every \(x\in X_{0}\), the restriction \(c(\cdot,x)_{|R_{X_{0}}(x)}\) is at most \(C\)-to-1.

Let \(L_{2}\) be an infinite cyclic subgroup of \(L\) that contains \(L_{1}\), and let \(k=[L_{2}:L_{1}]\). For every \(i\in\{1,2\}\), we choose a generator \(g_{i}\) of \(L_{i}\), arranging these choices so that \(g_{1}=g_{2}^{k}\), and we endow \(L_{i}\) with the word metric with respect to this choice of generator. For \(g\in L_{2}\), let

\[\kappa(g)=\int_{X}|c(g,x)|_{J}\ d\mu(x).\]

Let \(M_{1}=\kappa(g_{1})\) and \(D=\max\{\kappa(g_{2}^{i})\}_{0\leq i\leq k-1}\). Let \(g\in L_{2}\), and write it as \(g=g_{1}^{\ell}g_{2}^{i}\) with \(\ell\in\mathbb{Z}\) and \(0\leq i\leq k-1\). Using the cocycle relation, we see that \(\kappa(g)\leq|\ell|M_{1}+D\). On the other hand \(|g|_{L_{2}}=k|\ell|\pm i\).
Therefore, there exists \(M^{\prime}\in\mathbb{N}\) such that \(\kappa(g)\leq\frac{2M_{1}}{k}|g|_{L_{2}}\) whenever \(|g|_{L_{2}}\geq M^{\prime}\). Recall that \(\mu(X_{0})\geq 0.9\). We can therefore apply Lemma 7.8 with \(M=\frac{2M_{1}}{k}\), and with \(L_{2}\) in place of \(L\), and deduce that for \(n\) large enough, there exists a Borel subset \(Y_{n}\subseteq X_{0}\) with \(\mu(Y_{n})\geq 0.3\) such that for every \(x\in Y_{n}\), one has

\[\frac{|\{g\in B_{L_{2}}(e,n):g\in R_{X_{0}}(x)\text{ and }|c(g,x)|_{J}\leq 240\frac{M_{1}}{k}|g|_{L_{2}}\}|}{|B_{L_{2}}(e,n)|}\geq 0.1.\]

Let \(B_{L_{2},x}^{\text{good}}(e,n)\) be the set defined as in the numerator of the above expression. By definition of \(X_{0}\), for \(x\in Y_{n}\), the map \(c(\cdot,x)\) restricted to \(B_{L_{2},x}^{\text{good}}(e,n)\) is at most \(C\)-to-\(1\). On the other hand,

\[c(B_{L_{2},x}^{\text{good}}(e,n)\times\{x\})\subseteq B_{J}\left(e,\frac{240M_{1}n}{k}\right),\]

thus

\[|B_{L_{2},x}^{\text{good}}(e,n)|\leq\frac{480CM_{1}n}{k}+C.\]

For \(n\) large enough, it follows that

\[\frac{481CM_{1}n/k}{|B_{L_{2}}(e,n)|}\geq 0.1.\]

As \(|B_{L_{2}}(e,n)|=2n+1\), we deduce that \(\frac{481CM_{1}n}{2kn}\geq 0.1\). Thus \(k\leq 2405CM_{1}\).

_Remark 7.11_.: Theorem 7.10 does not hold if \(L\) is not assumed to have bounded torsion. The simplest example is to take \(L=\bigoplus_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}\). Then \(L\) and \(\mathbb{Z}\) have orbit equivalent actions on \(X=\{0,1\}^{\mathbb{N}}\) (equipped with the product measure of the uniform measure on \(\{0,1\}\)). Here \(\mathbb{Z}\) acts as the odometer, and \(L\) acts by coordinatewise addition; two elements are in the same orbit if and only if they have the same tail. Letting \(c:L\times X\to\mathbb{Z}\) be the orbit equivalence cocycle, it is easy to see that \(|c(s,\cdot)|\) is bounded for every \(s\in L\).

Proof of Proposition 7.1.: Let \(v\in V\mathbb{B}\) be a rank \(1\) vertex. By Lemma 7.3, there is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H_{v}\) to \(G_{v}\). By Corollary 7.6, this yields in particular an \(L^{1}\)-integrable embedding from \(H_{v}\) to \(G_{v}\). Since \(G_{v}\) is isomorphic to \(\mathbb{Z}\), Theorem 7.10 implies that \(H_{v}\) is virtually cyclic. And \(H_{v}\) is infinite because it is measure equivalent to the infinite group \(G_{v}\), so \(H_{v}\) is virtually isomorphic to \(\mathbb{Z}\).

## 8 Controlling the factor actions

Throughout the section, we let \(G\) be a right-angled Artin group with \(|\operatorname{Out}(G)|<\infty\), and \(H\) be a countable group with bounded torsion, such that there exists an \((L^{1},L^{0})\)-measure equivalence coupling \(\Omega\) from \(H\) to \(G\). Let \(\hat{\Omega}\) be the measure equivalence coupling between \(H\) and \(\hat{G}\) defined by letting \(\hat{\Omega}=G\backslash(\hat{G}\times\Omega)\) as in Section 7.1. Let \(\iota:H\to\operatorname{Aut}(\Gamma^{e})\) and \(\theta:\hat{\Omega}\to\operatorname{Aut}(\Gamma^{e})\) be the maps given by Lemma 6.6 through the canonical isomorphism \(\operatorname{Aut}(\mathbb{B})\simeq\operatorname{Aut}(\Gamma^{e})\) recalled in Section 2.2. In particular \(H\) acts on \(\Gamma^{e}\), and therefore \(H\) acts on \(G\) by flat-preserving bijections, see Section 2.3. For \(\mathsf{v}\in V\Gamma^{e}\), we let \(\hat{\Omega}_{\mathsf{v}}=\theta^{-1}(\operatorname{Stab}_{\operatorname{Aut}(\Gamma^{e})}(\mathsf{v}))\) and \(H_{\mathsf{v}}=\iota^{-1}(\operatorname{Stab}_{\operatorname{Aut}(\Gamma^{e})}(\mathsf{v}))\).
As usual we denote by \(G_{\mathsf{v}}\) and \(\hat{G}_{\mathsf{v}}\) the stabilizers of \(\mathsf{v}\) for the actions of \(G\) and \(\hat{G}\), respectively. Since the actions of \(\hat{G}\) and of \(\operatorname{Aut}(\Gamma^{e})\) on \(\Gamma^{e}\) have the same orbits of vertices (Lemma 6.4), it follows from Corollary 4.4 that \(\hat{\Omega}_{\mathsf{v}}\) is a measure equivalence coupling between the stabilizers \(\hat{G}_{\mathsf{v}}\) and \(H_{\mathsf{v}}\).

Let \(\mathcal{L}_{\mathsf{v}}\) be the set of all \(\mathsf{v}\)-lines in \(G\). Recall from Section 3 that the union \(P_{\mathsf{v}}\) of all \(\mathsf{v}\)-lines in \(G\) is a left coset of the form \(gG_{\mathrm{st}(v)}\) for some \(v\in V\Gamma\). Recall also that the action \(H_{\mathsf{v}}\curvearrowright G\) is by flat-preserving bijections and that \(H_{\mathsf{v}}\) preserves \(P_{\mathsf{v}}\) and maps \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines. In particular, there is an induced action \(H_{\mathsf{v}}\curvearrowright\mathcal{L}_{\mathsf{v}}\). More generally the action of \(\operatorname{Aut}_{\mathsf{v}}(\mathbb{B})\) on \(G\) by flat-preserving bijections preserves \(P_{\mathsf{v}}\) and sends \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines. We denote by \(\theta_{\mathsf{v}}:\widehat{\Omega}_{\mathsf{v}}\to\operatorname{Bij}(P_{\mathsf{v}})\) and \(\iota_{\mathsf{v}}:H_{\mathsf{v}}\to\operatorname{Bij}(P_{\mathsf{v}})\) the induced maps.

The goal of this section is to understand the action \(H_{\mathsf{v}}\curvearrowright P_{\mathsf{v}}\) to the extent needed to connect it with the assumptions of Theorem 3.2.

### Finite generation of \(H_{\mathsf{v}}\)

The main goal of this subsection is to prove that \(H_{\mathsf{v}}\) is finitely generated (Lemma 8.4 below). Let \(\mathbb{B}(\mathsf{v})\) be the union of all cubes in \(\mathbb{B}\) whose vertices correspond to standard flats that are contained in \(P_{\mathsf{v}}\). Then \(\mathbb{B}(\mathsf{v})\) is isomorphic to the right-angled building associated with \(G_{\mathrm{st}(v)}\), in particular it is simply connected. Moreover, \(\mathbb{B}(\mathsf{v})\) is \(H_{\mathsf{v}}\)-invariant.

**Lemma 8.1**.: _The action of \(H_{\mathsf{v}}\) on \(P_{\mathsf{v}}\) has finitely many orbits of vertices. In particular,_

1. _the action of \(H_{\mathsf{v}}\) on \(\mathcal{L}_{\mathsf{v}}\) has finitely many orbits;_
2. _for every \(\mathsf{v}\)-line \(\ell\in\mathcal{L}_{\mathsf{v}}\), the action of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) on \(\ell\) has finitely many orbits of vertices;_
3. _the action of \(H_{\mathsf{v}}\) on \(\mathbb{B}(\mathsf{v})\) is cocompact._

Proof.: The action of \(G_{\mathsf{v}}\) on \(P_{\mathsf{v}}\) is transitive. We can thus apply Corollary 4.7 with \(\mathsf{G}=G_{\mathsf{v}}\) and \(\mathsf{H}=H_{\mathsf{v}}\), with \(K=P_{\mathsf{v}}\), and with the coupling \(\widehat{\Omega}_{\mathsf{v}}\) and the maps \(\iota_{\mathsf{v}}\) and \(\theta_{\mathsf{v}}\). Since vertices in \(P_{\mathsf{v}}\) have trivial \(G_{\mathsf{v}}\)-stabilizer, the first part of Corollary 4.7 shows that the action of \(H_{\mathsf{v}}\) on \(P_{\mathsf{v}}\) has finitely many orbits. The first two consequences follow because \(H_{\mathsf{v}}\) sends \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines. For Assertion 3, note that \(H_{\mathsf{v}}\) acts on the set of rank \(0\) vertices of \(\mathbb{B}(\mathsf{v})\), which is exactly \(P_{\mathsf{v}}\), with finitely many orbits. So Assertion 3 follows by exactly the same argument as in Corollary 6.9.
**Lemma 8.2**.: _The \(H\)-stabilizer of any vertex in \(\mathbb{B}\) is finitely generated._ Proof.: Recall that the \(H\)-action on \(\mathbb{B}\) can be equivalently viewed as an \(H\)-action on \(G\) by flat-preserving bijections. Using this viewpoint, for every vertex \(v\in V\mathbb{B}\) with associated standard flat \(F\), the stabilizer \(H_{v}\) coincides with the stabilizer \(H_{F}\) of \(F\) for this \(H\)-action on \(G\). We will use this viewpoint throughout the proof. We first claim that for any standard flat \(F^{\prime}\subset F\), the group \(H_{F^{\prime}}\) has a finite index subgroup \(H_{F^{\prime}}^{0}\) which is contained in \(H_{F}\), and acts on \(F^{\prime}\) with finitely many orbits. Indeed, let \(v_{F^{\prime}}\) be the vertex of \(\mathbb{B}\) associated to \(F^{\prime}\). Lemma 6.8 ensures that \(H_{F^{\prime}}\) acts on the set of rank \(0\) vertices of \(\mathbb{B}\) that are smaller than \(v_{F^{\prime}}\) with finitely many orbits. Therefore \(H_{F^{\prime}}\) acts on \(F^{\prime}\) with finitely many orbits. Any element in \(H_{F^{\prime}}\) sends \(F\) to another standard flat containing \(F^{\prime}\). As there are only finitely many standard flats containing \(F^{\prime}\), we can find a finite index subgroup \(H_{F^{\prime}}^{0}\subseteq H_{F^{\prime}}\) that stabilizes \(F\). The action \(H_{F^{\prime}}^{0}\curvearrowright F^{\prime}\) still has finitely many orbits, thus the claim is proved. We now prove the lemma by induction on the rank of vertices of \(\mathbb{B}\). Stabilizers in \(H\) of rank \(0\) vertices are finite (Lemma 6.8), and the case of rank \(1\) vertices is given by Proposition 7.1. Let now \(v\in V\mathbb{B}\) be a vertex of rank \(n\geq 2\), corresponding to a standard flat \(F\), and assume by induction that the lemma is proven for all vertices of rank at most \(n-1\). Take a standard line \(\ell\subset F\). By the above claim, there is a finite index subgroup \(H^{0}_{\ell}\subseteq H_{\ell}\) that stabilizes both \(F\) and \(\ell\), and whose action on \(\ell\) has finitely many orbits. Let \(\mathcal{C}\) be the set of all standard flats in \(F\) of dimension \(\dim F-1\) that intersect \(\ell\) in exactly one point. As the \(H\)-action on \(F\) is flat-preserving, the group \(H^{0}_{\ell}\) permutes elements in \(\mathcal{C}\), and the permutation action \(H^{0}_{\ell}\curvearrowright\mathcal{C}\) has finitely many orbits. Let \(\{F_{1},\ldots,F_{k}\}\) be a set consisting of exactly one representative in each orbit of this permutation action. By the above claim, for each \(i\in\{1,\ldots,k\}\), there is a finite index subgroup \(H^{0}_{F_{i}}\subseteq H_{F_{i}}\) that stabilizes both \(F\) and \(F_{i}\), and acts on \(F_{i}\) with finitely many orbits. Let \(K_{i}\) be a finite subset of \(F_{i}\) such that \(H^{0}_{F_{i}}K_{i}=F_{i}\). Let \(K=\cup_{i=1}^{k}K_{i}\) and let \(H^{\prime}\) be the subgroup of \(H_{F}\) generated by \(H^{0}_{\ell}\) and \(\{H^{0}_{F_{i}}\}_{i=1}^{k}\). As \(H^{0}_{\ell}\) and \(H^{0}_{F_{i}}\) are finitely generated by induction, it follows that \(H^{\prime}\) is finitely generated. Moreover, our construction implies that \(H^{\prime}K=F\). Thus for any \(h\in H_{F}\), there exists \(h^{\prime}\in H^{\prime}\) such that \(h^{\prime}hK\cap K\neq\emptyset\). On the other hand, as the action \(H_{F}\curvearrowright F\) has finite stabilizers (Lemma 6.8) and \(K\) is finite, there are only finitely many elements \(h\in H_{F}\) such that \(hK\cap K\neq\emptyset\). 
Thus \(H^{\prime}\) has finite index in \(H_{F}\), and hence \(H_{F}\) is finitely generated.

**Corollary 8.3**.: _The \(H\)-stabilizer of any cube in \(\mathbb{B}\) is finitely generated._

Proof.: Let \(C\) be a cube of \(\mathbb{B}\), and let \(v\) be the (unique) vertex of \(C\) of minimal rank. Then \(v\) is the vertex of minimal rank in only finitely many cubes of \(\mathbb{B}\). As the \(H\)-action on \(\mathbb{B}\) preserves ranks of vertices (Lemma 2.5), it follows that \(\operatorname{Stab}_{H}(C)\) is a finite-index subgroup of \(\operatorname{Stab}_{H}(v)\). The corollary thus follows from Lemma 8.2.

Recall that the \(H\)-action on \(\mathbb{B}\) induces an \(H\)-action on the extension graph of \(G\). We will also need the following fact.

**Lemma 8.4**.: _For every \(\mathsf{v}\in V\Gamma^{e}\), the stabilizer \(H_{\mathsf{v}}\) is finitely generated._

Proof.: First we claim that \(H_{\mathsf{v}}\) is of finite index in \(\operatorname{Stab}_{H}(P_{\mathsf{v}})\). Indeed, recall that \(P_{\mathsf{v}}=gG_{\operatorname{st}(v)}\) for some \(v\in V\Gamma\). Let \(\operatorname{st}(v)=\{v\}\circ\Gamma_{1}\circ\cdots\circ\Gamma_{k}\) be the join decomposition of \(\operatorname{st}(v)\). This gives \(P_{\mathsf{v}}=gG_{\operatorname{st}(v)}\cong g\langle v\rangle\times gG_{\Gamma_{1}}\times\cdots\times gG_{\Gamma_{k}}\). As \(\operatorname{Stab}_{H}(P_{\mathsf{v}})\) sends standard flats to standard flats in \(P_{\mathsf{v}}\), its action on \(P_{\mathsf{v}}\) respects this product decomposition (it can possibly permute the factors). Hence \(\operatorname{Stab}_{H}(P_{\mathsf{v}})\) has a finite index subgroup sending \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines, namely \(H_{\mathsf{v}}\).

If \(F\subseteq P_{\mathsf{v}}\) is a standard flat (corresponding to a vertex of \(\mathbb{B}(\mathsf{v})\)), then \(F\) is contained in only finitely many regions of the form \(P_{\mathsf{w}}\) with \(\mathsf{w}\in V\Gamma^{e}\). Hence \(\operatorname{Stab}_{H}(F)\) has a finite index subgroup preserving \(P_{\mathsf{v}}\). By the claim in the previous paragraph, it follows that \(\operatorname{Stab}_{H_{\mathsf{v}}}(F)\) is a finite-index subgroup of \(\operatorname{Stab}_{H}(F)\), and is thus finitely generated by Lemma 8.2. More generally, by the same argument (and using Corollary 8.3), for any cube \(C\) in \(\mathbb{B}(\mathsf{v})\), the \(H_{\mathsf{v}}\)-stabilizer of \(C\) has finite index in its \(H\)-stabilizer, hence is finitely generated. Now \(H_{\mathsf{v}}\) acts by cubical automorphisms on the simply connected complex \(\mathbb{B}(\mathsf{v})\) cocompactly (Lemma 8.1(3)), with finitely generated cell stabilizers, so it is finitely generated by [11, Theorem 1].

### Commensurability of stabilizers

Let \(Z_{\mathsf{v}}=g\langle v\rangle\). The splitting \(P_{\mathsf{v}}=gG_{\operatorname{st}(v)}=g(\langle v\rangle\times G_{\operatorname{lk}(v)})\) gives a projection map \(\pi_{1}:P_{\mathsf{v}}\to Z_{\mathsf{v}}\). On the other hand, as \(\mathcal{L}_{\mathsf{v}}\) is the collection of \(\mathsf{v}\)-lines, there is a map \(\pi_{2}:P_{\mathsf{v}}\to\mathcal{L}_{\mathsf{v}}\) sending each point to the \(\mathsf{v}\)-line containing this point. Moreover, \(\mathcal{L}_{\mathsf{v}}\) can be naturally identified with \(gG_{\operatorname{lk}(v)}\). Note that the map \((\pi_{1},\pi_{2})\) gives an identification between \(P_{\mathsf{v}}\) and \(Z_{\mathsf{v}}\times\mathcal{L}_{\mathsf{v}}\).
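For the reader's convenience, let us spell out this identification in coordinates (this is immediate from the definitions). Every point of \(P_{\mathsf{v}}\) can be written uniquely as \(gv^{m}h\) with \(m\in\mathbb{Z}\) and \(h\in G_{\operatorname{lk}(v)}\), and since \(v\) is central in \(G_{\operatorname{st}(v)}\),

\[\pi_{1}(gv^{m}h)=gv^{m}\in Z_{\mathsf{v}},\qquad\pi_{2}(gv^{m}h)=gh\langle v\rangle\in\mathcal{L}_{\mathsf{v}},\]

so that, under the identification of \(\mathcal{L}_{\mathsf{v}}\) with \(gG_{\operatorname{lk}(v)}\), the map \(\pi_{2}\) is simply \(gv^{m}h\mapsto gh\).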
Moreover, the action of \(G_{\mathsf{v}}\) on \(P_{\mathsf{v}}\) respects this product decomposition, hence gives two factor actions:

1. \(G_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\), for which the stabilizer of each element is \(gG_{\operatorname{lk}(v)}g^{-1}\);
2. \(G_{\mathsf{v}}\curvearrowright\mathcal{L}_{\mathsf{v}}\), for which the stabilizer of each element is \(g\langle v\rangle g^{-1}\).

We let \(Z_{1}=Z_{\mathsf{v}}\) and \(Z_{2}=\mathcal{L}_{\mathsf{v}}\). We denote by \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) the stabilizer of \(\mathsf{v}\) in \(\mathrm{Aut}(\mathbb{B})\) (under the canonical isomorphism between \(\mathrm{Aut}(\mathbb{B})\) and \(\mathrm{Aut}(\Gamma^{e})\) recalled in Section 2.3). Note that

1. \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) preserves \(P_{\mathsf{v}}\);
2. \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) sends standard flats in \(P_{\mathsf{v}}\) to standard flats in \(P_{\mathsf{v}}\);
3. \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) sends \(\mathsf{v}\)-lines to \(\mathsf{v}\)-lines.

Thus \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) preserves the splitting \(P_{\mathsf{v}}=Z_{1}\times Z_{2}\). For every \(i\in\{1,2\}\), we have factor actions of \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})=\mathrm{Stab}_{\mathrm{Aut}(\Gamma^{e})}(\mathsf{v})\), \(G_{\mathsf{v}}\) and \(H_{\mathsf{v}}\) on \(Z_{i}\). Given \(x,y\in Z_{i}\), we denote by \(\mathrm{Aut}_{\mathsf{v},x}(\mathbb{B})\), \(G_{\mathsf{v},x}\) and \(H_{\mathsf{v},x}\) the stabilizers of \(x\) in \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\), \(G_{\mathsf{v}}\) and \(H_{\mathsf{v}}\) respectively, and by \(G_{\mathsf{v},x,y}\) and \(H_{\mathsf{v},x,y}\) the common stabilizers of \(x\) and \(y\).

**Proposition 8.5**.: _For every \(i\in\{1,2\}\), and any \(x,y\in Z_{i}\), the groups \(H_{\mathsf{v},x}\) and \(H_{\mathsf{v},y}\) are commensurable in \(H_{\mathsf{v}}\)._

Proof.: By symmetry, it is enough to prove that \(H_{\mathsf{v},x,y}\) has finite index in \(H_{\mathsf{v},x}\). Our proof will use two facts regarding the action of \(G_{\mathsf{v}}\) on \(Z_{i}\) (proved below), namely:

* (Fact 1) The actions of \(G_{\mathsf{v}}\) and of \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) have the same orbits on \(Z_{i}\) (they are both transitive).
* (Fact 2) For any \(x,z\in Z_{i}\), we have \(G_{\mathsf{v},x}=G_{\mathsf{v},z}\).

For Fact 1, note that the action \(G_{\mathsf{v}}\curvearrowright P_{\mathsf{v}}\) is transitive and respects the splitting \(P_{\mathsf{v}}\cong Z_{\mathsf{v}}\times\mathcal{L}_{\mathsf{v}}\), so the action \(G_{\mathsf{v}}\curvearrowright Z_{i}\) is transitive for \(i=1,2\). Hence the same holds for the bigger group \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\). Fact 2 follows from the discussion of stabilizers of factor actions before the proposition.

As before, let \(\hat{\Omega}\) be a measure equivalence coupling between \(\hat{G}\) and \(H\), and let \(\theta:\hat{\Omega}\to\mathrm{Aut}(\mathbb{B})\cong\mathrm{Aut}(\Gamma^{e})\) be a measurable \((\hat{G}\times H)\)-equivariant map. As recalled in the introductory paragraph of Section 8, the space \(\hat{\Omega}_{\mathsf{v}}=\theta^{-1}(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B}))\) is a measure equivalence coupling between \(\hat{G}_{\mathsf{v}}\) and \(H_{\mathsf{v}}\). Hence \(\hat{\Omega}_{\mathsf{v}}\) is also a measure equivalence coupling between \(G_{\mathsf{v}}\) and \(H_{\mathsf{v}}\). Let \(\hat{\Omega}_{\mathsf{v},x}=\theta^{-1}(\mathrm{Aut}_{\mathsf{v},x}(\mathbb{B}))\).
Since the orbits of \(x\) under \(\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\) and \(G_{\mathsf{v}}\) coincide (Fact 1 above), it follows from Corollary 4.4 (applied with \(K=Z_{i}\), with \(L=\mathrm{Aut}_{\mathsf{v}}(\mathbb{B})\), with \(\mathsf{G}=G_{\mathsf{v}}\), and with \(\hat{\Omega}_{\mathsf{v}}\) in place of \(\Omega\)) that \(\hat{\Omega}_{\mathsf{v},x}\) is a measure equivalence coupling between \(G_{\mathsf{v},x}\) and \(H_{\mathsf{v},x}\).

We want to apply Lemma 4.9 with \(\mathsf{G}=G_{\mathsf{v},x},\mathsf{H}=H_{\mathsf{v},x},\Sigma=\hat{\Omega}_{\mathsf{v},x}\) and \(\mathsf{H}^{\prime}=H_{\mathsf{v},x,y}\). It remains to find \(\Sigma^{\prime}\) with the desired properties. For the following discussion, we refer to the statement of Lemma 6.6 for our convention of how \(\mathsf{G}\) and \(\mathsf{H}\) act on \(\mathrm{Aut}_{\mathsf{v},x}(\mathbb{B})\). Given \(z\in Z_{i}\), we let \(\mathrm{Aut}_{\mathsf{v},x,y\to z}(\mathbb{B})\) be the Borel subset of \(\mathrm{Aut}_{\mathsf{v},x}(\mathbb{B})\) consisting of all automorphisms that send \(y\) to \(z\), and let \(\hat{\Omega}_{\mathsf{v},x,y\to z}=\theta^{-1}(\mathrm{Aut}_{\mathsf{v},x,y\to z}(\mathbb{B}))\). Then \(\hat{\Omega}_{\mathsf{v},x}=\sqcup_{z\in Z_{i}}\hat{\Omega}_{\mathsf{v},x,y\to z}\). Therefore, we can (and will) choose \(z\in Z_{i}\) such that \(\mu(\hat{\Omega}_{\mathsf{v},x,y\to z})>0\). We take \(\Sigma^{\prime}=\hat{\Omega}_{\mathsf{v},x,y\to z}\). It is invariant under \(G_{\mathsf{v},x,z}\) and \(\mathsf{H}^{\prime}=H_{\mathsf{v},x,y}\), and the former group is equal to \(\mathsf{G}=G_{\mathsf{v},x}\) by Fact 2. Now take \(h\in\mathsf{H}\setminus\mathsf{H}^{\prime}\). Then \(h(x)=x\) and \(h(y)\neq y\), so \(h\Sigma^{\prime}=\hat{\Omega}_{\mathsf{v},x,h(y)\to z}\), which is disjoint from \(\hat{\Omega}_{\mathsf{v},x,y\to z}\), as desired.

### From commensurated to normal

Let \(\mathsf{v}\in V\Gamma^{e}\). Proposition 8.5, applied to \(Z_{2}=\mathcal{L}_{\mathsf{v}}\), shows that the \(H_{\mathsf{v}}\)-stabilizers of any two \(\mathsf{v}\)-lines are commensurable (and they are virtually infinite cyclic by Proposition 7.1). In this section, we will improve this by showing that \(H_{\mathsf{v}}\) contains a normal subgroup that preserves all \(\mathsf{v}\)-lines: this is Proposition 8.7 below. We start with a lemma.

**Lemma 8.6**.: _Let \(\mathsf{v}\in V\Gamma^{e}\). Let \(\ell_{1},\ell_{2}\) be two \(\mathsf{v}\)-lines in the same \(H_{\mathsf{v}}\)-orbit. Let \(A\) be any finite-index infinite cyclic subgroup of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1})\cap\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2})\)._

_Then \([\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1}):A]=[\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2}):A]\)._

Proof.: Recall that every \(\mathsf{v}\)-line \(\ell\) determines a rank \(1\) vertex \(v_{\ell}\) in \(\mathbb{B}\), and vertices of \(\ell\) correspond to rank \(0\) vertices adjacent to \(v_{\ell}\). Therefore, for every \(\mathsf{v}\)-line \(\ell\), the action of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) on \(\ell\) has finite stabilizers (Lemma 6.8) and finitely many orbits (Lemma 8.1).
As \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1})\) and \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2})\) are commensurable (Proposition 8.5 applied to \(Z_{2}=\mathcal{L}_{\mathsf{v}}\)), and \(A\) is torsion-free and has finite index in \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1})\cap\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2})\), it follows that the actions \(A\curvearrowright\ell_{1}\) and \(A\curvearrowright\ell_{2}\) are free with finitely many orbits. As \(A\subseteq H_{\mathsf{v}}\), the \(A\)-action on \(P_{\mathsf{v}}=Z_{\mathsf{v}}\times\mathcal{L}_{\mathsf{v}}\) preserves the product structure; since \(A\) preserves each \(\ell_{i}\), both actions \(A\curvearrowright\ell_{1}\) and \(A\curvearrowright\ell_{2}\) are identified with the factor action of \(A\) on \(Z_{\mathsf{v}}\). Therefore the two actions \(A\curvearrowright\ell_{1}\) and \(A\curvearrowright\ell_{2}\) have the same number of orbits.

Let \(h\in H_{\mathsf{v}}\) be such that \(h(\ell_{1})=\ell_{2}\). Then \(h\) induces a bijection \(h_{*}:\mathcal{O}_{1}^{j}\mapsto\mathcal{O}_{2}^{j}\) between the set of orbits \(\{\mathcal{O}_{1}^{1},\dots,\mathcal{O}_{1}^{k}\}\) of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1})\curvearrowright\ell_{1}\), and the set of orbits \(\{\mathcal{O}_{2}^{1},\dots,\mathcal{O}_{2}^{k}\}\) of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2})\curvearrowright\ell_{2}\). In addition \(h_{*}\) preserves the (finite) cardinalities of stabilizers, i.e. the \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1})\)-stabilizer of any point in \(\mathcal{O}_{1}^{j}\) has the same cardinality (denoted by \(k_{j}\)) as the \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2})\)-stabilizer of any point in \(\mathcal{O}_{2}^{j}\).

Now, for every \(i\in\{1,2\}\), the number of \(A\)-orbits on \(\ell_{i}\) is equal to \([\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{i}):A]\sum_{j=1}^{k}\frac{1}{k_{j}}\). Therefore \([\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{1}):A]=[\operatorname{Stab}_{H_{\mathsf{v}}}(\ell_{2}):A]\).

**Proposition 8.7**.: _For every \(\mathsf{v}\in V\Gamma^{e}\), there exists an infinite cyclic normal subgroup \(N_{\mathsf{v}}\unlhd H_{\mathsf{v}}\) which preserves every \(\mathsf{v}\)-line._

Proof.: As the action of \(H_{\mathsf{v}}\) on \(\mathcal{L}_{\mathsf{v}}\) has finitely many orbits (Lemma 8.1), we can (and will) choose a finite subset \(\mathcal{L}_{0}\subseteq\mathcal{L}_{\mathsf{v}}\) such that \(\mathcal{L}_{\mathsf{v}}=H_{\mathsf{v}}\mathcal{L}_{0}\). Recall from Lemma 8.4 that \(H_{\mathsf{v}}\) is finitely generated. Take a finite generating set \(S\) of \(H_{\mathsf{v}}\) containing the trivial element, and let \(\mathcal{L}_{1}=\cup_{s\in S}s^{-1}\mathcal{L}_{0}\).

For every \(\ell\in\mathcal{L}_{1}\), recall that \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) is virtually infinite cyclic by Proposition 7.1. Let \(k_{\ell}\in\mathbb{N}\) be the smallest integer such that the intersection of all subgroups of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) of index \(k_{\ell}\) is infinite cyclic, and let \(Z_{\ell}\) be this intersection. Notice that \(Z_{\ell}\) is a characteristic subgroup of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\). We observe that if \(\ell\in\mathcal{L}_{0}\) and \(s\in S\), then \(s^{-1}\ell\in\mathcal{L}_{1}\) and \(Z_{s^{-1}\ell}=s^{-1}Z_{\ell}s\): indeed \(h\mapsto s^{-1}hs\) determines an isomorphism between \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) and \(\operatorname{Stab}_{H_{\mathsf{v}}}(s^{-1}\ell)\), so \(k_{\ell}=k_{s^{-1}\ell}\) and the isomorphism \(h\mapsto s^{-1}hs\) sends \(Z_{\ell}\) to \(Z_{s^{-1}\ell}\) in view of the definition of these subgroups.
Let \(N_{\mathsf{v}}=\cap_{\ell\in\mathcal{L}_{1}}Z_{\ell}\). We claim that \(sN_{\mathsf{v}}s^{-1}=N_{\mathsf{v}}\) for every \(s\in S\). Indeed, let \(s\in S\), and take \(\ell\in\mathcal{L}_{0}\). Then \(N_{\mathsf{v}}\) has finite index in \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\). As both \(\ell\) and \(s^{-1}\ell\) belong to \(\mathcal{L}_{1}\), Lemma 8.6 shows that \([\operatorname{Stab}_{H_{\mathsf{v}}}(s^{-1}\ell):N_{\mathsf{v}}]=[\operatorname{Stab}_{H_{\mathsf{v}}}(\ell):N_{\mathsf{v}}]\). As \(N_{\mathsf{v}}\subseteq\operatorname{Stab}_{H_{\mathsf{v}}}(s^{-1}\ell)\), we have \(sN_{\mathsf{v}}s^{-1}\subseteq\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) and \([\operatorname{Stab}_{H_{\mathsf{v}}}(s^{-1}\ell):N_{\mathsf{v}}]=[\operatorname{Stab}_{H_{\mathsf{v}}}(\ell):sN_{\mathsf{v}}s^{-1}]\). Thus \(sN_{\mathsf{v}}s^{-1}\) and \(N_{\mathsf{v}}\) are subgroups of \(\operatorname{Stab}_{H_{\mathsf{v}}}(\ell)\) of the same finite index. In addition, \(N_{\mathsf{v}}\) is contained in \(Z_{\ell}\) and in \(Z_{s^{-1}\ell}\), so \(sN_{\mathsf{v}}s^{-1}\subseteq Z_{\ell}\). Therefore \([Z_{\ell}:N_{\mathsf{v}}]=[Z_{\ell}:sN_{\mathsf{v}}s^{-1}]\). As \(Z_{\ell}\) is infinite cyclic, it follows that \(sN_{\mathsf{v}}s^{-1}=N_{\mathsf{v}}\) for every \(s\in S\).

As \(S\) generates \(H_{\mathsf{v}}\), it follows that \(N_{\mathsf{v}}\) is a normal subgroup of \(H_{\mathsf{v}}\). As \(N_{\mathsf{v}}\) preserves each line in \(\mathcal{L}_{0}\), and \(H_{\mathsf{v}}\mathcal{L}_{0}=\mathcal{L}_{\mathsf{v}}\), it follows that \(N_{\mathsf{v}}\) preserves every \(\mathsf{v}\)-line.

### Conjugating the factor action to one by uniform quasi-isometries

We now consider the factor action \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\to\operatorname{Bij}(Z_{\mathsf{v}})\). Let \(N_{\mathsf{v}}\unlhd H_{\mathsf{v}}\) be a normal infinite cyclic subgroup that preserves every \(\mathsf{v}\)-line, given by Proposition 8.7. As \(N_{\mathsf{v}}\) is torsion-free, by Lemma 6.8, the action of \(N_{\mathsf{v}}\) on each \(\mathsf{v}\)-line is free, and by Lemma 8.1 it has finitely many orbits. Let \(\{\mathcal{O}_{1},\ldots,\mathcal{O}_{n}\}\) be the set of orbits for the action of \(N_{\mathsf{v}}\) on \(Z_{\mathsf{v}}\). Since \(N_{\mathsf{v}}\) is normal in \(H_{\mathsf{v}}\), the group \(H_{\mathsf{v}}\) acts by permutations of the set \(\{\mathcal{O}_{1},\ldots,\mathcal{O}_{n}\}\). Let \(H_{\mathsf{v}}^{0}\subseteq H_{\mathsf{v}}\) be a finite index subgroup which preserves every orbit \(\mathcal{O}_{i}\).

Let \(a\) be a generator of \(N_{\mathsf{v}}\). We can (and will) identify each \(\mathcal{O}_{i}\) with \(\mathbb{Z}\) in such a way that \(a\) acts by translation by \(1\) on each \(\mathcal{O}_{i}\). Since \(N_{\mathsf{v}}\) is normal in \(H_{\mathsf{v}}\), the group \(H_{\mathsf{v}}^{0}\) acts on each \(\mathcal{O}_{i}\) by isometries: indeed, every element \(h\in H_{\mathsf{v}}^{0}\) either commutes with \(a\), and then satisfies \(h(x+1)=h(x)+1\) for \(x\in\mathcal{O}_{i}\cong\mathbb{Z}\), hence acts by translations; or else \(hah^{-1}=a^{-1}\), in which case \(h(x+1)=h(x)-1\) and \(h\) acts by an orientation-reversing isometry. Let \(H_{\mathsf{v}}^{1}\subseteq H_{\mathsf{v}}^{0}\) be the finite-index subgroup consisting of all elements that act by positive isometries (i.e. translations) on each \(\mathcal{O}_{i}\). Let \(\tau_{i}:H_{\mathsf{v}}^{1}\to\mathbb{Z}\) be the translation length homomorphism on \(\mathcal{O}_{i}\).
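To fix ideas, here is a toy instance of these coordinates (recorded only as an illustration, with hypothetical values): if \(n=2\) and the orbits \(\mathcal{O}_{1},\mathcal{O}_{2}\) are identified with \(\mathbb{Z}\) as above, then an element \(h\in H_{\mathsf{v}}^{1}\) with \(\tau_{1}(h)=c_{1}\) and \(\tau_{2}(h)=c_{2}\) acts by

\[h(x)=x+c_{1}\ \text{ for }x\in\mathcal{O}_{1},\qquad h(x)=x+c_{2}\ \text{ for }x\in\mathcal{O}_{2},\]

and each \(\tau_{i}\) is indeed a homomorphism because translations of \(\mathbb{Z}\) compose additively. The next lemma shows that in fact \(c_{1}=c_{2}\).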
**Lemma 8.8**.: _For any \(i,j\in\{1,\ldots,n\}\) and any \(h\in H_{\mathsf{v}}^{1}\), one has \(\tau_{i}(h)=\tau_{j}(h)\)._

Proof.: Arguing towards a contradiction, let \(h\in H_{\mathsf{v}}^{1}\) be such that \(\tau_{i}(h)\neq\tau_{j}(h)\). Let \(x\in\mathcal{O}_{i}\) and \(y\in\mathcal{O}_{j}\). Then all nonzero powers of \(ha^{-\tau_{i}(h)}\) belong to \(H_{\mathsf{v},x}\setminus H_{\mathsf{v},y}\). This contradicts the fact that \(H_{\mathsf{v},x}\) and \(H_{\mathsf{v},y}\) are commensurable (Proposition 8.5 applied to \(Z_{1}=Z_{\mathsf{v}}\)).

**Lemma 8.9**.: _The action of \(H_{\mathsf{v}}\) on \(Z_{\mathsf{v}}\) is conjugate to an action on \(\mathbb{Z}\) by uniform quasi-isometries._

Proof.: Let \(\sigma:H_{\mathsf{v}}\to\mathfrak{S}(\{1,\ldots,n\})\) be the homomorphism given by the permutation of the orbits \(\mathcal{O}_{i}\). Recall that we have fixed a bijection between every orbit \(\mathcal{O}_{i}\) and \(\mathbb{Z}\), such that the action of \(a\) on each \(\mathcal{O}_{i}\) is by translation by \(1\). Take \(h\in H_{\mathsf{v}}\). We claim that for each \(i\in\{1,\ldots,n\}\), the map \(h_{|\mathcal{O}_{i}}:\mathcal{O}_{i}\cong\mathbb{Z}\to\mathcal{O}_{\sigma(h)(i)}\cong\mathbb{Z}\) is an isometry. Indeed, either \(hah^{-1}=a\), in which case \(h(x+1)=h(x)+1\) for any \(x\in\mathcal{O}_{1}\cup\cdots\cup\mathcal{O}_{n}\). Or else \(hah^{-1}=a^{-1}\), in which case \(h(x+1)=h(x)-1\) for any \(x\in\mathcal{O}_{1}\cup\cdots\cup\mathcal{O}_{n}\). This proves the claim. Moreover, either \(h_{|\mathcal{O}_{i}}\) is a translation for all \(i\), or \(h_{|\mathcal{O}_{i}}\) is a reflection for all \(i\).

We now choose a bijection between \(Z_{\mathsf{v}}\) and \(\mathbb{Z}\) such that, under this bijection, each \(\mathcal{O}_{i}\) is identified with the subset \(\{mn+i\}_{m\in\mathbb{Z}}\), and the action of \(a\) is by translation by \(n\). It follows from the previous paragraph that for every \(h\in H_{\mathsf{v}}\), there exist integers \(c_{1}(h),\ldots,c_{n}(h)\) such that either \(h(mn+i)=(m+c_{i}(h))n+\sigma(h)(i)\) for any \(m\in\mathbb{Z}\) and any \(i\in\{1,\ldots,n\}\), or else \(h(mn+i)=(c_{i}(h)-m)n+\sigma(h)(i)\) for any \(m\in\mathbb{Z}\) and any \(i\in\{1,\ldots,n\}\). Thus each \(h\in H_{\mathsf{v}}\) acts on \(\mathbb{Z}\) by a quasi-isometry.

We are now left with proving that the quasi-isometry constants are in fact uniform. Lemma 8.8 implies that the action of \(H_{\mathsf{v}}^{1}\) on \(Z_{\mathsf{v}}\cong\mathbb{Z}\) is by translations (in particular by uniform quasi-isometries). Recall that \(H_{\mathsf{v}}^{1}\) has finite index in \(H_{\mathsf{v}}\). Let \(F\) be a finite set of representatives of the left cosets of \(H^{1}_{\mathsf{v}}\) in \(H_{\mathsf{v}}\). Any \(h\in H_{\mathsf{v}}\) can then be decomposed as \(h=fh^{\prime}\) for some \(f\in F\) and some \(h^{\prime}\in H^{1}_{\mathsf{v}}\). Thus the quasi-isometry constant of \(h\) is bounded by a constant that only depends on the quasi-isometry constants of elements of \(F\). This concludes our proof.

## 9 Proof of the main theorem

We are now in position to complete the proof of our main theorem.

Proof of Theorem 1.: The conclusion is obvious if \(G=\{1\}\). The case where \(G\) is isomorphic to \(\mathbb{Z}\) is a consequence of Theorem 7.10. From now on we assume that \(G\) is not cyclic.

Let \(\mathbb{B}\) be the right-angled building of \(G\). Lemma 6.6 yields an action of \(H\) on \(\mathbb{B}\) by cubical automorphisms (equivalently, an action of \(H\) on \(G\) by flat-preserving bijections).
Lemma 6.8 ensures that this flat-preserving action has finite point stabilizers, and finitely many orbits of vertices. Finally, Lemma 8.9 ensures that for every \(\mathsf{v}\in V\Gamma^{\mathsf{e}}\), the factor action \(H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) is conjugate to an action on \(\mathbb{Z}\) by uniform quasi-isometries. Therefore, Theorem 3.2 applies and shows that \(H\) is finitely generated and quasi-isometric to \(G\). We now show the importance of the bounded torsion assumption in our theorem, by giving two examples of infinitely generated groups \(H\) with unbounded torsion such that there exists an \((L^{\infty},L^{0})\)-measure equivalence coupling \(\Omega\) from \(H\) to a right-angled Artin group \(G\) (possibly with \(|\operatorname{Out}(G)|<+\infty\)). The \((L^{\infty},L^{0})\)-condition means that there exists a Borel fundamental domain \(X_{G}\) for the \(G\)-action on \(\Omega\) such that, denoting by \(c:H\times X_{G}\to G\) the associated cocycle, for every \(h\in H\), \(c(h,\cdot)\) takes essentially only finitely many values. **Proposition 9.1**.: _Let \(K\) be a locally finite countable polyhedral complex, let \(G\) be a finitely generated group acting properly discontinuously and cocompactly on \(K\), and let \(H\) be a lattice in \(\operatorname{Aut}(K)\)._ _Then there exists an \((L^{\infty},L^{0})\)-measure equivalence coupling from \(H\) to \(G\)._ In particular, let \(G\) be a non-abelian right-angled Artin group, and let \(H\) be an infinitely generated (non-uniform) lattice in the automorphism group of the universal cover of the Salvetti complex of \(G\) (examples were constructed in [11, Section 4.2], and these have unbounded torsion). Then there exists an \((L^{\infty},L^{0})\)-measure equivalence coupling from \(H\) to \(G\). Proof.: We view \(\operatorname{Aut}(K)\), equipped with its Haar measure, as a measure equivalence coupling between \(G\) and \(H\), where the action of \(G\times H\) is via \((g,h)\cdot f=gfh^{-1}\). As \(G\) acts cocompactly on \(K\), we can find a finite set \(V_{0}\) of representatives of the \(G\)-orbits of vertices in \(K\). Fix \(v_{0}\in V_{0}\). Then \(X=\{f\in\operatorname{Aut}(K)\mid f(v_{0})\in V_{0}\}\) is a measurable fundamental domain for the \(G\)-action on \(\operatorname{Aut}(K)\). We let \(c:H\times X\to G\) be the associated measure equivalence cocycle. We aim to prove that for every \(h\in H\), \(c(h,\cdot)\) takes only finitely many values. Fix \(h\in H\). Let \(d\) be the metric on the \(1\)-skeleton of \(K\) obtained by assigning length \(1\) to every edge, and considering the induced path metric. For every \(f\in X\), we have \[d(v_{0},f(h^{-1}v_{0}))\leq d(v_{0},f(v_{0}))+d(v_{0},h^{-1}v_{0})\leq \operatorname{diam}(V_{0})+d(v_{0},h^{-1}v_{0}).\] Since the \(G\)-action on \(K\) is properly discontinuous, we can find a finite set \(B_{h}\subseteq G\) such that for every \(f\in X\), one has \(f(h^{-1}v_{0})\in B_{h}^{-1}V_{0}\). In particular, there exists \(g(f)\in B_{h}\) such that \(g(f)fh^{-1}\in X\). This shows that \(c(h,f)\in B_{h}\) and concludes our proof. An \((L^{\infty},L^{0})\)_-orbit equivalence coupling_ from \(H\) to \(G\) is a measure equivalence coupling \(\Omega\) for which there exists a common Borel fundamental domain \(X\) for the actions of \(G\) and \(H\) on \(\Omega\), such that denoting by \(c:H\times X\to G\) the associated cocycle, for every \(h\in H\), \(c(h,\cdot)\) essentially takes only finitely many values. 
The existence of an \((L^{\infty},L^{0})\)-orbit equivalence coupling is equivalent to requiring that \(G\) and \(H\) admit free, measure-preserving actions on a standard probability space \(X\) with the same orbits, so that for the natural orbit equivalence cocycle \(c:H\times X\to G\), for every \(h\in H\), \(c(h,\cdot)\) essentially only takes finitely many values.

**Proposition 9.2**.: _Let \(\Gamma\) be a finite simplicial graph, let \(G=G_{\Gamma}\) be the right-angled Artin group over \(\Gamma\), and let \(H\) be any graph product over \(\Gamma\) with vertex groups isomorphic to \(\bigoplus_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}\)._

_Then there exists an \((L^{\infty},L^{0})\)-orbit equivalence coupling from \(H\) to \(G\)._

Proof.: There exists an \((L^{\infty},L^{0})\)-orbit equivalence coupling from \(\bigoplus_{\mathbb{N}}\mathbb{Z}/2\mathbb{Z}\) to \(\mathbb{Z}\), see Remark 7.11. It thus follows from [11, Proposition 4.2] that \(G\) and \(H\) are orbit equivalent. The proposition follows because the argument in [11, Proposition 4.2] preserves the integrability of the coupling (the computation will be carried out in detail in [EH]).

## 10 Lattice embeddings of right-angled Artin groups

In this section, we first prove a theorem that gives restrictions on the possible lattice embeddings of a countable group \(H\) with bounded torsion which is measure equivalent to a right-angled Artin group \(G\) with \(|\operatorname{Out}(G)|<+\infty\) (Theorem 10.1). We then describe all possible lattice embeddings under an integrability condition on the coupling between \(G\) and \(H\) (Theorem 10.5). For the second statement, we will first need to introduce the language of quasi-actions, which will also be useful in the next section.

### Lattice embeddings and measure equivalence

**Theorem 10.1**.: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), and let \(H\) be a countable group with bounded torsion that is measure equivalent to \(G\). Let \(\mathfrak{H}\) be a locally compact second countable group, and let \(\tau:H\to\mathfrak{H}\) be a lattice embedding._

_Then \(\mathfrak{H}\) maps continuously with compact kernel to a totally disconnected locally compact group, and \(\tau\) is cocompact._

In the proof, we will make use of the extension of the notion of measure equivalence to unimodular locally compact second countable groups given by [1, Definition 1.1].

Proof.: The group \(\mathfrak{H}\) is measure equivalent to \(G\). Let \((\Omega,\mu)\) be a measure equivalence coupling between \(G\) and \(\mathfrak{H}\). Recall that the inclusion \(G\subseteq\operatorname{Aut}(\mathbb{B})\) is strongly ICC (Proposition 5.1). By Lemma 6.5, any self measure equivalence coupling of \(G\) is taut relative to the inclusion \(G\subseteq\operatorname{Aut}(\mathbb{B})\) in the sense of [1, Definition 1.10] (the uniqueness of the tautening map in this definition is ensured by [1, Lemma A.8(1)]). Notice also that the locally compact second countable group \(\mathfrak{H}\) is unimodular because it contains \(H\) as a lattice. Therefore we can apply [1, Theorem 2.6] and deduce that there exists a continuous homomorphism \(\iota:\mathfrak{H}\to\operatorname{Aut}(\mathbb{B})\) with compact kernel \(K\) and closed image. The intersection \(H\cap K\) is finite, hence a lattice in the compact group \(K\). Let \(\pi:\mathfrak{H}\to\mathfrak{H}/K\) be the projection map. By [12, Theorem 1.13] (as restated in [12, Lemma 3.5]), the \(\pi\)-image of \(H\) in \(\mathfrak{H}/K\) is again a lattice.
Since \(\operatorname{Aut}(\mathbb{B})\) is totally disconnected, so is its closed subgroup \(\mathfrak{H}/K\). Since every lattice embedding of a countable group with bounded torsion in a totally disconnected locally compact group is cocompact (see [1, Corollary 4.11]), it follows that the image of \(H\) in \(\mathfrak{H}/K\) is cocompact. Let \(K^{\prime}\subseteq\mathfrak{H}/K\) be a compact set whose \(H\)-translates cover \(\mathfrak{H}/K\). The map \(\pi\) is closed, continuous, and it has compact fibers because \(K\) is compact, see [1, Theorem 1.5.7]. It follows that \(\pi^{-1}(K^{\prime})\) is compact, see [13, Theorem 3.7.2], and therefore \(H\) is cocompact in \(\mathfrak{H}\).

### Quasi-actions

In this section and the next, we will need the notion of a quasi-action of a group. Given \(L\geq 1\) and \(A\geq 0\), an _\((L,A)\)-quasi-action_ of a group \(\mathfrak{H}\) on a metric space \((Z,d)\) is a map \(\rho:\mathfrak{H}\times Z\to Z\) such that

* for every \(h\in\mathfrak{H}\), the map \(\rho(h,\cdot):Z\to Z\) is an \((L,A)\)-quasi-isometry,
* for every \(h_{1},h_{2}\in\mathfrak{H}\) and every \(z\in Z\), one has \(d(\rho(h_{1},\rho(h_{2},z)),\rho(h_{1}h_{2},z))<A\), and
* for every \(z\in Z\), one has \(d(\rho(e,z),z)<A\).

A _quasi-action_ of \(\mathfrak{H}\) on \((Z,d)\) is an \((L,A)\)-quasi-action for some \(L\geq 1\) and \(A\geq 0\). If \(\mathfrak{H}\) is a topological group, the quasi-action \(\rho\) is _proper_ if for any \(z\in Z\) and \(R>0\), the set \(\{h\in\mathfrak{H}\mid\rho(h,z)\in B_{R}(z)\}\) has compact closure. Notice that when \(\mathfrak{H}\) is discrete, this amounts to requiring that this set is finite. The quasi-action \(\rho\) is _cobounded_ if there exist \(z\in Z\) and \(L>0\) such that \(Z\) is contained in the \(L\)-neighborhood of \(\rho(\mathfrak{H}\times\{z\})\).

Two quasi-actions \(\rho,\rho^{\prime}\) of \(\mathfrak{H}\) on the same metric space \(Z\) are _equivalent_ if

\[\sup_{h\in\mathfrak{H}}\sup_{z\in Z}d(\rho(h,z),\rho^{\prime}(h,z))<+\infty.\]

More generally, two quasi-actions \(\rho,\rho^{\prime}\) of \(\mathfrak{H}\) on two metric spaces \(Z,Z^{\prime}\) are _quasi-conjugate_ if there exists a quasi-isometry \(\varphi:Z\to Z^{\prime}\) such that

\[\sup_{h\in\mathfrak{H}}\sup_{z\in Z}d(\varphi\circ\rho(h,z),\rho^{\prime}(h,\varphi(z)))<+\infty.\]

The study of quasi-actions on right-angled Artin groups relies on the following theorem of the second-named author.

**Theorem 10.2** ([16, Theorem 4.18]).: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\). For every \(L\geq 1,A\geq 0\), there exists \(D=D(L,A)\geq 0\) such that the following holds. For every \((L,A)\)-quasi-isometry \(q:G\to G\), there exists a unique flat-preserving bijection \(q^{\prime}:G\to G\) such that \(d(q(x),q^{\prime}(x))\leq D\) for every \(x\in G\)._

_In addition, for every \(x\in G\), denoting by \(\mathcal{F}_{x}\) the set of all maximal standard flats that contain \(x\), one has_

\[\{q^{\prime}(x)\}=\bigcap_{F\in\mathcal{F}_{x}}q_{*}(F),\]

_where \(q_{*}(F)\) is the unique maximal standard flat at finite Hausdorff distance of \(q(F)\)._

We refer to [12, Lemma 4.12 and Equation 4.13] for the "in addition" statement of the above theorem. We record the following corollary of Theorem 10.2.

**Corollary 10.3**.: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), and let \(\mathfrak{H}\) be a locally compact second countable group.
Then any quasi-action \(\rho:\mathfrak{H}\times G\to G\) is equivalent to a unique action \(\alpha:\mathfrak{H}\times G\to G\) by flat-preserving bijections. Moreover, if \(\rho\) is a measurable map, then \(\alpha\) is a continuous map._ Proof.: The existence and uniqueness of \(\alpha\) follow from Theorem 10.2. Its measurability follows from the measurability of \(\rho\) and the fact that \(\alpha(h,g)=g^{\prime}\) if and only if there exists \(M\in\mathbb{N}\) such that for every maximal standard flat \(F\) containing \(g\), there exists a maximal standard flat \(F^{\prime}\) containing \(g^{\prime}\) such that for every \(x\in F\), there exists \(x^{\prime}\in F^{\prime}\) such that \(d(\rho(h,x),x^{\prime})\leq M\). Equipped with the compact-open topology, the group \(\operatorname{Bij}_{\operatorname{FP}}(G)\) is second-countable. So the measurable map \(\mathfrak{H}\to\operatorname{Bij}_{\operatorname{FP}}(G)\) induced by \(\alpha\) is in fact automatically continuous [11, Theorem B.3]. Therefore \(\alpha\) is continuous. **Proposition 10.4**.: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\). Let \(H\) be a finitely generated group that is quasi-isometric to \(G\), and let \(\mathfrak{H}\) be a locally compact second countable group that contains \(H\) as a cocompact lattice._ _Then \(\mathfrak{H}\) has a proper, cobounded, continuous action on \(G\) by flat-preserving bijective quasi-isometries with uniform constants, and a proper, cocompact, continuous action on a locally finite \(\operatorname{CAT}(0)\) cube complex \(Y\) which is quasi-isometric to \(G\)._ Proof.: By [10, Lemma 28], the group \(\mathfrak{H}\) has a proper, cobounded, measurable quasi-action \(\rho\) on \(H\). Notice that, while properness and measurability of the quasi-action are not stated in [10, Lemma 28], they follow from the construction. Indeed, to construct \(\rho\), one starts with an enumeration \(H=\{h_{n}\}_{n\in\mathbb{N}}\) and chooses a symmetric compact subset \(K\subseteq\mathfrak{H}\) containing the identity, such that \(HK=\mathfrak{H}\). For \(g\in\mathfrak{H}\) and \(h\in H\), one lets \(\rho(g,h)=h^{\prime}\), where \(h^{\prime}\) is the first element of the enumeration of \(H\) such that \(gh\in h^{\prime}K\). In particular \(\mathfrak{H}\) has a proper, cobounded, measurable quasi-action on \(G\), through a quasi-isometry between \(H\) and \(G\). By Corollary 10.3, there exists a unique continuous flat-preserving action \(\alpha:\mathfrak{H}\to\operatorname{Bij}_{\operatorname{FP}}(G)\) such that there exists \(D\geq 0\) such that for every \(h\in\mathfrak{H}\) and every \(g\in G\), one has \(d(\rho(h,g),\alpha(h)(g))\leq D\). In particular the action \(\alpha\) is by quasi-isometries, and it is proper and cobounded. This proves the first part of the proposition. The second part of the proposition then follows from [12, Theorem 6.2], and \(Y\) arises as a blow-up building as in Section 3. As in the proof of [12, Theorem 6.2], the properness of the \(\mathfrak{H}\)-action on \(Y\) follows from the properness of its action on \(G\), and the fact that there is an equivariant quasi-isometry from \(G\) to the set of rank \(0\) vertices of \(Y\) (see the paragraphs after the proof of the claim in [12, p. 587]). The coboundedness (in fact cocompactness as \(Y\) is locally finite) of the action follows from the same argument. 
And the measurability of the \(\mathfrak{H}\)-action on \(Y\) also follows from the measurability of its action on \(G\) by the same argument (the argument gives the measurability at the level of rank \(0\) vertices, but the action on rank \(0\) vertices determines the whole action). Finally, the continuity of the homomorphism \(\mathfrak{H}\to\operatorname{Aut}(Y)\) follows again from an automatic continuity statement due to Mackey, see [11, Theorem B.3].

### Lattice embeddings under an integrability assumption

**Theorem 10.5**.: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\), and let \(H\) be a countable group with bounded torsion. Assume that there exists an \((L^{1},L^{0})\) measure equivalence coupling from \(H\) to \(G\). Let \(\mathfrak{H}\) be a locally compact second countable group in which \(H\) embeds as a lattice._

_Then there exists a uniformly locally finite \(\operatorname{CAT}(0)\) cube complex \(Y\) quasi-isometric to \(G\), and a continuous homomorphism \(\mathfrak{H}\to\operatorname{Aut}(Y)\) with compact kernel and cocompact image. Such \(Y\) can be chosen to be a blow-up building in the sense of Section 3.1._

Proof.: Since there is an \((L^{1},L^{0})\)-measure equivalence coupling from \(H\) to \(G\), Theorem 1 implies that \(H\) is finitely generated and quasi-isometric to \(G\). Since \(H\) is a cocompact lattice in \(\mathfrak{H}\) by Theorem 10.1, it follows from Proposition 10.4 that there exists a continuous proper cobounded action by cubical automorphisms \(\alpha:\mathfrak{H}\to\operatorname{Aut}(Y)\) on a \(\operatorname{CAT}(0)\) cube complex \(Y\) that is quasi-isometric to \(G\). The kernel of \(\alpha\) is then a closed subgroup of \(\mathfrak{H}\), in fact compact by properness of the action. And the cocompactness of the image of \(\mathfrak{H}\) in \(\operatorname{Aut}(Y)\) follows from the cocompactness of the \(\mathfrak{H}\)-action on \(Y\).

## 11 Lack of virtual locally compact model for products

In this section, we prove Theorem 4 from the introduction. More precisely, we start with a right-angled Artin group \(G_{\Lambda}\) with \(|\operatorname{Out}(G_{\Lambda})|<+\infty\), which splits as a direct product \(G_{\Lambda}=G_{\Gamma_{1}}\times G_{\Gamma_{2}}\). We construct a sequence of finite-index subgroups \(G_{\Lambda_{n}}=G_{\Gamma_{n,1}}\times G_{\Gamma_{n,2}}\), and cocompact torsion-free lattices \(U_{n}\) in the automorphism group \(\operatorname{Aut}(X_{2n,1}\times X_{2n,2})\) of the universal cover of the Salvetti complex of \(\Lambda_{2n}\), such that the groups \(U_{n}\) do not embed as lattices in a common locally compact group \(\mathfrak{G}\), even up to passing to finite-index subgroups (see Theorem 11.8).

There are several building blocks in our construction. We start from cocompact lattices in the automorphism group of a product of two trees, coming from the celebrated construction of Burger-Mozes [1, 1] - this will be reviewed in Section 11.2 below. We will then use two constructions, presented in Section 11.1, which will allow us to extend an action on a product of trees to a cocompact action on the product \(X_{2n,1}\times X_{2n,2}\). The construction of the groups \(U_{n}\) is completed in Section 11.3. Checking that the groups \(U_{n}\) do not admit any common lattice embedding, even virtually, will require a very fine analysis of factor actions (which were already key in the proof of the main theorem of this paper).
This analysis is carried out in Section 11.4, and the proof of Theorem 4 is completed in Section 11.5.

### Two arguments for extending actions to a bigger complex

For each \(n\in\mathbb{N}\), let \(T_{n}\) be the regular tree of valence \(n\), with each edge of length \(2\). An action of a group \(H\) on \(T_{n}\) by automorphisms is _even_ if it preserves the colours of vertices in any bipartite colouring of \(VT_{n}\).

Let \(J_{n}\) be the Cayley graph of a rank \(n\) free group with respect to a free generating set \(X\), with the usual orientation and labeling of edges by elements of \(X\), where edges have length \(1\). Notice that \(J_{n}\) is a regular tree of valence \(2n\). Let \(\tilde{X}\) be a finite set in bijection with \(X\). We label edges of \(J_{n}\times J_{n}\) by elements of \(X\cup\tilde{X}\): edges with a nondegenerate projection to the first (resp. second) factor are labeled by elements of \(X\) (resp. \(\tilde{X}\)). A _standard line_ in \(J_{n}\) is a line made of edges of the same label. A _standard flat_ in \(J_{n}\times J_{n}\) is a product of two standard lines, one in each factor.

An action \(\alpha\) of a group \(H\) on a product \(T\times T^{\prime}\) of two trees is _factor-preserving_ if there exist actions \(\beta:H\to\operatorname{Aut}(T)\) and \(\beta^{\prime}:H\to\operatorname{Aut}(T^{\prime})\) such that \(\alpha(h)=(\beta(h),\beta^{\prime}(h))\) for every \(h\in H\) - in particular \(\alpha\) does not swap the two factors. The actions \(\beta\) and \(\beta^{\prime}\) are called the _factor actions_ of \(\alpha\).

**Lemma 11.1**.: _Let \(\alpha:H\curvearrowright T_{n}\times T_{n}\) be a free, cocompact, factor-preserving action of a group \(H\), whose factor actions are even._

_Then there exist an isometric embedding \(\theta:T_{n}\times T_{n}\to J_{2n}\times J_{2n}\), a group \(H^{\prime}\) with a factor-preserving, free and cocompact action \(\alpha^{\prime}:H^{\prime}\to\mathrm{Aut}(J_{2n}\times J_{2n})\) which preserves the orientation of edges and sends standard flats to standard flats, and an injective homomorphism \(\varphi:H\to H^{\prime}\), such that \(\theta\) is equivariant with respect to \(\alpha,\alpha^{\prime}\) and \(\varphi\)._

_In addition \(\theta\) can be chosen so that for any \(x\in V(T_{n}\times T_{n})\), and any two distinct edges \(e_{1},e_{2}\in E(T_{n}\times T_{n})\) containing \(x\), the labels of \(\theta(e_{1})\) and \(\theta(e_{2})\) are different._

Proof.: Let \(S=\{s_{1},\ldots,s_{n}\}\) be a finite set of cardinality \(n\). We label each edge of \(T_{n}\) by a letter in \(S\), in such a way that two edges incident to the same vertex never have the same label. We fix a bipartition of \(T_{n}\) into _black_ and _white_ vertices.

Let \(T^{\prime}_{n}\) be the barycentric subdivision of \(T_{n}\). Vertices of \(T^{\prime}_{n}\) are of three types: black vertices (in \(V(T_{n})\)), white vertices (in \(V(T_{n})\)), and _gray_ vertices, corresponding to the midpoints of edges of \(T_{n}\). We orient each edge of \(T^{\prime}_{n}\) so that it points towards its gray vertex. We fix a set \(\{a_{1},\ldots,a_{n},a^{\prime}_{1},\ldots,a^{\prime}_{n}\}\) of labels. Any edge of \(T^{\prime}_{n}\) from a white (resp. black) vertex of \(T_{n}\) to the midpoint of an \(s_{i}\)-labeled edge of \(T_{n}\) is labeled by \(a_{i}\) (resp. \(a^{\prime}_{i}\)). Let \(W_{2n}\) be a wedge sum of \(2n\) oriented circles, labeled by \(a_{1},\ldots,a_{n},a^{\prime}_{1},\ldots,a^{\prime}_{n}\).
There is a unique map \(\pi:T^{\prime}_{n}\to W_{2n}\) which preserves labels and orientations of edges. But \(\pi\) is not a covering map. Now we enlarge \(T^{\prime}_{n}\) to a larger space \(K_{n}\), and extend \(\pi\) to a map \(\pi:K_{n}\to W_{2n}\) which is a covering map. This relies on Haglund and Wise's _canonical completion_ procedure [11, Section 6], which we explain in this special case.

We enlarge \(T^{\prime}_{n}\) as follows. For each edge \(e\) of \(T^{\prime}_{n}\) oriented from a vertex \(x\) to another vertex \(y\), we add an edge \(e^{\prime}\) to \(T^{\prime}_{n}\) such that \(e^{\prime}\) and \(e\) have the same label, but \(e^{\prime}\) is oriented from \(y\) to \(x\). Next, for each white (resp. black) vertex \(x\) of \(T^{\prime}_{n}\), we attach \(n\) oriented loops based at \(x\), labeled by \(\{a^{\prime}_{1},\ldots,a^{\prime}_{n}\}\) (resp. \(\{a_{1},\ldots,a_{n}\}\)). Finally, for each gray vertex \(x\) of \(T^{\prime}_{n}\) which is the midpoint of an \(s_{i}\)-labeled edge of \(T_{n}\), we attach \(2(n-1)\) oriented loops based at \(x\), labeled by \(\{a_{1},\ldots,\hat{a}_{i},\ldots,a_{n},a^{\prime}_{1},\ldots,\hat{a}^{\prime}_{i},\ldots,a^{\prime}_{n}\}\) - here \(\hat{a}_{i},\hat{a}^{\prime}_{i}\) means that we remove these two elements from the set of labels. Now one readily verifies that the map \(\pi:T^{\prime}_{n}\to W_{2n}\) extends to a label and orientation preserving covering map \(\pi:K_{n}\to W_{2n}\). A _standard circle_ in \(K_{n}\) is an embedded copy of \(\mathbb{S}^{1}\) made of edges with the same label. Each standard circle in \(K_{n}\) has either one or two edges.

Now consider the action \(H\curvearrowright T_{n}\times T_{n}\), which gives two factor actions \(H\curvearrowright T_{n}\) preserving the bipartite vertex colouring of \(T_{n}\). Any factor action induces an action \(H\curvearrowright T^{\prime}_{n}\) which preserves the orientation of edges and the colours of vertices of \(T^{\prime}_{n}\) (though it might not preserve labels of edges of \(T^{\prime}_{n}\)). And since the number of edge-loops at vertices of a given colour is constant, the action \(H\curvearrowright T^{\prime}_{n}\) extends to an action \(H\curvearrowright K_{n}\) which preserves the orientation of edges and sends standard circles to standard circles.

Now the free and cocompact action \(H\curvearrowright T_{n}\times T_{n}\) extends to a free and cocompact action \(H\curvearrowright K_{n}\times K_{n}\). A _standard torus_ in \(K_{n}\times K_{n}\) is a product of two standard circles, one from each factor. Then the action \(H\curvearrowright K_{n}\times K_{n}\) preserves the edge orientation and sends every standard torus to a standard torus. Moreover, the subspace \(T^{\prime}_{n}\times T^{\prime}_{n}\) is invariant under this action.

Let \(J_{2n}\times J_{2n}\) be the universal cover of \(K_{n}\times K_{n}\) with the induced label and orientation of edges (where the edges coming from the second factor are labeled by \(\tilde{a}_{i},\tilde{a}^{\prime}_{i}\)). Then standard flats in \(J_{2n}\times J_{2n}\) are lifts of standard tori in \(K_{n}\times K_{n}\) - for this it is important to notice that components of \(K_{n}\) consisting of edges with the same label are reduced to circles. Let \(H^{\prime}\subseteq\operatorname{Aut}(J_{2n}\times J_{2n})\) be the subgroup consisting of all automorphisms that lift automorphisms in \(H\).
Then the \(H^{\prime}\)-action on \(J_{2n}\times J_{2n}\) is free and cocompact, preserves the orientation of edges, and sends standard flats to standard flats. In addition, the inclusion map \(T^{\prime}_{n}\times T^{\prime}_{n}\to K_{n}\times K_{n}\) lifts to an isometric embedding \(\theta:T^{\prime}_{n}\times T^{\prime}_{n}\to J_{2n}\times J_{2n}\). As \(T^{\prime}_{n}\times T^{\prime}_{n}\) is invariant under the \(H\)-action on \(K_{n}\times K_{n}\), such a lift gives an injective homomorphism \(\varphi:H\to H^{\prime}\) such that \(\theta\) is equivariant with respect to \(\varphi\). Finally, the additional part of the lemma follows from our construction.

We need another extension criterion, which is based on an idea of Hughes in [10, Section 7.2]. Our next lemma is a variation on a statement to be found in ongoing work of Mj and the second-named author [HM].

Recall that \(X_{\Gamma}\) is the locally finite cube complex canonically associated to \(G_{\Gamma}\), in other words \(X_{\Gamma}\) is the universal cover of the Salvetti complex of \(G_{\Gamma}\). Given a group \(H\) acting by flat-preserving bijections on \(G_{\Gamma}\), we have a _type cocycle_ \(c:H\times VX_{\Gamma}\to\operatorname{Aut}(\Gamma)\), where \(c(h,x)\) with \(h\in H\) and \(x\in VX_{\Gamma}\) is defined as follows. Given a vertex \(v\in V\Gamma\), let \(\ell\) be the standard line of type \(v\) containing \(x\). Then \(c(h,x)(v)\) is defined to be the type of \(h(\ell)\). One readily verifies that \(c(h,x):V\Gamma\to V\Gamma\) preserves adjacency of vertices, hence extends to an automorphism of \(\Gamma\), and that \(c(h,x)\) is indeed a cocycle.

**Lemma 11.2**.: _Let \(\Gamma_{1}\) be an induced subgraph of \(\Gamma_{2}\). Suppose \(H\) is a group acting freely and cocompactly on \(X_{\Gamma_{1}}\), preserving the orientation of edges, and sending standard flats to standard flats. Let \(c_{1}:H\times VX_{\Gamma_{1}}\to\operatorname{Aut}(\Gamma_{1})\) be the associated type cocycle. Suppose that there exists a cocycle \(c_{2}:H\times VX_{\Gamma_{1}}\to\operatorname{Aut}(\Gamma_{2})\) such that \(c_{2}(h,x)_{|\Gamma_{1}}=c_{1}(h,x)\) for any \(h\in H\) and \(x\in VX_{\Gamma_{1}}\)._

_Then there exist a group \(H^{\prime}\) acting freely and cocompactly on \(X_{\Gamma_{2}}\) sending standard flats to standard flats, an injective group homomorphism \(\phi:H\to H^{\prime}\), and a \(\phi\)-equivariant embedding \(j:X_{\Gamma_{1}}\to X_{\Gamma_{2}}\) preserving labels and orientations of edges. Moreover, each element of \(H^{\prime}\) sends standard lines labeled by \(v\in V\Gamma_{2}\) to standard lines whose labels belong to the orbit of \(v\) under the action of elements in \(\{c_{2}(h,x)\}_{h\in H,x\in VX_{\Gamma_{1}}}\)._

Proof.: The inclusion of \(\Gamma_{1}\) as an induced subgraph of \(\Gamma_{2}\) yields an isometric embedding \(S_{\Gamma_{1}}\hookrightarrow S_{\Gamma_{2}}\) between the Salvetti complexes. By pre-composing this with the covering map \(X_{\Gamma_{1}}\to S_{\Gamma_{1}}\), we obtain a local isometric embedding \(X_{\Gamma_{1}}\to S_{\Gamma_{2}}\). While \(X_{\Gamma_{1}}\to S_{\Gamma_{2}}\) is not a covering map, we can "complete" it to a covering map as follows, using (as in the previous proof) a special case of a construction by Haglund-Wise [10, Section 6]. Consider the homomorphism \(G_{\Gamma_{2}}\to G_{\Gamma_{1}}\) fixing each generator in \(\Gamma_{1}\) and sending all generators in \(\Gamma_{2}\setminus\Gamma_{1}\) to the identity.
Let \(K\) be the kernel of this homomorphism and \(Z\) be the cover of \(S_{\Gamma_{2}}\) corresponding to \(K\). Then there is an embedding \(X_{\Gamma_{1}}\to Z\). Under such an embedding, \(X_{\Gamma_{1}}\) and \(Z\) have the same vertex set. Moreover, we can obtain the \(1\)-skeleton of \(Z\) from the \(1\)-skeleton of \(X_{\Gamma_{1}}\) by attaching a collection of edge loops to each of the vertices of \(X_{\Gamma_{1}}\), one edge loop for each vertex in \(V\Gamma_{2}\setminus V\Gamma_{1}\). The complex \(Z\) is called the _canonical completion_ of \(X_{\Gamma_{1}}\) with respect to the local isometric embedding \(X_{\Gamma_{1}}\to S_{\Gamma_{2}}\).

Now we define an action of \(H\) on the \(1\)-skeleton \(Z^{(1)}\) of \(Z\) extending the existing action of \(H\) on \(X_{\Gamma_{1}}^{(1)}\) as follows. For \(h\in H\), we let \(h\) send an edge loop based at \(x\in X_{\Gamma_{1}}\) labeled by \(v\in V\Gamma_{2}\setminus V\Gamma_{1}\) to an edge loop based at \(h(x)\) labeled by \(c_{2}(h,x)(v)\); moreover, we require that \(h\) respects the orientation of edges. It follows from the construction of the action \(H\curvearrowright Z^{(1)}\) that it sends a pair of edges with commuting labels based at the same vertex to another pair of edges with commuting labels. Thus \(H\curvearrowright Z^{(1)}\) extends to an action of \(H\) on the \(2\)-skeleton of \(Z\). One readily checks that the action extends to higher skeleta as well. This gives an edge orientation preserving action \(H\curvearrowright Z\) extending the existing action \(H\curvearrowright X_{\Gamma_{1}}\). Note that \(H\curvearrowright Z\) is also free and cocompact.

Let \(H^{\prime}\) be the subgroup of \(\operatorname{Aut}(X_{\Gamma_{2}})\) consisting of all lifts of automorphisms of \(Z\) coming from the action \(H\curvearrowright Z\). Then \(H^{\prime}\) fits into an exact sequence \(1\to\pi_{1}(Z)\to H^{\prime}\to H\to 1\). As \(H\curvearrowright Z\) is free and cocompact, the same holds for the action \(H^{\prime}\curvearrowright X_{\Gamma_{2}}\). The "moreover" statement of the lemma follows from the construction of \(H^{\prime}\).

Let \(j:X_{\Gamma_{1}}\to X_{\Gamma_{2}}\) be a lift of the embedding \(X_{\Gamma_{1}}\to Z\) with respect to the covering map \(X_{\Gamma_{2}}\to Z\). Now we define an injective homomorphism \(\phi:H\to H^{\prime}\) as follows. Given \(h\in H\), let \(\alpha_{h}\) be the automorphism of \(Z\) coming from the action \(H\curvearrowright Z\), and let \(\beta_{h}=(\alpha_{h})_{|X_{\Gamma_{1}}}\), an automorphism of \(X_{\Gamma_{1}}\). Let \(\beta^{\prime}_{h}\) be the map \(j\circ\beta_{h}\circ j^{-1}\) defined on \(j(X_{\Gamma_{1}})\). We define \(\phi(h):X_{\Gamma_{2}}\to X_{\Gamma_{2}}\) to be the unique lift of \(\alpha_{h}:Z\to Z\) with respect to the covering \(X_{\Gamma_{2}}\to Z\) such that \(\phi(h)\) is an extension of \(\beta^{\prime}_{h}\). Then \(\phi\) is indeed a group homomorphism, and \(j\) is equivariant with respect to \(\phi\).

### On local actions

Given a factor-preserving action of a group \(H\) on \(T_{n}\times T_{n}\), and a vertex \(x\) in one of the factor trees, the _local action_ of \(H\) at \(x\) is the action of the \(H\)-stabilizer of \(x\) (with respect to the factor action) on the collection of edges of the factor tree containing \(x\). The _local group_ at \(x\) is the subgroup of the permutation group of edges containing \(x\) coming from the local action.
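As a quick illustration of these notions (an aside, not one of the examples used below): if \(F\) and \(F^{\prime}\) are groups acting freely and cocompactly on the two factor trees, then \(H=F\times F^{\prime}\) acts freely, cocompactly and factor-preservingly on \(T_{n}\times T_{n}\) via

\[(f,f^{\prime})\cdot(p,q)=(fp,f^{\prime}q),\]

and all its local groups are trivial: the stabilizer of a vertex \(x\) of the first factor tree for the factor action is \(\{e\}\times F^{\prime}\), which acts trivially on the edges containing \(x\). The lattices provided by Lemma 11.3 below sit at the opposite extreme, with large local groups.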
The following can be deduced from work of Lazarovich-Levcovitz-Margolis [13], relying on earlier works by Burger-Mozes [12, 13] and Radu [14].

**Lemma 11.3**.: _For each \(n_{0}>0\), there exist \(n\geq n_{0}\) and a simple torsion-free group \(H\) which acts on \(T_{n}\times T_{n}\) freely, cocompactly, in a factor-preserving way, with even factor actions, such that for any vertex \(x\) in one of the tree factors, the local action of \(H\) at \(x\) has an orbit of size at least \(n/4\)._

Proof.: By [13, Theorem 4.2 and Lemma 4.3], there is a group \(H\) acting on \(T_{n}\times T_{n}\) as a uniform lattice such that all the local groups are the full symmetric groups on \(n\) letters. Moreover, \(H\) has an index \(4\) subgroup \(H^{+}\) which is simple and preserves the bipartition of each factor tree (see the beginning of [13, Section 2] for the definition of \(H^{+}\)). In addition \(H^{+}\) is torsion-free by [14, Lemma 3.1]. Thus the group \(H^{+}\) satisfies the requirement of the lemma.

### The family of examples

Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be finite simplicial non-complete graphs - at this point we are not making any extra assumption, but in the next section our discussion will be applied to graphs \(\Gamma_{1},\Gamma_{2}\) such that \(|\operatorname{Out}(G_{\Gamma_{i}})|<+\infty\) for every \(i\in\{1,2\}\). For every \(i\in\{1,2\}\), we fix a choice of two vertices \(v_{i},w_{i}\in V\Gamma_{i}\), with \(w_{i}\notin\operatorname{st}(v_{i})\).

For every \(n\geq 1\), let \(\Gamma_{n,i}\) be the finite simplicial graph obtained by gluing \(n\) copies of \(\Gamma_{i}\) along \(\operatorname{st}(v_{i})\). We will denote by \(v_{n,i}\) the image of \(v_{i}\) in \(\Gamma_{n,i}\). Let \(\Gamma_{n,i}^{w}\) be the totally disconnected subgraph of \(\Gamma_{n,i}\) whose vertices are the \(n\) copies of \(w_{i}\) in \(\Gamma_{n,i}\), denoted by \(w_{n,i}[1],\dots,w_{n,i}[n]\). Let \(\Lambda=\Gamma_{1}\circ\Gamma_{2}\) and \(\Lambda_{n}=\Gamma_{n,1}\circ\Gamma_{n,2}\). Let \(\Lambda_{n}^{w}=\Gamma_{n,1}^{w}\circ\Gamma_{n,2}^{w}\), a subgraph of \(\Lambda_{n}\).

Notice that for every \(i\in\{1,2\}\) and every \(n\geq 1\), the group \(G_{\Gamma_{n,i}}\) is isomorphic to the kernel of the homomorphism \(\varphi_{n,i}:G_{\Gamma_{i}}\to\mathbb{Z}/n\mathbb{Z}\) that sends \(v_{i}\) to \(1\) and all other generators to \(0\). This gives an injective homomorphism \(q_{n,i}:G_{\Gamma_{n,i}}\to G_{\Gamma_{i}}\) with finite index image. More precisely, let \(\Gamma_{n,i}[1],\ldots,\Gamma_{n,i}[n]\) be the \(n\) copies of \(\Gamma_{i}\) in \(\Gamma_{n,i}\), where \(\Gamma_{n,i}[j]\) is the copy that contains \(w_{n,i}[j]\). Then (up to reordering the \(\Gamma_{n,i}[j]\), which we do once and for all), for any vertex \(u\in\Gamma_{n,i}[j]\) with \(u\neq v_{n,i}\), we have \(q_{n,i}(u)=v_{i}^{j-1}\bar{u}v_{i}^{-j+1}\), where \(\bar{u}\) denotes the vertex of \(\Gamma_{i}\) that corresponds to \(u\) (when writing the above equality, we identify vertices of \(\Gamma_{i}\) and \(\Gamma_{n,i}\) with the corresponding elements of \(G_{\Gamma_{i}}\) and \(G_{\Gamma_{n,i}}\)). And \(q_{n,i}(v_{n,i})=v_{i}^{n}\). In particular \(q_{n,i}(w_{n,i}[j])=v_{i}^{j-1}w_{i}v_{i}^{-j+1}\).

For every \(i\in\{1,2\}\) and every \(n\geq 1\), the \(q_{n,i}\)-image of any standard line of \(G_{\Gamma_{n,i}}\) is Hausdorff close to a standard line of \(G_{\Gamma_{i}}\). In addition \(q_{n,i}\) sends parallel standard lines to parallel standard lines, up to finite Hausdorff distance.
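To make the maps \(q_{n,i}\) concrete, let us unwind the case \(n=2\) (this is only the special case \(j\in\{1,2\}\) of the formulas above). The kernel of \(\varphi_{2,i}\) is generated by \(v_{i}^{2}\), the vertices \(u\in\operatorname{lk}(v_{i})\) (for which \(v_{i}uv_{i}^{-1}=u\)), and the pairs \(u,v_{i}uv_{i}^{-1}\) for \(u\notin\operatorname{st}(v_{i})\). Accordingly,

\[q_{2,i}(v_{2,i})=v_{i}^{2},\qquad q_{2,i}(w_{2,i}[1])=w_{i},\qquad q_{2,i}(w_{2,i}[2])=v_{i}w_{i}v_{i}^{-1},\]

and the fact that the conjugates of the vertices of \(\operatorname{lk}(v_{i})\) coincide is exactly the reason why the two copies of \(\Gamma_{i}\) in \(\Gamma_{2,i}\) are glued along \(\operatorname{st}(v_{i})\).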
Thus \(q_{n,i}\) induces a map from the vertex set of the extension graph \(\Gamma_{n,i}^{e}\) to the vertex set of \(\Gamma_{i}^{e}\), which extends to a graph isomorphism \((q_{n,i})_{*}:\Gamma_{n,i}^{e}\to\Gamma_{i}^{e}\). Likewise we have a map \(q_{n}:G_{\Lambda_{n}}\to G_{\Lambda}\). For every \(i\in\{1,2\}\), let \(X_{n,i}\) be the universal cover of the Salvetti complex of \(G_{\Gamma_{n,i}}\). We orient edges of \(X_{n,i}\) and label them by vertices of \(\Gamma_{n,i}\). As \(\{w_{n,i}[1],\ldots,w_{n,i}[n]\}\) generates a free subgroup of \(G_{\Gamma_{n,i}}\), we have an embedding \(j_{n,i}:J_{n}\to X_{n,i}\) which preserves the orientation and the labeling of edges; taking products gives \(j_{n}:J_{n}\times J_{n}\to X_{n,1}\times X_{n,2}\). We will often refer to the first product factor as the _horizontal_ factor, and the second factor as _vertical_. **Corollary 11.4**.: _There exists an infinite subset \(\mathcal{C}\subset\mathbb{Z}_{\geq 0}\) such that the following is true. For each \(n\in\mathcal{C}\), there exists a group \(V_{n}\) acting on \(T_{n}\times T_{n}\) satisfying the requirements of Lemma 11.3, a group \(U_{n}\) acting on \(X_{2n,1}\times X_{2n,2}\) freely and cocompactly sending standard flats to standard flats, an isometric embedding \(\theta_{n}:T_{n}\times T_{n}\to X_{2n,1}\times X_{2n,2}\), and an injective group homomorphism \(\phi_{n}:V_{n}\to U_{n}\) such that_ 1. \(\theta_{n}\) _sends each edge of_ \(T_{n}\times T_{n}\) _(of length 2) to a concatenation of two edges in_ \(X_{2n,1}\times X_{2n,2}\)_;_ 2. \(\theta_{n}\) _is equivariant with respect to_ \(\phi_{n}\)_;_ 3. _for each_ \(x\in T_{n}\)_, there exists_ \(y\in X_{2n,1}\) _such that_ \(\theta_{n}(\{x\}\times T_{n})\subset\{y\}\times X_{2n,2}\)_;_ 4. _for any vertex_ \(x\in T_{n}\times T_{n}\) _and any two different edges_ \(e_{1}\) _and_ \(e_{2}\) _containing_ \(x\)_, the edges_ \(\theta_{n}(e_{1})\) _and_ \(\theta_{n}(e_{2})\) _have different labels, both contained in_ \(\Lambda_{2n}^{w}\)_;_ 5. _for every_ \(i\in\{1,2\}\)_, each element of_ \(U_{n}\) _sends standard lines labeled by_ \(v_{2n,i}\) _to standard lines labeled by_ \(v_{2n,i}\)_._ Proof.: Let \(V_{n}\) be a group given by Lemma 11.3. By Lemma 11.1, there exist a group \(V_{n}^{\prime}\) acting geometrically on \(J_{2n}\times J_{2n}\), preserving the orientation of edges, and sending standard flats to standard flats, an injective homomorphism \(\varphi_{n}:V_{n}\to V_{n}^{\prime}\), and a \(\varphi_{n}\)-equivariant isometric embedding \(T_{n}\times T_{n}\to J_{2n}\times J_{2n}\). Since the action of \(V_{n}^{\prime}\) is factor-preserving, the type cocycle of \(V_{n}^{\prime}\curvearrowright J_{2n}\times J_{2n}\) is defined and takes its values in automorphisms of \(\Lambda_{2n}^{w}\) that preserve the two factors \(\Gamma_{2n,1}^{w}\) and \(\Gamma_{2n,2}^{w}\) in \(\Lambda_{2n}^{w}\). Any such automorphism of \(\Lambda_{2n}^{w}\) extends naturally to an automorphism of \(\Lambda_{2n}\), because any permutation of the vertex set of \(\Gamma_{2n,i}^{w}\) extends to an automorphism of \(\Gamma_{2n,i}\) permuting the copies \(\Gamma_{2n,i}[1],\ldots,\Gamma_{2n,i}[2n]\). This gives the required extension of cocycles as in the assumption of Lemma 11.2. Now the corollary follows. ### Auxiliary facts about star projections Given a finite simplicial graph \(\Gamma\) with \(|\operatorname{Out}(G_{\Gamma})|<+\infty\), we collect several facts about certain projections on \(G_{\Gamma}\) for later use.
Recall that we identify \(G_{\Gamma}\) as the \(0\)-skeleton of \(X_{\Gamma}\). Given \(\mathsf{v}\in V\Gamma^{e}\), take a \(\mathsf{v}\)-line \(\ell\). Then \(\ell\) is the \(0\)-skeleton of a convex subcomplex of \(X_{\Gamma}\). Then there is a nearest point projection \(\pi_{\ell}:G_{\Gamma}\to\ell\) sending each point to the nearest point in \(\ell\) with respect to the word distance [13, Lemma 13.8]. It is known that if \(\ell_{1},\ell_{2}\) are standard lines with \(\Delta(\ell_{1})=\Delta(\ell_{2})\notin\operatorname{st}(\mathsf{v})\), then \(\pi_{\ell}(\ell_{i})\) is a single point for every \(i\in\{1,2\}\), and \(\pi_{\ell}(\ell_{1})=\pi_{\ell}(\ell_{2})\) (see [14, Lemma 6.2]). This gives a well-defined map \(\pi_{\mathsf{v}}:V(\Gamma^{e}\setminus\operatorname{st}(\mathsf{v}))\to\ell\). Recall that the left action \(G_{\Gamma}\curvearrowright\Gamma^{e}\) induces a projection map \(\Gamma^{e}\to\Gamma\). We say a vertex \(\mathsf{v}\) of \(\Gamma^{e}\) is of _type_ \(v\) with \(v\in\Gamma\), if the map \(\Gamma^{e}\to\Gamma\) sends \(\mathsf{v}\) to \(v\). Now we continue with the notations from the previous section, and assume in addition that \(\operatorname{Out}(G_{\Gamma_{1}})\) and \(\operatorname{Out}(G_{\Gamma_{2}})\) are finite. In particular, for \(i\in\{1,2\}\), let \(\Gamma_{n,i}\), \(q_{n,i}:G_{\Gamma_{n,i}}\to G_{\Gamma_{i}}\) and \((q_{n,i})_{*}:\Gamma_{n,i}^{e}\to\Gamma_{i}^{e}\) be as in the previous section. **Lemma 11.5**.: _Assume that \(\operatorname{Out}(G_{\Gamma_{1}})\) and \(\operatorname{Out}(G_{\Gamma_{2}})\) are finite. Let \(i\in\{1,2\}\). Let \(\ell\) be a standard line in \(G_{\Gamma_{n,i}}\) such that \(\mathsf{v}=\Delta(\ell)\) is of type \(v_{n,i}\). For \(1\leq j\neq k\leq n\), let \(\ell_{j}\) and \(\ell_{k}\) be two standard lines in \(G_{\Gamma_{n,i}}\) intersecting \(P_{\mathsf{v}}\) non-trivially such that \(\Delta(\ell_{j})=\mathsf{w}_{j}\) is of type \(w_{n,i}[j]\) and \(\Delta(\ell_{k})=\mathsf{w}_{k}\) is of type \(w_{n,i}[k]\). Let \(\mathsf{v}^{\prime}=(q_{n,i})_{*}(\mathsf{v})\). Then_ 1. \(\pi_{\mathsf{v}^{\prime}}((q_{n,i})_{*}(\mathsf{w}_{j}))\neq\pi_{\mathsf{v}^{\prime}}((q_{n,i})_{*}(\mathsf{w}_{k}))\)_._ 2. _Write_ \(P_{\mathsf{v}}=gG_{\operatorname{st}(v_{n,i})}\) _and let_ \(\mathcal{C}\) _be the collection of standard lines_ \(\ell^{\prime}\) _which have non-trivial intersection with_ \(gG_{\operatorname{lk}(v_{n,i})}\) _and satisfy_ \(\Delta(\ell^{\prime})\notin\operatorname{st}(\mathsf{v})\)_. Let_ \(\mathcal{W}=\{\Delta(\ell^{\prime})\}_{\ell^{\prime}\in\mathcal{C}}\)_. Then_ \(\pi_{\mathsf{v}^{\prime}}((q_{n,i})_{*}(\mathcal{W}))\) _is a finite set of cardinality at most_ \(n\)_._ Proof.: As \((q_{n,i})_{*}\) is \(G_{\Gamma_{n,i}}\)-equivariant, up to translation, we can assume that \(P_{\mathsf{v}}=G_{\operatorname{st}(v_{n,i})}\), that \(\ell_{j}=\langle w_{n,i}[j]\rangle\), and that \(\ell_{k}=g_{1}v_{n,i}^{m}\langle w_{n,i}[k]\rangle\) for some \(m\in\mathbb{Z}\) and some \(g_{1}\in G_{\operatorname{lk}(v_{n,i})}\). Then \(q_{n,i}(\ell_{j})\) is Hausdorff close to the standard line \(v_{i}^{j-1}\langle w_{i}\rangle\), and \(q_{n,i}(\ell_{k})\) is Hausdorff close to the standard line \(g_{1}v_{i}^{mn+k-1}\langle w_{i}\rangle\). Identifying \(Z_{\mathsf{v}^{\prime}}\) with \(\langle v_{i}\rangle\), we get that \(\pi_{\mathsf{v}^{\prime}}((q_{n,i})_{*}(\mathsf{w}_{j}))=v_{i}^{j-1}\) and \(\pi_{\mathsf{v}^{\prime}}((q_{n,i})_{*}(\mathsf{w}_{k}))=v_{i}^{nm+k-1}\). Since \(j-1=nm+k-1\) would force \(j-k=nm\), which is impossible for \(1\leq j\neq k\leq n\), these two projections are distinct, and Assertion 1 follows.
Assertion 2 follows from a similar computation. **Lemma 11.6**.: _Let \(\Gamma\) be a finite simplicial graph such that \(|\operatorname{Out}(G_{\Gamma})|<+\infty\). Suppose \(H\curvearrowright G_{\Gamma}\) is an action by flat-preserving bijections. Take \(\mathsf{v}\in V\Gamma^{e}\) and let \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) be the associated factor action. Let \(\ell\subset G_{\Gamma}\) be a \(\mathsf{v}\)-line. Take another standard line \(\ell^{\prime}\) with \(\Delta(\ell^{\prime})\notin\operatorname{st}(\mathsf{v})\)._ _Then \(\alpha_{\mathsf{v}}(h)(\pi_{\mathsf{v}}(\Delta(\ell^{\prime})))=\pi_{\mathsf{v}}(\Delta(h(\ell^{\prime})))\) for any \(h\in H_{\mathsf{v}}\)._ Proof.: As \(P_{\mathsf{v}}\) is the vertex set of a convex subcomplex \(C_{\mathsf{v}}\) of \(X_{\Gamma}\), by [13, Lemma 13.8], there is a nearest point projection map \(\pi_{P_{\mathsf{v}}}:G_{\Gamma}\to P_{\mathsf{v}}\). The lemma follows from the fact, established below, that the nearest point projection map \(\pi_{P_{\mathsf{v}}}:G_{\Gamma}\to P_{\mathsf{v}}\) is \(H_{\mathsf{v}}\)-equivariant. Now we prove the fact. Let \(x\in G_{\Gamma}\) and \(h\in H_{\mathsf{v}}\). Let \(y=\pi_{P_{\mathsf{v}}}(x)\). Let \(\omega\) be a shortest edge path in the \(1\)-skeleton of \(X_{\Gamma}\) connecting the vertices \(x\) and \(y\in P_{\mathsf{v}}\). Let \(x_{0},\ldots,x_{n}\) be vertices in \(\omega\) such that for \(0\leq i\leq n-1\), \([x_{i},x_{i+1}]\) is a maximal sub-segment of \(\omega\) that is contained in a standard line (\(x_{0}=x\) and \(x_{n}=y\)). Denote the corresponding standard line by \(\ell_{i}\). As \(y\) can be alternatively characterized as the unique point in \(P_{\mathsf{v}}\) such that any hyperplane separating \(x\) and \(y\) does not cross \(C_{\mathsf{v}}\) (see [13, Lemmas 13.1 and 13.8]), \(y=\pi_{P_{\mathsf{v}}}(x)\) if and only if for any \(\omega\) and \((\ell_{i})\) as above, \(\Delta(\ell_{i})\notin\operatorname{st}(\mathsf{v})\) for any \(0\leq i\leq n-1\). For \(0\leq i\leq n-1\), let \(\ell^{\prime}_{i}\) be the standard line with \(h(\ell_{i})=\ell^{\prime}_{i}\) and let \(\omega^{\prime}_{i}\) be a geodesic segment in \(\ell^{\prime}_{i}\) from \(h(x_{i})\) to \(h(x_{i+1})\). Let \(\omega^{\prime}\) be the concatenation of all the \(\omega^{\prime}_{i}\) with \(0\leq i\leq n-1\). By considering the automorphism \(h_{*}:\Gamma^{e}\to\Gamma^{e}\) induced by the flat-preserving bijection \(h\), we see that \(\Delta(\ell^{\prime}_{i})\notin\operatorname{st}(\mathsf{v})\) for any \(0\leq i\leq n-1\). Thus none of the hyperplanes that have nonempty intersection with \(\omega^{\prime}\) will intersect \(C_{\mathsf{v}}\). As any hyperplane separating \(h(x)\) and \(h(y)\) is dual to an edge in \(\omega^{\prime}\), this hyperplane does not cross \(C_{\mathsf{v}}\). Thus \(h(y)=\pi_{P_{\mathsf{v}}}(h(x))\) by the previous paragraph. ### Conclusion As usual, given an action \(\alpha\) of a group \(H\) by flat-preserving bijections on \(G=G_{\Gamma}\), for every \(\mathsf{v}\in V\Gamma^{e}\), we let \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) be the factor action. **Proposition 11.7**.: _Let \(G\) be a non-cyclic right-angled Artin group with \(|\operatorname{Out}(G)|<+\infty\).
Let \(\rho\) be a quasi-action of \(H\) on \(G\), and let \(\alpha:H\curvearrowright G\) be the unique \(H\)-action on \(G\) by flat-preserving bijections which is equivalent to \(\rho\) (see Corollary 10.3)._ _Then there exists \(C\geq 0\) (depending on \(\rho\)) such that for every \(\mathsf{v}\in V\Gamma^{e}\) and every subgroup \(H^{\prime}_{\mathsf{v}}\subseteq H_{\mathsf{v}}\), if some \(H^{\prime}_{\mathsf{v}}\)-orbit of the factor action \(\alpha_{\mathsf{v}}\) is finite, then every \(H^{\prime}_{\mathsf{v}}\)-orbit for this action has cardinality at most \(C\)._ Proof.: Note that there are constants \(L,A>0\), depending on \(\rho\), such that for any \(\mathsf{v}\in V\Gamma^{e}\), the factor action \(\alpha_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\) is by bijections which are \((L,A)\)-quasi-isometries. Then by [11, Proposition 6.3], there exist a constant \(D\) depending only on \(L\) and \(A\), an isometric action \(\alpha^{\prime}_{\mathsf{v}}:H_{\mathsf{v}}\curvearrowright\mathbb{Z}\) and a surjective equivariant map \(f:Z_{\mathsf{v}}\to\mathbb{Z}\) such that each point inverse of \(f\) has cardinality at most \(D\). If \((\alpha_{\mathsf{v}})_{|H^{\prime}_{\mathsf{v}}}\) has a finite orbit, then \((\alpha^{\prime}_{\mathsf{v}})_{|H^{\prime}_{\mathsf{v}}}\) has a finite orbit, hence a fixed point. Hence any orbit of \((\alpha^{\prime}_{\mathsf{v}})_{|H^{\prime}_{\mathsf{v}}}\) has cardinality at most \(2\), and therefore any orbit of \((\alpha_{\mathsf{v}})_{|H^{\prime}_{\mathsf{v}}}\) has cardinality at most \(2D\). Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be two finite simplicial graphs such that \(\operatorname{Out}(G_{\Gamma_{1}})\) and \(\operatorname{Out}(G_{\Gamma_{2}})\) are finite. Given this choice, let \(\mathcal{C}\subseteq\mathbb{Z}_{\geq 0}\) and the groups \(U_{n},V_{n}\) be as in Corollary 11.4. Recall that \(U_{n}\) acts properly and cocompactly on \(X_{2n,1}\times X_{2n,2}\), and so does \(G_{\Gamma_{2n,1}}\times G_{\Gamma_{2n,2}}\). In particular \(U_{n}\) and \(G_{\Lambda}=G_{\Gamma_{1}}\times G_{\Gamma_{2}}\) are strongly commable in the sense recalled before the statement of Theorem 4 in the introduction. Therefore, Theorem 4 is a consequence of the following statement - recall that \(U_{n}\) is torsion-free, so we can focus on lattice embeddings instead of lattice representations with finite kernel. **Theorem 11.8**.: _There does not exist a locally compact second countable topological group \(\mathfrak{G}\) such that for each \(n\in\mathcal{C}\), there exists a finite index subgroup \(U^{\prime}_{n}\) of \(U_{n}\) which admits an embedding \(U^{\prime}_{n}\to\mathfrak{G}\) whose image is a lattice._ Proof.: We argue by contradiction and assume such \(\mathfrak{G}\) exists. Recall that \(|\operatorname{Out}(G_{\Lambda})|<+\infty\). Since \(G_{\Lambda_{2n}}\) has finite index in \(G_{\Lambda}\), and since \(U_{n}\) and its finite-index subgroup \(U^{\prime}_{n}\) are uniform lattices in \(\operatorname{Aut}(X_{2n,1}\times X_{2n,2})\), we can apply Theorem 10.1 and deduce that the lattice embedding \(U^{\prime}_{n}\to\mathfrak{G}\) is cocompact. Fix \(n_{0}\in\mathcal{C}\). Since \(U^{\prime}_{n_{0}}\) is quasi-isometric to \(G_{\Lambda}\), we can apply Proposition 10.4 and get a proper, cobounded action \(\alpha_{1}\) of \(\mathfrak{G}\) on \(G_{\Lambda}\) by flat-preserving bijective quasi-isometries with uniform constants. 
This gives, for every \(n\in\mathcal{C}\), an action by flat-preserving bijections \(\alpha_{1,n}:U^{\prime}_{n}\curvearrowright G_{\Lambda}\) through the lattice embedding \(U^{\prime}_{n}\to\mathfrak{G}\). We apply Proposition 11.7 to the action \(\alpha_{1}:\mathfrak{G}\curvearrowright G_{\Lambda}\), and let \(C\) be the resulting constant. On the other hand, for every \(n\in\mathcal{C}\), the group \(U^{\prime}_{n}\) has a properly discontinuous cocompact action on \(X_{2n,1}\times X_{2n,2}\). Let \(q_{2n}:X_{2n,1}\times X_{2n,2}\to G_{\Lambda}\) be as in the previous section, after identifying the \(0\)-skeleton of \(X_{2n,1}\times X_{2n,2}\) with \(G_{\Lambda_{2n}}\). Then \(q_{2n}\) gives a quasi-action of \(U^{\prime}_{n}\) on \(G_{\Lambda}\), which is equivalent to a unique action by flat-preserving bijections \(\alpha_{2,n}:U^{\prime}_{n}\curvearrowright G_{\Lambda}\) (see Corollary 10.3). Note that both \(\alpha_{1,n}\) and \(\alpha_{2,n}\) are proper and cobounded actions of \(U^{\prime}_{n}\) on \(G_{\Lambda}\) by quasi-isometries with uniform constants. Thus \(\alpha_{1,n}\) and \(\alpha_{2,n}\) are quasi-conjugate (as any two proper and cobounded quasi-actions of the same finitely generated group on the same space are quasi-conjugate). By Theorem 10.2, \(\alpha_{1,n}\) and \(\alpha_{2,n}\) are actually conjugate via a flat-preserving bijection of \(G_{\Lambda}\). Through \(\alpha_{2,n}\), the group \(U^{\prime}_{n}\) acts on the extension graph \(\Lambda^{e}\). Since the action of \(U^{\prime}_{n}\) on \(X_{2n,1}\times X_{2n,2}\) sends standard flats to standard flats (Corollary 11.4), we also have an action of \(U^{\prime}_{n}\) on \(\Lambda^{e}_{2n}\). The map \(q_{2n}:X_{2n,1}\times X_{2n,2}\to G_{\Lambda}\) induces an isomorphism \((q_{2n})_{*}:\Lambda^{e}_{2n}\to\Lambda^{e}\), which is \(U^{\prime}_{n}\)-equivariant with respect to the above actions. Let \(\{\alpha_{2,n,\mathsf{v}}:U^{\prime}_{n,\mathsf{v}}\curvearrowright Z_{ \mathsf{v}}\}_{\mathsf{v}\in V\Lambda^{e}}\) be the collection of factor actions for \(\alpha_{2,n}:U^{\prime}_{n}\curvearrowright G_{\Lambda}\). We observe that for every subgroup \(U^{\prime\prime}_{n,\mathsf{v}}\subseteq U^{\prime}_{n,\mathsf{v}}\), every finite orbit of \((\alpha_{2,n,\mathsf{v}})_{|U^{\prime\prime}_{n,\mathsf{v}}}\) has cardinality at most \(C\): indeed, this follows from the same property for the action \(\alpha_{1,n,\mathsf{v}}\), which comes from our choice of constant \(C\), and from the fact that the actions \(\alpha_{1,n}\) and \(\alpha_{2,n}\) are conjugate via a flat-preserving bijection. In the rest of the proof, we will show that as \(n\) becomes larger and larger, we can find some \(\mathsf{v}\in V\Lambda^{e}\) and some subgroup \(U^{\prime\prime}_{n,\mathsf{v}}\) of \(U^{\prime}_{n,\mathsf{v}}\) acting on \(Z_{\mathsf{v}}\) with finite orbits of larger and larger size, which will be a contradiction. Consider the simple subgroup \(V_{n}\) of \(U_{n}\) as above. Then \(V_{n}\subseteq U^{\prime}_{n}\). We view \(T_{n}\times T_{n}\) as a \(V_{n}\)-invariant subcomplex of \(X_{2n,1}\times X_{2n,2}\) via the embedding \(\theta_{n}\) provided by Corollary 11.4. A _vertical_ (resp. _horizontal_) \(T_{n}\)_-copy_ in \(X_{2n,1}\times X_{2n,2}\) is the \(\theta_{n}\)-image of \(\{z\}\times T_{n}\) (resp. \(T_{n}\times\{z\}\)) for some vertex \(z\) in the first (resp. second) tree factor. 
Let \(\ell\subset X_{2n,1}\times X_{2n,2}\) be a standard line of type \(v_{2n,1}\) such that \(\ell\) intersects \(T_{n}\times T_{n}\) in a vertex \(x\in X_{2n,1}\times X_{2n,2}\). Let \(p_{1}:T_{n}\times T_{n}\to T_{n}\) be the projection to the first factor. Let \(V_{n,x}\) be the \(V_{n}\)-stabilizer of \(p_{1}(x)\) with respect to the action of \(V_{n}\) on the first factor of \(T_{n}\times T_{n}\). We claim that each element in \(V_{n,x}\) sends \(\ell\) to a standard line which is parallel to \(\ell\). Indeed, by Corollary 11.4(5), for any \(g\in V_{n,x}\), edges in \(g(\ell)\) are also labeled by \(v_{2n,1}\). On the other hand, \(x\) and \(g(x)\) are connected by an edge path in a vertical \(T_{n}\)-copy whose edge labels commute with \(v_{2n,1}\), thus \(\ell\) and \(g(\ell)\) are parallel. Let \(P_{\ell}\) be the parallel set of \(\ell\) in \(G_{\Lambda_{2n}}\). We write \(P_{\ell}=g_{0}G_{\operatorname{st}(v_{2n,1})}\) and assume without loss of generality that \(x\in g_{0}G_{\operatorname{lk}(v_{2n,1})}\). By the previous paragraph, \(V_{n,x}\) stabilizes both \(P_{\ell}\) and \(g_{0}G_{\operatorname{lk}(v_{2n,1})}\). Let \(T^{h}_{n,x}\) be the horizontal \(T_{n}\)-copy that contains \(x\). Let \(\{\ell_{1},\ldots,\ell_{k}\}\) be the set of standard lines in \(X_{2n,1}\times X_{2n,2}\) passing through \(x\) whose intersection with \(T^{h}_{n,x}\) contains at least one edge. By Corollary 11.4(4), we have \(k=n\), and the types of the lines \(\ell_{1},\ldots,\ell_{n}\) are contained in \(\{w_{2n,1}[1],\ldots,w_{2n,1}[2n]\}\). For every \(i\in\{1,\ldots,n\}\), let \(e_{i}\subseteq\ell_{i}\) be an edge based at \(x\) and contained in \(\ell_{i}\cap T^{h}_{n,x}\). Since the action of \(U_{n}\) on \(X_{2n,1}\times X_{2n,2}\) sends standard flats to standard flats (Corollary 11.4), for every \(g\in V_{n,x}\) and every \(i\in\{1,\ldots,n\}\), the image \(g\ell_{i}\) is a standard line, and \(g(e_{i})\) is an edge based at \(gx\) contained in \(g\ell_{i}\cap T^{h}_{n,gx}\). It thus follows from Lemma 11.3 and Corollary 11.4(4) that, up to permuting the lines \(\ell_{i}\), the set of all types of \(g\ell_{1}\), as \(g\) varies in \(V_{n,x}\), is a subset of \(\{w_{2n,1}[1],\ldots,w_{2n,1}[2n]\}\) of cardinality at least \(n/4\). Let \(\ell^{\prime}_{1}\) (resp. \(\ell^{\prime}\)) be a standard line in \(G_{\Lambda}\) at finite Hausdorff distance from \(q_{2n}(\ell_{1})\) (resp. \(q_{2n}(\ell)\)). Let \(\mathsf{v}=\Delta(\ell^{\prime})\). We consider the action \(\alpha_{2,n,\mathsf{v}}:U^{\prime}_{n,\mathsf{v}}\curvearrowright Z_{\mathsf{v}}\). The fact that every element of \(V_{n,x}\) sends \(\ell\) to a parallel line ensures that \(V_{n,x}\subset U^{\prime}_{n,\mathsf{v}}\). Notice that for every \(g\in V_{n,x}\), we have \((q_{2n})_{*}(\Delta(g\ell_{1}))=\Delta(g\ell^{\prime}_{1})\). Thus, using Lemma 11.6, we deduce that the projections \(\pi_{\mathsf{v}}((q_{2n})_{*}(\Delta(g\ell_{1})))\), as \(g\) varies in \(V_{n,x}\), form exactly one orbit of the action \(V_{n,x}\curvearrowright Z_{\mathsf{v}}\) under \(\alpha_{2,n,\mathsf{v}}\). As the types of the lines \(g\ell_{1}\), with \(g\) varying in \(V_{n,x}\), form a subset of \(\{w_{2n,1}[1],\ldots,w_{2n,1}[2n]\}\) of cardinality at least \(n/4\), Lemma 11.5(1) implies that the cardinality of the set \(\{\pi_{\mathsf{v}}((q_{2n})_{*}(\Delta(g\ell_{1})))\}_{g\in V_{n,x}}\) is at least \(n/4\).
In addition, since \(\Delta(\ell_{1})\) and \(\Delta(\ell)\) are not adjacent in \(\Lambda^{e}_{2n}\), and \((q_{2n})_{*}\) is an isomorphism, we have \(\Delta(\ell_{1}^{\prime})\notin\operatorname{st}(\Delta(\ell^{\prime}))\). We can thus apply Lemma 11.5(2) and deduce that the above set (which as we explained is exactly one orbit of \(V_{n,x}\) for the action \(\alpha_{2,n,\mathsf{v}}\)) is finite. We have thus found a subgroup of \(U^{\prime}_{n,\mathsf{v}}\) whose action on \(Z_{\mathsf{v}}\) via \(\alpha_{2,n,\mathsf{v}}\) has a finite orbit of size at least \(n/4\), as desired.
2310.00420
An Efficient Algorithm for Clustered Multi-Task Compressive Sensing
This paper considers clustered multi-task compressive sensing, a hierarchical model that solves multiple compressive sensing tasks by finding clusters of tasks that leverage shared information to mutually improve signal reconstruction. The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions. The main bottleneck involves repeated matrix inversion and log-determinant computation for multiple large covariance matrices. We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices. Our approach combines Monte Carlo sampling with iterative linear solvers. Our experiments reveal that compared to the existing baseline, our algorithm can be up to thousands of times faster and an order of magnitude more memory-efficient.
Alexander Lin, Demba Ba
2023-09-30T15:57:14Z
http://arxiv.org/abs/2310.00420v1
# An Efficient Algorithm for Clustered Multi-Task Compressive Sensing ###### Abstract This paper considers clustered multi-task compressive sensing, a hierarchical model that solves multiple compressive sensing tasks by finding clusters of tasks that leverage shared information to mutually improve signal reconstruction. The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions. The main bottleneck involves repeated matrix inversion and log-determinant computation for multiple large covariance matrices. We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices. Our approach combines Monte Carlo sampling with iterative linear solvers. Our experiments reveal that compared to the existing baseline, our algorithm can be up to thousands of times faster and an order of magnitude more memory-efficient. Alexander Lin†, Demba Ba School of Engineering & Applied Sciences, Harvard University, Boston, MA, USA compressive sensing, multi-task learning Footnote †: This work was supported by an NDSEG fellowship and NSF Cooperative Agreement PHY-2019786. The code is at [https://github.com/al5250/multiles](https://github.com/al5250/multiles). ## 1 Introduction _Compressive sensing_ (CS) is a fundamental problem with applications in many areas of signal processing, such as medical imaging [1], astronomy [2], microscopy [3], and photography [4]. Given measurements \(\mathbf{y}\in\mathbb{R}^{N}\) and a sensing matrix \(\mathbf{\Phi}\in\mathbb{R}^{N\times D}\) with \(N<D\), the goal is to find a sparse signal \(\mathbf{z}\in\mathbb{R}^{D}\) that satisfies \(\mathbf{y}=\mathbf{\Phi}\mathbf{z}+\text{noise}\). Certain applications (e.g. multi-contrast MRI [5]) encounter multiple CS problems \((\mathbf{y}^{(1)},\mathbf{\Phi}^{(1)}),\dots,(\mathbf{y}^{(T)},\mathbf{\Phi}^{(T)})\), which require corresponding solutions \(\mathbf{z}^{(1)},\dots,\mathbf{z}^{(T)}\in\mathbb{R}^{D}\). _Multi-task compressive sensing_ [6] is a popular approach for settings in which \(\{\mathbf{z}^{(t)}\}_{t=1}^{T}\) are known to have the same non-zero support. This model leverages shared information to jointly solve the \(T\) tasks, outperforming methods that solve each task separately. In many situations, it is unreasonable to assume that all tasks have the same sparsity structure. Using multi-task compressive sensing for unrelated tasks with differing supports among \(\{\mathbf{z}^{(t)}\}_{t=1}^{T}\) can lead to worse performance than solving each task separately [7]. Thus, given multiple CS problems, we would ideally like to learn the structure of task relationships and only share information between related tasks; this allows for improvement in overall performance for all tasks. We consider _clustered multi-task compressive sensing_, a hierarchical model that solves multiple CS problems in which the inter-relatedness of the solutions is a priori unknown. The model automatically determines how to divide the \(T\) tasks into \(C\) clusters, only sharing information between related tasks within a cluster. Although variations of this model have been studied before [7, 8], the standard inference algorithm is too computationally demanding, making it impractical for high dimensions \(D\) and large numbers of tasks \(T\).
The main bottleneck is repeated matrix inversion and log-determinant calculation for many \(D\times D\) covariance matrices, requiring \(O(TCD^{2})\)-space and \(O(TCD^{3})\)-time per iteration. We propose a new algorithm that is more efficient, reducing the space complexity to \(O(TCD)\) and the time complexity to \(O(TC\tau_{D})\), where \(\tau_{D}\leq O(D^{2})\) is the time needed to multiply a vector by the sensing matrix \(\mathbf{\Phi}\). The key idea is to use techniques from Monte Carlo sampling and numerical linear algebra to circumvent the need to form the large covariance matrices. Our algorithm extends and generalizes the state-of-the-art method that we previously developed for Bayesian compressive sensing [9] to a more challenging setting involving mixture models [10]. In experiments, we show that our algorithm can be up to thousands of times faster than EM, reducing hours of computation to a few seconds. ## 2 Model and Background The _clustered multi-task compressive sensing_ model for \(T\) tasks and \(C\) clusters has the following generative structure: \[a^{(t)} \sim\mathrm{Categorical}(\pi^{(1)},\dots,\pi^{(C)}), \tag{1}\] \[\mathbf{z}^{(t)}\mid a^{(t)}=c,\mathbf{\alpha}^{(c)} \sim\mathcal{N}(\mathbf{0},\mathrm{diag}(\mathbf{\alpha}^{(c)})^{-1}),\] \[\mathbf{y}^{(t)}\mid\mathbf{z}^{(t)} \sim\mathcal{N}(\mathbf{\Phi}^{(t)}\mathbf{z}^{(t)},\tfrac{1}{\beta}), \quad t=1,\dots,T.\] This model posits that each CS task \(t\) has an unknown cluster assignment \(a^{(t)}\) that is one of \(C\) options. The \(C\) clusters have prior probabilities \(\pi^{(1)},\dots,\pi^{(C)}\in[0,1]\), which sum to one. For simplicity, we assume that \(\{\pi^{(c)}\}_{c=1}^{C}\) are known, but they can also be learned by the model [10]. All tasks within a particular cluster \(c\) are interrelated and share a common non-negative _regularization parameter_ \(\mathbf{\alpha}^{(c)}\in\mathbb{R}^{D}\), which the model will learn through an inference algorithm. For a particular dimension \(d\), if \(\alpha_{d}^{(c)}\) is large, then latent vectors within cluster \(c\) have small variance \(1/\alpha_{d}^{(c)}\) around zero for that dimension; in the limiting case where \(\alpha_{d}^{(c)}\rightarrow\infty\), we have exact sparsity, i.e. \(z_{d}^{(t)}\to 0\) for all \(t\) such that \(a^{(t)}=c\). Finally, given each \(\mathbf{z}^{(t)}\), the model lets the measurements be \(\mathbf{y}^{(t)}=\mathbf{\Phi}^{(t)}\mathbf{z}^{(t)}+\mathbf{\varepsilon}^{(t)}\) for Gaussian noise \(\mathbf{\varepsilon}^{(t)}\sim\mathcal{N}(0,1/\beta)\). Eq. (1) is a combination of Bayesian compressive sensing [11] and mixture modeling [10]. For \(C=1\), it reduces to the well-known multi-task CS model of [6], in which every task has the same parameter. However, when \(C>1\), it allows for the more general case in which not all \(T\) tasks have the same sparsity structure. This model can be fit to data \(\{\mathbf{y}^{(t)}\}_{t=1}^{T}\) by maximizing the log-likelihood \(L(\mathbf{\theta}):=1/T\sum_{t=1}^{T}\log p(\mathbf{y}^{(t)}\mid\mathbf{\theta})\) of the parameters \(\mathbf{\theta}:=\{\mathbf{\alpha}^{(c)}\}_{c=1}^{C}\). The objective \(L\) is known to encourage some components of each \(\mathbf{\alpha}^{(c)}\) to diverge to \(\infty\), sparsifying \(\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(T)}\mid\mathbf{\theta}\) [12]. The standard algorithm for optimizing \(L\) is _expectation-maximization_ (EM) [13, 8], which cycles between two steps.
Given estimates \(\widehat{\mathbf{\theta}}:=\{\widehat{\mathbf{\alpha}}^{(c)}\}_{c=1}^{C}\) from the previous iteration, the E-step creates a surrogate \(Q(\mathbf{\theta}|\widehat{\mathbf{\theta}})\) that lower-bounds \(L(\mathbf{\theta})\). The M-step maximizes this surrogate to find new parameters \(\widetilde{\mathbf{\theta}}:=\arg\max_{\mathbf{\theta}}Q(\mathbf{\theta}|\widehat{\mathbf{\theta}})\). EM theory guarantees that \(L(\widetilde{\mathbf{\theta}})\geq L(\widehat{\mathbf{\theta}})\) [13], which means that we can optimize \(L\) by repeating these two steps. For the model in Eq. (1), the \(Q\)-function is \[Q(\mathbf{\theta}|\widehat{\mathbf{\theta}}) :=\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{a^{(t)},\mathbf{z}^{(t)}|\mathbf{y}^{(t)},\widehat{\mathbf{\theta}}}[\log p(a^{(t)},\mathbf{z}^{(t)},\mathbf{y}^{(t)}|\mathbf{\theta})] \tag{2}\] \[\cong\frac{1}{TC}\sum_{\begin{subarray}{c}t=1,\\ c=1\end{subarray}}^{T,C}q^{(t,c)}\!\left[\sum_{d=1}^{D}\frac{\log\alpha_{d}^{(c)}}{2}-\frac{\alpha_{d}^{(c)}}{2}\!\left[(\mu_{d}^{(t,c)})^{2}+\Sigma_{d,d}^{(t,c)}\right]\right]\] where \(q^{(t,c)}:=p(a^{(t)}=c\mid\mathbf{y}^{(t)},\widehat{\mathbf{\theta}})\in[0,1]\) is the posterior probability of assigning task \(t\) to cluster \(c\); \(\mathbf{\mu}^{(t,c)}\in\mathbb{R}^{D}\) and \(\mathbf{\Sigma}^{(t,c)}\in\mathbb{R}^{D\times D}\) are the mean and covariance of the Gaussian conditional posterior density \(p(\mathbf{z}^{(t)}\mid a^{(t)}=c,\mathbf{y}^{(t)},\widehat{\mathbf{\theta}})\); and \(\cong\) denotes equality up to additive constants with respect to \(\mathbf{\theta}\). The analytic forms of \(\mathbf{\Sigma}^{(t,c)},\mathbf{\mu}^{(t,c)},q^{(t,c)}\) are \[\mathbf{\Sigma}^{(t,c)} :=\left(\beta(\mathbf{\Phi}^{(t)})^{\top}\mathbf{\Phi}^{(t)}+\mathrm{diag}(\widehat{\mathbf{\alpha}}^{(c)})\right)^{-1},\quad\forall t,c \tag{3}\] \[\mathbf{\mu}^{(t,c)} :=\beta\mathbf{\Sigma}^{(t,c)}(\mathbf{\Phi}^{(t)})^{\top}\mathbf{y}^{(t)},\quad\forall t,c\] \[\begin{bmatrix}q^{(t,1)}\\ \vdots\\ q^{(t,C)}\end{bmatrix}:=\mathrm{softmax}\left(\begin{bmatrix}\frac{1}{2}\ell^{(t,1)}+\log\pi^{(1)}\\ \vdots\\ \frac{1}{2}\ell^{(t,C)}+\log\pi^{(C)}\end{bmatrix}\right),\quad\forall t\] \[\ell^{(t,c)} :=\log\det\mathbf{\Sigma}^{(t,c)}+\sum_{d=1}^{D}\log\widehat{\alpha}_{d}^{(c)}+\beta(\mathbf{y}^{(t)})^{\top}\mathbf{\Phi}^{(t)}\mathbf{\mu}^{(t,c)}\] By differentiating Eq. (2) with respect to \(\mathbf{\theta}\), we derive the M-step update for new parameters \(\widetilde{\mathbf{\theta}}:=\{\widetilde{\mathbf{\alpha}}^{(c)}\}_{c=1}^{C}\) as \[\widetilde{\alpha}_{d}^{(c)}:=\frac{\sum_{t=1}^{T}q^{(t,c)}}{\sum_{t=1}^{T}q^{(t,c)}\cdot[(\mu_{d}^{(t,c)})^{2}+\Sigma_{d,d}^{(t,c)}]},\quad\forall c,d. \tag{4}\] EM repeats Eq. (3) and (4) until convergence. Eq. (3) is expensive because it involves matrix inversion and log-determinant calculation for the large covariance matrix \(\mathbf{\Sigma}^{(t,c)}\), which requires \(O(D^{3})\)-time and \(O(D^{2})\)-space. For \(T\) tasks and \(C\) clusters, \(TC\) different covariances must be computed. Thus, EM does not scale well for large \(D\). In the next section, we introduce a new method that makes EM more efficient. ## 3 Algorithm Our proposed algorithm is called _covariance-free expectation-maximization_ (CoFEM). It accelerates EM by avoiding explicit formation of the large covariance matrices \(\mathbf{\Sigma}^{(t,c)}\). This algorithm extends our previously developed method for single-task CS [9, 14] to the more complicated multi-task mixture model of Eq. (1).
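To make the baseline concrete before describing CoFEM's estimators, here is a minimal NumPy sketch (ours, not the authors' released implementation; all variable names are our own) of one EM iteration implementing Eqs. (3)-(4) directly with dense matrices. The explicit inverse and log-determinant are the \(O(D^{3})\) operations that CoFEM removes.

```python
import numpy as np

def em_step(Phi, y, alpha_hat, beta, log_pi):
    """One naive EM iteration for the clustered multi-task CS model.
    Phi: list of T sensing matrices (N, D); y: list of T vectors (N,);
    alpha_hat: (C, D) current parameters; log_pi: (C,) log cluster priors."""
    T, (C, D) = len(Phi), alpha_hat.shape
    mu = np.zeros((T, C, D))
    diag_Sigma = np.zeros((T, C, D))
    logits = np.zeros((T, C))
    for t in range(T):
        for c in range(C):
            A = beta * Phi[t].T @ Phi[t] + np.diag(alpha_hat[c])  # Sigma^{-1}
            Sigma = np.linalg.inv(A)                 # O(D^3) bottleneck #1
            mu[t, c] = beta * Sigma @ (Phi[t].T @ y[t])
            diag_Sigma[t, c] = np.diag(Sigma)
            logdet_Sigma = -np.linalg.slogdet(A)[1]  # O(D^3) bottleneck #2
            ell = (logdet_Sigma + np.sum(np.log(alpha_hat[c]))
                   + beta * y[t] @ (Phi[t] @ mu[t, c]))
            logits[t, c] = 0.5 * ell + log_pi[c]
    # E-step: posterior cluster probabilities via a stable softmax
    q = np.exp(logits - logits.max(1, keepdims=True))
    q /= q.sum(1, keepdims=True)
    # M-step (Eq. (4)): per-cluster, per-dimension update of alpha
    num = q.sum(0)[:, None]
    den = np.einsum('tc,tcd->cd', q, mu**2 + diag_Sigma)
    return num / den, q, mu
```

The double loop over tasks and clusters makes the \(TC\)-fold repetition of these dense \(O(D^{3})\) operations explicit.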
Our key observation is that Eq. (4) only requires three types of quantities that depend on \(\mathbf{\Sigma}^{(t,c)}\): the posterior means \(\mathbf{\mu}^{(t,c)}\), the posterior variances (i.e. diagonal elements of \(\mathbf{\Sigma}^{(t,c)}\)), and the posterior cluster probabilities \(q^{(t,c)}\). We will show how to estimate these quantities while avoiding the \(TC\) matrix inversions and log-determinant calculations of standard EM. Our approach combines advances in Monte Carlo sampling and numerical linear algebra. ### Estimating Posterior Means and Variances We begin by describing how to estimate \(\mathbf{\mu}\) and the diagonal elements of \(\mathbf{\Sigma}\), following [9] (for notational simplicity we omit the superscripts \((t)\) and \((c)\) in this section, though they are assumed). First, let \(\mathbf{p}^{(1)},\ldots,\mathbf{p}^{(K)}\in\{+1,-1\}^{D}\) be \(K\) random probe vectors, each containing \(D\) independently-drawn Rademacher entries (i.e. \(+1\) or \(-1\) with equal probability). Next, following the _diagonal estimation rule_ [15], we define \[\mathbf{x}^{(k)}:=\mathbf{\Sigma}\mathbf{p}^{(k)},\forall k,\qquad\mathbf{s}:=\frac{1}{K}\sum_{k=1}^{K}\mathbf{p}^{(k)}\odot\mathbf{x}^{(k)}, \tag{5}\] where \(\odot\) denotes element-wise product. It follows that \(\mathbf{s}\) is an unbiased Monte Carlo estimator of the diagonal of \(\mathbf{\Sigma}\) (i.e. \(\mathbb{E}[s_{d}]=\Sigma_{d,d}\) for all \(d\)) [15]. Observe that \(\mathbf{\mu}\) (Eq. (3)) and \(\{\mathbf{x}^{(k)}\}_{k=1}^{K}\) (Eq. (5)) are the results of multiplying vectors by \(\mathbf{\Sigma}\); equivalently, they are the solutions to the linear systems \[\mathbf{A}\mathbf{x}^{(k)}=\mathbf{p}^{(k)},\forall k,\qquad\mathbf{A}\mathbf{\mu}=\beta\mathbf{\Phi}^{\top}\mathbf{y}, \tag{6}\] for \(\mathbf{A}:=\mathbf{\Sigma}^{-1}=\beta\mathbf{\Phi}^{\top}\mathbf{\Phi}+\text{diag}(\widehat{\mathbf{\alpha}})\). We use \(U\) steps of the _conjugate gradient (CG) algorithm_ (Alg. 1) [16] to solve these systems without forming the matrix \(\mathbf{\Sigma}\). Thus, CoFEM efficiently estimates \(\mathbf{\mu}\) and the diagonal elements of \(\mathbf{\Sigma}\) while avoiding costly matrix inversions. As shown by [9], typically small values of \(K,U\ll D\) suffice for accurate estimation. ### Estimating Posterior Cluster Probabilities Unlike the single-task setting of [9], clustered multi-task CS also requires posterior cluster probabilities \(q^{(t,c)}\). As shown by Eq. (3), most of the quantities in the definition of \(q^{(t,c)}\) are straightforward to compute; the main remaining bottleneck is the \(\log\det\mathbf{\Sigma}^{(t,c)}\) term. Directly computing a log-determinant is an expensive operation costing \(O(D^{3})\)-time. Instead, we show how to leverage the probe vectors and CG outputs from diagonal estimation (Eq. (6)) to efficiently estimate \(\log\det\mathbf{\Sigma}^{(t,c)}\) in \(O(KU^{3})\)-time, where \(K,U\ll D\). Let \(\mathbf{A}:=\mathbf{\Sigma}^{-1}\) and \(\mathbf{L}:=\log\mathbf{A}\) be its matrix logarithm. Then, we have \(\log\det\mathbf{\Sigma}=-\log\det\mathbf{A}=-\mathrm{tr}(\mathbf{L})\) [17, Th. 2.12]. We propose to estimate \(-\mathrm{tr}(\mathbf{L})\) using the _Hutchinson trace estimator_ \(\nu:=-1/K\sum_{k=1}^{K}(\mathbf{p}^{(k)})^{\top}\mathbf{L}\mathbf{p}^{(k)}\), where \(\{\mathbf{p}^{(k)}\}_{k=1}^{K}\) are Rademacher probe vectors [18]. This estimator is unbiased, meaning that \(\mathbb{E}[\nu]=-\mathrm{tr}(\mathbf{L})=\log\det\mathbf{\Sigma}\).
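As an illustration of these estimators, the sketch below (our code, under the assumption that \(\mathbf{A}\) is available only through a matrix-vector product) runs \(U\) CG steps per probe while recording the step sizes \((\gamma_{u},\xi_{u})\); these feed the tridiagonal matrix and Lanczos quadrature of Eqs. (7)-(9), which are derived next.

```python
import numpy as np

def cg_with_coeffs(matvec, b, U):
    """Run U conjugate-gradient steps on A x = b, with A given only through
    `matvec`. Besides the approximate solution, return the CG step sizes
    (gamma_u, xi_u) that Eq. (7) assembles into a tridiagonal matrix.
    Assumes CG has not converged exactly within U steps."""
    x = np.zeros_like(b)      # x_1 = 0, so the first residual is b itself
    r = b.copy()
    d = r.copy()
    rs_old = r @ r
    gammas, xis = [], []
    for _ in range(U):
        Ad = matvec(d)
        gamma = rs_old / (d @ Ad)
        x = x + gamma * d
        r = r - gamma * Ad
        rs_new = r @ r
        xi = rs_new / rs_old
        d = r + xi * d
        gammas.append(gamma)
        xis.append(xi)
        rs_old = rs_new
    return x, np.array(gammas), np.array(xis)

def lanczos_logdet_term(gammas, xis, D):
    """Estimate p^T (log A) p from U CG steps: build the U x U tridiagonal
    matrix of Eq. (7), eigendecompose it, and apply the quadrature of Eq. (8)."""
    U = len(gammas)
    Gamma = np.zeros((U, U))
    Gamma[0, 0] = 1.0 / gammas[0]
    for u in range(1, U):
        Gamma[u, u] = 1.0 / gammas[u] + xis[u - 1] / gammas[u - 1]
        Gamma[u, u - 1] = Gamma[u - 1, u] = np.sqrt(xis[u - 1]) / gammas[u - 1]
    lam, S = np.linalg.eigh(Gamma)   # columns of S are eigenvectors
    weights = S[0, :] ** 2           # squared first components, as in Eq. (8)
    return D * np.sum(weights * np.log(lam))

def cofem_estimates(matvec, Phi_t_y, beta, D, K=15, U=50, rng=None):
    """Estimate mu, diag(Sigma), and log det Sigma for one (task, cluster)
    pair, given only a matvec with A = beta Phi^T Phi + diag(alpha);
    Phi_t_y holds the precomputed vector Phi^T y."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mu, _, _ = cg_with_coeffs(matvec, beta * Phi_t_y, U)
    s = np.zeros(D)
    logdet = 0.0
    for _ in range(K):
        p = rng.choice([-1.0, 1.0], size=D)            # Rademacher probe
        xk, gammas, xis = cg_with_coeffs(matvec, p, U)
        s += p * xk / K                                # diagonal rule, Eq. (5)
        logdet -= lanczos_logdet_term(gammas, xis, D) / K   # Eq. (9)
    return mu, s, logdet
```

Because each probe's CG solve is independent, the \(K+1\) systems per (task, cluster) pair can be batched, which is what makes the GPU parallelization discussed below straightforward.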
The final question is how to obtain \(\mathbf{p}^{\top}\mathbf{L}\mathbf{p}\), where \(\mathbf{p}\) is a Rademacher probe (we drop the index \((k)\) for simplicity). We adapt the methods of Sec. 3.1 to compute a _Lanczos quadrature_ [19, 20]: Recall that in Eq. (6), we use CG to solve the system \(\mathbf{A}\mathbf{x}=\mathbf{p}\). CG theory shows that \(D\) steps of Alg. 1 implicitly perform a Lanczos tridiagonalization \(\mathbf{A}=\mathbf{R}^{\top}\mathbf{\Gamma}\mathbf{R}\) [21, Sec. 6.7.3], where \(\mathbf{R}\in\mathbb{R}^{D\times D}\) is an orthonormal matrix whose rows are normalized CG residuals \(\{\mathbf{r}_{u}/\|\mathbf{r}_{u}\|\}_{u=1}^{D}\) (line 5) and \(\mathbf{\Gamma}\in\mathbb{R}^{D\times D}\) is a symmetric tridiagonal matrix assembled from CG step sizes \(\{\gamma_{u},\xi_{u}\}_{u=1}^{D}\) (lines 3 & 6), i.e. \[\Gamma_{1,1}:=\tfrac{1}{\gamma_{1}},\ \ \Gamma_{u,u}:=\tfrac{1}{\gamma_{u}}+\tfrac{\xi_{u-1}}{\gamma_{u-1}},\ \ \Gamma_{u,u-1}:=\tfrac{\sqrt{\xi_{u-1}}}{\gamma_{u-1}} \tag{7}\] for \(1<u\leq D\). Let \(\mathbf{\Gamma}:=\mathbf{S}^{\top}\mathrm{diag}(\mathbf{\lambda})\mathbf{S}\) be the eigendecomposition of \(\mathbf{\Gamma}\) for eigenvalues \(\mathbf{\lambda}\in\mathbb{R}^{D}\) and eigenvectors stored as rows of \(\mathbf{S}\in\mathbb{R}^{D\times D}\). Since \(\mathbf{A}=(\mathbf{S}\mathbf{R})^{\top}\mathrm{diag}(\mathbf{\lambda})\mathbf{S}\mathbf{R}\) and \(\mathbf{L}=\log\mathbf{A}\), we have \(\mathbf{L}=(\mathbf{S}\mathbf{R})^{\top}\mathrm{diag}(\log\mathbf{\lambda})\mathbf{S}\mathbf{R}\). Thus, \[\mathbf{p}^{\top}\mathbf{L}\mathbf{p}=(\mathbf{S}\underbrace{\mathbf{R}\mathbf{p}}_{\sqrt{D}\cdot\mathbf{e}_{1}})^{\top}\mathrm{diag}(\log\mathbf{\lambda})\,\mathbf{S}\underbrace{\mathbf{R}\mathbf{p}}_{\sqrt{D}\cdot\mathbf{e}_{1}}=D\sum_{u=1}^{D}S_{u,1}^{2}\log\lambda_{u} \tag{8}\] where \(\mathbf{e}_{1}:=[1\ 0\ \cdots\ 0]^{\top}\) and \(\mathbf{R}\mathbf{p}=\sqrt{D}\cdot\mathbf{e}_{1}\) because using CG to solve \(\mathbf{A}\mathbf{x}=\mathbf{p}\) with initial condition \(\mathbf{x}_{1}=\mathbf{0}\) means that the first scaled residual (i.e. first row of \(\mathbf{R}\)) is \(\mathbf{p}/\|\mathbf{p}\|\) and all subsequent residuals/rows of \(\mathbf{R}\) are orthogonal to \(\mathbf{p}\) [20, 21]. Though we can use Eq. (8) to compute \(\mathbf{p}^{\top}\mathbf{L}\mathbf{p}\), it requires eigendecomposition of the full \(D\times D\) matrix \(\mathbf{\Gamma}\), which is an expensive \(O(D^{3})\)-time operation. As shown by [9], typically \(U\ll D\) steps of CG suffice for accurate diagonal estimation. To save computation, we would like to only use these \(U\) steps to estimate \(\mathbf{p}^{\top}\mathbf{L}\mathbf{p}\), yet this means that only \(\bar{\mathbf{\Gamma}}\in\mathbb{R}^{U\times U}\), the upper-left submatrix of \(\mathbf{\Gamma}\), can be generated by Eq. (7). Let \(\mathbf{\bar{S}}^{\top}\mathrm{diag}(\mathbf{\bar{\lambda}})\mathbf{\bar{S}}\) be the eigendecomposition of \(\bar{\mathbf{\Gamma}}\), for \(\mathbf{\bar{S}}\in\mathbb{R}^{U\times U}\) and \(\mathbf{\bar{\lambda}}\in\mathbb{R}^{U}\). The extremal values of \(\mathbf{\lambda}\), which dominate Eq. (8), are well-approximated by those of \(\mathbf{\bar{\lambda}}\) [22]. This justifies the following estimator for \(\log\det\mathbf{\Sigma}\), which we compute using the byproducts of \(U\)-step CG for the \(K\) systems in Eq.
(6), \[\bar{\nu}(\mathbf{\Sigma}):=-\frac{1}{K}\sum_{k=1}^{K}\sum_{u=1}^{U}D\cdot(\bar{S}_{u,1}^{(k)})^{2}\cdot\log\bar{\lambda}_{u}^{(k)} \tag{9}\] This estimator is exact for \(K=\infty\) probes and \(U=D\) CG steps, but has some error if \(K,U\) are small. For a desired error level, [19] has the following result on how to choose \(K,U\). **Theorem** [19]. _Let \(\kappa\) be the condition number of \(\mathbf{\Sigma}\). For \(\epsilon,\eta\in[0,1]\), if \(K\geq\frac{24}{\epsilon^{2}}\log(\frac{2}{\eta})\) and \(U\geq\frac{\sqrt{\kappa}}{4}\log(\frac{O(\kappa)}{\epsilon\sqrt{\kappa}})\), then \(\Pr(|\bar{\nu}(\mathbf{\Sigma})-\log\det\mathbf{\Sigma}|\leq\epsilon\cdot|\log\det\mathbf{\Sigma}|)>1-\eta\)._ ### Full Algorithm and Complexity Analysis Alg. 2 summarizes the full CoFEM algorithm. The main cost is solving \(K+1\) systems for each task \(t\) and cluster \(c\). We can parallelize CG to solve all \(TC(K+1)\) systems simultaneously on multi-core hardware, such as GPUs [9, 15]. In addition, when solving a system with \(\mathbf{A}:=\beta\mathbf{\Phi}^{\top}\mathbf{\Phi}+\mathrm{diag}(\mathbf{\alpha})\), the matrix \(\mathbf{A}\) never needs to be explicitly computed; CG (Alg. 1) only requires a method to compute the matrix-vector product \(\mathbf{A}\mathbf{v}\) for any \(\mathbf{v}\in\mathbb{R}^{D}\). Thus, each CG step costs \(\tau_{D}\), the time complexity of applying the sensing matrix \(\mathbf{\Phi}\) (and its transpose) to a vector \(\mathbf{v}\). Though \(\tau_{D}\) is upper-bounded by \(O(D^{2})\), it can be \(O(D\log D)\) or \(O(D)\) in many signal processing applications in which \(\mathbf{\Phi}\) has special structure (e.g. Fourier operators, wavelets, convolutions, low-rank structure, sparsity). In summary, CoFEM reduces EM's time complexity from \(O(TCD^{3})\) to \(O(TCK(\tau_{D}U+U^{3}))\), where for each task, cluster, and probe, \(\tau_{D}U\) is the cost of CG and \(U^{3}\) is the cost of eigendecomposition for the log-determinant. Additionally, CoFEM reduces EM's space complexity from \(O(TCD^{2})\) to \(O(TCK(D+U))\). As shown in both [9] and Sec. 4, \(K\) and \(U\) can be held as small, constant values even as \(D\) becomes very large, allowing CoFEM to scale much better than EM. ## 4 Experiments We perform multi-task CS experiments to validate our new algorithm. For each task \(t\in\{1,\ldots,T\}\), we simulate a true sparse signal \(\widetilde{\boldsymbol{z}}^{(t)}\in\mathbb{R}^{D}\) with \(5\%\) of its components sampled from \(\mathcal{N}(0,1)\) and the rest equal to zero. Each sensing matrix is \(\boldsymbol{\Phi}^{(t)}:=\boldsymbol{\Omega}^{(t)}\mathbf{F}\), where \(\mathbf{F}\in\mathbb{C}^{D\times D}\) is the discrete Fourier transform and \(\boldsymbol{\Omega}^{(t)}\in\{0,1\}^{N\times D}\) is an undersampling mask (i.e. randomly-chosen row-wise subset of the identity matrix) with \(N=D/4\). This form of sensing matrix is common to many CS applications [5]. We generate the measurements \(\boldsymbol{y}^{(t)}=\boldsymbol{\Phi}^{(t)}\widetilde{\boldsymbol{z}}^{(t)}+\boldsymbol{\varepsilon}^{(t)}\), where \(\boldsymbol{\varepsilon}^{(t)}\sim\mathcal{N}(0,\sigma^{2})\) for \(\sigma=0.05\). Our first experiment demonstrates the utility of multi-task CS when there exist subsets of interrelated tasks. We simulate two groups of four tasks (i.e. \(T=8\) total tasks) in which all \(\{\widetilde{\boldsymbol{z}}^{(t)}\}\) within a group have the same non-zero support set (but still differ in their actual values).
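A sketch of this simulated setup (ours; the exact noise and masking conventions of the paper's code may differ, and the support-overlap fraction \(f\) introduced next is omitted for brevity), with the Fourier sensing operator applied implicitly as a matvec:

```python
import numpy as np

rng = np.random.default_rng(0)
D, sigma = 1024, 0.05
N = D // 4

def make_task(support):
    """One simulated CS task from Sec. 4: sparse z, Phi = Omega F applied
    implicitly as an undersampled orthonormal FFT, y = Phi z + noise."""
    z = np.zeros(D)
    z[support] = rng.standard_normal(len(support))      # ~5% nonzero entries
    rows = rng.choice(D, size=N, replace=False)         # undersampling mask Omega
    phi = lambda v: np.fft.fft(v, norm="ortho")[rows]   # matvec for Phi
    y = phi(z) + sigma * rng.standard_normal(N)         # assumed noise convention
    return z, y, phi

# two groups of four tasks; tasks within a group share a non-zero support set
supports = [rng.choice(D, size=D // 20, replace=False) for _ in range(2)]
tasks = [make_task(supports[g]) for g in (0, 0, 0, 0, 1, 1, 1, 1)]
```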
For \(f\in[0,1]\), we let the two groups share \(1-f\) of their support and differ in the remaining \(f\); thus, if \(f=0\), all tasks have the exact same support and if \(f=1\), there are two groups of tasks with disjoint supports. We compare how well three methods reconstruct \(\{\widetilde{\boldsymbol{z}}^{(t)}\}_{t=1}^{T}\) from the measurements \(\{\boldsymbol{y}^{(t)}\}_{t=1}^{T}\): (1) single-task Bayesian CS (which uses a different \(\boldsymbol{\alpha}\) per task) [11], (2) multi-task CS (which shares a single \(\boldsymbol{\alpha}\) across all tasks) [6], and (3) clustered multi-task CS (which learns \(C=2\) clusters of tasks with cluster-level \(\boldsymbol{\alpha}\)'s and \(\pi_{c}=1/C\)). We measure success through normalized error \(\|\widetilde{\boldsymbol{z}}-\boldsymbol{\mu}\|/\|\widetilde{\boldsymbol{z}}\|\), where \(\widetilde{\boldsymbol{z}},\boldsymbol{\mu}\in\mathbb{R}^{TD}\) are the concatenations of the true signals \(\{\widetilde{\boldsymbol{z}}^{(t)}\}_{t=1}^{T}\) and reconstructions \(\{\boldsymbol{\mu}^{(t)}\}_{t=1}^{T}\). Fig. 1(a) shows that multi-task CS outperforms single-task CS when all tasks are interrelated (i.e. low \(f\)), but performs worse when some tasks are unrelated (i.e. high \(f\)). In contrast, clustered multi-task CS obtains low error for all \(f\), as it automatically learns subsets of related tasks and only shares information among each subset. In the remaining experiments, we show the scalability that CoFEM provides over EM for clustered multi-task CS. In all cases, we use 50 algorithm iterations, \(K=15\) probe vectors and \(U=50\) CG steps. Fig. 1(b) and 1(c) show how computation time and memory consumption scale in relation to the problem dimension \(D\). We observe that CoFEM is up to hundreds of times faster and 14 times more memory-efficient than EM. Furthermore, EM cannot be executed for \(D>5{,}000\) due to its high memory usage, yet CoFEM has low memory consumption even for \(D=100{,}000\). Since CoFEM is highly parallelizable, we can also deploy it on parallel hardware (i.e. GPUs). This brings further acceleration, making CoFEM up to thousands of times faster than EM and reducing hours of computation to a few seconds. We use a 16-GB, 2.5 GHz Intel Xeon CPU and an Nvidia T4 GPU. Fig. 1(d) shows that even though CoFEM makes approximations to EM, these approximations are accurate enough to allow for the same rate of convergence in practice. Finally, in Fig. 1(e) and 1(f), we compare how computation time scales in relation to the number of tasks \(T\) and the number of clusters \(C\) for EM and CoFEM (fixing \(D=2{,}000\)). For larger numbers of tasks and clusters, EM becomes too memory-intensive due to the presence of more covariance matrices. In contrast, CoFEM experiences no such issues and can process (4 clusters, 40 tasks) in the same time it takes for EM to process (2 clusters, 8 tasks). ## 5 Conclusion This paper proposed a new algorithm that substantially accelerates inference for multi-task CS mixtures. Our ideas can be extended to handle infinite mixtures, which place a prior over the number of clusters [8], as well as mixtures of other models (e.g. factor analysis models, time series models) [23]. Figure 1: (a) Comparison of CS algorithms; (b, c, d, e, f) comparison between EM and CoFEM. OOM stands for “out of memory”.
2309.04725
EPA: Easy Prompt Augmentation on Large Language Models via Multiple Sources and Multiple Targets
Large language models (LLMs) have shown promising performance on various NLP tasks via task prompting. Their performance can be further improved by appending task demonstrations to the head of the prompt, and usually better performance can be achieved with more demonstrations. However, asking the users to write the demonstrations can be cumbersome. As a simple yet cost-effective workaround, this paper proposes a novel method called EPA (\textbf{E}asy \textbf{P}rompt \textbf{A}ugmentation)\footnote{While this paper considers augmenting prompts via demonstrations, we name it EPA as the name EDA is already taken by a well-known NLP method \citep{wei-zou-2019-eda}.} that effectively minimizes user efforts in writing demonstrations while improving the model performance at the same time. EPA achieves these goals by automatically augmenting the demonstrations with multiple sources/targets that paraphrase each other. This is well motivated as augmenting data via paraphrasing effectively improves neural language models. EPA thus employs paraphrasing as an augmentation method for in-context learning. Extensive experiments indicate that EPA effectively improves both NLU and NLG tasks, covering tasks from natural language inference to machine translation across tens of languages.\footnote{Code and data will be released upon publication.}
Hongyuan Lu, Wai Lam
2023-09-09T09:03:50Z
http://arxiv.org/abs/2309.04725v1
# EPA: Easy Prompt Augmentation on Large Language Models ###### Abstract Large language models (LLMs) have shown promising performance on various NLP tasks via task prompting. Their performance can be further improved by appending task demonstrations to the head of the prompt, and usually better performance can be achieved with more demonstrations. However, asking the users to write the demonstrations can be cumbersome. As a simple yet cost-effective workaround, this paper proposes a novel method called EPA (**E**asy **P**rompt **A**ugmentation)1 that effectively minimizes user efforts in writing demonstrations while improving the model performance at the same time. EPA achieves these goals by automatically augmenting the demonstrations with multiple sources/targets that paraphrase each other. This is well motivated as augmenting data via paraphrasing effectively improves neural language models. EPA thus employs paraphrasing as an augmentation method for in-context learning. Extensive experiments indicate that EPA effectively improves both NLU and NLG tasks, covering tasks from natural language inference to machine translation across tens of languages.2 Footnote 1: While this paper considers augmenting prompts via demonstrations, we name it EPA as the name EDA is already taken by a well-known NLP method Wei and Zou (2019). Footnote 2: Code and data will be released upon publication. ## 1 Introduction Large language models (LLMs) possess the ability to carry out various understanding and generation tasks from natural language inference to machine translation Brown et al. (2020); Lin et al. (2022); Zhang et al. (2022); Wei et al. (2022); Wang et al. (2023). Such an ability is closely related to in-context learning Rubin et al. (2022); Zhang et al. (2022). In-context learning prepends one or more source/target pairs (namely demonstrations) to the head of the requests. This effectively improves the downstream task performance. However, most scientific research is confined to the situation where those demonstrations are always available Min et al. (2022); Dong et al. (2022). Yet, the above-mentioned situation is unrealistic, as human annotation is expensive, and there is no guarantee that there always exist enough demonstrations. This is an important consideration, especially for commercial products, where we would like to reduce user efforts in writing demonstrations. This subsequently leads to a better user experience. In contrast, this paper considers a realistic situation where only a few demonstrations are available. We propose a simple yet effective framework called EPA (**E**asy **P**rompt **A**ugmentation) that creates multiple sources and multiple targets that paraphrase each other. EPA is also well-motivated by prior works Gao et al. (2020); Lu and Lam (2023) showing that data augmentation via paraphrasing improves neural language models during training. EPA considers paraphrasing both sources and targets to enhance in-context learning. In summary, this paper makes three key contributions: * We propose a novel, easy-to-use method called EPA, aiming at improving the performance of LLMs via in-context learning. * EPA yields promising performance on various NLP tasks, covering tasks from natural language inference to machine translation. * In-depth analyses indicate that EPA brings good improvement, and naively copying the demonstration degrades the performance.
EPA is a simple yet effective method for improving LLMs. Therefore, we hope that EPA can benefit our community not only in terms of research but also in developing commercial products. ## 2 Easy Prompt Augmentation Motivation EPA is inspired by the fact that paraphrasing can improve the evaluation of natural language generation tasks by enriching the target side of the test instances Thompson and Post (2020); Bawden et al. (2020); Tang et al. (2023). This is due to the fact that one meaning can usually be expressed by several different sentences in natural language, and multiple targets can provide a precise evaluation of the actual meaning of the generations. Meanwhile, the rise of LLMs provides a unified solution to different language-related tasks via the interface of prompts. Prompting can be further enhanced by in-context learning, where source/target pairs can be prepended to the head of the prompts to serve as demonstrations. Yet, the construction of demonstrations usually requires human effort, which can be cumbersome. This is especially inconvenient for commercial products, where we would like to reduce user efforts as much as possible. Our Approach Motivated by the need to automatically construct demonstrations instead of relying on human effort, this paper proposes and investigates the effectiveness of paraphrasing when a small set of demonstrations is already available. Specifically, EPA assumes that a small set of demonstrations is already available. EPA then considers paraphrasing the demonstrations on both **source side** and **target side** to create new demonstrations for in-context learning. Formally, assuming we have one demonstration, the traditional in-context learning feeds the following text concatenation into LLMs: \[[x_{d},y_{d},x],\] where \(x_{d}\) and \(y_{d}\) represent the source and target sentence of the demonstration pair. \(x\) represents the actual prompt that we hope LLMs solve. EPA first paraphrases \(x_{d}\) and \(y_{d}\) into additional demonstrations \(x_{d1}\), \(x_{d2}\), \(x_{d3}\),..., \(x_{dn}\) paired with \(y_{d1}\), \(y_{d2}\), \(y_{d3}\),..., \(y_{dn}\), where the \(x\)s and \(y\)s represent the same meaning. \(n\) represents the number of additional paraphrases. EPA then feeds the following text concatenation into LLMs: \[[x_{d},y_{d},x_{d1},y_{d1},x_{d2},y_{d2},\ldots,x_{dn},y_{dn},x]\] The whole EPA framework aims at improving the performance of LLMs while simultaneously minimizing human efforts in writing demonstrations. Therefore, EPA can reduce user efforts and improve the user experience for commercial products. ## 3 Experimental Setup ### Datasets We conduct extensive experiments to verify the effectiveness of EPA on a variety of both understanding and generation tasks: * **Machine Translation** We randomly select 45 high-resourced and low-resourced languages from FLORES-200 (NLLB-Team, 2022), an MT dataset that covers about 200 languages with 1,012 parallel sentences extracted from English Wikipedia, covering a variety of topics and domains. The sentences were curated manually by professional translators from English into other languages. We report on translating from English into these languages, using the complete dev-test set of about 1,000 instances. * **Dialogue Summarization** SAMSum Gliwa et al. (2019) is a dialogue summarization dataset created manually by linguists who are fluent in English. We use the test set that contains 819 instances for evaluations.
* **Paraphrasing** Quora Question Pairs (QQP) is a paraphrasing dataset that requires generating an alternative surface form in the same language that maintains the original semantic meaning. We use the preprocessed version from Gong et al. (2022) that contains about 4.65k test instances. * **Natural Language Inference** The SNLI corpus Bowman et al. (2015) is a collection of manually-written English sentence pairs labelled for balanced classification between entailment, contradiction, and neutral relationships. We use the test set that contains about 10k instances. The MNLI corpus Williams et al. (2018) is a collection of manually-written sentence pairs annotated with textual entailment information. It is modelled on the SNLI corpus. MNLI covers a range of categories of spoken and written text. MNLI also supports a distinctive evaluation of cross-genre generalization. We use the matched validation set with about 9.82k instances. ### Evaluation Metrics Machine Translation For the evaluation on MT, we report chrF++ scores Popovic (2015) computed by the sacreBLEU repository.3 We also use COMET Bosselut et al. (2019) with the model version eamt22-cometinho-da. Footnote 3: [https://github.com/mjpost/sacrebleu](https://github.com/mjpost/sacrebleu) Dialogue Summarization, Paraphrasing For the evaluation on dialogue summarization and paraphrasing, we use F1 scores on ROUGE-L Lin (2004) and BLEU. Natural Language Inference We use accuracy to evaluate the performance on the task of NLI. ### Baselines and Prompt Design ChatGPT We experiment with ChatGPT, a multilingual large language model that has shown strong abilities across various NLU and NLG tasks Wang et al. (2023). At the time of writing, this LLM is widely popular. We use the ChatGPT model versioned GPT-3.5-TURBO. We access ChatGPT via the official API through Python. Copy-9 As one of the baselines, we repeat the original three demonstrations three times to get 9 demonstrations, matching EPA in the number of demonstrations. Prompt Design We use the following prompts for each task we conducted in our experiments: * **Machine Translation**_Translate the following text from English into [target-lang]: [source]_ * **Dialogue Summarization**_Given the following dialogue: [source] Give the dialogue summarization:_ * **Paraphrasing**_Given the English input: [source] Give the paraphrase:_ * **Natural Language Inference**_Given the following two sentences: [source] Whether they are neutral, contradiction, or entailment:_ For the dataset of FLORES-200, we use three pairs of demonstrations for in-context learning on our baselines. For the remaining datasets except for paraphrasing, we use one or three pairs of demonstrations on our baselines. We randomly select the demonstrations from the training sets. We use one demonstration for paraphrasing. All results except for MT are averaged from three runs on three different sets of demonstrations. ### Easy Prompt Augmentation We use ChatGPT to create the paraphrases based on the randomly selected demonstrations. We use the following prompt to create the paraphrases: _Paraphrase the following text: [source]_. For FLORES-200, we create three paraphrases for each English instance and we use the NLLB translator4 to acquire the paraphrases in the target language, making it multiple sources and multiple targets. For the remaining tasks, we create one paraphrase for each pair of demonstrations using ChatGPT solely.
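The following sketch (ours) shows how EPA's augmented demonstration list from Sec. 2 can be assembled; `paraphrase` is a hypothetical stand-in for the ChatGPT paraphrasing call described above, and the concatenation formatting is our assumption.

```python
from typing import Callable, List, Tuple

def epa_prompt(demos: List[Tuple[str, str]],
               query: str,
               paraphrase: Callable[[str], str],
               n: int = 2) -> str:
    """Expand each (source, target) demonstration with n paraphrased copies
    on both sides, then concatenate [x_d, y_d, x_d1, y_d1, ..., x] for
    in-context learning."""
    parts = []
    for x_d, y_d in demos:
        parts += [x_d, y_d]
        for _ in range(n):
            # hypothetical call standing in for "Paraphrase the following text: ..."
            parts += [paraphrase(x_d), paraphrase(y_d)]
    parts.append(query)
    return "\n".join(parts)
```

For FLORES-200, the target-side paraphrases would come from the NLLB translator rather than from `paraphrase` directly, as described above.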
EPA then uses the newly created paraphrases as additional demonstrations for in-context learning.

Footnote 4: [https://huggingface.co/spaces/Narrativaiai/NLLB-Translator](https://huggingface.co/spaces/Narrativaiai/NLLB-Translator)

## 4 Results

### Main Results

**Machine Translation** Table 1 presents the results in chrF++ on 45 languages for translating from English into these languages. These languages range from high-resource languages to low-resource languages. We compare ChatGPT (GPT-3.5-TURBO) with EPA. The results indicate that EPA brings clear improvements.

[Table 1: Comparison between GPT-3.5-TURBO and EPA. Results in chrF++ for MT on the FLORES-200 dataset. The best results are bolded and highlighted. We report on translating from English into the languages (3-shot on GPT).]

[Table 2: Comparison between Copy-9 and EPA. Results in chrF++ for MT on the FLORES-200 dataset. The best results are bolded and highlighted. We report on translating from English into the languages (3-shot on EPA).]
The gains can be large: up to a roughly six-fold improvement in chrF++ on low-resource languages (1.58 to 9.09 for English to Kashmiri written in Devanagari script (kas_Deva)), and up to a roughly three-fold improvement on high-resource languages (20.34 to 59.3 for English to Nyanja written in Latin script (nya_Latn)). The improvement is also consistent across languages written in different scripts. The results in Table 6 in the Appendix show the same trend: EPA reports an average score of -0.557, which clearly exceeds the score of -0.994 reported by the baseline. This indicates that EPA is an effective approach for improving in-context learning on LLMs.

**Dialogue Summarization** Table 3 presents the results in BLEU and ROUGE-L for GPT and EPA, where EPA shows a clear improvement of up to 0.79 F1 points on ROUGE-L.

## Limitations

This paper presents an analysis of around 40 languages, and only on MT. However, there are thousands of languages around the world; we leave broader investigations to future work. EPA requires an automatic paraphraser to be effective, and we do not investigate the situation where human paraphrasers are available. EPA is also less useful when many demonstrations already exist. Finally, our experiments are conducted on ChatGPT, which could affect reproducibility in the future.

## Ethical Statement

We honour and support the ACL Code of Ethics. There is no ethical issue known to us in this work. A well-known and widely used LLM is used in our work, which is subject to generating offensive content. Yet the above-mentioned issues are widely known to commonly exist for LLMs. Any content generated does not reflect the views of the authors.
2309.13259
WikiMT++ Dataset Card
WikiMT++ is an expanded and refined version of WikiMusicText (WikiMT), featuring 1010 curated lead sheets in ABC notation. To expand the application scenarios of WikiMT, we add both objective attributes (album, lyrics, video) and subjective attributes, namely emotion (12 emotion adjectives) and emo\_4q (Russell 4Q), enhancing its usability for music information retrieval, conditional music generation, automatic composition, emotion classification, etc. Additionally, CLaMP is implemented to correct the attributes inherited from WikiMT, reducing errors introduced during the original data collection and enhancing the accuracy and completeness of our dataset.
Monan Zhou, Shangda Wu, Yuan Wang, Wei Li
2023-09-23T04:46:28Z
http://arxiv.org/abs/2309.13259v1
# WikiMT++ Dataset Card

###### Abstract

**WikiMT++** is an expanded and refined version of WikiMusicText (WikiMT), featuring 1010 curated lead sheets in ABC notation. To expand the application scenarios of WikiMT, we add both objective attributes (album, lyrics, video) and subjective attributes, namely emotion (12 emotion adjectives) and emo_4q (Russell 4Q), enhancing its usability for music information retrieval, conditional music generation, automatic composition, emotion classification, etc. Additionally, CLaMP is implemented to correct the attributes inherited from WikiMT, reducing errors introduced during the original data collection and enhancing the accuracy and completeness of our dataset.

## 1 Introduction

WikiMT++ is an expansion and refinement of the original WikiMT [1], which consists of a curated set of 1010 lead sheets comprising melodies accompanied by harmonies represented in ABC notation. In addition to the properties inherited from WikiMT, WikiMT++ augments each item with both objective and subjective attributes, thereby enhancing its utility as a comprehensive resource for music classification tasks. In terms of objective attributes, WikiMT++ incorporates descriptors such as the song's associated album name, complete lyrics, and an accessible music video, thereby providing more aspects for learning the relationship between natural language and music. Additionally, WikiMT++ introduces subjective labels along two dimensions of emotion, namely 'emotion' and 'emo_4q' [2]. These emotion labels allow for a deeper exploration of music emotion classification. Compared to WikiMT, WikiMT++ thus provides enriched information for each piece of music, prepared for a wide range of tasks, for example music information retrieval (MIR), music generation, and emotion recognition.

## 2 Data Fields

To facilitate the learning and evaluation of multiple tasks, the WikiMT++ dataset encompasses 1010 songs, and each item contains 17 attributes. Table 1 shows the specific names and numbers of classes of the genre and emotion labels.

### Attributes from WikiMT

The titles, artists, genres, and descriptions are directly inherited from WikiMT. However, as they were originally curated from openly accessible sources, potential errors and omissions still exist. For better precision and completeness, we update these attributes through CLaMP [1]. Album names are also taken into consideration because songs from the same album sometimes have an identical genre or emotion.

### Emotion Labels

Music has the capacity to evoke, convey, and regulate emotions in individuals and groups. It serves as a powerful means of emotional expression, communication, and connection. Thus, music emotion classification is an important area to be explored. To increase the capacity of our dataset, we label emotions at two granularities, a coarse emo_4q label and a fine-grained emotion label.

* _emo_4q:_ The emo_4q attribute labels the four Russell quadrants, corresponding to joy, anger, sadness, and calmness.
* _emotion:_ The emotion labels refine emo_4q into 12 emotions: happy, delighted, excited, tense, angry, frustrated, depressed, bored, tired, calm, relaxed, and content.

### Lyrics

In terms of textual content, lyrics are closely related to the theme and the name of a song. Furthermore, lyrics can significantly enhance the emotional and storytelling aspects of a song.
Because lyrics carry a large amount of information, we include them as one of the attributes, increasing the dataset's capacity for semantic search and emotion classification.

### Music Video

To prepare for multi-modal tasks, music videos are added to this dataset.

## 3 Supported Tasks

WikiMT++ is a dataset of 1010 songs covering 11 genres, where all music files are score-oriented, for example MusicXML, ABC, XML, MIDI, and PDF. In terms of textual information, WikiMT++ contains substantial content, including title, artist, album name, genres, emotion labels, and lyrics. Music videos are also included as an attribute for future multi-modal tasks. In this context, WikiMT++ offers a versatile platform with a wide spectrum of supported tasks that encompass various aspects of music analysis and generation. We list some of them below.

* _Music Information Retrieval:_ WikiMT++ provides a rich source of data for MIR, enabling researchers and algorithms to retrieve and analyze music-related information, such as melody, lyrics, and genre, to enhance music recommendation systems and more.
* _Music Generation:_ Researchers and enthusiasts can leverage WikiMT++ to explore music generation algorithms. By learning from the patterns and structures within the music files, algorithms can create new compositions and expand the creative possibilities in music production.
* _Automatic Lyrics Generation:_ The dataset offers a valuable resource for developing automatic lyrics generation, since every song is paired with its complete lyrics.
* _Genre Recognition:_ The genre information in this dataset can assist in genre recognition tasks, helping to accurately categorize songs into their respective genres, which is valuable for music recommendation and organization.
* _Emotion Classification:_ The two different emotion labels in WikiMT++ make it a valuable resource for emotion classification tasks, allowing for the analysis and classification of songs based on their emotional content.

## 4 Creation of Dataset

The original source of the data is Wikifonia [3], which consisted of over 6,000 music scores in MusicXML format, all manually transcribed by humans. However, the descriptive information for these music scores was incomplete. We therefore developed a Python script to filter out the music scores with incomplete descriptions, resulting in a final selection of 1,010 music scores. Subsequently, based on this dataset of 1,010 records, we utilized the music21 package to extract the title, author, description, and music score data from the MusicXML files. The musical scores were further converted into ABC notation format. Following this data preparation step, we employed the CLaMP model to automatically label the genre information of the musical scores. This comprehensive process culminated in the creation of the WikiMT dataset. Presently, we have augmented the WikiMT dataset by incorporating additional multi-modal information into the existing attributes. The expanded dataset now includes audio, video link, mel spectrogram, music score images, album details, emotional annotations (emotion, emo_4q), and lyrics, as well as music scores in the mxl, midi, and pdf formats.
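To illustrate the record layout implied by the attributes above, the following minimal sketch models one WikiMT++ item in Python. The field names and example values are illustrative assumptions based on the attributes enumerated in this section, not the released schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WikiMTPlusItem:
    """One WikiMT++ record; field names are illustrative, not the official schema."""
    title: str
    artist: str
    album: str
    genre: str                   # one of the 11 genres
    description: str
    lyrics: str
    emotion: str                 # one of the 12 emotion adjectives
    emo_4q: str                  # one of joy, anger, sadness, calmness (Russell 4Q)
    abc_notation: str            # the lead sheet in ABC notation
    video_url: Optional[str] = None
    score_files: List[str] = field(default_factory=list)  # e.g. mxl/midi/pdf paths

# A toy record used purely for demonstration.
item = WikiMTPlusItem(
    title="Example Song", artist="Unknown", album="Example Album",
    genre="Jazz", description="A toy record.", lyrics="La la la...",
    emotion="relaxed", emo_4q="calmness",
    abc_notation="X:1\nT:Example Song\nK:C\nCDEF GABc|",
)
print(item.title, item.emo_4q)
```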
These multi-modal additions enrich the dataset's content and provide a more comprehensive resource for research and analysis. The multi-modal data were collected through surveys conducted on Amazon Mechanical Turk, and a secondary verification of the data was carried out by students from the Central Conservatory of Music and Fudan University.

## 5 Evaluation

To evaluate our dataset, we adopt the MARBLE [4] benchmark, a universal framework that provides a standardized process for dataset evaluation.
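For tasks such as emotion classification, evaluation ultimately reduces to comparing predicted labels against the annotated emo_4q (or emotion) fields. The following minimal sketch computes plain accuracy over the four emo_4q classes; it is an illustrative stand-in and does not reproduce the MARBLE benchmark's own pipeline.

```python
from collections import Counter
from typing import List

EMO_4Q = ("joy", "anger", "sadness", "calmness")  # Russell 4Q classes used in WikiMT++

def accuracy(gold: List[str], pred: List[str]) -> float:
    """Fraction of items whose predicted emo_4q label matches the annotation."""
    assert len(gold) == len(pred), "gold and pred must be aligned"
    correct = sum(g == p for g, p in zip(gold, pred))
    return correct / len(gold)

if __name__ == "__main__":
    gold = ["joy", "calmness", "sadness", "joy"]   # toy annotations
    pred = ["joy", "calmness", "anger", "joy"]     # toy predictions
    print(f"emo_4q accuracy: {accuracy(gold, pred):.2f}")  # prints 0.75
    print("label distribution:", Counter(gold))
```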
2309.12153
$a$-Numbers of Cyclic Degree $p^2$ Covers of the Projective Line
We investigate the $a$-numbers of $\mathbb{Z}/p^2\mathbb{Z}$-covers in characteristic $p>2$ and extend a technique originally introduced by Farnell and Pries for $\mathbb{Z}/p\mathbb{Z}$-covers. As an application of our approach, we demonstrate that the $a$-numbers of ``minimal'' $\mathbb{Z}/9\mathbb{Z}$-covers can be deduced from the associated branching datum.
Huy Dang, Steven R. Groen
2023-09-21T15:08:12Z
http://arxiv.org/abs/2309.12153v1
# \(a\)-numbers of cyclic degree \(p^{2}\) covers of the projective line

###### Abstract.

We investigate the \(a\)-numbers of \(\mathbb{Z}/p^{2}\mathbb{Z}\)-covers in characteristic \(p>2\) and extend a technique originally introduced by Farnell and Pries for \(\mathbb{Z}/p\mathbb{Z}\)-covers. As an application of our approach, we demonstrate that the \(a\)-numbers of "minimal" \(\mathbb{Z}/9\mathbb{Z}\)-covers can be deduced from the associated branching datum.

## 1. Introduction

Consider an algebraically closed field \(k\) of characteristic \(p>2\), and let \(\pi:Y\to X\) be a Galois branched cover of smooth, projective, connected curves with Galois group \(G\). A prevalent research direction is to investigate to what extent invariants of \(Y\) are determined by the branching datum of the cover \(\pi\). See Section 2.1 for a precise definition of the branching datum. For instance, if \(Y\) is an elliptic curve, then the branching datum of \(\pi\) does not determine whether \(Y\) is ordinary or supersingular. Of particular interest is the case when \(p\) divides \(|G|\), i.e. when the cover is wildly ramified. Then the ramification of \(\pi\) is more complicated, due to the additional structure of the filtration of higher ramification groups. In this paper we restrict to the case when \(G\) is a cyclic \(p\)-group, and we write \(G=\mathbb{Z}/p^{n}\mathbb{Z}\). Some well-known invariants for curves in characteristic \(p\) include the genus, \(p\)-rank, and \(a\)-number. Of particular interest to us is the \(a\)-number, which is defined by \[a_{Y}:=\dim_{k}\ker\left(\mathcal{C}:\mathrm{H}^{0}(Y,\Omega^{1}_{Y})\to\mathrm{H}^{0}(Y,\Omega^{1}_{Y})\right),\] where \(\mathcal{C}\) is the \(p^{-1}\)-linear _Cartier operator_. The \(a\)-number measures the failure of a curve to be _ordinary_. The genus of \(Y\) is determined by the branching datum of \(\pi\) via the Riemann-Hurwitz formula [10, IV. Corollary 2.4]. Similarly, since \(G\) is a \(p\)-group, the Deuring-Shafarevich formula [12, Theorem 4.1] dictates what the \(p\)-rank of \(Y\) is in terms of the branching datum of \(\pi\). In contrast, the \(a\)-number of \(Y\) is not determined by the branching datum of \(\pi\). Understanding the possibilities for the \(a\)-number of a curve has implications for the Schottky problem, as it helps illuminate which abelian varieties are isomorphic to Jacobians of curves. In recent years, there has been a growing interest in studying the \(a\)-number of cyclic covers of \(p\)-power degree in characteristic \(p\). In particular, the authors of [11] focused on the special case where \(X=\mathbb{P}^{1}\) and \(G=\mathbb{Z}/p\mathbb{Z}\). They derived a formula for the \(a\)-number of \(Y\) when the ramification jumps divide \(p-1\); in particular, in this case the \(a\)-number is determined by the branching datum of \(\pi\). Booher and Cais expanded the study in [1] to include general \(X\) and established bounds for \(a_{Y}\) based on \(a_{X}\) and the branching datum of \(\pi\). In [1], the authors consider the (higher) \(a\)-numbers in towers of \(\mathbb{Z}/p^{n}\mathbb{Z}\)-covers of a base curve \(X\), which in key cases is the projective line. In particular, they formulate far-reaching conjectures for the behavior of the \(a\)-numbers as a function of \(n\) and provide theoretical and experimental evidence for these conjectures. In this paper, we investigate covers of \(\mathbb{P}^{1}\) with Galois group \(G=\mathbb{Z}/p^{2}\mathbb{Z}\) whose conductors are "minimal" (see Definition 2.14).
This is analogous to the condition that the conductors divide \(p-1\), as assumed in [11]. Extending the approach in [11], we assign to each basis differential \(\omega\) of \(\operatorname{H}^{0}(Y,\Omega^{1}_{Y})\) a _key term_, which appears with non-zero coefficient in \(\mathcal{C}(\omega)\). Using this, Theorem 3.18 proves a conditional lower bound for the rank of the Cartier operator, which is equivalent to an upper bound on the \(a\)-number. If we furthermore assume \(p=3\), we prove this upper bound is sharp, resulting in the following theorem.

**Theorem 1.1**.: _The \(a\)-number of a minimal \(\mathbb{Z}/9\mathbb{Z}\)-cover in characteristic \(3\) is determined by its branching datum._

See Theorem 4.1 for a precise formula for \(a_{Y_{2}}\) in terms of the branching datum. As far as we are aware, this yields the only known example of a family of curves in odd characteristic that has constant genus, \(p\)-rank and \(a\)-number, apart from the Artin-Schreier curves treated in [11].

_Remark 1.2_.: In forthcoming work, Booher, Cais, Kramer-Miller, and Upton conduct parallel research on the \(a\)-numbers of \(\mathbb{Z}/p^{n}\mathbb{Z}\)-covers using a \(p\)-adic analytic approach, which differs from the method employed in this paper.

### Outline

The rest of the paper is organized as follows. In Section 2, we provide background information on Artin-Schreier-Witt theory and the ramification of cyclic covers of the projective line in characteristic \(p\). Additionally, we review the Cartier-Manin matrix and its relationship with the \(a\)-number of a curve. We also discuss the motivation behind our question and make comparisons with existing results in this section. Section 3 introduces the concept of key terms for \(\mathbb{Z}/p^{2}\mathbb{Z}\)-covers. The key terms represent leading entries of the Cartier-Manin matrix, generalizing the concept introduced in [11] for \(\mathbb{Z}/p\mathbb{Z}\)-covers. It is shown in Theorem 3.18 that, under certain conditions, the number of key terms is a lower bound for the rank of the Cartier-Manin matrix, as differentials with a key term contribute to the rank. This yields an upper bound on the \(a\)-number. Finally, in Section 4, we demonstrate that the provided upper bound is sharp when \(p=3\), by showing that the remaining differentials do not contribute to the rank of the Cartier operator. This results in Theorem 1.1.

### Acknowledgements

The authors thank Jeremy Booher, Bryden Cais, Joe Kramer-Miller, Rachel Pries, and Damiano Testa for helpful discussions.

## 2. Background

### Artin-Schreier-Witt theory

According to Artin-Schreier-Witt theory (as discussed in references such as [1]), for any field \(K\) of characteristic \(p\), the following group isomorphism holds: \[\operatorname{H}^{1}(G_{K},\mathbb{Z}/p^{n}\mathbb{Z})\cong W_{n}(K)/\wp(W_{n}(K)).\] In the above equation, \(G_{K}\) represents the absolute Galois group of \(K\), \(W_{n}(K)\) denotes the ring of length-\(n\) Witt vectors over \(K\), and \(\wp(\underline{x})=F(\underline{x})-\underline{x}\) corresponds to the Artin-Schreier-Witt isogeny. We say two Witt vectors \(\underline{f}\) and \(\underline{g}\) in \(W_{n}(K)\) belong to the same Artin-Schreier-Witt class if there exists a Witt vector \(\underline{h}\in W_{n}(K)\) such that \(\underline{f}=\underline{g}+\wp(\underline{h})\).
Therefore, any \(\mathbb{Z}/p^{n}\mathbb{Z}\)-cover \(\phi_{n}:Y_{n}\to\mathbb{P}^{1}\) can be represented by an Artin-Schreier-Witt equation of the form \[\wp(y_{1},\ldots,y_{n})=(f_{1},\ldots,f_{n}),\] where \((f_{1},\ldots,f_{n})\in W_{n}(k(x))\) and \(f_{1}\not\in\wp(k(x))\).

_Remark 2.1_.: For \(1\leq i<n\), the unique \(\mathbb{Z}/p^{i}\mathbb{Z}\)-sub-cover \(\phi_{i}:Y_{i}\to\mathbb{P}^{1}\) of \(\phi_{n}\) is given by the Artin-Schreier-Witt equation: \[\wp(y_{1},\ldots,y_{i})=(f_{1},\ldots,f_{i}).\]

Alternatively, the cover \(\phi_{n}\) can be expressed as a series of Artin-Schreier extensions, which is advantageous for computational purposes, as demonstrated later in this paper.

**Proposition 2.2** ([10]).: _The \(\mathbb{Z}/p^{n}\mathbb{Z}\)-cover associated with \((f_{1},\ldots,f_{n})\in W_{n}(K)\) can be represented by the following system of equations:_ \[\begin{split} y_{1}^{p}-y_{1}&=f_{1}\\ y_{2}^{p}-y_{2}&=g_{1}(y_{1})+f_{2}\\ &\vdots\\ y_{n}^{p}-y_{n}&=g_{n-1}(y_{1},\ldots,y_{n-1})+f_{n},\end{split} \tag{2.1}\] _where the \(g_{i}\in\mathbb{F}_{p}[f_{1},\ldots,f_{i},y_{1},\ldots,y_{i}]\) are explicitly defined. Specifically, \(g_{i}\) can be given recursively in \(\mathbb{Z}[f_{1},\ldots,f_{i},y_{1},\ldots,y_{i}]\) as follows:_ \[g_{i}(f_{1},\ldots,f_{i},y_{1},\ldots,y_{i})=f_{i}+y_{i}+\sum_{d=1}^{i-1}\frac{1}{p^{i-d}}(f_{d}^{p^{i-d}}+y_{d}^{p^{i-d}}-g_{d-1}^{p^{i-d}}). \tag{2.2}\]

In the case where \(n=1\), we can eliminate all terms of \(f_{1}\) with degrees divisible by \(p\) by adding rational functions of the form \(h^{p}-h\). This process can be generalized to the case \(n>1\) using an induction technique similar to the proof presented in [11, §26, Theorem 5]. For a more comprehensive proof, please refer to [1, Lemma A.2.3]. To summarize this result, we state the following proposition.

**Proposition 2.3**.: _Every \(\underline{f}\in W_{n}(k(x))\) belongs to the same Artin-Schreier-Witt class as a vector \(\underline{h}=(h_{1},\ldots,h_{n})\), where none of the terms in the partial fraction decomposition of \(h_{i}\) has degree divisible by \(p\)._

We refer to such a vector \(\underline{h}\) as _reduced_. For the remainder of our discussion, we assume that \(\underline{f}\) is reduced. Let \(B_{n}=\{P_{1},\ldots,P_{r}\}\) denote the set of poles of the \(f_{i}\)'s, which is also the branch locus of \(\phi_{n}\). At each ramified point \(Q_{j}\) over \(P_{j}\), \(\phi_{n}\) induces a cyclic extension of complete local rings \(\hat{\mathcal{O}}_{Y_{n},Q_{j}}/\hat{\mathcal{O}}_{\mathbb{P}^{1},P_{j}}\). Consequently, we can derive the upper ramification filtration of \(\phi_{n}\) at the branch point \(P_{j}\) in a canonical manner. Assuming that the inertia group of \(Q_{j}\) is \(\mathbb{Z}/p^{m}\mathbb{Z}\) (where \(m\leq n\)), for \(i\leq n-m\) we define the \(i\)_-th ramification break (jump)_ of \(\phi_{n}\) at \(P_{j}\) to be \(-1\). For \(i>n-m\), the \(i\)-th ramification break of \(\phi_{n}\) at \(P_{j}\) corresponds to the \((i-n+m)\)-th break in \(\hat{\mathcal{O}}_{Y_{n},Q_{j}}/\hat{\mathcal{O}}_{\mathbb{P}^{1},P_{j}}\). We denote the \(i\)-th upper ramification break of \(\phi_{n}\) at \(P_{j}\) by \(u_{j,i}\). The \(i\)_-th conductor of \(\phi_{n}\) at \(P_{j}\)_ is defined as \(e_{j,i}:=u_{j,i}+1\). The following formula provides an explicit calculation of the ramification filtration of \(\phi_{n}\) in terms of \(\underline{f}\).
**Theorem 2.4** ([1, Theorem 1]).: _With the assumptions and notations as above, we have_ \[u_{j,i}=\max\{p^{i-l}\deg_{(x-P_{j})^{-1}}(f_{l})\mid l=1,\ldots,i\}, \tag{2.3}\] _for \(i>n-m\)._

**Proposition 2.5**.: _With the above settings, the genus of the curve \(Y_{i}\) is_ \[g_{Y_{i}}=1-p^{i}+\frac{\sum_{l=1}^{i}\left(\sum_{j=1}^{r}e_{j,l}\right)(p^{l}-p^{l-1})}{2}. \tag{2.4}\]

Proof.: Applying the Grothendieck-Ogg-Shafarevich formula [11, Exp. X formula 7.2] to \(\phi_{i}:Y_{i}\to X\), we obtain the relation \[2g_{Y_{i}}-2=\deg(\phi_{i})(2g_{X}-2)+\sum_{j=1}^{r}\deg\big{(}\mathscr{D}_{P_{j},i}\big{)},\] where \(\mathscr{D}_{P_{j},i}\) is the different of \(\phi_{i}\) at \(P_{j}\) [10, IV]. Additionally, [12, Fact 2.3] asserts that \[\deg\big{(}\mathscr{D}_{P_{j},i}\big{)}=\sum_{l=1}^{i}e_{j,l}(p^{l}-p^{l-1}).\] This and the fact that \(g_{X}=0\) immediately imply the claim about the genus of \(Y_{i}\).

To record the branching datum of the cover, we use an \(r\times n\) matrix of the form \[\begin{bmatrix}e_{1,1}&e_{1,2}&\ldots&e_{1,n}\\ e_{2,1}&e_{2,2}&\ldots&e_{2,n}\\ \vdots&\vdots&\ddots&\vdots\\ e_{r,1}&e_{r,2}&\ldots&e_{r,n}\end{bmatrix}, \tag{2.5}\] which is referred to as the _branching datum_ of \(\phi_{n}\). This notion is used to stratify the moduli space of cyclic covers of curves in [1]. The following is an immediate consequence of Theorem 2.4.

**Proposition 2.6**.: _A matrix of the form (2.5) is the branching datum of a \(\mathbb{Z}/p^{n}\mathbb{Z}\)-cover if and only if the following conditions hold:_ 1. \(e_{i,1}\not\equiv 1\pmod{p}\)_,_ 2. \(e_{i,j}\geq pe_{i,j-1}-p+1\)_, and_ 3. _if_ \(e_{i,j}>pe_{i,j-1}-p+1\)_, then_ \(e_{i,j}=pe_{i,j-1}-p+1+a_{j}\)_, where_ \(p\nmid a_{j}\)_._

_Remark 2.7_.: The branching datum of \(\phi_{i}\) is the \(r\times i\) matrix that contains the first \(i\) columns of (2.5).

_Example 2.8_.: Suppose \(k\) is an algebraically closed field of characteristic \(3\), and \(\phi_{2}:Y_{2}\to\operatorname{Proj}k[x,z]\) is a \(\mathbb{Z}/9\mathbb{Z}\)-cover given by the following affine equation: \[\wp(y_{1},y_{2})=\left(\frac{1}{x}+x,\frac{1}{x^{5}}-\frac{1}{x-1}\right)=:(f_{1},f_{2})=:\underline{f}.\] Since no term of the \(f_{i}\)'s has degree divisible by \(3\), \(\underline{f}\) is reduced. According to Theorem 2.4, \(\phi_{2}\) branches at \(x=0\), \(x=1\), and \(x=\infty\), with the following branching datum: \[\begin{bmatrix}2&6\\ 0&2\\ 2&4\end{bmatrix}.\] The theorem also reveals that the ramification index of each ramified point above \(x=1\) is \(3\), while the unique point above \(x=0\) or \(x=\infty\) has a ramification index of \(9\). Finally, applying Proposition 2.2 allows us to write \(\phi_{2}\) as a system of Artin-Schreier equations as follows: \[y_{1}^{3}-y_{1} =\frac{1}{x}+x=f_{1},\] \[y_{2}^{3}-y_{2} =g_{1}(y_{1})+\frac{1}{x^{5}}-\frac{1}{x-1},\] where \(g_{1}(y_{1})=\frac{1}{3}(f_{1}^{3}+y_{1}^{3}-(f_{1}+y_{1})^{3})=-y_{1}^{7}+y_{1}^{5}\); the last equality uses \(f_{1}=y_{1}^{3}-y_{1}\).

### A basis for the space of regular differentials

We present a restatement of [16, Lemma 5] using our established conventions. Consider a \(\mathbb{Z}/p^{n}\mathbb{Z}\)-Galois cover \(Y_{n}\xrightarrow{\phi_{n}}\mathbb{P}^{1}\) whose branching datum is given by Equation (2.5). Let \(B_{n}=\{P_{1},\ldots,P_{r}\}\subset\mathbb{P}^{1}\) denote the branch locus of \(\phi_{n}\), where \(P_{i}\) corresponds to the \(i\)-th row of the branching datum. Without loss of generality, assume that \(P_{1}\) is the point at infinity and \(e_{1,1}>0\), so that \(P_{1}\) has index \(p^{n}\).
We define \(x_{i}:=\frac{1}{x-P_{i}}\) for \(1<i\leq r\).

**Definition 2.9**.: _We define \(N\in\mathbb{M}_{r\times n}(\mathbb{Z})\) as follows:_ \[N(i,j)=\begin{cases}e_{i,j}-1&\text{if }e_{i,j}>0\\ 0&\text{if }e_{i,j}=0.\end{cases}\] _For each \(i=1,2,\ldots,r\) and \(j=1,\ldots,n\) such that \(N(i,j)>0\), let_ \[E_{i}^{j}:=j-\min\{l\mid e_{i,l}\neq 0\}+1.\]

_Remark 2.10_.: The positive integer \(p^{E_{i}^{j}}\) represents the ramification index of each ramified point of \(Y_{j}\xrightarrow{\phi_{j}}\mathbb{P}^{1}\) above \(P_{i}\).

**Definition 2.11**.: _We define \(\lambda(i,j)\) as follows:_ \[\lambda(i,j)=p^{E_{i}^{j}}N(i,j)-(p-1)\sum_{l=1}^{j-1}p^{l+E_{i}^{j}-j-1}N(i,l).\]

Now we can rephrase [16, Lemma 5] using our notation.

**Proposition 2.12**.: _With the above settings, the \(k\)-vector space \(\mathrm{H}^{0}(Y_{n},\Omega^{1}_{Y_{n}/k})\) has a basis given by \(\bigcup_{i=1}^{r}W_{i}\), where_ \[W_{1}:=\left\{y_{n}^{a_{n}}\ldots y_{1}^{a_{1}}x^{v}dx\;\middle|\;0\leq a_{i}<p,\ 0\leq p^{n}v\leq(p-1)d_{1}-A_{1}-p^{n}-1\right\}, \tag{2.6}\] _and, for \(i>1\),_ \[W_{i}:=\left\{y_{n}^{a_{n}}\ldots y_{1}^{a_{1}}x_{i}^{v}dx\;\middle|\;0\leq a_{i}<p,\ 0<p^{E_{i}^{n}}v\leq(p-1)d_{i}-A_{i}+p^{E_{i}^{n}}-1\right\}, \tag{2.7}\] _where_ \[d_{i}:=\sum_{\nu=1}^{n}\lambda(i,\nu),\quad\text{and}\quad A_{i}:=\sum_{\nu=1}^{n}a_{\nu}\lambda(i,\nu).\]

### \(a\)-number of a curve

Suppose \(C\) is a curve of genus \(g\) over the field \(k\). We let \(\sigma\) denote the \(p\)-power Frobenius automorphism of \(k\), which induces a pull-back through the absolute Frobenius \(F_{C}^{*}:\mathrm{H}^{1}(C,\mathcal{O}_{C})\to\mathrm{H}^{1}(C,\mathcal{O}_{C})\). The Cartier operator is a \(\sigma^{-1}\)-linear map \[\mathcal{C}:\mathrm{H}^{0}(C,\Omega^{1}_{C/k})\to\mathrm{H}^{0}(C,\Omega^{1}_{C/k}).\] It is the dual of \(F_{C}^{*}\) through Grothendieck-Serre duality. The \(a\)-number of a curve \(C\), denoted by \(a_{C}\), is defined as the dimension of the kernel of \(\mathcal{C}\). Given a basis \(\beta=\{\omega_{1},\ldots,\omega_{g}\}\) of \(\mathrm{H}^{0}(C,\Omega^{1}_{C/k})\), for each \(\omega_{j}\) we can find coefficients \(m_{i,j}\in k\) such that \[\mathcal{C}(\omega_{j})=\sum_{i=1}^{g}m_{i,j}\omega_{i}.\] The resulting \((g\times g)\)-matrix \(M=(m_{i,j})\) is known as the _Cartier-Manin matrix_. Consequently, the \(a\)-number can be computed as \(a_{C}=g_{C}-\operatorname{rank}(M)\).

_Example 2.13_.: 1. An ordinary elliptic curve has \(p\)-rank \(1\) and \(a\)-number \(0\). 2. A supersingular elliptic curve has \(p\)-rank \(0\) and \(a\)-number \(1\).

Generalizing the case of elliptic curves, a curve is said to be ordinary if its \(a\)-number is zero. Therefore, the \(a\)-number of a curve can be seen as a measure of how far the curve is from being ordinary. Unlike the genus and the \(p\)-rank, the \(a\)-number of a cover is not solely determined by the base curve and the branching datum. An example of a pair of \(\mathbb{Z}/p\mathbb{Z}\)-covers with identical branching datum, but different \(a\)-numbers, is given in [1, Example 4.6].

### \(\mathbb{Z}/p^{n}\mathbb{Z}\)-covers with minimal jumps

**Definition 2.14**.: _A \(\mathbb{Z}/p^{n}\mathbb{Z}\)-cover with branching datum (2.5) is said to be minimal if it satisfies the following conditions:_ 1. _If_ \(j=\min\{k\mid e_{i,k}\neq 0\}\)_, then_ \((e_{i,j}-1)\mid(p-1)\)_, and_ 2.
\(e_{i,l}-1=p^{l-j}(e_{i,j}-1)\) _for_ \(j<l\leq n\)_._

The \(a\)-number of a minimal \(\mathbb{Z}/p\mathbb{Z}\)-cover is known [13, Theorem 1.1]. Our computed examples lead to the following question:

**Question 2.15**.: _Let \(\phi_{n}\) be a minimal branched cover of \(\mathbb{P}^{1}\) with Galois group \(\mathbb{Z}/p^{n}\mathbb{Z}\). Is the \(a\)-number of \(\phi_{n}\) uniquely determined by its branching datum?_

In this paper, we focus on the case \(n=2\). This allows us to consider only covers with branching data of the form \[\begin{bmatrix}2&\ldots&a_{i}&\ldots&0&\ldots&0&\ldots\\ p+1&\ldots&pa_{i}-p+1&\ldots&2&\ldots&a_{i}&\ldots\end{bmatrix}^{\intercal},\] where \((a_{i}-1)\mid(p-1)\). In Section 3, we generalize the methods in [13], based on key terms, to \(\mathbb{Z}/p^{2}\mathbb{Z}\)-covers. As an application of this new method, we prove that the answer is affirmative in the case \(p=3\) and \(n=2\).

**Theorem 2.16**.: _The answer to Question 2.15 is affirmative when \(p=3\) and \(n=2\) (see Theorem 4.1)._

_Remark 2.17_.: Note that the answer to Question 2.15 is certainly negative for covers that are not minimal, as is demonstrated in [1, Example 7.2].

### Comparison with the known results

In this section, we compare our results and conjectures with those that are already known for Artin-Schreier covers. Consider a \(\mathbb{Z}/p^{n}\mathbb{Z}\)-Galois cover \(\phi_{n}:Y_{n}\to\ldots\to Y_{1}\to Y_{0}\). Since \(\phi_{n/(n-1)}:Y_{n}\to Y_{n-1}\) is an Artin-Schreier cover, we can establish bounds on the \(a\)-number of \(Y_{n}\) inductively, building upon what is known for \(\mathbb{Z}/p\mathbb{Z}\)-covers. Suppose \(P\in Y_{0}\) is a branch point with ramification index \(p^{n}\). Let \((m_{1},\ldots,m_{n})\) denote the upper ramification jumps of \(\phi_{n}\) at \(P\). By employing a Hasse-Arf relation, it is possible to deduce the only jump of \(\phi_{n/(n-1)}\) at the branch point \(P_{n-1}\in Y_{n-1}\) that lies above \(P\).

**Proposition 2.18**.: _With the above notations, the jump of the intermediate Artin-Schreier extension \(Y_{n}\to Y_{n-1}\) at \(P_{n-1}\) is_ \[\tilde{m}_{n}=p^{n-1}m_{n}-\sum_{i=1}^{n-1}m_{i}(p-1)p^{i-1}.\]

Proof.: It follows from [1, Lemma 6.17].

Applying the above result to the case \(n=2\) and \(m_{2}=pm_{1}\) yields:

**Corollary 2.19**.: _In the situation of Theorem 2.16, the jump of \(Y_{2}\to Y_{1}\) at \(P_{1}\) is given by_ \[\widetilde{m}=pm_{2}-m_{1}(p-1)=m_{1}(p^{2}-p+1).\]

_Remark 2.20_.: The jump \(\tilde{m}\) does not divide \(p-1\), hence is not covered by [11].

Let us now recall the following results for the \(a\)-number of a \(\mathbb{Z}/p\mathbb{Z}\)-cover using the language of branching data.
**Theorem 2.21** ([1, Theorem 1.1]).: _Suppose \(\phi:Y\to X\) is a \(\mathbb{Z}/p\mathbb{Z}\)-Galois cover with branching datum_ \[\left[d_{1,1}+1\ \ d_{2,1}+1\ \ \ldots\ \ d_{r,1}+1\right]^{\top}.\] _Then, for any \(1\leq j\leq p-1\), the \(a\)-number of \(Y\) is bounded as follows:_ \[\sum_{l=1}^{r}\sum_{i=j}^{p-1}\left(\left\lfloor\frac{id_{l,1}}{p}\right\rfloor-\left\lfloor\frac{id_{l,1}}{p}-\left(1-\frac{1}{p}\right)\frac{jd_{l,1}}{p}\right\rfloor\right)\leq a_{Y}\leq pa_{X}+\sum_{l=1}^{r}\sum_{i=1}^{p-1}\left(\left\lfloor\frac{id_{l,1}}{p}\right\rfloor-(p-i)\left\lfloor\frac{id_{l,1}}{p^{2}}\right\rfloor\right).\]

**Proposition 2.22** ([11]).: _Suppose a \(\mathbb{Z}/p\mathbb{Z}\)-cover \(\phi:Y\to\mathbb{P}^{1}\) has branching datum_ \[\left[d_{1,1}+1\ \ d_{2,1}+1\ \ \ldots\ \ d_{r,1}+1\right]^{\top}.\] _Suppose moreover that \(d_{i,1}\mid(p-1)\) for all \(i=1,2,\ldots,r\). Then the \(a\)-number of \(Y\) is_ \[a_{Y}=\sum_{i=1}^{r}a_{i},\text{ where }a_{i}=\begin{cases}\frac{(p-1)d_{i,1}}{4}&\text{if }d_{i,1}\text{ is even}\\ \frac{(p-1)(d_{i,1}^{2}-1)}{4}&\text{if }d_{i,1}\text{ is odd.}\end{cases}\]

We can now compute the lower bound on the \(a\)-number in the case where \(Y_{2}\to Y_{0}\) is a minimal one-point cover whose first jump equals \(1\).

**Proposition 2.23**.: _Suppose \(p=2q+1\). Then_ \[\sum_{i=q+1}^{p-1}\left(\left\lfloor\frac{i(p^{2}-p+1)}{p}\right\rfloor-\left\lfloor\frac{i(p^{2}-p+1)}{p}-\left(1-\frac{1}{p}\right)\frac{(q+1)(p^{2}-p+1)}{p}\right\rfloor\right)\] _is equal to \(\frac{p(p-1)^{2}}{4}\)._

Proof.: By a straightforward computation, one sees that each summand on the left-hand side is equal to \(q(2q+1)\). Since there are \(q\) such summands, the result follows immediately.

Combining Theorem 2.21 (with \(j=q+1\)) and Proposition 2.23, we obtain the following.

**Corollary 2.24**.: _Suppose \(p=2q+1\) and \(X\) is ordinary. Suppose \(\phi_{2}:Y_{2}\to X\) is a minimal \(\mathbb{Z}/p^{2}\mathbb{Z}\)-cover branched at one point, with first jump \(d_{1,1}\), and let \(\tilde{d}_{1,1}\) denote the jump of \(Y_{2}\to Y_{1}\). Then_ \[\sum_{i=q+1}^{p-1}\left(\left\lfloor\frac{i\tilde{d}_{1,1}}{p}\right\rfloor-\left\lfloor\frac{i\tilde{d}_{1,1}}{p}-\left(1-\frac{1}{p}\right)\frac{(q+1)\tilde{d}_{1,1}}{p}\right\rfloor\right)\leq a_{Y_{2}}\leq pa_{Y_{1}}+\sum_{i=1}^{p-1}\left(\left\lfloor\frac{i\tilde{d}_{1,1}}{p}\right\rfloor-(p-i)\left\lfloor\frac{i\tilde{d}_{1,1}}{p^{2}}\right\rfloor\right).\] _In particular, when \(d_{1,1}=1\), hence \(\tilde{d}_{1,1}=p^{2}-p+1\), and \(X\cong\mathbb{P}^{1}\), hence \(a_{Y_{1}}=0\), we have_ \[\frac{p(p-1)^{2}}{4}\leq a_{Y_{2}}.\]

Since we expect the \(a\)-number of a minimal \(\mathbb{Z}/p^{n}\mathbb{Z}\)-cover of \(\mathbb{P}^{1}\) to be determined by the branching datum, and this lower bound is attained in examples, we conjecture that this lower bound is always sharp for this class of curves. On the other hand, in the case \(d_{1,1}>1\), the lower bound for the \(a\)-numbers of minimal \(\mathbb{Z}/p^{2}\mathbb{Z}\)-covers obtained by applying Theorem 2.21 twice does not seem to be sharp.

## 3. Key terms

Let \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\) be a minimal \(\mathbb{Z}/p^{2}\mathbb{Z}\)-cover, with subcover \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\). Using the identification \(k(\mathbb{P}^{1})\cong k(x)\) and Proposition 2.2, we obtain the models \[Y_{1}:y_{1}^{p}-y_{1}=f(x), \tag{3.1}\] \[Y_{2}:y_{2}^{p}-y_{2}=g_{1}(y_{1})+h(x). \tag{3.2}\] For brevity we write \(g(y_{1}):=g_{1}(y_{1})\). Explicitly, this polynomial is given by \[g(y_{1}):=\sum_{i=1}^{p-1}(-1)^{i}\frac{(p-1)!}{i!(p-i)!}y_{1}^{p(p-i)+i}. \tag{3.3}\]
Equivalently, the cover \(Y_{2}\to\mathbb{P}^{1}\) is represented by the Witt vector \((f,h)\in W_{2}(k(x))\), with \(f\) and \(h\) reduced in the sense of Proposition 2.3. In this section, we define for certain regular differentials \(\omega\in\mathrm{H}^{0}(Y_{2},\Omega^{1}_{Y_{2}})\) a _key term_ \(\kappa(\mathcal{C}(\omega))\), analogous to [13, Definition 3.2]. An analysis of the key terms leads to a lower bound for the rank of the Cartier-Manin matrix, or equivalently an upper bound for the \(a\)-number \(a_{Y_{2}}\). This upper bound is given in Theorem 3.18.

Denote by \(B_{2}\) the branch locus of the cover \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\). For \(P\in B_{2}\), let \(x_{P}\in k(x)\) be such that \(\operatorname{ord}_{P}(x_{P})=-1\). Then the space \(\mathrm{H}^{0}(Y_{2},\Omega^{1}_{Y_{2}})\) is spanned by differentials of the form \(y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\), as described in Proposition 2.12. We consider two cases, depending on whether the cover \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\) is already branched at \(P\).

If \(P\) is a branch point of \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\), with ramification jump \(d_{P}\), then \(f\) has a pole at \(P\) of order \(d_{P}\). To simplify proofs, we pick \(f\) to have no further poles. Since the cover \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\) is minimal, it follows that \(d_{P}\mid p-1\). This case is discussed in Section 3.1.

If \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\) is étale at \(P\), then \(f\) is regular at \(P\). By the assumption that \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\) is branched at \(P\), the function \(h\in k(x)\) has a pole at \(P\). The order of this pole, which we denote by \(e_{P}\), equals the ramification jump at \(P\) of the cover \(Y_{2}\to Y_{1}\). This case is treated in Section 3.2.

Finally, combining both types of key terms yields the conditional upper bound on the \(a\)-number proved in Theorem 3.18. This conditional upper bound is strong enough to answer Question 2.15 in the case \(p=3\) and \(n=2\).

### Key terms at poles of \(f\)

Let \(B_{1}\subset\mathbb{P}^{1}\) be the branch locus of \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\) and let \(P\in B_{1}\). After applying an automorphism of \(\mathbb{P}^{1}\) if necessary, we may assume \(\infty\in B_{1}\). Since the cover is minimal (see Definition 2.14), the pole orders at the points \(P\in B_{1}\), defined by \(d_{P}=-\operatorname{ord}_{P}(f)\), all divide \(p-1\). Furthermore, since the upper jumps are minimal by Definition 2.14, one may assume the pole order of \(h\) at \(P\) is bounded: \(\operatorname{ord}_{P}(h)>-pd_{P}\). We write \(x_{\infty}=x\) and \(x_{P}=(x-P)^{-1}\). Then partial fraction decomposition allows us to write \[f(x)=\sum_{P\in B_{1}}f_{P}(x_{P}), \tag{3.4}\] where \(f_{P}\in k[x_{P}]\) is a polynomial of degree \(d_{P}\). Let \(u_{P}\in k^{\times}\) denote the leading coefficient of \(f_{P}\). Without loss of generality, we assume \(u_{\infty}=1\). Using Proposition 2.12, we obtain the following formulas for \(W_{P}\): \[W_{\infty}:=\left\{y_{2}^{a_{2}}y_{1}^{a_{1}}x^{v}dx\;\middle|\;0\leq a_{1},a_{2}<p,\ 0\leq p^{2}v\leq pd_{P}(p-1-a_{1})+d_{P}(p^{2}-p+1)(p-1-a_{2})-p^{2}-1\right\}, \tag{3.5}\] \[W_{P}:=\left\{y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\;\middle|\;0\leq a_{1},a_{2}<p,\ 0<p^{2}v\leq pd_{P}(p-1-a_{1})+d_{P}(p^{2}-p+1)(p-1-a_{2})+p^{2}-1\right\}. \tag{3.6}\] Following [13], define \(\epsilon_{P}=-1\) if \(P=\infty\) and \(\epsilon_{P}=1\) otherwise.
Then, as the Cartier operator on \(k(x)dx\) is well understood, we obtain \[\mathcal{C}(x_{P}^{ap+\epsilon_{P}}dx)=x_{P}^{a+\epsilon_{P}}dx.\] The approach we take in this paper relies essentially on reducing the action of the Cartier operator on \(\mathrm{H}^{0}(Y_{2},\Omega^{1}_{Y_{2}})\) to the Cartier operator on \(k(x)dx\), through the models of \(Y_{2}\) and \(Y_{1}\). Using \(a_{1},a_{2}\geq 0\), we get \[v\leq\frac{1}{p^{2}}(pd_{P}(p-1)+d_{P}(p^{2}-p+1)(p-1)+\epsilon_{P}p^{2}-1)<pd_{P}+\epsilon_{P}. \tag{3.7}\]

The following lemma allows us to reduce computations involving the Cartier operator on \(Y_{2}\) to computations involving the Cartier operator on \(Y_{1}\).

**Lemma 3.1**.: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\) be an element of \(W\). Then we have_ \[\mathcal{C}(\omega)=\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{j}\mathcal{C}(g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx). \tag{3.8}\]

Proof.: Substituting \(y_{2}=y_{2}^{p}-g(y_{1})-h(x)\) from Equation (3.2) and expanding gives \[\omega=(y_{2}^{p}-g(y_{1})-h(x))^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx=\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{pj}g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx.\] Then, using the \(p^{-1}\)-linearity of the Cartier operator, we can pull out the \(y_{2}\) to obtain \[\mathcal{C}(\omega)=\mathcal{C}\left(\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{pj}g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx\right)=\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{j}\mathcal{C}(g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx),\] as desired.

In order to compute the \(a\)-number of \(Y_{2}\), we bound the rank of \(\mathcal{C}\) from below. We do so using a distinguished term that we call the _key term_. Recall that we have assumed that \(d_{P}\mid p-1\), allowing us to define \(\gamma_{P}:=\frac{p-1}{d_{P}}\in\mathbb{Z}\).

**Definition 3.2**.: _Suppose \(P\in B_{1}\). Let_ \[\alpha_{P}(v):=\left\lfloor\frac{\gamma_{P}(v-\epsilon_{P})}{p}\right\rfloor,\qquad\beta_{P}(v):=\gamma_{P}(v-\epsilon_{P})-p\alpha_{P}(v),\] _such that \(0\leq\beta_{P}(v)<p\). We define_ \[H_{P}:=\{y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in W_{P}\mid\beta_{P}(v)\leq a_{1}+pa_{2}\}.\] _For \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H_{P}\), we define the associated key term \(\kappa(\mathcal{C}(\omega))\) as follows:_ \[\kappa(\mathcal{C}(\omega)):=\begin{cases}y_{2}^{a_{2}}y_{1}^{a_{1}-\beta_{P}(v)}x_{P}^{v-d_{P}\alpha_{P}(v)}dx&\text{if }a_{1}\geq\beta_{P}(v)\\ y_{2}^{a_{2}-1}y_{1}^{a_{1}-\beta_{P}(v)+p}x_{P}^{v-d_{P}\alpha_{P}(v)}dx&\text{if }a_{1}<\beta_{P}(v).\end{cases} \tag{3.9}\]

Note that \(v-d_{P}\alpha_{P}(v)\) cannot be negative. In fact, it is straightforward to verify that \(\kappa(\mathcal{C}(\omega))\in W_{P}\) for each \(\omega\in H_{P}\). Our goal is to show that in certain cases the coefficient of \(\kappa(\mathcal{C}(\omega))\) is non-zero in \(\mathcal{C}(\omega)\). The following lemma gives a formula for this coefficient.

**Lemma 3.3**.: _Let \(\omega=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H_{P}\)._
_Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega)\) is_ \[c_{\omega}:=\begin{cases}(-1)^{\beta_{P}(v)}u_{P}^{\beta_{P}(v)/p}\binom{a_{1}}{\beta_{P}(v)}&\text{if }a_{1}\geq\beta_{P}(v)\\ u_{P}^{\beta_{P}(v)/p}\sum_{i=\beta_{P}(v)-a_{1}}^{p-1-a_{1}}(-1)^{\beta_{P}(v)+i+1}\frac{(p-1)!}{i!(p-i)!}\binom{a_{1}+i}{\beta_{P}(v)}&\text{if }a_{1}<\beta_{P}(v).\end{cases} \tag{3.10}\]

Proof.: By Equation (3.8), we obtain \[\mathcal{C}(\omega)=\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{j}\mathcal{C}(g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx).\] We split the proof into the two cases of Definition 3.2: \(a_{1}\geq\beta_{P}(v)\) and \(a_{1}<\beta_{P}(v)\).

First, suppose \(a_{1}\geq\beta_{P}(v)\). To obtain the exponent of \(y_{2}\) in \(\kappa(\mathcal{C}(\omega))\), we must specialize to \(j=a_{2}\) in Equation (3.8), yielding the term \[y_{2}^{a_{2}}\mathcal{C}(y_{1}^{a_{1}}x_{P}^{v}dx)=\sum_{m=0}^{a_{1}}(-1)^{m}\binom{a_{1}}{m}y_{2}^{a_{2}}y_{1}^{a_{1}-m}\mathcal{C}(f(x)^{m}x_{P}^{v}dx).\] For the right exponent of \(y_{1}\), we specialize to \(m=\beta_{P}(v)\). Then the leading term of \(f_{P}(x_{P})\) (recall Equation (3.4)) gives the term \[\mathcal{C}(u_{P}^{\beta_{P}(v)}x_{P}^{d_{P}\beta_{P}(v)+v}dx)=u_{P}^{\beta_{P}(v)/p}\mathcal{C}(x_{P}^{d_{P}(\gamma_{P}(v-\epsilon_{P})-p\alpha_{P}(v))+v}dx)=u_{P}^{\beta_{P}(v)/p}\mathcal{C}(x_{P}^{(p-1)(v-\epsilon_{P})+v-pd_{P}\alpha_{P}(v)}dx)=u_{P}^{\beta_{P}(v)/p}\mathcal{C}(x_{P}^{p(v-d_{P}\alpha_{P}(v)-\epsilon_{P})+\epsilon_{P}}dx)=u_{P}^{\beta_{P}(v)/p}x_{P}^{v-d_{P}\alpha_{P}(v)}dx.\] This term is exactly \(\kappa(\mathcal{C}(\omega))\), and it is clear that this is the only contribution in \(\mathcal{C}(\omega)\) to \(\kappa(\mathcal{C}(\omega))\). Therefore the coefficient of \(\kappa(\mathcal{C}(\omega))\) in this case is \[(-1)^{\beta_{P}(v)}u_{P}^{\beta_{P}(v)/p}\binom{a_{1}}{\beta_{P}(v)}.\]

Now suppose \(a_{1}<\beta_{P}(v)\). In that case we must set \(j=a_{2}-1\) in Equation (3.8) to obtain the right exponent of \(y_{2}\). Note that \(a_{2}\geq 1\), since \(\omega\) is an element of \(H_{P}\). Now that \(j=a_{2}-1\) has been specified, \(l\) can still be \(0\) or \(1\), giving two terms: \[y_{2}^{a_{2}-1}\left(-\mathcal{C}(y_{1}^{a_{1}}h(x)x_{P}^{v}dx)-\mathcal{C}(g(y_{1})y_{1}^{a_{1}}x_{P}^{v}dx)\right).\] Since \(a_{1}<a_{1}-\beta_{P}(v)+p\) and \(\mathcal{C}\) cannot increase the exponent of \(y_{1}\), the first term cannot contribute to \(\kappa(\mathcal{C}(\omega))\). For the second term, recall Equation (3.3) for \(g(y_{1})\). We substitute to get \[\sum_{i=1}^{p-1}(-1)^{i+1}\frac{(p-1)!}{i!(p-i)!}\mathcal{C}(y_{1}^{p(p-i)+i+a_{1}}x_{P}^{v}dx)=\sum_{i=1}^{p-1}(-1)^{i+1}\frac{(p-1)!}{i!(p-i)!}y_{1}^{p-i}\mathcal{C}(y_{1}^{i+a_{1}}x_{P}^{v}dx)=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\sum_{m=0}^{i+a_{1}}(-1)^{m+i+1}\binom{a_{1}+i}{m}y_{1}^{p-m+a_{1}}\mathcal{C}(f(x)^{m}x_{P}^{v}dx).\] For the exponent of \(y_{1}\), we must set \(m=\beta_{P}(v)\). For the binomial coefficients to be non-zero, we must have \(\beta_{P}(v)-a_{1}\leq i\leq p-1-a_{1}\).
This choice of \(m\) and the leading term of \(f_{P}(x_{P})\) yields the term \[\sum_{i=\beta_{P}(v)-a_{1}}^{p-1-a_{1}}\frac{(p-1)!}{i!(p-i)!}(-1)^{\beta_{P}(v)+i+1}\binom{a_{1}+i}{\beta_{P}(v)}y_{2}^{a_{2}-1}y_{1}^{a_{1}-\beta_{P}(v)+p}\mathcal{C}(u_{P}^{\beta_{P}(v)}x_{P}^{d_{P}\beta_{P}(v)+v}dx).\] As before, \(\mathcal{C}(x_{P}^{d_{P}\beta_{P}(v)+v}dx)=x_{P}^{v-d_{P}\alpha_{P}(v)}dx\), which finishes the proof that this is a contribution to \(\kappa(\mathcal{C}(\omega))\). Again, one goes through the same steps to show that there is no other contribution to \(\kappa(\mathcal{C}(\omega))\).

To make future calculations easier, we give a sufficient criterion for \(c_{\omega}\) to be non-zero.

**Lemma 3.4**.: _Let \(P\in B_{1}\) and let \(\omega=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\) be a differential in \(H_{P}\). If \(\beta_{P}(v)\leq a_{1}\) or \(\beta_{P}(v)\geq p-2\), then \(c_{\omega}\) is non-zero._

Proof.: If \(\beta_{P}(v)\leq a_{1}\), it is clear from Lemma 3.3 that \(c_{\omega}\neq 0\). If \(\beta_{P}(v)=p-1\), then \(c_{\omega}\) is a sum consisting of only one non-zero term, so again \(c_{\omega}\neq 0\). In the case \(\beta_{P}(v)=p-2\), \(c_{\omega}\) is a sum with two terms. To be precise, \(c_{\omega}\) vanishes if and only if \[\sum_{i=p-2-a_{1}}^{p-1-a_{1}}\frac{(-1)^{i}(a_{1}+i)!}{i!(p-i)!\beta_{P}(v)!(a_{1}+i-\beta_{P}(v))!}=0.\] This sum vanishes precisely when its two terms are each other's negatives: \[\frac{(p-2)!}{(p-2-a_{1})!(a_{1}+2)!\beta_{P}(v)!(p-2-\beta_{P}(v))!}=\frac{(p-1)!}{(p-1-a_{1})!(a_{1}+1)!\beta_{P}(v)!(p-1-\beta_{P}(v))!},\] whence \[(p-2)!(p-1-a_{1})!(a_{1}+1)!\beta_{P}(v)!(p-1-\beta_{P}(v))!=(p-1)!(p-2-a_{1})!(a_{1}+2)!\beta_{P}(v)!(p-2-\beta_{P}(v))!.\] Using the assumption \(\beta_{P}(v)=p-2\) yields \[(p-2-\beta_{P}(v))!=(p-1-\beta_{P}(v))!=1.\] Then dividing both sides by \((p-2)!(p-2-a_{1})!(a_{1}+1)!\beta_{P}(v)!\) gives \[p-1-a_{1}=(p-1)(a_{1}+2),\] and reducing modulo \(p\) gives \(-a_{1}-1\equiv-(a_{1}+2)\), i.e. \(-1\equiv-2\pmod{p}\). From this contradiction (recall \(p>2\)) we infer that \(c_{\omega}\neq 0\) in the case \(\beta_{P}(v)=p-2\).

### Key terms at poles of \(h\)

Let \(B_{2}\subset\mathbb{P}^{1}\) denote the branch locus of the cover \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\). We consider a point \(P\in B_{2}\setminus B_{1}\). This implies that \(f\) is regular at \(P\) but \(h\) has a pole at \(P\). We denote the order of this pole by \(e_{P}\). By minimality, we have \(e_{P}\mid p-1\), and we define \(\delta_{P}=\frac{p-1}{e_{P}}\). Proposition 2.12 yields the set \[W_{P}:=\left\{y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\;\middle|\;0\leq a_{1},a_{2}<p,\ 0<pv\leq e_{P}(p-1-a_{2})+p-1\right\}. \tag{3.11}\] For a differential \(\omega=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in W_{P}\), Lemma 3.1 holds, as its proof is independent of whether or not \(f\) has a pole at \(P\). This allows us to define the key term of \(\omega\).

**Definition 3.5**.: _Let \(\alpha_{P}(v):=\left\lfloor\frac{\delta_{P}(v-1)}{p}\right\rfloor\) and \(\beta_{P}(v):=\delta_{P}(v-1)-p\alpha_{P}(v)\). Define_ \[H_{P}:=\{y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in W_{P}\mid\beta_{P}(v)\leq a_{2}\}.\] _For \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H_{P}\) we define the associated key term \(\kappa(\mathcal{C}(\omega))\) as_ \[\kappa(\mathcal{C}(\omega)):=y_{2}^{a_{2}-\beta_{P}(v)}y_{1}^{a_{1}}x_{P}^{v-e_{P}\alpha_{P}(v)}dx.\]

**Lemma 3.6**.: _Let \(\omega=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H_{P}\)._
_Then the coefficient \(c_{\omega}\) of \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega)\) is non-zero._

Proof.: Recall Equation (3.8): \[\mathcal{C}(\omega)=\sum_{0\leq j+l\leq a_{2}}(-1)^{a_{2}-j}\binom{a_{2}}{j\,l}y_{2}^{j}\mathcal{C}(g(y_{1})^{l}h(x)^{a_{2}-j-l}y_{1}^{a_{1}}x_{P}^{v}dx).\] In order to obtain the exponent \(a_{2}-\beta_{P}(v)\) of \(y_{2}\), we must pick \(j=a_{2}-\beta_{P}(v)\). Picking \(l=0\) results in a term \[y_{2}^{a_{2}-\beta_{P}(v)}\mathcal{C}(y_{1}^{a_{1}}h(x)^{\beta_{P}(v)}x_{P}^{v}dx)=\sum_{m=0}^{a_{1}}(-1)^{m}\binom{a_{1}}{m}y_{2}^{a_{2}-\beta_{P}(v)}y_{1}^{a_{1}-m}\mathcal{C}(h(x)^{\beta_{P}(v)}f(x)^{m}x_{P}^{v}dx).\] We pick \(m=0\). The differential \(\mathcal{C}(h(x)^{\beta_{P}(v)}x_{P}^{v}dx)\) has a non-zero term \[\mathcal{C}(x_{P}^{e_{P}\beta_{P}(v)+v}dx)=\mathcal{C}(x_{P}^{e_{P}(\delta_{P}(v-1)-p\alpha_{P}(v))+v}dx)=x_{P}^{v-e_{P}\alpha_{P}(v)}dx.\] We do not get such a term if we pick \(l>0\). This shows that there is a single non-zero contribution to the coefficient of \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega)\), implying that the coefficient is non-zero.

### An upper bound on the \(a\)-number

We now use the machinery of key terms to provide a conditional upper bound on the \(a\)-number \(a_{Y_{2}}\). We do this by showing that the Cartier operator is injective when restricted to a suitable subspace of \(\mathrm{H}^{0}(Y_{2},\Omega_{Y_{2}}^{1})\), whose dimension bounds the rank of the Cartier operator from below. First we define \(W:=\bigcup_{P\in B_{2}}W_{P}\), such that \(W\) is a basis of \(\mathrm{H}^{0}(Y_{2},\Omega_{Y_{2}}^{1})\). Let \(H:=\bigcup_{P\in B_{2}}H_{P}\) be the subset of \(W\) consisting of differentials that have a key term. Finally, let \(K\) be the set of key terms. As opposed to the case described in [11], it is now possible for two differentials to have the same key term. The following lemma describes exactly when this happens.

**Lemma 3.7**.: _If two differentials \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H_{P}\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{Q}^{w}dx\in H_{Q}\) have the same key term, then \(P=Q\). In the case \(P\in B_{1}\), \(\omega\) and \(\omega^{\prime}\) have the same key term if and only if the following equations are satisfied:_ \[v+d_{P}(a_{1}+pa_{2})=w+d_{P}(b_{1}+pb_{2}), \tag{3.12}\] \[v-d_{P}\alpha_{P}(v)=w-d_{P}\alpha_{P}(w). \tag{3.13}\] _In the case \(P\in B_{2}\setminus B_{1}\), \(\omega\) and \(\omega^{\prime}\) have the same key term if and only if the following equations are satisfied:_ \[v+e_{P}a_{2}=w+e_{P}b_{2}, \tag{3.14}\] \[a_{1}=b_{1}, \tag{3.15}\] \[v-e_{P}\alpha_{P}(v)=w-e_{P}\alpha_{P}(w). \tag{3.16}\]

Proof.: Assume \(\kappa(\mathcal{C}(\omega))=\kappa(\mathcal{C}(\omega^{\prime}))\). First we show that \(P=Q\). Note that if \(P\neq Q\), then the key terms of \(\omega\) and \(\omega^{\prime}\) must involve neither \(x_{P}\) nor \(x_{Q}\), so that \[v-d_{P}\alpha_{P}(v)=w-d_{Q}\alpha_{Q}(w)=0.\] However, for \(P\neq\infty\) we have \[v-d_{P}\alpha_{P}(v)=v-d_{P}\left\lfloor\frac{\gamma_{P}(v-1)}{p}\right\rfloor\geq v-\frac{d_{P}\gamma_{P}(v-1)}{p}=v-\frac{(p-1)(v-1)}{p}>0.\] This implies that \(P=Q=\infty\), contradicting the assumption that \(P\) and \(Q\) are distinct.

Assume \(P\in B_{1}\). We show that Equations (3.12) and (3.13) hold. Equating the exponents of \(x_{P}\) in \(\kappa(\mathcal{C}(\omega))\) and \(\kappa(\mathcal{C}(\omega^{\prime}))\) immediately gives Equation (3.13), so the rest of the proof revolves around showing Equation (3.12).
By equating the exponents of \(y_{2}\) in the key terms, we obtain \(b_{2}\in\{a_{2}-1,a_{2},a_{2}+1\}\). Without loss of generality, we assume \(b_{2}\geq a_{2}\), so that only the cases \(b_{2}=a_{2}\) and \(b_{2}=a_{2}+1\) remain. In the case \(b_{2}=a_{2}\), equating the exponents of \(x_{P}\) in the key terms gives \[v=w+d_{P}(\alpha_{P}(v)-\alpha_{P}(w)). \tag{3.17}\] Substituting this into the equation for the exponents of \(y_{1}\) yields \[a_{1}-\beta_{P}(v)=b_{1}-\beta_{P}(w)\] \[a_{1}-\gamma_{P}(v-\epsilon_{P})+p\alpha_{P}(v)=b_{1}-\gamma_{P}(w-\epsilon_{P})+p\alpha_{P}(w)\] \[a_{1}-\gamma_{P}(w+d_{P}(\alpha_{P}(v)-\alpha_{P}(w))-\epsilon_{P})+p\alpha_{P}(v)=b_{1}-\gamma_{P}(w-\epsilon_{P})+p\alpha_{P}(w)\] \[a_{1}-(p-1)(\alpha_{P}(v)-\alpha_{P}(w))+p\alpha_{P}(v)=b_{1}+p\alpha_{P}(w)\] \[\alpha_{P}(v)-\alpha_{P}(w)=b_{1}-a_{1}.\] Substituting this back into Equation (3.17) then gives \(v=w+d_{P}(b_{1}-a_{1})\) and hence \(v+d_{P}a_{1}=w+d_{P}b_{1}\), as desired.

On the other hand, suppose \(b_{2}=a_{2}+1\). In that case, Equation (3.17) still follows from equating the exponents of \(x_{P}\). Equating the exponents of \(y_{1}\) yields \(a_{1}-\beta_{P}(v)=b_{1}-\beta_{P}(w)+p\). Following the same steps as in the case \(b_{2}=a_{2}\) gives \(\alpha_{P}(v)-\alpha_{P}(w)=b_{1}-a_{1}+p\) and hence \(v=w+d_{P}(b_{1}-a_{1}+p)\). Rearranging this and using \(b_{2}-a_{2}=1\) gives the desired Equation (3.12).

For the opposite direction, assume Equations (3.12) and (3.13) hold. By Equation (3.13), \(\kappa(\mathcal{C}(\omega))\) and \(\kappa(\mathcal{C}(\omega^{\prime}))\) have the same exponent of \(x_{P}\). Substituting \(v=w+d_{P}(\alpha_{P}(v)-\alpha_{P}(w))\) into Equation (3.12) yields \[\alpha_{P}(v)-\alpha_{P}(w)=b_{1}-a_{1}+p(b_{2}-a_{2}).\] This implies \[b_{1}-\beta_{P}(w)+p(b_{2}-a_{2})=a_{1}+\alpha_{P}(v)-\alpha_{P}(w)-\beta_{P}(w)=a_{1}+\alpha_{P}(v)-\alpha_{P}(w)-\gamma_{P}(w-\epsilon_{P})+p\alpha_{P}(w)=a_{1}+\alpha_{P}(v)-\alpha_{P}(w)-\gamma_{P}(v+d_{P}(\alpha_{P}(w)-\alpha_{P}(v))-\epsilon_{P})+p\alpha_{P}(w)=a_{1}+\alpha_{P}(v)-\alpha_{P}(w)-\gamma_{P}(v-\epsilon_{P})-(p-1)(\alpha_{P}(w)-\alpha_{P}(v))+p\alpha_{P}(w)=a_{1}-\gamma_{P}(v-\epsilon_{P})+p\alpha_{P}(v)=a_{1}-\beta_{P}(v).\] Using this, one verifies that in all cases for \(b_{2}\), the exponents of \(y_{1}\) and \(y_{2}\) in \(\kappa(\mathcal{C}(\omega))\) and \(\kappa(\mathcal{C}(\omega^{\prime}))\) agree. Thus \(\omega\) and \(\omega^{\prime}\) have the same key term.

We now consider the case \(P\in B_{2}\setminus B_{1}\). Equation (3.16) follows immediately from equating the exponents of \(x_{P}\) in the key terms. Similarly, Equation (3.15) follows from equating the exponents of \(y_{1}\). For Equation (3.14), we write \(v=w+e_{P}r\), so that \(\alpha_{P}(v)=\alpha_{P}(w)+r\) by Equation (3.16). By equating the exponents of \(y_{2}\) in the key terms, we obtain \[a_{2}-b_{2}=\beta_{P}(v)-\beta_{P}(w)=\delta_{P}(v-1)-p\alpha_{P}(v)-(\delta_{P}(w-1)-p\alpha_{P}(w))=\delta_{P}(v-w)-p(\alpha_{P}(v)-\alpha_{P}(w))=\delta_{P}e_{P}r-pr=-r.\] Thus Equation (3.14) also holds.

For the opposite direction, assume Equations (3.14), (3.15) and (3.16) hold. Then we only need to show that \(y_{2}\) has the same exponent in both key terms. This is achieved by writing \(a_{2}=b_{2}-r\) and \(v=w+e_{P}r\). Then a calculation similar to the one above shows \[a_{2}-\beta_{P}(v)=b_{2}-\beta_{P}(w),\] as desired.
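To make the key-term machinery concrete, here is a small worked check in the paper's own notation; the values \(p=3\), \(d_{P}=2\) and the differential below are chosen purely for illustration and do not come from a specific cover considered later. Let \(P\in B_{1}\) be a finite branch point with \(d_{P}=2\), so that \(\gamma_{P}=1\) and \(\epsilon_{P}=1\), and take \(\omega=y_{1}x_{P}^{2}dx\), i.e. \(a_{2}=0\), \(a_{1}=1\), \(v=2\). The bound in (3.6) reads \(0<9\cdot 2\leq 3\cdot 2\cdot 1+2\cdot 7\cdot 2+8=42\), so \(\omega\in W_{P}\). We compute \[\alpha_{P}(2)=\left\lfloor\frac{1\cdot(2-1)}{3}\right\rfloor=0,\qquad\beta_{P}(2)=1\cdot(2-1)-3\cdot 0=1\leq a_{1}+pa_{2}=1,\] so \(\omega\in H_{P}\), and since \(a_{1}\geq\beta_{P}(v)\), Definition 3.2 gives \(\kappa(\mathcal{C}(\omega))=x_{P}^{2}dx\) with coefficient \(c_{\omega}=(-1)^{1}u_{P}^{1/3}\binom{1}{1}=-u_{P}^{1/3}\) by Lemma 3.3. This can be checked directly: substituting \(y_{1}=y_{1}^{3}-f(x)\) gives \(\mathcal{C}(y_{1}x_{P}^{2}dx)=y_{1}\mathcal{C}(x_{P}^{2}dx)-\mathcal{C}(f(x)x_{P}^{2}dx)\); the first term vanishes since \(2\not\equiv 1\pmod{3}\), while the leading term \(u_{P}x_{P}^{2}\) of \(f_{P}\) contributes \(-\mathcal{C}(u_{P}x_{P}^{4}dx)=-u_{P}^{1/3}x_{P}^{2}dx\), and the lower-order terms of \(f\) contribute nothing to \(x_{P}^{2}dx\).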
To display a minor of the Cartier-Manin matrix in a convenient form, we define a partial order on \(W\). We first order the branch points in such a way that \(\infty\) is the smallest. **Definition 3.8**.: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{Q}^{w}dx\). We say \(\omega^{\prime}\prec\omega\) in any of the following cases:_
* _(i)_ \(Q\neq P\) _and_ \(b_{1}+pb_{2}<a_{1}+pa_{2}\)_;_
* _(ii)_ \(Q<P\) _and_ \(b_{1}+pb_{2}=a_{1}+pa_{2}\)_;_
* _(iii)_ \(Q=P\in B_{1}\) _and_ \(w+d_{P}(b_{1}+pb_{2})<v+d_{P}(a_{1}+pa_{2})\)_;_
* _(iv)_ \(Q=P\in B_{1}\) _and_ \(w+d_{P}(b_{1}+pb_{2})=v+d_{P}(a_{1}+pa_{2})\) _and_ \(w-d_{P}\alpha_{P}(w)>v-d_{P}\alpha_{P}(v)\)_;_
* _(v)_ \(Q=P\in B_{2}\setminus B_{1}\) _and_ \(w+e_{P}b_{2}<v+e_{P}a_{2}\)_;_
* _(vi)_ \(Q=P\in B_{2}\setminus B_{1}\) _and_ \(w+e_{P}b_{2}=v+e_{P}a_{2}\) _and_ \(b_{1}<a_{1}\)_;_
* _(vii)_ \(Q=P\in B_{2}\setminus B_{1}\) _and_ \(w+e_{P}b_{2}=v+e_{P}a_{2}\) _and_ \(b_{1}=a_{1}\) _and_ \(w-e_{P}\alpha_{P}(w)>v-e_{P}\alpha_{P}(v)\)_._

_Remark 3.9_.: By comparing Definition 3.8 to Lemma 3.7, note that two differentials \(\omega\) and \(\omega^{\prime}\) are incomparable by the order \(\prec\) if and only if they have the same key term. The goal of introducing the partial order \(\prec\) is to prove the following lemma. **Lemma 3.10**.: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{Q}^{w}dx\in W\) with \(\omega^{\prime}\prec\omega\). Then the coefficient of the key term \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ In order to prove Lemma 3.10, we prove seven lemmas, corresponding to the cases of Definition 3.8. In each case, we have \[\mathcal{C}(\omega^{\prime})=\sum_{0\leq j+l\leq b_{2}}(-1)^{b_{2}-j}\binom{b_{2}}{j\,l}y_{2}^{j}\mathcal{C}(g(y_{1})^{l}h(x)^{b_{2}-j-l}y_{1}^{b_{1}}x_{Q}^{w}dx). \tag{3.18}\] Recall also the formula for the key term \(\kappa(\mathcal{C}(\omega))\): \[\kappa(\mathcal{C}(\omega)):=\begin{cases}y_{2}^{a_{2}}y_{1}^{a_{1}-\beta_{P}(v)}x_{P}^{v-d_{P}\alpha_{P}(v)}dx&\text{if }\,P\in B_{1}\,\text{ and }\,a_{1}\geq\beta_{P}(v)\\ y_{2}^{a_{2}-1}y_{1}^{a_{1}-\beta_{P}(v)+p}x_{P}^{v-d_{P}\alpha_{P}(v)}dx&\text{if }\,P\in B_{1}\,\text{ and }\,a_{1}<\beta_{P}(v)\\ y_{2}^{a_{2}-\beta_{P}(v)}y_{1}^{a_{1}}x_{P}^{v-e_{P}\alpha_{P}(v)}dx&\text{if }\,P\in B_{2}\setminus B_{1}.\end{cases} \tag{3.19}\] **Lemma 3.11** (Case (i) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{Q}^{w}dx\in W\) with \(Q\neq P\) and \(b_{1}+pb_{2}<a_{1}+pa_{2}\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: Since \(0\leq a_{1},b_{1}<p\), the assumption implies \(b_{2}\leq a_{2}\). First assume that \(P\in B_{1}\). Then it is clear from Equation (3.18) and Equation (3.19) that we are done in the case \(b_{2}<a_{2}-1\). In the case \(b_{2}=a_{2}-1\), we must have \(a_{1}<\beta_{P}(v)\). In Equation (3.18), we need to specialize to \(j=a_{2}-1\) and \(l=0\) to get the right exponent of \(y_{2}\). This leaves us with a term \[y_{2}^{a_{2}-1}\mathcal{C}(y_{1}^{b_{1}}x_{Q}^{w}dx)=\sum_{m=0}^{b_{1}}(-1)^{m}\binom{b_{1}}{m}y_{2}^{a_{2}-1}y_{1}^{b_{1}-m}\mathcal{C}(f(x)^{m}x_{Q}^{w}dx).\] To get the right exponent of \(y_{1}\), we must pick \(m=b_{1}-a_{1}+\beta_{P}(v)-p\). 
Then the differential \(f(x)^{b_{1}-a_{1}+\beta_{P}(v)-p}x_{Q}^{w}dx\) has a pole of order at most \(d_{P}(b_{1}-a_{1}+\beta_{P}(v)-p)\) at \(P\). However, a pole of order \(d_{P}\beta_{P}(v)+v\) is needed to produce the exponent \(v-d_{P}\alpha_{P}(v)\) of \(x_{P}\). Since \(b_{1}-a_{1}-p<0\), it follows that this exponent is not attained and hence the coefficient of \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega^{\prime})\) is zero. For \(b_{2}=a_{2}\) the proof is similar. In the case \(a_{1}\geq\beta_{P}(v)\), we must pick \(j=a_{2}\) and \(l=0\) in Equation (3.18), yielding the term \[y_{2}^{a_{2}}\mathcal{C}(y_{1}^{b_{1}}x_{Q}^{w}dx)=\sum_{m=0}^{b_{1}}(-1)^{m}\binom{b_{1}}{m}y_{2}^{a_{2}}y_{1}^{b_{1}-m}\mathcal{C}(f(x)^{m}x_{Q}^{w}dx).\] In order to achieve the right exponent of \(y_{1}\), we must specialize to \(m=b_{1}-a_{1}+\beta_{P}(v)\). Since \(\omega^{\prime}\prec\omega\), we have \(b_{1}<a_{1}\) and it follows that the right exponent of \(x_{P}\) is not achieved. In the case \(a_{1}<\beta_{P}(v)\), we must specialize to \(j=a_{2}-1\) in Equation (3.18). The possibilities \(l=0\) and \(l=1\) result in the terms \(y_{2}^{a_{2}-1}\mathcal{C}(g(y_{1})y_{1}^{b_{1}}x_{Q}^{w}dx)\) and \(y_{2}^{a_{2}-1}\mathcal{C}(y_{1}^{b_{1}}h(x)x_{Q}^{w}dx).\) The second term can never attain the right exponent of \(y_{1}\), since \(b_{1}<a_{1}\). On the other hand, for the first term we substitute the definition of \(g(y_{1})\) to obtain \[\mathcal{C}(g(y_{1})y_{1}^{b_{1}}x_{Q}^{w}dx) =\sum_{i=1}^{p-1}(-1)^{i}\frac{(p-1)!}{i!(p-i)!}y_{1}^{p-i}\mathcal{C}(y_{1}^{b_{1}+i}x_{Q}^{w}dx)\] \[=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\sum_{m=0}^{b_{1}+i}(-1)^{m+i+1}\binom{b_{1}+i}{m}y_{1}^{p-m+b_{1}}\mathcal{C}(f(x)^{m}x_{Q}^{w}dx).\] For any \(i\), we must pick \(m=b_{1}-a_{1}+\beta_{P}(v)\) to obtain the right exponent of \(y_{1}\). Then the differential \(f(x)^{b_{1}-a_{1}+\beta_{P}(v)}x_{Q}^{w}dx\) has a pole of order at most \(d_{P}(b_{1}-a_{1}+\beta_{P}(v))\) at \(P\). Since \(b_{1}<a_{1}\), this is smaller than the required pole order \(d_{P}\beta_{P}(v)+v\) to obtain the right exponent of \(x_{P}\). Hence the coefficient of \(\kappa(\mathcal{C}(\omega))\) is again zero in \(\mathcal{C}(\omega^{\prime})\). Now, assume that \(P\in B_{2}\setminus B_{1}\). In Equation (3.18), we must pick \(j=a_{2}-\beta_{P}(v)\). In the case \(b_{1}<a_{1}\), the required exponent of \(y_{1}\) can never be obtained. On the other hand, in the case \(b_{2}<a_{2}\), the required exponent \(\beta_{P}(v)\) of \(h(x)\) cannot be obtained. In both cases, \(\mathcal{C}(\omega^{\prime})\) cannot have a non-zero term \(y_{2}^{a_{2}-\beta_{P}(v)}y_{1}^{a_{1}}x_{P}^{v-e_{P}\alpha_{P}(v)}dx\), which finishes the proof. **Lemma 3.12** (Case (ii) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{Q}^{w}dx\in W\) with \(Q<P\) and \(b_{1}+pb_{2}=a_{1}+pa_{2}\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: The proof is similar to the proof of Lemma 3.11. Note that we have \(b_{2}=a_{2}\) and \(b_{1}=a_{1}\). First suppose \(P\in B_{1}\). After attaining the right exponents of \(y_{2}\) and \(y_{1}\), we are left with a differential on \(\mathbb{P}^{1}\) that has a pole of order at most \(d_{P}\beta_{P}(v)\) at \(P\), whereas a pole of order \(d_{P}\beta_{P}(v)+v\) is needed. 
Then, it follows from the assumption \(Q<P\) that \(P\neq\infty\) (since \(\infty\) is the smallest branch point in our ordering) and therefore \(v>0\). Thus the required pole order is not attained and therefore the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\). Similarly, suppose \(P\in B_{2}\setminus B_{1}\). In Equation (3.18), we must choose \(j=a_{2}-\beta_{P}(v)\), such that the resulting differential on \(\mathbb{P}^{1}\) has a pole of order at most \(e_{P}\beta_{P}(v)\), whereas a pole of order \(e_{P}\beta_{P}(v)+v\) is needed. Again, noting \(v>0\) finishes the proof. **Lemma 3.13** (Case (iii) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{P}^{w}dx\in W\) with \(P\in B_{1}\) and_ \[w+d_{P}(b_{1}+pb_{2})<v+d_{P}(a_{1}+pa_{2}). \tag{3.20}\] _Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: The same principle as before is used: we assume there is a term in \(\mathcal{C}(\omega^{\prime})\) that contributes to \(\kappa(\mathcal{C}(\omega))\). We first specialize to the terms that have the required exponent of \(y_{2}\) and \(y_{1}\). Then we are left with a differential on \(\mathbb{P}^{1}\) whose pole order at \(P\) is too small to give the required exponent of \(x_{P}\). Equation (3.20) and Equation (3.7) together imply that \(b_{2}\leq a_{2}+1\). Moreover, by Equations (3.18) and (3.19) it follows that we are done if \(b_{2}<a_{2}-1\). We split the rest of the proof up into the two possibilities for \(\kappa(\mathcal{C}(\omega))\): either \(a_{1}\geq\beta_{P}(v)\) or \(a_{1}<\beta_{P}(v)\). In the case \(a_{1}\geq\beta_{P}(v)\), we have \[\kappa(\mathcal{C}(\omega))=y_{2}^{a_{2}}y_{1}^{a_{1}-\beta_{P}(v)}x_{P}^{v-d_{P}\alpha_{P}(v)}dx.\] This forces \(b_{2}\geq a_{2}\), so \(b_{2}\) equals either \(a_{2}\) or \(a_{2}+1\). In the case \(b_{2}=a_{2}\), possible contributions to \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega^{\prime})\) come from setting \(j=a_{2}\) and \(l=0\) in Equation (3.18). This leaves the term \[y_{2}^{a_{2}}\mathcal{C}(y_{1}^{b_{1}}x_{P}^{w}dx)=\sum_{m=0}^{b_{1}}(-1)^{m}\binom{b_{1}}{m}y_{2}^{a_{2}}y_{1}^{b_{1}-m}\mathcal{C}(f(x)^{m}x_{P}^{w}dx).\] For the right exponent of \(y_{1}\), we need \(m=b_{1}-a_{1}+\beta_{P}(v)\). This results in the term \(\mathcal{C}(f(x)^{b_{1}-a_{1}+\beta_{P}(v)}x_{P}^{w}dx)\), which has a pole of order \(d_{P}(b_{1}-a_{1}+\beta_{P}(v))+w\), which is smaller than the required \(d_{P}\beta_{P}(v)+v\) by Equation (3.20). The case \(b_{2}=a_{2}+1\) is handled similarly. In Equation (3.18), possible contributions come from \(j=a_{2}\) and \(l\in\{0,1\}\). In both cases for \(l\), the resulting differential \(f(x)^{m}x_{P}^{w}dx\) does not have sufficiently large pole order at \(P\) to contribute to \(\kappa(\mathcal{C}(\omega))\). In the case \(a_{1}<\beta_{P}(v)\), we have \[\kappa(\mathcal{C}(\omega))=y_{2}^{a_{2}-1}y_{1}^{a_{1}-\beta_{P}(v)+p}x_{P}^{v-d_{P}\alpha_{P}(v)}dx.\] This leaves three possibilities for \(b_{2}\): \(a_{2}-1\), \(a_{2}\) and \(a_{2}+1\). We treat only the case \(b_{2}=a_{2}\), as the other cases are similar. In Equation (3.18) we must specialize to \(j=a_{2}-1\). 
The possibilities \(l=0\) and \(l=1\) give the two terms \(y_{2}^{a_{2}-1}\mathcal{C}(g(y_{1})y_{1}^{b_{1}}x_{P}^{w}dx)\) and \(y_{2}^{a_{2}-1}\mathcal{C}(y_{1}^{b_{1}}h(x)x_{P}^{w}dx).\) For the first term, we expand \[\mathcal{C}(g(y_{1})y_{1}^{b_{1}}x_{P}^{w}dx) =\sum_{i=1}^{p-1}(-1)^{i}\frac{(p-1)!}{i!(p-i)!}y_{1}^{p-i} \mathcal{C}(y_{1}^{b_{1}+i}x_{P}^{w}dx)\] \[=\sum_{i=1}^{p-1}\frac{(p-1)!}{i!(p-i)!}\sum_{m=0}^{b_{1}+i}(-1)^ {m+i+1}\binom{b_{1}+i}{m}y_{1}^{p-m+b_{1}}\mathcal{C}(f(x)^{m}x_{P}^{w}dx).\] To obtain the right exponent of \(y_{1}\), setting \(m=b_{1}-a_{1}+\beta_{P}(v)\) is needed. Then Equation (3.20) implies that \(\mathcal{C}(f(x)^{m}x_{P}^{w}dx)\) does not have a sufficiently large pole order at \(P\). For the second term, we expand \[\mathcal{C}(y_{1}^{b_{1}}h(x)x_{P}^{w}dx)=\sum_{m=1}^{b_{1}}(-1)^{i}\binom{b_ {1}}{m}y_{1}^{b_{1}-m}\mathcal{C}(h(x)f(x)^{m}x_{P}^{w}dx).\] The right exponent of \(y_{1}\) results only from \(m=b_{1}-a_{1}+\beta_{P}(v)-p\). Then since the pole order of \(h(x)\) at \(P\) is smaller than \(pd_{P}\) (this is implied by the assumption that the upper jumps are minimal), the differential \(h(x)f(x)^{m}x_{P}^{w}dx\) has pole order smaller than \(d_{P}(b_{1}-a_{1}+\beta_{P}(v))+w\). By Equation (3.20), this is smaller than the required \(v+d_{P}\beta_{P}(v)\). This implies that the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\). **Lemma 3.14** (Case (iv) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{P}^{w}dx\in W\) with \(P\in B_{1}\) and \(w+d_{P}(b_{1}+pb_{2})=v+d_{P}(a_{1}+pa_{2})\) and \(w-d_{P}\alpha_{P}(w)>v-d_{P}\alpha_{P}(v)\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: From the first condition, we infer \(w\equiv v\bmod d_{P}\), so we write \(w=v+ed_{P}\) and \(b_{1}+pb_{2}=a_{1}+pa_{2}-e\). By the second condition, we must have \[\alpha_{P}(w)=\alpha_{P}(v+ed_{P})=\alpha_{P}(v)+e-1.\] This implies \[\beta_{P}(w)=\beta_{P}(v+ed_{P})=\beta_{P}(v)-e+p.\] Note that this forces \(e>\beta_{P}(v)\). We have \[\omega^{\prime}=\begin{cases}y_{2}^{a_{2}}y_{1}^{a_{1}-e}x_{P}^{v+ed_{P}}dx& \text{if }a_{1}\geq e\\ y_{2}^{a_{2}-1}y_{1}^{a_{1}-e+p}x_{P}^{v+ed_{P}}dx&\text{if }a_{1}<e.\end{cases} \tag{3.21}\] We again consider the two possibilities determining the key term of \(\omega\): \(a_{1}\geq\beta_{P}(v)\) and \(a_{1}<\beta_{P}(v)\). We treat the case \(a_{1}\geq\beta_{P}(v)\) first, in which the key term is \[\kappa(\mathcal{C}(\omega))=y_{2}^{a_{2}}y_{1}^{a_{1}-\beta_{P}(v)}x_{P}^{v-d_{P }\alpha_{P}(v)}dx.\] To achieve this as a term in \(\mathcal{C}(\omega^{\prime})\), we need \(b_{2}\geq a_{2}\). Appealing to Equation (3.21), only the case \(a_{1}\geq e\) remains. In Equation (3.18) we need \(j=a_{2}\) and \(l=0\) to obtain the required exponent of \(y_{2}\). This gives the term \[y_{2}^{a_{2}}\mathcal{C}(y_{1}^{a_{1}-e}x_{P}^{v+ed_{P}}dx)=\sum_{m=0}^{a_{1}-e }(-1)^{m}\binom{a_{1}-e}{m}y_{2}^{a_{2}}y_{1}^{a_{1}-e-m}\mathcal{C}(f(x)^{m}x_ {P}^{w}dx).\] In order to obtain the exponent of \(y_{1}\), setting \(m=\beta_{P}(v)-e\) is required, but this is negative as we have ascertained \(e>\beta_{P}(v)\). Thus the right exponent of \(y_{1}\) cannot be achieved. On the other hand, assume \(a_{1}<\beta_{P}(v)\). Note that this implies \(a_{1}<e\), since \(\beta_{P}(v)<e\). By Equation (3.21), this determines \(\omega^{\prime}\). 
In Equation (3.18), we have to specialize to \(j=a_{2}-1\) and \(l=0\), yielding the term \[y_{2}^{a_{2}-1}\mathcal{C}(y_{1}^{a_{1}-e+p}x_{P}^{w}dx)=\sum_{m=0}^{a_{1}-e+p}(-1)^{m}\binom{a_{1}-e+p}{m}y_{2}^{a_{2}-1}y_{1}^{a_{1}-e+p-m}\mathcal{C}(f(x)^{m}x_{P}^{w}dx).\] Again the required exponent of \(y_{1}\) can only be attained if \(m=\beta_{P}(v)-e\), which is negative. **Lemma 3.15** (Case (v) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{P}^{w}dx\in W\) with \(P\in B_{2}\setminus B_{1}\) and \(w+e_{P}b_{2}<v+e_{P}a_{2}\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: In order to obtain the key term \(\kappa(\mathcal{C}(\omega))\), we have to take \(j=a_{2}-\beta_{P}(v)\). Then the exponent of \(h(x)\) can be at most \(b_{2}-a_{2}+\beta_{P}(v)\), making the pole order at \(P\) at most \(e_{P}(b_{2}-a_{2}+\beta_{P}(v))+w\), which by our assumption is smaller than the required \(e_{P}\beta_{P}(v)+v\). **Lemma 3.16** (Case (vi) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{P}^{w}dx\in W\) with \(P\in B_{2}\setminus B_{1}\), \(w+e_{P}b_{2}=v+e_{P}a_{2}\) and \(b_{1}<a_{1}\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: In Equation (3.18), we must take \(j=a_{2}-\beta_{P}(v)\) and \(l=0\) to get the right exponents of \(y_{2}\) and \(x_{P}\). Then the differential \[\mathcal{C}(y_{1}^{b_{1}}h(x)^{b_{2}-a_{2}+\beta_{P}(v)}x_{P}^{w}dx)\] cannot have a term in which the exponent of \(y_{1}\) is \(a_{1}\). This finishes the proof. **Lemma 3.17** (Case (vii) of Definition 3.8).: _Let \(\omega:=y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\in H\) and \(\omega^{\prime}:=y_{2}^{b_{2}}y_{1}^{b_{1}}x_{P}^{w}dx\in W\) with \(P\in B_{2}\setminus B_{1}\), \(w+e_{P}b_{2}=v+e_{P}a_{2}\), \(b_{1}=a_{1}\) and \(w-e_{P}\alpha_{P}(w)>v-e_{P}\alpha_{P}(v)\). Then the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\)._ Proof.: The proof is similar to the proof of Lemma 3.14. Write \(b_{2}=a_{2}-r\) and \(w=v+re_{P}\) for some integer \(r\). From the inequality, we infer \[\alpha_{P}(w) =\alpha_{P}(v+re_{P})=\alpha_{P}(v)+r-1\] \[\beta_{P}(w) =\beta_{P}(v+re_{P})=\beta_{P}(v)-r+p.\] This forces \(r>\beta_{P}(v)\). Then in Equation (3.18), we need to take \(j=a_{2}-\beta_{P}(v)\) in order to obtain the right exponent of \(y_{2}\). But this is impossible as it exceeds \(b_{2}=a_{2}-r\). Therefore there can be no contribution to \(\kappa(\mathcal{C}(\omega))\) in \(\mathcal{C}(\omega^{\prime})\). Lemma 3.10 now follows. Proof of Lemma 3.10.: We have treated all seven cases of \(\omega^{\prime}\prec\omega\) given in Definition 3.8 in Lemmas 3.11, 3.12, 3.13, 3.14, 3.15, 3.16 and 3.17. In every case, the coefficient of \(\kappa(\mathcal{C}(\omega))\) is zero in \(\mathcal{C}(\omega^{\prime})\). We now finally use key terms to prove a lower bound on the rank of the Cartier operator. Recall that \(K\) is the set of key terms and that \(H\subset W\) is the set of basis regular differentials for which key terms are defined. **Theorem 3.18**.: _Assume that for each \(\omega^{\prime}\in K\), there is an \(\omega\in H\) such that \(\kappa(\mathcal{C}(\omega))=\omega^{\prime}\) and \(c_{\omega}\neq 0\). 
Then we have \(\text{rk}(\mathcal{C})\geq\#K\)._ Proof.: By the assumptions, we can find a subset \(H^{\prime}\subseteq H\) such that every \(\omega^{\prime}\in K\) has a unique \(\omega\in H^{\prime}\) with the properties \(\kappa(\mathcal{C}(\omega))=\omega^{\prime}\) and \(c_{\omega}\neq 0\). Note that \(H^{\prime}\) and \(K\) have the same cardinality. By Lemma 3.7 and Remark 3.9, the partial order \(\prec\) is a linear order when restricted to \(H^{\prime}\). We restrict the Cartier operator to \(\text{span}_{k}H^{\prime}\) and then project to \(\text{span}_{k}K\): \[\mathcal{C}|_{H^{\prime}}:\text{span}_{k}H^{\prime}\to\text{span}_{k}K.\] By Lemma 3.10, the matrix representing \(\mathcal{C}|_{H^{\prime}}\) with respect to the basis \(H^{\prime}\) ordered by \(\prec\) has only zeros below the diagonal. Furthermore, by Lemma 3.3, the diagonal entries are \(c_{\omega}\) in the case \(P\in B_{1}\). These are non-zero by assumption. In the case \(P\in B_{2}\setminus B_{1}\), \(c_{\omega}\) is non-zero unconditionally by Lemma 3.6. Hence \(\mathcal{C}|_{H^{\prime}}\) is invertible, implying that \(\text{rk}(\mathcal{C})\geq\#K\). _Remark 3.19_.: Optimistically one might hope to prove the equality \(\text{im}(\mathcal{C})=\text{span}_{k}K\). However, both inclusions fail already in the case \(p=3\). See Remark 4.2. 

## 4. The case \(p=3\) 

In this section we use the machinery from the preceding sections to answer Question 2.15 for \(\mathbb{Z}/9\mathbb{Z}\)-covers of \(\mathbb{P}^{1}\) in characteristic \(3\). In Theorem 4.1, we prove that the upper bound on the \(a\)-number resulting from Theorem 3.18 is sharp for \(\mathbb{Z}/9\mathbb{Z}\)-covers of \(\mathbb{P}^{1}\). Let \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\) be such a cover, with branching datum \[\left[\begin{array}{cccccccc}3&\ldots&2&\ldots&0&\ldots&0&\ldots\\ 7&\ldots&4&\ldots&3&\ldots&2&\ldots\\ \end{array}\right]^{\intercal}.\] Recall the models \[Y_{1}:y_{1}^{3}-y_{1} =f(x) \tag{4.1}\] \[Y_{2}:y_{2}^{3}-y_{2} =g(y_{1})+h(x), \tag{4.2}\] where \[g(y_{1})=\sum_{i=1}^{2}(-1)^{i}\frac{2!}{i!(3-i)!}y_{1}^{3(3-i)+i}=-y_{1}^{7}+y_{1}^{5}.\] As before, let \(B_{1}\) be the branch locus of \(\phi_{1}:Y_{1}\to\mathbb{P}^{1}\). For \(P\in B_{1}\), let \(d_{P}\) be the unique break in the lower-numbering ramification filtration at \(P\). Minimality of the cover (see Definition 2.14) implies \(d_{P}|p-1\). We may assume \(f\) has a pole of order \(d_{P}\) at every \(P\in B_{1}\) and no poles elsewhere. By Theorem 2.4, the pole order of \(h\) at \(P\) can be assumed to be less than \(pd_{P}\). Without loss of generality, we assume \(\infty\in B_{1}\). Similarly, let \(B_{2}\) be the branch locus of the cover \(\phi_{2}:Y_{2}\to\mathbb{P}^{1}\). For \(P\in B_{2}\setminus B_{1}\), let \(e_{P}\) be the ramification jump of the cover \(Y_{2}\to Y_{1}\) at \(P\). By minimality, we have \(e_{P}|p-1\). In Theorem 4.1 we prove that the \(a\)-number of \(Y_{2}\) depends only on the ramification jumps \(d_{P}\) and \(e_{P}\). We do this by showing that the lower bound provided by Theorem 3.18 is sharp. We compute the basis of differentials \(W_{P}\) given in Equation (3.5) and Equation (3.6). This results in six cases. We distinguish between \(P=\infty\), \(P\in B_{1}\setminus\{\infty\}\) and \(P\in B_{2}\setminus B_{1}\). Moreover, we distinguish different cases depending on whether the ramification jump (\(d_{P}\) or \(e_{P}\)) equals \(1\) or \(2\). These cases are treated in the following subsections. All of this information is combined to prove Theorem 4.1. 
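Each of the bases \(W_{P}\) below is cut out by an explicit linear inequality in \((a_{1},a_{2},v)\), so their sizes can be checked mechanically. The following Python sketch (an illustration added here; the inequalities are exactly the ones specialized in Sections 4.1-4.6 below) enumerates the six bases and recovers the cardinalities \(6\), \(14\), \(16\), \(24\), \(6\) and \(9\) that enter the count in the proof of Theorem 4.1.

```python
# Brute-force check of the sizes of the bases W_P for p = 3 (an added
# illustration; the inequalities are those quoted in Sections 4.1-4.6).

def count(bound, v_min):
    """#{(a1, a2, v) : 0 <= a1, a2 < 3, v >= v_min, 9v <= bound(a1, a2)}."""
    total = 0
    for a1 in range(3):
        for a2 in range(3):
            v = v_min
            while 9 * v <= bound(a1, a2):
                total += 1
                v += 1
    return total

# P = infinity (v >= 0 allowed), for d_infinity = 1 and d_infinity = 2:
w_inf1 = count(lambda a1, a2: 3*(2-a1) + 7*(2-a2) - 10, v_min=0)   # 6
w_inf2 = count(lambda a1, a2: 6*(2-a1) + 14*(2-a2) - 10, v_min=0)  # 16
# P in B_1 \ {infinity} (v > 0), for d_P = 1 and d_P = 2:
w_d1 = count(lambda a1, a2: 3*(2-a1) + 7*(2-a2) + 8, v_min=1)      # 14
w_d2 = count(lambda a1, a2: 6*(2-a1) + 14*(2-a2) + 8, v_min=1)     # 24
# P in B_2 \ B_1 (v > 0): bounds 3v <= 4 - a2 resp. 3v <= 6 - 2*a2,
# with a1 unconstrained (factor 3):
w_e1 = 3 * sum(1 for a2 in range(3) for v in range(1, 4) if 3*v <= 4 - a2)
w_e2 = 3 * sum(1 for a2 in range(3) for v in range(1, 4) if 3*v <= 6 - 2*a2)

print(w_inf1, w_d1, w_inf2, w_d2, w_e1, w_e2)  # prints: 6 14 16 24 6 9
```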
In the case \(p=3\), Lemma 3.4 guarantees that the coefficients \(c_{\omega}\) are non-zero. Note that by construction we have \(0\leq\beta_{P}(v)\leq 2\). In the case \(\beta_{P}(v)=0\), the condition \(\beta_{P}(v)\leq a_{1}\) is satisfied. Otherwise, the condition \(\beta_{P}(v)\geq p-2=1\) is satisfied, so Lemma 3.4 always applies. This is used throughout this section. 

### A pole of \(f\) of order \(1\) at infinity 

This section concerns the action of \(\mathcal{C}\) on \(W_{\infty}\), assuming \(f\) has a pole at \(\infty\) of order \(d_{\infty}=1\). The definition of \(W_{\infty}\), from Equation (3.5), becomes \[W_{\infty} =\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0\leq 9v\leq 3(2-a_{1})+7(2-a_{2})-10\end{Bmatrix}\] \[=\{dx,xdx,y_{1}dx,y_{1}^{2}dx,y_{2}dx,y_{2}y_{1}dx\}.\] We study the restricted Cartier operator \(\mathcal{C}_{W_{\infty}}:\text{span}_{k}W_{\infty}\to\mathrm{H}^{0}(Y_{2},\Omega^{1}_{Y_{2}})\), using machinery from Section 3. For each element of \(W_{\infty}\), we check whether it is in \(H_{\infty}\). For the differentials \(\omega\) that lie in \(H_{\infty}\), we compute the key term \(\kappa(\mathcal{C}(\omega))\) and the coefficient \(c_{\omega}\). This results in Table 1. There are three different key terms, which contribute \(3\) to the rank of \(\mathcal{C}\) via Theorem 3.18. In order to show later that the remaining elements of \(W_{\infty}\) do not contribute to the rank, we apply the Cartier operator to the three elements of \(W_{\infty}\setminus H_{\infty}\). \[\mathcal{C}(dx) =0\] \[\mathcal{C}(xdx) =0\] \[\mathcal{C}(y_{1}dx) =\mathcal{C}((y_{1}^{p}-f(x))dx)=y_{1}\mathcal{C}(dx)-\mathcal{C}(f(x)dx)\] \[=-\sum_{Q\in B_{1}}\mathcal{C}(f_{Q}(x_{Q})dx)\in\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}.\] 

### A pole of \(f\) of order \(1\) away from infinity 

We now consider the case \(d_{P}=1\), where \(P\neq\infty\). We compute the basis \[W_{P}=\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0<9v\leq 3(2-a_{1})+7(2-a_{2})+8\end{Bmatrix}.\] Table 2 shows the \(14\) differentials and their key terms, if they exist. The coefficients \(c_{\omega}\) are omitted, as Lemma 3.4 provides that they are non-zero. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \(a_{2}\) & \(a_{1}\) & \(v\) & \(\omega\) & \(\alpha_{\infty}(v)\) & \(\beta_{\infty}(v)\) & \(\kappa(\mathcal{C}(\omega))\) & \(c_{\omega}\) \\ \hline 0 & 0 & 0 & \(dx\) & & & & \\ \hline 0 & 0 & 1 & \(xdx\) & & & & \\ \hline 0 & 1 & 0 & \(y_{1}dx\) & & & & \\ \hline 0 & 2 & 0 & \(y_{1}^{2}dx\) & 0 & 2 & \(dx\) & \(\binom{2}{2}=1\) \\ \hline 1 & 0 & 0 & \(y_{2}dx\) & 0 & 2 & \(y_{1}dx\) & \(-\frac{2!}{2!1!}\binom{2}{2}=-1\) \\ \hline 1 & 1 & 0 & \(y_{2}y_{1}dx\) & 0 & 2 & \(y_{1}^{2}dx\) & \(\frac{2!}{1!2!}\binom{2}{2}=1\) \\ \hline \end{tabular} \end{table} Table 1. The computation of key terms in the case \(P=\infty\) and \(d_{\infty}=1\). Table 2 shows that the \(11\) key terms which occur are all different. The basis elements without a key term require more attention. 
\[\mathcal{C}(x_{P}^{2}dx) =0\] \[\mathcal{C}(x_{P}^{3}dx) =0\] \[\mathcal{C}(y_{1}x_{P}^{2}dx) =y_{1}\mathcal{C}(x_{P}^{2}dx)-\mathcal{C}(x_{P}^{2}f(x)dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=2.\end{cases}\] 

### A pole of \(f\) of order \(2\) at infinity 

Assume \(f\) has a pole of order \(2\) at \(P=\infty\). We then compute the basis \[W_{\infty}=\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0\leq 9v\leq 6(2-a_{1})+14(2-a_{2})-10\end{Bmatrix}.\] Table 3 shows these \(16\) basis differentials and their key terms, if they exist. Note that \(12\) differentials have a key term, but there are only \(9\) distinct key terms. The differentials outside \(H_{\infty}\) and the pairs of differentials with the same key term require extra attention. \begin{table} \begin{tabular}{|c|c|c|l|l|l|l|} \(a_{2}\) & \(a_{1}\) & \(v\) & \(\omega\) & \(\alpha_{P}(v)\) & \(\beta_{P}(v)\) & \(\kappa(\mathcal{C}(\omega))\) \\ \hline 0 & 0 & 1 & \(x_{P}dx\) & 0 & 0 & \(x_{P}dx\) \\ \hline 0 & 0 & 2 & \(x_{P}^{2}dx\) & 0 & 2 & \\ \hline 0 & 0 & 3 & \(x_{P}^{3}dx\) & 1 & 1 & \\ \hline 0 & 1 & 1 & \(y_{1}x_{P}dx\) & 0 & 0 & \(y_{1}x_{P}dx\) \\ \hline 0 & 1 & 2 & \(y_{1}x_{P}^{2}dx\) & 0 & 2 & \\ \hline 0 & 2 & 1 & \(y_{1}^{2}x_{P}dx\) & 0 & 0 & \(y_{1}^{2}x_{P}dx\) \\ \hline 0 & 2 & 2 & \(y_{1}^{2}x_{P}^{2}dx\) & 0 & 2 & \(x_{P}^{2}dx\) \\ \hline 1 & 0 & 1 & \(y_{2}x_{P}dx\) & 0 & 0 & \(y_{2}x_{P}dx\) \\ \hline 1 & 0 & 2 & \(y_{2}x_{P}^{2}dx\) & 0 & 2 & \(y_{1}x_{P}^{2}dx\) \\ \hline 1 & 1 & 1 & \(y_{2}y_{1}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}x_{P}dx\) \\ \hline 1 & 1 & 2 & \(y_{2}y_{1}x_{P}^{2}dx\) & 0 & 2 & \(y_{1}^{2}x_{P}^{2}dx\) \\ \hline 1 & 2 & 1 & \(y_{2}y_{1}^{2}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}^{2}x_{P}dx\) \\ \hline 2 & 0 & 1 & \(y_{2}^{2}x_{P}dx\) & 0 & 0 & \(y_{2}^{2}x_{P}dx\) \\ \hline 2 & 1 & 1 & \(y_{2}^{2}y_{1}x_{P}dx\) & 0 & 0 & \(y_{2}^{2}y_{1}x_{P}dx\) \\ \hline \end{tabular} \end{table} Table 2. The computation of key terms in the case \(P\in B_{1}\setminus\{\infty\}\) and \(d_{P}=1\). 
\[\mathcal{C}(dx) =0\] \[\mathcal{C}(xdx) =0\] \[\mathcal{C}(x^{2}dx) =dx\] \[\mathcal{C}(x^{3}dx) =0\] \[\mathcal{C}(y_{1}dx) =-\mathcal{C}(f(x)dx)\in\text{span}_{k}\{dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(y_{1}xdx) =-\mathcal{C}(xf(x)dx)\in\text{span}_{k}\{dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(y_{1}x^{2}dx) =y_{1}dx-\mathcal{C}(x^{2}f(x)dx)\in\text{span}_{k}\{dx,y_{1}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(y_{1}^{2}dx) =y_{1}\mathcal{C}(f(x)dx)+\mathcal{C}(f(x)^{2}dx)\in\text{span}_{k}\{dx,y_{1}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\mathcal{C}(y_{1}^{2}x^{2}dx) =y_{1}^{2}\mathcal{C}(x^{2}dx)+y_{1}\mathcal{C}(x^{2}f(x)dx)+\mathcal{C}(x^{2}f(x)^{2}dx)\] \[\quad\in\text{span}_{k}\{dx,xdx,y_{1}dx,y_{1}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\mathcal{C}(y_{2}dx) =\mathcal{C}(y_{1}^{7}dx)-\mathcal{C}(y_{1}^{5}dx)-\mathcal{C}(h(x)dx)\] \[=y_{1}^{2}\mathcal{C}(y_{1}dx)-y_{1}\mathcal{C}(y_{1}^{2}dx)-\mathcal{C}(h(x)dx)\] \[\in\text{span}_{k}\{dx,xdx,y_{1}dx,y_{1}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\quad\oplus\text{span}_{k}\{x_{R}dx\mid R\in B_{2}\setminus B_{1}\}.\] \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \(a_{2}\) & \(a_{1}\) & \(v\) & \(\omega\) & \(\alpha_{\infty}(v)\) & \(\beta_{\infty}(v)\) & \(\kappa(\mathcal{C}(\omega))\) \\ \hline 0 & 0 & 0 & \(dx\) & 0 & 1 & \\ \hline 0 & 0 & 1 & \(xdx\) & 0 & 2 & \\ \hline 0 & 0 & 2 & \(x^{2}dx\) & 1 & 0 & \(dx\) \\ \hline 0 & 0 & 3 & \(x^{3}dx\) & 1 & 1 & \\ \hline 0 & 1 & 0 & \(y_{1}dx\) & 0 & 1 & \(dx\) \\ \hline 0 & 1 & 1 & \(y_{1}xdx\) & 0 & 2 & \\ \hline 0 & 1 & 2 & \(y_{1}x^{2}dx\) & 1 & 0 & \(y_{1}dx\) \\ \hline 0 & 2 & 0 & \(y_{1}^{2}dx\) & 0 & 1 & \(y_{1}dx\) \\ \hline 0 & 2 & 1 & \(y_{1}^{2}xdx\) & 0 & 2 & \(xdx\) \\ \hline 0 & 2 & 2 & \(y_{1}^{2}x^{2}dx\) & 1 & 0 & \(y_{1}^{2}dx\) \\ \hline 1 & 0 & 0 & \(y_{2}dx\) & 0 & 1 & \(y_{1}^{2}dx\) \\ \hline 1 & 0 & 1 & \(y_{2}xdx\) & 0 & 2 & \(y_{1}xdx\) \\ \hline 1 & 1 & 0 & \(y_{2}y_{1}dx\) & 0 & 1 & \(y_{2}dx\) \\ \hline 1 & 1 & 1 & \(y_{2}y_{1}xdx\) & 0 & 2 & \(y_{1}^{2}xdx\) \\ \hline 1 & 2 & 0 & \(y_{2}y_{1}^{2}dx\) & 0 & 1 & \(y_{2}y_{1}dx\) \\ \hline 2 & 0 & 0 & \(y_{2}^{2}dx\) & 0 & 1 & \(y_{2}y_{1}^{2}dx\) \\ \hline \end{tabular} \end{table} Table 3. The computation of key terms in the case \(P=\infty\) and \(d_{\infty}=2\). 

### A pole of \(f\) of order \(2\) away from infinity 

We now consider the case when \(f\) has a pole at \(P\neq\infty\) of order \(d_{P}=2\). The basis is \[W_{P}=\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0<9v\leq 6(2-a_{1})+14(2-a_{2})+8\end{Bmatrix}.\] The \(24\) differentials and their key terms, if they exist, are compiled in Table 4. The table displays that \(20\) differentials have a key term, but there are only \(17\) distinct key terms. 
The differentials without a key term and the pairs of differentials with the same key term require extra attention. \begin{table} \begin{tabular}{|c|c|c|l|l|l|l|} \(a_{2}\) & \(a_{1}\) & \(v\) & \(\omega\) & \(\alpha_{P}(v)\) & \(\beta_{P}(v)\) & \(\kappa(\mathcal{C}(\omega))\) \\ \hline 0 & 0 & 1 & \(x_{P}dx\) & 0 & 0 & \(x_{P}dx\) \\ \hline 0 & 0 & 2 & \(x_{P}^{2}dx\) & 0 & 1 & \\ \hline 0 & 0 & 3 & \(x_{P}^{3}dx\) & 0 & 2 & \\ \hline 0 & 0 & 4 & \(x_{P}^{4}dx\) & 1 & 0 & \(x_{P}^{2}dx\) \\ \hline 0 & 0 & 5 & \(x_{P}^{5}dx\) & 1 & 1 & \\ \hline 0 & 1 & 1 & \(y_{1}x_{P}dx\) & 0 & 0 & \(y_{1}x_{P}dx\) \\ \hline 0 & 1 & 2 & \(y_{1}x_{P}^{2}dx\) & 0 & 1 & \(x_{P}^{2}dx\) \\ \hline 0 & 1 & 3 & \(y_{1}x_{P}^{3}dx\) & 0 & 2 & \\ \hline 0 & 1 & 4 & \(y_{1}x_{P}^{4}dx\) & 1 & 0 & \(y_{1}x_{P}^{2}dx\) \\ \hline 0 & 2 & 1 & \(y_{1}^{2}x_{P}dx\) & 0 & 0 & \(y_{1}^{2}x_{P}dx\) \\ \hline 0 & 2 & 2 & \(y_{1}^{2}x_{P}^{2}dx\) & 0 & 1 & \(y_{1}x_{P}^{2}dx\) \\ \hline 0 & 2 & 3 & \(y_{1}^{2}x_{P}^{3}dx\) & 0 & 2 & \(x_{P}^{3}dx\) \\ \hline 0 & 2 & 4 & \(y_{1}^{2}x_{P}^{4}dx\) & 1 & 0 & \(y_{1}^{2}x_{P}^{2}dx\) \\ \hline 1 & 0 & 1 & \(y_{2}x_{P}dx\) & 0 & 0 & \(y_{2}x_{P}dx\) \\ \hline 1 & 0 & 2 & \(y_{2}x_{P}^{2}dx\) & 0 & 1 & \(y_{1}^{2}x_{P}^{2}dx\) \\ \hline 1 & 0 & 3 & \(y_{2}x_{P}^{3}dx\) & 0 & 2 & \(y_{1}x_{P}^{3}dx\) \\ \hline 1 & 1 & 1 & \(y_{2}y_{1}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}x_{P}dx\) \\ \hline 1 & 1 & 2 & \(y_{2}y_{1}x_{P}^{2}dx\) & 0 & 1 & \(y_{2}x_{P}^{2}dx\) \\ \hline 1 & 1 & 3 & \(y_{2}y_{1}x_{P}^{3}dx\) & 0 & 2 & \(y_{1}^{2}x_{P}^{3}dx\) \\ \hline 1 & 2 & 1 & \(y_{2}y_{1}^{2}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}^{2}x_{P}dx\) \\ \hline 1 & 2 & 2 & \(y_{2}y_{1}^{2}x_{P}^{2}dx\) & 0 & 1 & \(y_{2}y_{1}x_{P}^{2}dx\) \\ \hline 2 & 0 & 1 & \(y_{2}^{2}x_{P}dx\) & 0 & 0 & \(y_{2}^{2}x_{P}dx\) \\ \hline 2 & 0 & 2 & \(y_{2}^{2}x_{P}^{2}dx\) & 0 & 1 & \(y_{2}y_{1}^{2}x_{P}^{2}dx\) \\ \hline 2 & 1 & 1 & \(y_{2}^{2}y_{1}x_{P}dx\) & 0 & 0 & \(y_{2}^{2}y_{1}x_{P}dx\) \\ \hline \end{tabular} \end{table} Table 4. The computation of key terms in the case \(P\in B_{1}\setminus\{\infty\}\) and \(d_{P}=2\). 
\[\mathcal{C}(x_{P}^{2}dx) =0\] \[\mathcal{C}(x_{P}^{3}dx) =0\] \[\mathcal{C}(x_{P}^{4}dx) =x_{P}^{2}dx\] \[\mathcal{C}(x_{P}^{5}dx) =0\] \[\mathcal{C}(y_{1}x_{P}^{2}dx) =y_{1}\mathcal{C}(x_{P}^{2}dx)-\mathcal{C}(x_{P}^{2}f(x)dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{1}x_{P}^{3}dx) =y_{1}\mathcal{C}(x_{P}^{3}dx)-\mathcal{C}(x_{P}^{3}f(x)dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{1}x_{P}^{4}dx) =y_{1}\mathcal{C}(x_{P}^{4}dx)-\mathcal{C}(x_{P}^{4}f(x)dx)\] \[\in\begin{cases}\text{span}_{k}\{y_{1}x_{P}^{2}dx,x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{y_{1}x_{P}^{2}dx,dx,x_{P}dx,x_{P}^{2}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{1}^{2}x_{P}^{2}dx) =y_{1}^{2}\mathcal{C}(x_{P}^{2}dx)+y_{1}\mathcal{C}(x_{P}^{2}f(x)dx)+\mathcal{C}(x_{P}^{2}f(x)^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{y_{1}x_{P}^{2}dx,dx,x_{P}dx,x_{P}^{2}dx,y_{1}x_{P}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{y_{1}x_{P}^{2}dx,dx,y_{1}dx,x_{P}dx,x_{P}^{2}dx,y_{1}x_{P}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{1}^{2}x_{P}^{4}dx) =y_{1}^{2}\mathcal{C}(x_{P}^{4}dx)+y_{1}\mathcal{C}(x_{P}^{4}f(x)dx)+\mathcal{C}(x_{P}^{4}f(x)^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{y_{1}^{2}x_{P}^{2}dx,dx,x_{P}dx,x_{P}^{2}dx,x_{P}^{3}dx,y_{1}x_{P}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{y_{1}^{2}x_{P}^{2}dx,dx,y_{1}dx,x_{P}dx,x_{P}^{2}dx,x_{P}^{3}dx,y_{1}x_{P}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{2}x_{P}^{2}dx)\in\begin{cases}\text{span}_{k}\{y_{1}^{2}x_{P}^{2}dx,dx,y_{1}dx,x_{P}dx,x_{P}^{2}dx,x_{P}^{3}dx,y_{1}x_{P}dx,y_{1}x_{P}^{2}dx,y_{1}^{2}x_{P}dx\}\\ \quad\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\oplus\text{span}_{k}\{x_{R}dx\mid R\in B_{2}\setminus B_{1}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{y_{1}^{2}x_{P}^{2}dx,dx,y_{1}dx,x_{P}dx,x_{P}^{2}dx,x_{P}^{3}dx,y_{1}x_{P}dx,y_{1}x_{P}^{2}dx,y_{1}^{2}x_{P}dx\}\\ \quad\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\oplus\text{span}_{k}\{x_{R}dx\mid R\in B_{2}\setminus B_{1}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{P,\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2\end{cases}\] 

### A pole of \(h\) of order \(1\) 

Assume \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=1\), meaning \(h(x)\) has a pole of order \(1\) at \(P\). 
The basis \(W_{P}\) is given by \[W_{P}=\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0<3v\leq 4-a_{2}\end{Bmatrix}.\] The value of \(a_{1}\) plays no role in whether \(\omega\) is regular, nor in the key term \(\kappa(\mathcal{C}(\omega))\) as defined in Definition 3.5. In Table 5, the three possibilities \(0\leq a_{1}\leq 2\) are therefore combined into one line. Observe that these six differentials all have a unique key term. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \(a_{2}\) & \(v\) & \(\omega\) & \(\alpha_{P}(v)\) & \(\beta_{P}(v)\) & \(\kappa(\mathcal{C}(\omega))\) \\ \hline 0 & 1 & \(y_{1}^{a_{1}}x_{P}dx\) & 0 & 0 & \(y_{1}^{a_{1}}x_{P}dx\) \\ \hline 1 & 1 & \(y_{2}y_{1}^{a_{1}}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}^{a_{1}}x_{P}dx\) \\ \hline \end{tabular} \end{table} Table 5. The computation of key terms in the case \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=1\). 

### A pole of \(h\) of order \(2\) 

Assume \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=2\), meaning \(h(x)\) has a pole of order \(2\) at \(P\). The basis is \[W_{P}=\begin{Bmatrix}y_{2}^{a_{2}}y_{1}^{a_{1}}x_{P}^{v}dx\mid 0\leq a_{1},a_{2}<3,\\ 0<3v\leq 6-2a_{2}\end{Bmatrix}.\] The differentials in \(W_{P}\) with their key terms, if they exist, are presented in Table 6. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \(a_{2}\) & \(v\) & \(\omega\) & \(\alpha_{P}(v)\) & \(\beta_{P}(v)\) & \(\kappa(\mathcal{C}(\omega))\) \\ \hline 0 & 1 & \(y_{1}^{a_{1}}x_{P}dx\) & 0 & 0 & \(y_{1}^{a_{1}}x_{P}dx\) \\ \hline 0 & 2 & \(y_{1}^{a_{1}}x_{P}^{2}dx\) & 0 & 1 & \\ \hline 1 & 1 & \(y_{2}y_{1}^{a_{1}}x_{P}dx\) & 0 & 0 & \(y_{2}y_{1}^{a_{1}}x_{P}dx\) \\ \hline \end{tabular} \end{table} Table 6. The computation of key terms in the case \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=2\). The differentials without a key term require extra attention. \[\mathcal{C}(x_{P}^{2}dx) =0\] \[\mathcal{C}(y_{1}x_{P}^{2}dx) =y_{1}\mathcal{C}(x_{P}^{2}dx)-\mathcal{C}(f(x)x_{P}^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}&\text{if }d_{\infty}=2\end{cases}\] \[\mathcal{C}(y_{1}^{2}x_{P}^{2}dx) =y_{1}^{2}\mathcal{C}(x_{P}^{2}dx)+y_{1}\mathcal{C}(f(x)x_{P}^{2}dx)+\mathcal{C}(f(x)^{2}x_{P}^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{dx,x_{P}dx,y_{1}x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,y_{1}dx,x_{P}dx,y_{1}x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2.\end{cases}\] 

### The main theorem 

Let \(Y_{2}\to\mathbb{P}^{1}\) be a \(\mathbb{Z}/9\mathbb{Z}\)-cover which is minimal in the sense of Definition 2.14. We now prove a formula for the \(a\)-number of \(Y_{2}\), implying that the answer to Question 2.15 is "yes" under the assumptions \(p=3\) and \(n=2\). 
**Theorem 4.1**.: _Suppose \(Y_{2}\to\mathbb{P}^{1}\) has branching datum_ \[\left[\begin{array}{cccccccc}2&\dots&3&\dots&0&\dots&0&\dots\\ 4&\dots&7&\dots&2&\dots&3&\dots\\ \end{array}\right]^{\intercal},\] _where the columns \((2,4)^{\intercal}\), \((3,7)^{\intercal}\), \((0,2)^{\intercal}\) and \((0,3)^{\intercal}\) occur \(n_{1}\), \(n_{2}\), \(n_{3}\) and \(n_{4}\) times, respectively. Then the \(a\)-number of \(Y_{2}\) satisfies the formula_ \[a_{Y_{2}}=3n_{1}+7n_{2}+0n_{3}+3n_{4}.\] Proof.: We apply Theorem 3.18 to obtain a lower bound for \(\operatorname{rk}(\mathcal{C})\) and hence an upper bound for \(a_{Y_{2}}\). Combining Sections 4.1, 4.2, 4.3, 4.4, 4.5 and 4.6, we find the number of distinct key terms is \[\#K=11n_{1}+17n_{2}+6n_{3}+6n_{4}-8.\] The correcting factor \(-8\) comes from the pole at infinity. Applying Theorem 3.18, using \(g_{Y_{2}}=\#W\), yields \[a_{Y_{2}} \leq g_{Y_{2}}-\#K\] \[=3n_{1}+7n_{2}+0n_{3}+3n_{4}.\] The remainder of the proof revolves around showing that this upper bound is sharp. We do so by showing two things. First, we show that differentials which have no key term do not contribute to the rank of \(\mathcal{C}\). Second, we show that differentials with the same key term together contribute \(1\) to the rank of \(\mathcal{C}\), such that the rank of \(\mathcal{C}\) is exactly the number of key terms. We do this locally at the points in \(B_{2}\), by treating the six cases corresponding to Sections 4.1, 4.2, 4.3, 4.4, 4.5 and 4.6. We begin with the differentials in \(W_{\infty}\) in the case \(d_{\infty}=1\), as described in Section 4.1. All the key terms occur uniquely, so we treat only the differentials that have no key term. As the differentials \(dx\) and \(xdx\) are killed by \(\mathcal{C}\), they clearly don't contribute to the rank. Next, we consider \(y_{1}dx\), which also does not have a key term. We have \[\mathcal{C}(y_{1}dx)\in\text{span}_{k}\{x_{Q}dx\mid Q\neq\infty\}.\] However, for \(Q\neq\infty\), we have \(\mathcal{C}(x_{Q}dx)=x_{Q}dx\) and \(x_{Q}dx\) does have a key term. Hence \(y_{1}dx\) also does not contribute to the rank of \(\mathcal{C}\). Thus we have exhibited \(3\) linearly independent differentials that don't contribute to \(\text{rk}(\mathcal{C})\), meaning they contribute to \(a_{Y_{2}}\). Second, we consider the case \(P\in B_{1}\setminus\{\infty\}\) and \(d_{P}=1\). Section 4.2 shows that all the key terms are unique. The differentials \(x_{P}^{2}dx\), \(x_{P}^{3}dx\) and \(y_{1}x_{P}^{2}dx\) have no key term. The differentials \(x_{P}^{2}dx\) and \(x_{P}^{3}dx\) are killed by \(\mathcal{C}\) and therefore don't contribute to \(\text{rk}(\mathcal{C})\). Furthermore, we have computed \[\mathcal{C}(y_{1}x_{P}^{2}dx)\in\begin{cases}\text{span}_{k}\{x_{P}dx,x_{Q}dx\mid P\neq Q\neq\infty\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx,x_{Q}dx\mid P\neq Q\neq\infty\}&\text{if }d_{\infty}=2\end{cases}\] In the case \(d_{\infty}=1\), we have already shown that this does not contribute to \(\text{rk}(\mathcal{C})\). In the case \(d_{\infty}=2\), we have \(\mathcal{C}(x^{2}dx)=dx\). In both cases \(y_{1}x_{P}^{2}dx\) does not contribute to \(\text{rk}(\mathcal{C})\). Again, the differentials in \(W_{P}\) contribute exactly \(3\) to \(a_{Y_{2}}\). Third, we consider the differentials in \(W_{\infty}\) in the case \(d_{\infty}=2\). As shown in Section 4.3, the differentials \(dx\), \(xdx\) and \(x^{3}dx\) are killed by \(\mathcal{C}\). The differentials \(x^{2}dx\) and \(y_{1}dx\) both have the key term \(dx\). 
We've seen that \(\mathcal{C}(x^{2}dx)=dx\) and \[\mathcal{C}(y_{1}dx)\in\text{span}_{k}\{dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}.\] We have already established that \[\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\subset\text{im}(\mathcal{C}),\] so the differentials \(x^{2}dx\) and \(y_{1}dx\) together contribute \(1\) to \(\text{rk}(\mathcal{C})\). It then immediately follows that \(y_{1}xdx\), which has no key term, does not contribute to \(\text{rk}(\mathcal{C})\), since we have computed \[\mathcal{C}(y_{1}xdx)\in\text{span}_{k}\{dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}.\] Next, the two differentials \(y_{1}x^{2}dx\) and \(y_{1}^{2}dx\) both have the key term \(y_{1}dx\). We've computed \[\mathcal{C}(y_{1}x^{2}dx) \in\text{span}_{k}\{dx,y_{1}dx,x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(y_{1}^{2}dx) \in\text{span}_{k}\{dx,y_{1}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}.\] We show that they contribute \(1\) to \(\text{rk}(\mathcal{C})\) together, by showing that \(y_{1}x_{Q}dx\) and \(x_{Q}^{2}dx\) (for \(d_{Q}=2\)) are already in the image of \(\mathcal{C}\). We compute \[\mathcal{C}(y_{1}x_{Q}dx) =y_{1}x_{Q}dx-\mathcal{C}(x_{Q}f(x)dx)\] \[\in\text{span}_{k}\{dx,y_{1}x_{Q}dx,x_{P}dx\mid P\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(x_{Q}^{4}dx) =x_{Q}^{2}dx.\] The second computation assumes \(d_{Q}=2\), so that \(x_{Q}^{4}dx\) is indeed regular. These computations together imply that the differentials \(y_{1}x^{2}dx\) and \(y_{1}^{2}dx\), which have the same key term, together contribute \(1\) to \(\text{rk}(\mathcal{C})\). Finally, we consider the differentials \(y_{1}^{2}x^{2}dx\) and \(y_{2}dx\), which both have the key term \(y_{1}^{2}dx\). In Section 4.3, we have established \[\mathcal{C}(y_{1}^{2}x^{2}dx) \in\text{span}_{k}\{dx,xdx,y_{1}dx,y_{1}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\mathcal{C}(y_{2}dx) \in\text{span}_{k}\{dx,xdx,y_{1}dx,y_{1}^{2}dx,x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\quad\oplus\text{span}_{k}\{x_{R}dx\mid R\in B_{2}\setminus B_{1}\}.\] It remains to be shown that \(xdx\), \(y_{1}^{2}x_{Q}dx\), \(y_{1}x_{Q}^{2}dx\) (in the case \(d_{Q}=2\)) and \(x_{R}dx\) (with \(R\in B_{2}\setminus B_{1}\)) already lie in the image of \(\mathcal{C}\). 
We compute \[\mathcal{C}(y_{1}^{2}xdx) =y_{1}^{2}\mathcal{C}(xdx)+y_{1}\mathcal{C}(xf(x)dx)+\mathcal{C}(xf(x)^{2}dx)\] \[\in\text{span}_{k}\{dx,xdx,y_{1}dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\mathcal{C}(y_{1}^{2}x_{Q}dx) =y_{1}^{2}x_{Q}dx+y_{1}\mathcal{C}(x_{Q}f(x)dx)+\mathcal{C}(x_{Q}f(x)^{2}dx)\] \[\in\text{span}_{k}\{y_{1}^{2}x_{Q}dx,dx,y_{1}dx,x_{P}dx,y_{1}x_{P}dx\mid P\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{P}^{2}dx\mid P\in B_{1}\setminus\{\infty\},d_{P}=2\}\] \[\mathcal{C}(y_{1}x_{Q}^{4}dx) =y_{1}x_{Q}^{2}dx-\mathcal{C}(x_{Q}^{4}f(x)dx)\] \[\in\text{span}_{k}\{y_{1}x_{Q}^{2}dx,dx,x_{Q}^{2}dx,x_{P}dx\mid P\in B_{1}\setminus\{\infty\}\}\] \[\mathcal{C}(x_{R}dx) =x_{R}dx.\] These computations verify that indeed \(xdx\), \(y_{1}^{2}x_{Q}dx\), \(y_{1}x_{Q}^{2}dx\) and \(x_{R}dx\) already lie in the image of \(\mathcal{C}\), without the help of \(y_{1}^{2}x^{2}dx\) and \(y_{2}dx\). Thus the differentials \(y_{1}^{2}x^{2}dx\) and \(y_{2}dx\) together contribute \(1\) to \(\text{rk}(\mathcal{C})\). As the fourth step, we treat the case where \(P\in B_{1}\setminus\{\infty\}\) and \(d_{P}=2\). In Section 4.4, we have seen that \(x_{P}^{2}dx\), \(x_{P}^{3}dx\) and \(x_{P}^{5}dx\) are killed by the Cartier operator. The differentials \(x_{P}^{4}dx\) and \(y_{1}x_{P}^{2}dx\) have the key term \(x_{P}^{2}dx\), but the computations in Section 4.4, together with computations earlier in this proof, imply that they together contribute \(1\) to \(\text{rk}(\mathcal{C})\). It then follows immediately that \(y_{1}x_{P}^{3}dx\), which has no key term, does not contribute to \(\text{rk}(\mathcal{C})\). Next we consider the differentials \(y_{1}x_{P}^{4}dx\) and \(y_{1}^{2}x_{P}^{2}dx\), which both have the key term \(y_{1}x_{P}^{2}dx\). The computations in Section 4.4 show that they together contribute \(1\) to \(\text{rk}(\mathcal{C})\); they both extend the image of \(\mathcal{C}\) only by \(\text{span}_{k}\{y_{1}x_{P}^{2}dx\}\) and nothing else. Finally, we consider the differentials \(y_{1}^{2}x_{P}^{4}dx\) and \(y_{2}x_{P}^{2}dx\), which both have the key term \(y_{1}^{2}x_{P}^{2}dx\). They introduce not only \(y_{1}^{2}x_{P}^{2}dx\), but also \(x_{P}^{3}dx\) and \(y_{1}dx\). We show that these differentials already lie in the image of \(\mathcal{C}\) independently. For \(x_{P}^{3}dx\), we compute \[\mathcal{C}(y_{1}^{2}x_{P}^{3}dx) =y_{1}^{2}\mathcal{C}(x_{P}^{3}dx)+y_{1}\mathcal{C}(x_{P}^{3}f(x)dx)+\mathcal{C}(x_{P}^{3}f(x)^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}^{3}dx,dx,x_{P}dx,x_{P}^{2}dx,y_{1}x_{P}dx,y_{1}x_{P}^{2}dx\}\\ \quad\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{x_{P}^{3}dx,dx,y_{1}dx,x_{P}dx,x_{P}^{2}dx,y_{1}x_{P}dx,y_{1}x_{P}^{2}dx\}\\ \quad\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{P,\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2\end{cases}\] Hence \(x_{P}^{3}dx\) was already in the image of \(\mathcal{C}\). It remains to be shown that \(y_{1}dx\) is already in the image of \(\mathcal{C}\) in the case \(d_{\infty}=1\). 
We compute \[\mathcal{C}(y_{1}^{2}dx) =y_{1}^{2}\mathcal{C}(dx)+y_{1}\mathcal{C}(f(x)dx)+\mathcal{C}(f(x)^{2}dx)\] \[\in\text{span}_{k}\{dx,x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}\] \[\mathcal{C}(y_{2}dx) =y_{1}^{2}\mathcal{C}(y_{1}dx)-y_{1}\mathcal{C}(y_{1}^{2}dx)-\mathcal{C}(h(x)dx)\] \[\in\text{span}_{k}\{dx,y_{1}dx,x_{Q}dx,y_{1}x_{Q}dx,y_{1}^{2}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\] \[\quad\oplus\text{span}_{k}\{x_{Q}^{2}dx,y_{1}x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}.\] This shows that \(y_{1}dx\) was already in the image of \(\mathcal{C}\). It follows that the differentials \(y_{1}^{2}x_{P}^{4}dx\) and \(y_{2}x_{P}^{2}dx\) together contribute \(1\) to \(\text{rk}(\mathcal{C})\), which concludes the fourth step. As the fifth step, we treat the case \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=1\), meaning \(h\) has a pole of order \(1\). In that case, Section 4.5 shows that the differentials in \(W_{P}\) all have different key terms. Hence these differentials do not contribute to the \(a\)-number. As the sixth and last step, we treat the case \(P\in B_{2}\setminus B_{1}\) and \(e_{P}=2\), meaning \(h\) has a pole of order \(2\). Section 4.6 shows that \(6\) out of the \(9\) differentials in \(W_{P}\) have a key term, and these are all unique. This implies that the rank of the Cartier operator restricted to \(\text{span}_{k}W_{P}\) is at least \(6\). We now show that it is exactly \(6\), by analyzing the basis differentials that do not have a key term. Since \(\mathcal{C}(x_{P}^{2}dx)=0\), the differential \(x_{P}^{2}dx\) does not contribute to the rank of \(\mathcal{C}\). Moving on to \(y_{1}x_{P}^{2}dx\), recall \[\mathcal{C}(y_{1}x_{P}^{2}dx) =y_{1}\mathcal{C}(x_{P}^{2}dx)-\mathcal{C}(f(x)x_{P}^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}&\text{if }d_{\infty}=2\end{cases}\] Since \(\mathcal{C}(x_{P}dx)=x_{P}dx\), these differentials are already in the image of \(\mathcal{C}\). Hence \(y_{1}x_{P}^{2}dx\) does not contribute to the rank of \(\mathcal{C}\). Finally, we treat \(y_{1}^{2}x_{P}^{2}dx\): \[\mathcal{C}(y_{1}^{2}x_{P}^{2}dx) =y_{1}^{2}\mathcal{C}(x_{P}^{2}dx)+y_{1}\mathcal{C}(f(x)x_{P}^{2}dx)+\mathcal{C}(f(x)^{2}x_{P}^{2}dx)\] \[\in\begin{cases}\text{span}_{k}\{dx,x_{P}dx,y_{1}x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=1\\ \text{span}_{k}\{dx,y_{1}dx,x_{P}dx,y_{1}x_{P}dx\}\oplus\text{span}_{k}\{x_{Q}dx,y_{1}x_{Q}dx\mid Q\in B_{1}\setminus\{\infty\}\}\\ \quad\oplus\text{span}_{k}\{x_{Q}^{2}dx\mid Q\in B_{1}\setminus\{\infty\},d_{Q}=2\}&\text{if }d_{\infty}=2.\end{cases}\] To show that this does not contribute to the rank of \(\mathcal{C}\), we only need to show that \(y_{1}x_{P}dx\) was already in the image of \(\mathcal{C}\). This is clear since \[\mathcal{C}(y_{1}x_{P}dx)=y_{1}x_{P}dx.\] Thus \(P\) contributes \(3\) to the \(a\)-number of \(Y_{2}\). We have treated all six cases of \(W_{P}\), depending on the ramification at \(P\) and on whether \(P=\infty\). 
We have shown that a pole of \(f\) of order \(2\) contributes \(7\) to the \(a\)-number, while a pole of \(f\) of order \(1\) and a pole of \(h\) of order \(2\) both contribute \(3\) to the \(a\)-number. A pole of \(h\) of order \(1\) contributes nothing to the \(a\)-number. In conclusion, we have proved the formula \[a_{Y_{2}}=3n_{1}+7n_{2}+3n_{4}.\] _Remark 4.2_.: In light of this proof, it would be tempting to think there is an inclusion \(\text{im}(\mathcal{C})\subseteq\text{span}_{k}K\), where \(K\) is the set of key terms. However, this is not true, even in the case \(p=3\). To show this, take the differential \(y_{2}^{2}dx\), assuming \(d_{\infty}=2\). The differential \(\mathcal{C}(y_{2}^{2}dx)\) has a term \(\mathcal{C}(h(x)^{2}dx)\). The rational function \(h(x)\) has a pole of order at most \(5\) at infinity. This means that for a suitable choice of \(h(x)\), we have a term \(\mathcal{C}(x^{8}dx)=x^{2}dx\), which does not occur as a key term. Since \(\text{im}(\mathcal{C})\) and \(\text{span}_{k}K\) have the same dimension, by virtue of Theorem 4.1, the opposite inclusion also fails to hold.
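As a sanity check on the bookkeeping, the proof of Theorem 4.1 can be summarized numerically: every branch point contributes \(\#W_{P}\) basis differentials and a number of distinct key terms, and \(a_{Y_{2}}\) is the difference of the totals. The following Python sketch (an added illustration; the per-point values are those computed in Sections 4.1-4.6, and the \(-8\) corrections at infinity are the ones appearing in the proof) confirms the formula.

```python
# Numerical summary of the proof of Theorem 4.1 (an added sketch):
# each branch point of Y_2 -> P^1 contributes #W_P basis differentials
# and a number of distinct key terms; a_{Y_2} = #W - #K.  The values
# below are those from Sections 4.1-4.6; the -8 corrections account
# for the point at infinity, which is assumed to lie in B_1.

W = {'d=1': 14, 'd=2': 24, 'e=1': 6, 'e=2': 9}   # sizes of the bases W_P
K = {'d=1': 11, 'd=2': 17, 'e=1': 6, 'e=2': 6}   # distinct key terms per point

def a_number(n1, n2, n3, n4):
    g = n1*W['d=1'] + n2*W['d=2'] + n3*W['e=1'] + n4*W['e=2'] - 8  # g = #W
    k = n1*K['d=1'] + n2*K['d=2'] + n3*K['e=1'] + n4*K['e=2'] - 8  # #K
    return g - k

# Check a = 3*n1 + 7*n2 + 0*n3 + 3*n4 for all small branching data with
# n1 + n2 >= 1 (so that infinity can be taken in B_1):
for n1 in range(3):
    for n2 in range(3):
        for n3 in range(3):
            for n4 in range(3):
                if n1 + n2 >= 1:
                    assert a_number(n1, n2, n3, n4) == 3*n1 + 7*n2 + 3*n4
print("a-number formula checked")
```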
2309.04731
Quantum-enhanced super-sensitivity of Mach-Zehnder interferometer using squeezed Kerr state
We study the phase super-sensitivity of a Mach-Zehnder interferometer (MZI) with the squeezed Kerr and coherent states as the inputs. We discuss the lower bound in phase sensitivity by considering the quantum Fisher information (QFI) and corresponding quantum Cramer-Rao bound (QCRB). With the help of single intensity detection (SID), intensity difference detection (IDD) and homodyne detection (HD) schemes, we find that our scheme gives better sensitivity in both the lossless as well as in lossy conditions as compared to the combination of well-known results of inputs as coherent plus vacuum, coherent plus squeezed vacuum and double coherent state as the inputs. Because of the possibility of generation of squeezed Kerr state (SKS) with the present available quantum optical techniques, we expect that SKS may be an alternative nonclassical resource for the improvement in the phase super-sensitivity of the MZI under realistic scenario.
Dhiraj Yadav, Gaurav Shukla, Priyanka Sharma, Devendra Kumar Mishra
2023-09-09T09:32:29Z
http://arxiv.org/abs/2309.04731v2
# Quantum-enhanced super-sensitivity of Mach-Zehnder interferometer using squeezed Kerr state ###### Abstract We study the phase super-sensitivity of a Mach-Zehnder interferometer (MZI) with the squeezed Kerr and coherent states as the inputs. We discuss the lower bound in phase sensitivity by considering the quantum Fisher information (QFI) and the corresponding quantum Cramer-Rao bound (QCRB). With the help of single intensity detection (SID), intensity difference detection (IDD) and homodyne detection (HD) schemes, we find that our scheme gives better sensitivity in both the lossless and the lossy conditions as compared to the well-known input combinations of coherent plus vacuum, coherent plus squeezed vacuum, and double coherent states. Because of the possibility of generating the squeezed Kerr state (SKS) with presently available quantum optical techniques, we expect that the SKS may be an alternative nonclassical resource for the improvement of the phase super-sensitivity of the MZI under realistic scenarios. + Footnote †: preprint: APS/123-QED ## I Introduction In the science and technology of metrology, the central task is to perform accurate measurements of certain parameters [1]. By exploiting the peculiar properties of quantum mechanics [1; 2; 3; 4], quantum metrology deals with the precision measurement of such parameters [5]. For the precision measurement of parameters that are not directly measurable via conventional techniques, phase estimation [1; 2; 3; 4] via optical interferometers plays an important role. Phase-measurement schemes usually employ SU(2)- or SU(1,1)-based interferometers [6]. SU(2)-type interferometers, like the Michelson interferometer (MI) and the MZI, are based on passive beam splitters, while SU(1,1)-type interferometers are based on active elements, e.g., optical parametric amplifiers (OPAs), in place of the beam splitters [7; 8; 6; 9]. Theoretically as well as experimentally, it is found that the performance of an interferometer depends strongly on the input light sources [5]. Comparing input light sources, the performance improves in ascending order from thermal light, to coherent light, and is maximal for nonclassical light [5; 10]. Nonclassical light is the class of light that can be understood only within quantum mechanical theories [11], e.g., single-photon states [12], squeezed states [13], twin Fock states [14], Schrödinger's cat states [15; 16; 17], N00N states [18; 19; 20; 21; 22], etc. The well-known combination of coherent and squeezed-vacuum input states [23; 24; 25] became a famous choice for its good performance in both the low- and high-power regimes [26; 27; 28] of interferometry and, also, due to its very recent application in gravitational wave detection [29; 30; 31; 32; 33]. Since the seminal work by Caves [23] four decades ago, squeezing-assisted optical interferometry [34; 35] has become a centerpiece of theoretical [36; 37; 38; 28] and experimental [4] quantum metrology. 
A MZI injected with intense light in a coherent state (an eigenstate of the annihilation operator, denoted as \(\left|\beta\right\rangle\)) at one input port and a squeezed-vacuum state at the other input port can attain the phase sensitivity \(\Delta\phi=e^{-r}/\sqrt{\bar{n}}\), where \(r(\geq 0)\) is the squeezing parameter, \(\bar{n}\) is the total average number of photons and \(\phi\) is the relative phase shift between the two arms of the interferometer [23]. This scheme can beat the shot-noise limit (SNL), \(\Delta\phi_{SNL}=1/\sqrt{\bar{n}}\), by an amount depending on the squeezing parameter \(r\). To date, squeezing factors [39] of more than 10 dB have been observed in several experiments [40; 41; 42]. Interestingly, for a similar intensity, the MZI can achieve the Heisenberg limit (HL), \(\Delta\phi_{HL}=1/\bar{n}\), with coherent and squeezed-vacuum states at the inputs [43]. In order to achieve the HL, we must have squeezing in such an amount that \(e^{-r}\approx 1/\sqrt{\bar{n}}\), which is experimentally very demanding for higher photon numbers [40; 41; 42]. In 2000, the Dowling group proposed a new type of state known as the NOON state [18], by which one can easily attain the HL. But for higher photon numbers, the generation of NOON states is very challenging [44; 45]. This has opened a new area of research with a significant amount of work on the optimisation and generation of nonclassical light [35]. In order to generate nonclassical light, special types of nonlinear materials [46] and techniques [47] are used. For example, parametric processes in second-order \(\chi^{(2)}\) media generate squeezing and entanglement [48]. On the other hand, the Kerr effect occurring in third-order nonlinear \(\chi^{(3)}\) media is used to perform quantum nondemolition measurements [49; 50] and to generate quantum superpositions [15; 51] as well as squeezing [52] and entanglement [53; 54]. The most common methods of generating nonclassical light are parametric downconversion (PDC) [48], four-wave mixing [55; 56] and the Kerr effect [57]. Unlike the former two approaches, squeezing via the Kerr interaction [58; 59] is inherently phase matched, which allows for flexibility in the wavelength of the probe light. These features make the utilisation of the Kerr effect a robust and flexible approach. The Kerr interaction requires high optical powers to reach sufficient nonlinearity, and this is commonly achieved by using ultrashort pulses [60; 61]. However, this requires careful control of the pulses, since dispersion can act to spread out the pulse and therefore reduce the nonlinearity. Control of pulse spreading may be achieved by generating optical solitons, where the nonlinearity and dispersion are perfectly balanced [58; 62]. For Kerr squeezing, the possibility of using materials such as optical fibre lends significant flexibility, and it does not require a cavity [58] to enhance the strength of the interaction, which simplifies the experimental requirements. The Kerr interaction has several applications in quantum technology, such as quantum nondemolition measurement [63; 64; 65], nonlinear and quantum control of light fields [65], all-optical deterministic quantum logic [66; 67], quantum bit regeneration [68], quantum state teleportation [69], and the generation of optical solitons [70; 71]. Gerry & Grobe [72] studied the statistical properties of the squeezed Kerr state (SKS) generated by the interaction of coherent light with a Kerr medium followed by a squeezer. 
Further, its higher-order nonclassical properties have been studied by Mishra [73]. Such states can be realized if laser light is sent through an optical fiber and then into a degenerate parametric amplifier [73]. The squeezed vacuum state can be seen as a limiting case of the SKS, which we will discuss in the next section (see Fig. 2). To the best of our knowledge, there is no study of the phase sensitivity of the MZI using the SKS as one of the inputs. So, we are motivated to study the improvement in phase sensitivity obtained by using the SKS at an input of the MZI, and we will compare our results with previous studies and results. The total phase shift in the MZI arms is \(\phi=\phi_{us}+\phi_{es}\), where \(\phi_{us}\) is the phase change due to the unknown source and \(\phi_{es}\) is the phase change due to the controllable experimental setup. Since \(\phi_{us}<<\phi_{es}\), we can write \(\phi\approx\phi_{es}\) and, for simplicity, throughout the paper we ignore the suffix of \(\phi_{es}\) and denote it as \(\phi\). Therefore, for a precise observation, one must adjust the phase difference between the arms of the MZI to be as close as possible to the optimal value of \(\phi\). We perform the SID, IDD and HD schemes [26; 37], and we calculate the optimal solutions of the phase sensitivity for all these detection schemes. In this paper, we focus on the single-parameter case, i.e., a phase change in only one arm of the interferometer. So, with the help of the single-parameter quantum Fisher information (QFI) and its associated quantum Cramer-Rao bound (QCRB) [74; 75; 28], we analyse the performance of our MZI setup.

The paper is organized as follows. In Section II, we discuss the basics of interferometry using different types of detection schemes and briefly discuss the SKS. Section III describes the phase sensitivity of the MZI with the SKS and coherent states as the inputs under the lossless condition. Section IV describes the phase sensitivity of the MZI with the SKS and coherent states as the inputs under the lossy condition. In Section V, we conclude our results.

## II Basics of phase sensing and parameter estimation with MZI and generation of SKS

In this section, we will discuss the basics of phase and parameter estimation with the MZI and the generation of the SKS. Here, we will see the operator transformations under the MZI action and the standard error propagation formula used to calculate the phase sensitivity of the interferometer for different detection schemes. We will discuss the lower bound on the phase sensitivity by considering the single-parameter QFI and the corresponding QCRB. Further, we will review the interaction of coherent light with a Kerr medium followed by a squeezer.

### Interferometry with MZI and detection operators

A standard MZI setup, as shown in Fig. 1, consists of two input ports (\(1^{st}\) port and \(2^{nd}\) port) and two output ports (\(3^{rd}\) port and \(4^{th}\) port) associated with two 50:50 beam splitters, two mirrors and two detectors. Interferometry is a three-step process: probe state preparation, state evolution and measurement. In the probe preparation, the two input states are mixed via the first beam splitter \(BS_{1}\), and the input/output mode transformations follow the SU(2) transformation [76; 77].
So, the input/output relations for \(BS_{1}\) are written as [77] \[\begin{pmatrix}\hat{a}_{1_{out}}\\ \hat{a}_{2_{out}}\end{pmatrix}=\begin{pmatrix}i\mathcal{R}_{1}&\mathcal{T}_{1} \\ \mathcal{T}_{1}&i\mathcal{R}_{1}\end{pmatrix}\begin{pmatrix}\hat{a}_{1_{in}}\\ \hat{a}_{2_{in}}\end{pmatrix}. \tag{1}\] Here \(i=\sqrt{-1}\), \(\mathcal{R}_{1}\) (\(\mathcal{T}_{1}\)) represents the reflection (transmission) coefficient and \(\hat{a}_{1_{in},2_{in}}\) (\(\hat{a}_{1_{out},2_{out}}\)) are the input (output) annihilation operators of \(BS_{1}\).

Figure 1: The schematic block diagram of the MZI having two input and two output ports, including two 50:50 beam splitters (\(BS_{1}\) and \(BS_{2}\)), two mirrors (\(M_{1}\) and \(M_{2}\)) and two detectors (\(D_{1}\) and \(D_{2}\)).

The probe state, during its propagation inside the interferometer, experiences a phase change; let the resulting phase difference between the probes be \(\phi\). So we consider the phase \(\phi\) in any one of the arms as the single-parameter case and allow the probes to recombine via the second beam splitter \(BS_{2}\). The transformation of the annihilation operators again follows the relation in Eq. (1). Therefore, working with two 50:50 beam splitters, i.e., \(\mathcal{R}_{1,2}=\mathcal{T}_{1,2}=1/\sqrt{2}\), and considering the phase change during reflection (transmission) to be \(\pi/2\) (\(0\)), the relation between the output and input annihilation operators is written as \[\hat{a}_{3}=-e^{\frac{i\phi}{2}}\left(-\sin\left(\frac{\phi}{2}\right)\hat{a}_{1}+\cos\left(\frac{\phi}{2}\right)\hat{a}_{2}\right), \tag{2}\] \[\hat{a}_{4}=-e^{\frac{i\phi}{2}}\left(\cos\left(\frac{\phi}{2}\right)\hat{a}_{1}+\sin\left(\frac{\phi}{2}\right)\hat{a}_{2}\right). \tag{3}\] Here \(\phi\) is the phase difference between the two arms and \(\hat{a}_{1,2}\) (\(\hat{a}_{3,4}\)) are the input (output) annihilation operators of the MZI. As a final step, the data collected by the detectors at the output of the MZI are analysed using statistical protocols and formulae. From the standard error propagation formula, the phase sensitivity of the interferometer reads \[\Delta\phi=\frac{\Delta\hat{L}(\phi)}{\left|\frac{\partial\langle\hat{L}(\phi)\rangle}{\partial\phi}\right|}. \tag{4}\] Here \(\hat{L}(\phi)\) is an observable containing information about the phase change and \(\Delta\hat{L}(\phi)\) is the standard deviation of \(\hat{L}(\phi)\), defined by \[\Delta\hat{L}(\phi)=\sqrt{\langle\hat{L}^{2}(\phi)\rangle-\langle\hat{L}(\phi)\rangle^{2}}. \tag{5}\] Here, \(\langle...\rangle\) is the expectation value of the operator with respect to the state \(|\psi\rangle_{in}=|\psi_{1}\rangle\otimes|\psi_{2}\rangle\), where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) represent the input states at ports \(1^{st}\) and \(2^{nd}\), respectively. We are considering three detection schemes: SID, IDD, and HD. In quantum mechanics, there must be an operator associated with every observable. Similarly, each detection scheme is also associated with an operator.
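Before specifying the detection operators, it may help to verify the mode map above numerically. The following is a minimal sketch (not part of the original analysis): it composes the two beam-splitter matrices of Eq. (1) with the phase accumulated in arm 1 and checks Eqs. (2)-(3). Since the overall phase in front of Eqs. (2)-(3) depends on the adopted mirror and beam-splitter phase conventions, the comparison is made only up to a global phase; the value of \(\phi\) is arbitrary.

```python
import numpy as np

phi = 0.7  # arbitrary test value of the relative phase shift

BS = np.array([[1j, 1], [1, 1j]]) / np.sqrt(2)  # Eq. (1) with R = T = 1/sqrt(2)
P = np.diag([np.exp(1j * phi), 1.0])            # phase phi accumulated in arm 1

M = BS @ P @ BS                                 # full MZI mode matrix

# Claimed form of Eqs. (2)-(3):
target = -np.exp(1j * phi / 2) * np.array(
    [[-np.sin(phi / 2), np.cos(phi / 2)],
     [ np.cos(phi / 2), np.sin(phi / 2)]])

# Strip the global phase by aligning one nonzero element, then compare.
g = M[0, 1] / target[0, 1]
assert np.isclose(abs(g), 1.0)                  # mismatch is a pure phase
assert np.allclose(M, g * target)               # Eqs. (2)-(3) hold up to that phase
print("global phase factor:", g)
```

With the mode transformations checked, the operator associated with each detection scheme can be evaluated on any input state.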
For example, the operator associated with the SID scheme is \[\hat{L}_{sid}(\phi)\equiv\hat{N}_{3}, \tag{6}\] for the IDD scheme \[\hat{L}_{idd}(\phi)\equiv\hat{N}_{3}-\hat{N}_{4}, \tag{7}\] and the HD scheme, performed at the \(3^{rd}\) port, uses the quadrature operator \[\hat{L}_{hd}(\phi)=\frac{1}{\sqrt{2}}(\hat{a}_{3}+\hat{a}_{3}^{\dagger}). \tag{8}\] Here, \(\hat{N}_{3}(=\hat{a}_{3}^{\dagger}\hat{a}_{3})\) and \(\hat{N}_{4}(=\hat{a}_{4}^{\dagger}\hat{a}_{4})\) are the photon number operators for the \(3^{rd}\) and \(4^{th}\) ports of the MZI, respectively, and \(\hat{a}_{3}\) (\(\hat{a}_{3}^{\dagger}\)) is the annihilation (creation) operator associated with the \(3^{rd}\) port.

### Quantum parameter estimation

In an estimation procedure, our task is to estimate the value of a parameter, in the present case the total phase change in the arms of the MZI, from the data collected by \(n\) measurements, say \(\{v_{1},v_{2},...,v_{n}\}\). For the measurement, we consider an operator \(\hat{O}\). The estimated value of the parameter will be characterized by the statistical error \(\delta\phi\), whose lower bound is the Cramer-Rao bound (CRB) [78; 79], \[\delta\phi^{2}\geq\frac{1}{nF(\phi)}. \tag{9}\] Here \(n\) stands for the number of measurements and \(F(\phi)\) denotes the classical Fisher information (CFI), defined by \[F(\phi)=\left\langle\left(\frac{\partial\ln p(v|\phi)}{\partial\phi}\right)^{2}\right\rangle. \tag{10}\] Here, \(p(v|\phi)\) is the probability that the outcome of a measurement is \(v\) when the value of the parameter is \(\phi\), and \(\langle...\rangle\) is the expectation value over the probability distribution \(p(v|\phi)\). If we consider a quantum system, then \(p(v|\phi)=\text{Tr}(\hat{\rho}_{\phi}\hat{\Pi}_{v})\), where \(\hat{\rho}_{\phi}\) is the density operator and \(\hat{\Pi}_{v}\) is the positive operator-valued measure (POVM) element for the outcome \(v\). By introducing the symmetric logarithmic derivative (SLD), \(\hat{L}_{\phi}\), defined by \(2\partial_{\phi}\hat{\rho}_{\phi}=\hat{L}_{\phi}\hat{\rho}_{\phi}+\hat{\rho}_{\phi}\hat{L}_{\phi}\), Eq. (10) can be written as \[F(\phi)=\left\langle\frac{\text{Re}[\text{Tr}(\hat{\rho}_{\phi}\hat{\Pi}_{v}\hat{L}_{\phi})]^{2}}{\text{Tr}(\hat{\rho}_{\phi}\hat{\Pi}_{v})}\right\rangle. \tag{11}\] By maximizing \(F(\phi)\) over all possible quantum measurements on the quantum system, we obtain the QFI as [79]: \[F_{Q}=\text{Tr}(\hat{\rho}_{\phi}\hat{L}_{\phi}^{2}), \tag{12}\] and, thus, the QCRB is [28] \[\Delta\phi_{QCRB}\geq\frac{1}{\sqrt{F_{Q}}}. \tag{13}\] This gives us the ultimate precision achievable in the estimation of \(\phi\), independent of the choice of quantum measurement. The density operator \(\hat{\rho}_{\phi}\) of a mixed state can be written in terms of a complete basis \(\{|i\rangle\}\) as \(\hat{\rho}_{\phi}=\sum_{i}p_{i}|i\rangle\langle i|\) with \(p_{i}\geq 0\) and \(\sum_{i}p_{i}=1\). Therefore, the QFI can be written as [74; 3] \[F_{Q}=\sum_{i,i^{\prime}}\frac{2}{p_{i}+p_{i^{\prime}}}|\langle i|\partial_{\phi}\hat{\rho}_{\phi}|i^{\prime}\rangle|^{2}. \tag{14}\] The density operator for a pure state \(|\psi\rangle\) is \(\hat{\rho}=|\psi\rangle\langle\psi|\). For this case, Eq. (14) becomes [80; 74] \[F_{Q}=4\left(\langle\partial_{\phi}\psi|\partial_{\phi}\psi\rangle-|\langle\partial_{\phi}\psi|\psi\rangle|^{2}\right). \tag{15}\] By using the transformations given by Eqs. (2) and (3), Eq.
(15) can be written as \[\begin{split}F_{Q}=&2(g_{1}+g_{2})+g_{5}+g_{4}-(g_{1}-g_{2})^{2}-g_{6}+g_{10}^{2}\\ &+2i(g_{11}+g_{12}+g_{10}\times(g_{1}+g_{2}-1)).\end{split} \tag{16}\] Here, the \(g_{i}\) with \(i=1,2,...,12\) are given in Eq. (14) of Appendix A, and complete expressions for the expectation values and their relations are given in Appendix B.

### Interaction of coherent light with Kerr medium followed by the squeezer: squeezed Kerr state (SKS)

The Kerr effect, also known as the quadratic electro-optic effect, is a change in the refractive index of a material medium in response to an applied electromagnetic field. In a Kerr medium, the electromagnetic field interacts with a material medium having a third-order nonlinearity, where the refractive index is intensity dependent [76; 81]. The Hamiltonian, \(\hat{H}\), of this quantum mechanical system can be written as [76; 81] \[\hat{H}=\hbar\omega\hat{a}^{\dagger}\hat{a}+\hbar\chi^{(3)}\hat{a}^{\dagger 2}\hat{a}^{2}, \tag{17}\] where \(\hbar\) is the Dirac constant, \(\omega\) is the frequency, \(\hat{a}\) (\(\hat{a}^{\dagger}\)) is the annihilation (creation) operator of the oscillator and \(\chi^{(3)}\) is the third-order susceptibility of the Kerr medium. The operator associated with the Kerr medium is \[\hat{U}_{K}(\gamma)=\exp[-i\gamma\hat{n}(\hat{n}-1)], \tag{18}\] where \(\hat{n}\) (\(=\hat{a}^{\dagger}\hat{a}\)) is the photon number operator and \[\gamma=\chi^{(3)}L/v, \tag{19}\] with \(L\) the length of the Kerr medium and \(v\) the velocity of the electromagnetic field in the Kerr medium. As we can see from Eq. (19), \(\gamma\) quantifies the interaction time of the electromagnetic field with the \(\chi^{(3)}\) material medium, so we call \(\gamma\) the Kerr interaction coefficient. The factor \(\gamma\) plays an important role in our discussion for understanding the effect of the Kerr medium on the phase sensitivity of the MZI. The squeezing operator for a single-mode electromagnetic field is [76] \[\hat{S}(\zeta)\equiv\exp[\frac{1}{2}(\zeta\hat{a}^{\dagger 2}-\zeta^{*}\hat{a}^{2})], \tag{20}\] where \(\zeta=re^{i\theta}\), with \(r\) the squeezing parameter and \(\theta\) the phase of the squeezing. In order to obtain the SKS, one simply injects the coherent state into the Kerr medium, followed by the squeezer. So, injecting a light beam in the coherent state \(|\beta\rangle\) through the material having the \(\chi^{(3)}\) nonlinearity results in the Kerr state, which can be written as \[|\psi_{K}\rangle=\hat{U}_{K}(\gamma)|\beta\rangle. \tag{21}\] Now, the application of the squeezing operator on the Kerr state gives us the SKS, i.e., \[|\psi_{SK}\rangle=\hat{S}(\zeta)\hat{U}_{K}(\gamma)|\beta\rangle. \tag{22}\] Let \(\hat{a}\) be the field operator for the coherent state; then the field operator associated with the SKS can be written as \[\hat{a}(\zeta,\gamma)=\hat{U}_{K}^{\dagger}(\gamma)\hat{S}^{\dagger}(\zeta)\hat{a}\hat{S}(\zeta)\hat{U}_{K}(\gamma). \tag{23}\] From the Baker-Campbell-Hausdorff formula [82], we can write the squeezed field operator as \[\hat{b}(\zeta)\equiv\hat{S}^{\dagger}(\zeta)\hat{b}\hat{S}(\zeta)=\hat{b}\cosh r+e^{i\theta}\hat{b}^{\dagger}\sinh r, \tag{24}\] and the Kerr state field operator as \[\hat{c}(\gamma)\equiv\hat{U}_{K}^{\dagger}(\gamma)\hat{c}\hat{U}_{K}(\gamma)=e^{-2i\gamma\hat{c}^{\dagger}\hat{c}}\hat{c}. \tag{25}\] Therefore, the field operator for the SKS can be written as \[\hat{a}(\zeta,\gamma)=e^{-2i\gamma\hat{a}^{\dagger}\hat{a}}\hat{a}\cosh r+e^{i\theta}\hat{a}^{\dagger}e^{2i\gamma\hat{a}^{\dagger}\hat{a}}\sinh r.
\tag{26}\] We use this operator in our calculations in order to find the general result for the phase sensitivity using the SKS as one of the inputs of the MZI. It is to be noted that the SKS, as shown in Fig. 2, can be used to generate different states under different conditions.

## III Coherent state and SKS as the inputs of MZI under the lossless condition

In this section, we will discuss the phase sensitivity of the MZI using the coherent state, \(|\alpha\rangle\), and the SKS, \(|\psi_{SK}\rangle\), as the inputs at the \(1^{st}\) and \(2^{nd}\) input ports, respectively (Fig. 1).

Figure 2: Diagram showing the special cases of the SKS.

The phase sensitivity, \(\Delta\phi\), for the different detection schemes is a function of six variables (\(\phi\), \(\alpha\), \(\beta\), \(\theta\), \(\gamma\) and \(r\)). In order to optimize the parameters, we divide our results into different sub-sections by considering special cases. For the lossless case, the relation between the output and input annihilation operators of the MZI is given by Eqs. (2) and (3). For the case of the squeezed Kerr state and coherent state as the inputs of the MZI, i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), the detailed expressions of the corresponding phase sensitivities associated with the SID, IDD and HD schemes are derived in Appendix A and can be written as \[\begin{split}\Delta\phi_{sid}&=\frac{2}{|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi|}\left(g_{1}\sin^{2}\left(\frac{\phi}{2}\right)+g_{2}\cos^{2}\left(\frac{\phi}{2}\right)+(g_{4}-g_{2}^{2})\cos^{4}\left(\frac{\phi}{2}\right)+(g_{5}-g_{1}^{2})\sin^{4}\left(\frac{\phi}{2}\right)\right.\\ &+\left.\frac{1}{4}\left(g_{6}-g_{3}^{2}-2g_{1}g_{2}+4g_{7}\right)\sin^{2}\phi-\left(\frac{1}{2}g_{3}+(g_{8}-g_{2}g_{3})\cos^{2}\left(\frac{\phi}{2}\right)+(g_{9}-g_{1}g_{3})\sin^{2}\left(\frac{\phi}{2}\right)\right)\sin\phi\right)^{\frac{1}{2}},\end{split} \tag{27}\] \[\Delta\phi_{idd}=\frac{\sqrt{g_{1}+g_{2}+(g_{4}+g_{5}-2g_{7}-(g_{2}-g_{1})^{2})\cos^{2}\phi+(g_{6}+2g_{7}-g_{3}^{2})\sin^{2}\phi-(g_{8}-g_{9}-g_{3}(g_{2}-g_{1}))\sin 2\phi}}{|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi|}, \tag{28}\] \[\Delta\phi_{hd}=\frac{\sqrt{2\cos^{2}\left(\frac{\phi}{2}\right)\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right)+1}}{|Re(e^{i\phi}(\langle\hat{a}_{1}\rangle-i\langle\hat{a}_{2}\rangle))|}. \tag{29}\] Here, \(\Delta\phi_{sid}\), \(\Delta\phi_{idd}\) and \(\Delta\phi_{hd}\) are the phase sensitivities for the SID, IDD and HD schemes in the lossless case, respectively; the \(g_{i}\) with \(i=1,2,...,9\) are given in Eq. (14) of Appendix A, and complete expressions for the expectation values and their relations are given in Appendix B. The calculation of the QCRB is done by using Eq. (16) in Eq. (13). Note that, throughout the calculations, without loss of generality, we take \(\alpha=|\alpha|\), \(\beta=|\beta|\) and \(\theta=\pi\), since analytically we found that \(\theta=\pi\) gives a better result than other values of \(\theta\) in all the cases. The central task of our work is to find the effect of the Kerr nonlinearity (in terms of \(\gamma\)), and of the Kerr nonlinearity together with the squeezing parameter (\(r\)), on the \(\Delta\phi\) of the MZI for the three different detection schemes. So, in each case we will examine the variation of \(\Delta\phi\) with \(\gamma\); a numerical cross-check of the sensitivity expressions above is sketched below.
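As an aside, the sensitivities above can be checked numerically without any of the analytic \(g_{i}\): one can build \(|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\) in a truncated Fock space, apply Eqs. (2)-(3), and evaluate Eq. (4) directly. The following is a minimal sketch of such a cross-check; the parameter values are illustrative (small photon numbers, so that the truncation is safe), the derivative in Eq. (4) is taken by a central finite difference, and the working point \(\phi_{0}\) is arbitrary rather than optimal.

```python
import numpy as np
from scipy.linalg import expm

dim = 25                                     # Fock-space truncation
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), 1)               # annihilation operator
I = np.eye(dim)

def coherent(z):                             # coherent-state amplitudes
    c = np.zeros(dim, complex); c[0] = 1.0
    for k in range(1, dim):
        c[k] = c[k - 1] * z / np.sqrt(k)
    return c * np.exp(-abs(z) ** 2 / 2)

alpha, beta, r, theta, gamma = 1.0, 1.0, 0.3, np.pi, 0.1
zeta = r * np.exp(1j * theta)
U_K = np.diag(np.exp(-1j * gamma * n * (n - 1)))             # Eq. (18)
S = expm(0.5 * (zeta * a.T @ a.T - np.conj(zeta) * a @ a))   # Eq. (20)
psi_SK = S @ (U_K @ coherent(beta))                          # SKS, Eq. (22)
psi = np.kron(coherent(alpha), psi_SK)                       # mode 1 x mode 2

A1, A2 = np.kron(a, I), np.kron(I, a)

def outputs(phi):                            # MZI output modes, Eqs. (2)-(3)
    pre = -np.exp(1j * phi / 2)
    a3 = pre * (-np.sin(phi / 2) * A1 + np.cos(phi / 2) * A2)
    a4 = pre * ( np.cos(phi / 2) * A1 + np.sin(phi / 2) * A2)
    return a3, a4

def ev(O):                                   # <psi|O|psi>
    return np.vdot(psi, O @ psi)

def dphi(L_of_phi, phi, h=1e-4):             # error propagation, Eq. (4)
    L = L_of_phi(phi)
    var = ev(L @ L) - ev(L) ** 2
    dL = (ev(L_of_phi(phi + h)) - ev(L_of_phi(phi - h))) / (2 * h)
    return np.sqrt(var.real) / abs(dL)

L_sid = lambda p: outputs(p)[0].conj().T @ outputs(p)[0]           # Eq. (6)
L_idd = lambda p: L_sid(p) - outputs(p)[1].conj().T @ outputs(p)[1]  # Eq. (7)
L_hd  = lambda p: (outputs(p)[0] + outputs(p)[0].conj().T) / np.sqrt(2)  # Eq. (8)

phi0 = 1.2
N_tot = ev(A1.conj().T @ A1 + A2.conj().T @ A2).real
print("SNL            :", 1 / np.sqrt(N_tot))
for name, L in [("SID", L_sid), ("IDD", L_idd), ("HD ", L_hd)]:
    print(name, "sensitivity:", dphi(L, phi0))
```

The same script, scanned over \(\phi\) and \(\gamma\), reproduces the qualitative behaviour discussed in the figures below, although the figure working points use much larger photon numbers than a truncated-space check can comfortably handle.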
To be more clear, we divide our discussion into two parts: (i) \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\); (ii) \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\).

### Vacuum state and SKS as inputs of MZI

For the case of \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), Eqs. (27), (28) and (29) become \[\Delta\phi_{sid}=\frac{\sqrt{g_{2}+(g_{4}-g_{2}^{2})\cos^{2}\left(\frac{\phi}{2}\right)}}{\left|g_{2}\sin\left(\frac{\phi}{2}\right)\right|}, \tag{30}\] \[\Delta\phi_{idd}=\frac{\sqrt{g_{2}+(g_{4}-g_{2}^{2})\cos^{2}\phi}}{|g_{2}\sin\phi|}, \tag{31}\] and \[\begin{split}\Delta\phi_{hd}&=\frac{1}{|Re(-ie^{i\phi}\langle\hat{a}_{2}\rangle)|}\left(2\cos^{2}\left(\frac{\phi}{2}\right)\times\right.\\ &\left.\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right)+1\right)^{\frac{1}{2}},\end{split} \tag{32}\] respectively. From Eqs. (30) and (31), we can see that at the optimal phase, i.e., the phase for which the phase sensitivity becomes best, the phase sensitivities for the SID and IDD schemes become \[\Delta\phi_{sid}=\Delta\phi_{idd}|_{|\alpha|=0}=1/\sqrt{g_{2}}. \tag{33}\] Here \(g_{2}\) is the total photon number at the second input port, given in Appendix B.

_Case (i)_: For the \(r=0\) case, i.e., \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{K}\rangle_{2}\), \(g_{2}\) becomes \(|\beta|^{2}\). This implies that, for optimal values of \(\phi\), \(\Delta\phi_{sid}\) and \(\Delta\phi_{idd}\) saturate the SNL and are independent of \(\gamma\) (Fig. 3). Further, for the HD scheme, we see that in the case of \(r=0\), Eq. (32) becomes \[\begin{split}\Delta\phi_{hd}&=\frac{1}{|c\sin(\phi-s)|}\left(\frac{1}{g_{2}}+2\cos^{2}\left(\frac{\phi}{2}\right)\left(1-c^{2}\right.\right.\\ &+\left.\left.c_{2}\cos(\phi-2\gamma-s_{2})-c\cos(\phi-2s)\right)\right)^{\frac{1}{2}},\end{split} \tag{34}\] and for \(\gamma=0\), \[\Delta\phi_{hd}=\frac{1}{\sqrt{g_{2}}\sin\phi}. \tag{35}\] At \(\phi=\pi/2\), we get the best value of \(\Delta\phi_{hd}\), which is nothing but the SNL. Hence, at \(r=\gamma=0\), we get \(\Delta\phi_{sid}=\Delta\phi_{idd}=\Delta\phi_{hd}=\Delta\phi_{SNL}\), which are the well-known results. From Eq. (34), we find that for \(r=0\), \(\Delta\phi_{hd}\) depends on \(\gamma\), as one can see in Fig. 3. So, for the HD scheme with \(\gamma\neq 0\), the optimum value of \(\phi\) varies with \(|\beta|\). We found analytically that for a wide range of values of \(|\beta|\) (\(\sim 1\) to \(100\)), the optimum value of \(\phi\) is approximately \(7\pi/4\). So, it is interesting to mention that, with the optimum value \(\phi=7\pi/4\), \(\Delta\phi_{hd}\) beats the SNL for some non-zero values of \(\gamma\), keeping the first input port as vacuum (Fig. 3). This is in agreement with the recent study of Masahiro _et al._ [38] that a system working with a vacuum state can beat the SNL if only one of the arms of the MZI has an unknown phase shift (i.e., the single-parameter estimation case) and the detector uses an external phase reference and power resource during the detection process. In our work, we are not ignoring the global phase factor, as is obvious in Eqs. (2) and (3). If we ignore the global phase factor, the phase sensitivity never beats the SNL, as we can see in Fig. 4.
This means that the global phase factor acts as an external phase source for the HD scheme, and the beating of the SNL in Fig. 3 is not a violation of the "no-go theorem" [23; 38]. We note that, for the case of the HD scheme, the normalised phase sensitivity (\(\Delta\phi/\Delta\phi_{SNL}\)) cannot reach 1 for \(\gamma=0\) (Fig. 3). This is because here we plot the graph at \(\phi=7\pi/4\), while the optimal value of \(\phi\) is \(\pi/2\) for the case of \(\gamma=0\). Similar cases arise for Figs. 5 & 6.

_Case (ii)_: Consider the \(r\neq 0\) case, i.e., \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\). From Eqs. (33) and (B4), we find that \(\Delta\phi_{sid}\) and \(\Delta\phi_{idd}\) depend on \(\gamma\). That means the Kerr medium plays a role in the variation of the phase sensitivity for the SID and IDD schemes when \(r\neq 0\). In order to visualize the effect of \(\gamma\) on \(\Delta\phi\), we consider two values (lower and higher energies) of \(|\beta|=5\) & \(100\) and see the variation of \(\Delta\phi/\Delta\phi_{SNL}\) with \(\gamma\) for different values of the squeezing parameter \(r\) (the plots are shown in Figs. 5 & 6). In Figs. 5 & 6, we can see that the phase sensitivity for SID and IDD saturates the SNL for all values of \(\gamma\), while a significant enhancement in the phase sensitivity occurs for the HD scheme. We also plot \(\Delta\phi_{QCRB}/\Delta\phi_{SNL}\) and \(\Delta\phi_{HL}/\Delta\phi_{SNL}\), and we can see that for some values of \(\gamma\) the phase sensitivity for the HD scheme saturates the QCRB and approaches the HL. Since the phase sensitivity improves with an increase in \(r\), one can use higher squeezing for a better phase sensitivity. It is important to mention here that the current record for the squeezing factor (\(15.3\) dB, or \(r=1.7\)) is reported in [41]. Further, in Fig. 3 and also in Figs. 5 & 6, we can see that the maximum phase sensitivity depends on \(|\beta|\) as well as on \(\gamma\) in the case of the HD scheme. As \(|\beta|\) changes, the corresponding optimal value of \(\gamma\), for which we get the maximum phase sensitivity, also changes. So, in order to explore the effect of \(|\beta|\) and \(\gamma\) on \(\Delta\phi_{hd}\), we plot a graph between \(|\beta|\) and \(\gamma\) and show the variation in the phase sensitivity via the colour in the graph (Fig. 7). Fig. 7 gives two results important from the experimental point of view: (i) an enhancement in \(\Delta\phi_{hd}\) occurs with an increase in the value of \(|\beta|\); and (ii) for higher values of \(|\beta|\), the optimal value of \(\gamma\) decreases. A decreasing value of \(\gamma\) means a decreasing interaction time of the light with the Kerr medium (Eq. (19)). Thus, we can say that the Kerr medium plays an important role in the enhancement of the phase sensitivity with the HD scheme.

### Coherent and SKS as inputs of MZI

For the case of the SKS and coherent state as the inputs of the MZI, i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), the expressions of the corresponding phase sensitivities associated with the SID, IDD and HD schemes are given in Eqs. (27), (28) and (29), respectively. Since \(\Delta\phi\) depends on the input number of photons, \(N=g_{1}+g_{2}\), and \(N\) depends on the parameters \(|\alpha|,~{}|\beta|,~{}\gamma,~{}\theta\) and \(r\), we can figure out the role of these parameters in \(\Delta\phi\) by looking at the variation of \(N\) with these parameters; a numerical cross-check of the closed-form photon number is sketched below.
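Since \(g_{2}\) enters all the working formulas, it is worth verifying the closed-form mean photon number of the SKS (the \(\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle\) expression of Appendix B) against a direct truncated-Fock-space computation. The sketch below does this; the parameter values are illustrative, and the truncation dimension is chosen generously.

```python
import numpy as np
from scipy.linalg import expm

dim = 60                                     # Fock-space truncation
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), 1)               # annihilation operator

beta, r, theta, gamma = 2.0, 0.5, np.pi, 0.05

c = np.zeros(dim, complex); c[0] = 1.0       # coherent state |beta>
for k in range(1, dim):
    c[k] = c[k - 1] * beta / np.sqrt(k)
c *= np.exp(-abs(beta) ** 2 / 2)

U_K = np.diag(np.exp(-1j * gamma * n * (n - 1)))             # Eq. (18)
zeta = r * np.exp(1j * theta)
S = expm(0.5 * (zeta * a.T @ a.T - np.conj(zeta) * a @ a))   # Eq. (20)
psi = S @ (U_K @ c)                                          # SKS, Eq. (22)

N_num = np.vdot(psi, a.T @ a @ psi).real     # direct <n>

# Closed form: g2 = |b|^2 (C^2 + S^2 + 2 c2 C S cos(2g + th + s2)) + S^2
C, Sh = np.cosh(r), np.sinh(r)
c2 = np.exp(abs(beta) ** 2 * (np.cos(4 * gamma) - 1))
s2 = abs(beta) ** 2 * np.sin(4 * gamma)
N_ana = (abs(beta) ** 2 * (C ** 2 + Sh ** 2
         + 2 * c2 * C * Sh * np.cos(2 * gamma + theta + s2)) + Sh ** 2)

print(N_num, N_ana)                          # agree to truncation accuracy
```

In the limit \(\gamma\to 0\) the check reduces to the familiar squeezed-coherent photon number, and for \(\beta=0\) it gives \(\sinh^{2}r\), the squeezed-vacuum limit indicated in Fig. 2.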
We have seen in the previous cases that the phase sensitivity is better for \(\theta=\pi\) in comparison to the other values of \(\theta\), and so here we consider \(\theta=\pi\). Now, we look at the variation in \(N\) with \(\gamma\) for different values of \(r\) in three different cases, considering low and high energy limits at the inputs, viz., (i) \(|\alpha|=3\) and \(|\beta|=2\), (ii) \(|\alpha|=100\) and \(|\beta|=2\) and (iii) \(|\alpha|=100\) and \(|\beta|=100\), as shown in Fig. 8. We find that \(N\) is independent of \(\gamma\) in the case of \(r=0\), but for the case \(r\neq 0\) a rapid growth in \(N\) is seen as \(\gamma\) increases. As we can see in Fig. 8, in the case of the squeezed coherent state, \(|\psi_{S}\rangle\) (\(r\neq 0\) and \(\gamma=0\)), the photon number decreases, while for \(\gamma\neq 0\) it varies rapidly. That is, the variation in the input number of photons, \(N\), depends on the interaction of the photons with the Kerr medium. As a particular case, let us consider two cases: (i) \(r=0\), i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{K}\rangle_{2}\) (when the second input is in the Kerr state), and (ii) \(r\neq 0\), i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\) (when the second input is the squeezed Kerr state).

Figure 5: Plots of \(\Delta\phi/\Delta\phi_{SNL}\) with \(\gamma\) for different values of \(r\). One can see that the phase sensitivity saturates the SNL for both the SID and IDD schemes, for \(r=0\) as well as for \(r\neq 0\), while the phase sensitivity for the HD scheme beats the SNL. The QCRB, for different values of \(r\), shows the lower limit achievable by the system. Other parameters are \(|\beta|=5,~{}\theta=\pi,~{}|\alpha|=0\) and \(\phi=\pi,~{}\pi/2\) for the SID, IDD schemes, respectively.

Figure 6: Plots of \(\Delta\phi/\Delta\phi_{SNL}\) with \(\gamma\) for different values of \(r\). In the case of the SID and IDD schemes, the phase sensitivity saturates the SNL for \(r=0\) as well as for \(r\neq 0\), while the phase sensitivity for the HD scheme beats the SNL. The QCRB, for different values of \(r\), shows the lower limit achievable by the system. Other parameters are \(|\beta|=50,~{}\theta=\pi,~{}|\alpha|=0\) and \(\phi=\pi,~{}\pi/2\) for the SID, IDD schemes, respectively.

Figure 7: Colour graph, for the HD scheme, between \(|\beta|\) and \(\gamma\), where the colour variation shows \(\Delta\phi/\Delta\phi_{SNL}\). This graph gives two results: first, on increasing the value of \(|\beta|\), an enhancement in \(\Delta\phi\) occurs; second, for higher values of \(|\beta|\), the corresponding optimal value of \(\gamma\) decreases. Other parameters are \(\phi=7\pi/4,~{}\theta=\pi,~{}|\alpha|=0,~{}r=0\).

#### iii.2.1 Kerr state at the second input port

It is obvious from Eqs. (27), (28) & (29) that for \(r=0\), \(\Delta\phi\) still depends on \(|\alpha|,\ |\beta|,\ \gamma\) and \(\phi\). In the case of \(|\alpha|=0\), the calculation of the optimal value of \(\phi\) for the different detection cases was straightforward but, here, it is relatively hard. So, keeping in mind the low and high energy inputs, we consider four cases: (i) \(|\alpha|=3\) and \(|\beta|=2\), (ii) \(|\alpha|=50\) and \(|\beta|=2\), (iii) \(|\alpha|=3\) and \(|\beta|=50\) and (iv) \(|\alpha|=50\) and \(|\beta|=50\); and plot the \(\phi\) _vs_ \(\gamma\) graphs as shown in Fig. 9, Fig. 10, Fig. 11 and Fig. 12, respectively, where the colour variation shows \(\Delta\phi/\Delta\phi_{SNL}\).
We can see that the phase sensitivity is better in the IDD scheme as compared to the SID and HD schemes, with optimal phase \(3\pi/4\) or \(7\pi/4\), in case (i), as shown in Fig. 9. In cases (ii), (iii) & (iv), all three detection schemes give a better phase sensitivity, with different optimal phases, as shown in Figs. 10, 11 & 12, respectively. The multiple coloured regions in the phase sensitivity pattern in cases (iii) & (iv) are because of the fluctuation in \(N\), as we previously saw in Fig. 8(c).

Figure 8: Variation in \(N\) with \(\gamma\) for different values of \(r\), for (a) \(|\alpha|=3\), \(|\beta|=2\), (b) \(|\alpha|=100\), \(|\beta|=2\) and (c) \(|\alpha|=100\), \(|\beta|=100\). We find that \(N\) does not vary with \(\gamma\) in the case of \(r=0\), but for the case \(r\neq 0\) a rapid growth is seen. Here we take \(\theta=\pi\).

Figure 9: Colour graphs of \(\Delta\phi/\Delta\phi_{SNL}\) (shown by the colour variation) in the (\(\phi\), \(\gamma\)) plane for the different detection schemes. We can see that the phase sensitivity is better in the IDD scheme, with optimal phase \(3\pi/4\) or \(7\pi/4\). Other parameters are \(\theta=\pi,\ |\alpha|=3,\ |\beta|=2,\ r=0\).

Figure 10: Colour graphs of \(\Delta\phi/\Delta\phi_{SNL}\) (shown by the colour variation) in the (\(\phi\), \(\gamma\)) plane for the different detection schemes. We can see that the phase sensitivity is better in all the detection schemes, with different optimal phases. Other parameters are \(\theta=\pi,\ |\alpha|=50,\ |\beta|=2,\ r=0\).

Figure 11: Colour graphs of \(\Delta\phi/\Delta\phi_{SNL}\) (shown by the colour variation) in the (\(\phi\), \(\gamma\)) plane for the different detection schemes. We can see that the phase sensitivity is better in the HD scheme, with optimal phases \(0\) or \(2\pi\). Other parameters are \(\theta=\pi,\ |\alpha|=3,\ |\beta|=50,\ r=0\).

Figure 12: Colour graphs of \(\Delta\phi/\Delta\phi_{SNL}\) (shown by the colour variation) in the (\(\phi\), \(\gamma\)) plane for the different detection schemes. We can see that the phase sensitivity is better in all the detection schemes, with a broad optimal phase range. Other parameters are \(\theta=\pi,\ |\alpha|=50,\ |\beta|=50,\ r=0\).

So, in conclusion, we find that, in enhancing the phase sensitivity of the interferometer, a coherent input with a Kerr state is more useful than a double coherent input in approximately all situations.

#### iii.2.2 Squeezed Kerr state at the second input port

In this section, we explore Eqs. (27)-(29) by considering \(r\neq 0\). For a better phase sensitivity, the optimal value of \(\phi\) depends on the parameters \(|\alpha|\), \(|\beta|\), \(\gamma\) and \(r\), similar to the previous section III.2.1. In order to explore the effect of \(\gamma\) on \(\Delta\phi\) for the three detection schemes, we find the nearly optimal value of \(\phi\) by using the analytical method. Analytically, we find that, for the cases (i) \(|\alpha|=3\), \(|\beta|=2\), (ii) \(|\alpha|=50\), \(|\beta|=2\), (iii) \(|\alpha|=50\), \(|\beta|=50\) and (iv) \(|\alpha|=3\), \(|\beta|=50\), the optimal phases \(\phi_{opt}\) for the SID scheme will be \(9\pi/8\), \(\pi/4\), \(9\pi/8\) and \(9\pi/8\), respectively, while all four cases have \(\phi_{opt}=\pi/2\) for the IDD scheme and \(\phi_{opt}=0\) for the HD scheme. Since we find that squeezing triggers the photon enhancement in the Kerr medium at several instances, as shown in Fig. 8, here we choose the value \(r=1.5\) for our convenience. So, we plot \(\Delta\phi/\Delta\phi_{SNL}\) _versus_ \(\gamma\) by taking \(\phi_{opt}\) for \(r=1.5\). Fig. 13, Fig. 14, Fig. 15 and Fig. 16 show the phase sensitivity for the four cases (i), (ii), (iii) and (iv), respectively.
We can see that the Kerr medium enhances the phase sensitivity remarkably. If we look at the performance of the three detection schemes, we find that the HD scheme is dominant in all four cases over the IDD scheme which, in turn, does better than the SID scheme. On comparing the four cases (i), (ii), (iii) and (iv), we find that increased values of \(|\alpha|\) and \(|\beta|\) enhance the phase sensitivity, but it is important to note that a variation in \(|\beta|\) has more effect than one in \(|\alpha|\). As we can see in Fig. 15 and Fig. 16, an increase in \(|\alpha|\) is less effective for larger values of \(|\beta|\). On the other hand, an increase in \(|\beta|\) remains effective even when \(|\alpha|\) is large enough (Fig. 14 and Fig. 15). So, from these results, we can compare the sensitivity of the MZI for the two cases of inputs: (i) coherent plus SKS, i.e., \(|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), and (ii) coherent plus squeezed vacuum, i.e., \(|\alpha\rangle_{1}\otimes|\psi_{SV}\rangle_{2}\). To do this, we plot a colour graph between \(\gamma\) and \(|\beta|\), where the colour variation shows \(\Delta\phi/\Delta\phi_{SNL}\), for the IDD and HD schemes only, since analytically we know that \(\phi_{opt}\) for the IDD and HD schemes is \(\pi/2\) and \(0\), respectively, for all values of \(|\alpha|\) and \(|\beta|\), while in the case of SID, \(\phi_{opt}\) varies with \(|\beta|\). Fig. 17 and Fig. 18 show this variation for IDD and HD, respectively, in which the dark region shows the maximum phase sensitivity, and we can clearly see that the combination of coherent plus squeezed Kerr (\(\gamma\neq 0,|\beta|\neq 0\)) states as inputs gives a better phase sensitivity than coherent plus squeezed vacuum (\(\gamma=0,|\beta|=0\)) states as inputs.

## IV Coherent state and SKS as the inputs of MZI under the lossy condition

In a realistic scenario, photons can be lost in two ways. In the first way, photons are scattered in undesired directions by the optical elements used in the setup of the quantum optics experiment (we call it internal photon loss). The second way arises from the limited efficiency of the detectors used for the detection of the photons in the experiment (we call it external photon loss). Typically, photon loss can be modelled by considering a fictitious beam splitter that routes photons out of the interferometer (Fig. 19) [77].

Figure 19: The schematic diagram of a fictitious beam splitter mimicking the photon loss.

Suppose there is a fictitious beam splitter having transmittivity \(\tau\). Let us take the annihilation operator \(\hat{i}\) corresponding to the input photons, while \(\hat{v}\) corresponds to the annihilation operator for the vacuum. Therefore, the annihilation operator \(\hat{t}\) for the transmitted photons can be written in terms of \(\hat{i}\) and \(\hat{v}\) as \[\hat{t}=\sqrt{\tau}\hat{i}+\sqrt{1-\tau}\hat{v}. \tag{36}\] In order to account for the photon loss, we simply take the transmitted photons from the fictitious beam splitter as our main signal and the reflected photons as the loss (Fig. 19). Therefore, for the internal photon loss, we use a fictitious beam splitter, with transmittivity \(\mu\), in both arms of the interferometer (Fig. 20), and for the external photon loss, we use fictitious beam splitters, with transmittivity \(\eta\), at the outputs of the interferometer. The relations between the input and output annihilation operators under the photon loss (both _internal_ and _external_) conditions, as shown in Fig.
20 are, \[\hat{a}_{3}^{\prime}=-\sqrt{\eta\mu}e^{\frac{i\phi}{2}}(-\sin\left(\frac{\phi}{2}\right)\hat{a}_{1}+\cos\left(\frac{\phi}{2}\right)\hat{a}_{2})\] \[\qquad+\sqrt{\frac{\eta(1-\mu)}{2}}(i\hat{m}_{1}+\hat{m}_{2})+\sqrt{1-\eta}\hat{n}_{2}, \tag{37}\] \[\hat{a}_{4}^{\prime}=-\sqrt{\eta\mu}e^{\frac{i\phi}{2}}(\cos\left(\frac{\phi}{2}\right)\hat{a}_{1}+\sin\left(\frac{\phi}{2}\right)\hat{a}_{2})\] \[\qquad+\sqrt{\frac{\eta(1-\mu)}{2}}(\hat{m}_{1}+i\hat{m}_{2})+\sqrt{1-\eta}\hat{n}_{1}. \tag{38}\] Here \(\hat{a}_{3}^{\prime}\) and \(\hat{a}_{4}^{\prime}\) are the output annihilation operators corresponding to the \(3^{rd}\) and \(4^{th}\) ports, respectively, and \(\hat{m}_{1},\ \hat{m}_{2},\ \hat{n}_{1}\) and \(\hat{n}_{2}\) are the vacuum annihilation operators associated with the fictitious beam splitters of the corresponding modes, as shown in Fig. 20. Therefore, for the lossy case, the phase sensitivities associated with the SID, IDD and HD schemes, when the squeezed Kerr state and coherent state are taken as the inputs of the MZI, i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), can be written as \[\Delta\phi_{sid}^{\prime}=\frac{2}{|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi|}\left(\frac{1}{\mu\eta}\left(g_{1}\sin^{2}\left(\frac{\phi}{2}\right)+g_{2}\cos^{2}\left(\frac{\phi}{2}\right)-\frac{1}{2}g_{3}\sin\phi\right)+(g_{4}-g_{2}^{2})\cos^{4}\left(\frac{\phi}{2}\right)\right.\] \[+\left.(g_{5}-g_{1}^{2})\sin^{4}\left(\frac{\phi}{2}\right)+\frac{1}{4}\left(g_{6}-g_{3}^{2}-2g_{1}g_{2}+4g_{7}\right)\sin^{2}\phi-\left((g_{8}-g_{2}g_{3})\cos^{2}\left(\frac{\phi}{2}\right)+(g_{9}-g_{1}g_{3})\sin^{2}\left(\frac{\phi}{2}\right)\right)\sin\phi\right)^{\frac{1}{2}}, \tag{39}\] \[\Delta\phi_{idd}^{\prime}=\frac{\sqrt{\frac{1}{\mu\eta}\left(g_{1}+g_{2}\right)+(g_{4}+g_{5}-2g_{7}-(g_{2}-g_{1})^{2})\cos^{2}\phi+(g_{6}+2g_{7}-g_{3}^{2})\sin^{2}\phi-(g_{8}-g_{9}-g_{3}(g_{2}-g_{1}))\sin 2\phi}}{|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi|}, \tag{40}\]

Figure 20: The schematic diagram of the MZI having two inputs and two outputs associated with two 50:50 beam splitters (\(BS_{1}\) and \(BS_{2}\)), two mirrors (\(M_{1}\) and \(M_{2}\)) and two detectors (\(D_{1}\) and \(D_{2}\)). In order to account for the internal (external) loss, we consider fictitious beam splitters having transmission coefficient \(\mu\) (\(\eta\)) in the internal (external) arms of the interferometer. \(\hat{m}_{1},\ \hat{m}_{2},\ \hat{n}_{1}\) and \(\hat{n}_{2}\) are the vacuum annihilation operators of the corresponding modes.

\[\Delta\phi^{\prime}_{hd}=\frac{\sqrt{2\cos^{2}\left(\frac{\phi}{2}\right)\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right)+\left(\frac{1}{\mu\eta}\right)}}{|Re(e^{i\phi}(\langle\hat{a}_{1}\rangle-i\langle\hat{a}_{2}\rangle))|}. \tag{41}\] Here, the \(g_{i}\) with \(i=1,2,...,9\) are given in Eq. (14) of Appendix A. Detailed expressions of the corresponding phase sensitivities associated with the SID, IDD and HD schemes in the lossy case are derived in Appendix C. Eqs. (39), (40) and (41) show that the internal (\(\mu\)) and external (\(\eta\)) losses have an equal effect on the phase sensitivity, entering only through the product \(\mu\eta\), for each of the SID, IDD and HD schemes. So, we can consider either the internal or the external loss and see the variation in \(\Delta\phi\) with \(\gamma\). Here, we consider the case of photon loss under the different input combinations discussed in Section III.
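The single-mode content of Eq. (36) is easy to verify numerically: mixing the lossy mode with vacuum on a beam splitter of transmittivity \(\tau\) and keeping the transmitted port rescales the mean photon number by \(\tau\). The following is a minimal sketch of that check, here applied to a Kerr state; the beam-splitter unitary is written with one common sign convention, and the parameter values are illustrative.

```python
import numpy as np
from scipy.linalg import expm

dim = 20
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), 1)
I = np.eye(dim)

# Input mode: a Kerr state U_K |beta>, Eq. (21)
beta, gamma = 1.5, 0.1
c = np.zeros(dim, complex); c[0] = 1.0
for k in range(1, dim):
    c[k] = c[k - 1] * beta / np.sqrt(k)
c *= np.exp(-abs(beta) ** 2 / 2)
psi_in = np.diag(np.exp(-1j * gamma * n * (n - 1))) @ c

tau = 0.7                                    # transmittivity of the fictitious BS
angle = np.arccos(np.sqrt(tau))              # mixing angle, cos(angle) = sqrt(tau)
A, V = np.kron(a, I), np.kron(I, a)          # signal mode and vacuum (loss) mode
U = expm(angle * (A @ V.conj().T - A.conj().T @ V))  # beam-splitter unitary

vac = np.zeros(dim); vac[0] = 1.0
state = U @ np.kron(psi_in, vac)             # vacuum enters the loss port
n_out = np.vdot(state, A.conj().T @ A @ state).real
n_in = np.vdot(psi_in, a.T @ a @ psi_in).real
print(n_out, tau * n_in)                     # equal up to truncation error
```

The same construction with \(\tau=\mu\) in the arms and \(\tau=\eta\) at the outputs reproduces the structure of Eqs. (37)-(38).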
So, let us start with \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\), for which Eqs. (39)-(41) can be written as \[\Delta\phi^{\prime}_{sid}=\frac{\sqrt{\left(\frac{g_{2}}{\mu\eta}\right)+(g_{4}-g_{2}^{2})\cos^{2}\left(\frac{\phi}{2}\right)}}{\left|g_{2}\sin\left(\frac{\phi}{2}\right)\right|}, \tag{42}\] \[\Delta\phi^{\prime}_{idd}=\frac{\sqrt{\left(\frac{g_{2}}{\mu\eta}\right)+(g_{4}-g_{2}^{2})\cos^{2}\phi}}{|g_{2}\sin\phi|}, \tag{43}\] \[\begin{split}\Delta\phi^{\prime}_{hd}&=\frac{1}{|Re(-ie^{i\phi}\langle\hat{a}_{2}\rangle)|}\left(2\cos^{2}\left(\frac{\phi}{2}\right)\times\right.\\ &\left.\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right)+\frac{1}{\mu\eta}\right)^{\frac{1}{2}}.\end{split} \tag{44}\]

Fig. 21 and Fig. 22 show the variation of \(\Delta\phi/\Delta\phi_{SNL}\) with \(\gamma\) for \(r=0\) and \(r=1.5\), respectively. For the Kerr state, in Fig. 21, we can see that in the case of the HD scheme we can surpass the SNL in the lossy case (for \(<20\%\) loss) for some values of \(\gamma\), while for the SID and IDD schemes we get a phase sensitivity worse than the SNL under lossy conditions. On the other hand, if we take the SKS, we can surpass the SNL even for more than \(40\%\) photon loss in the HD scheme (Fig. 22), while the SID and IDD schemes still give a phase sensitivity worse than the SNL under loss. Now, we consider the case in which the input state is \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\). The phase sensitivity for this case is given in Eqs. (39)-(41) for all three schemes. Fig. 23 and Fig. 24 show the variation of \(\Delta\phi/\Delta\phi_{SNL}\) with \(\gamma\) for \(r=0\) and \(r=1.5\), respectively. For the Kerr state, in Fig. 23, we can see that in all three schemes we can surpass the SNL in the lossy case (for \(<30\%\) loss) for some values of \(\gamma\). If we take the SKS, we can surpass the SNL even for more than \(40\%\) photon loss in the HD scheme (Fig. 24), and the IDD scheme gives a phase sensitivity below the SNL for more than \(20\%\) photon loss. For the SID scheme, even in the lossless case we cannot beat the SNL.

## V Conclusion

We have studied the phase sensitivity of the MZI for different combinations of inputs, viz., vacuum, Kerr state, SKS, coherent state, etc. We discuss our results in Section III and Section IV for the lossless and lossy conditions, respectively. These sections are further divided into subsections on the basis of the input combinations of the light. In order to summarize the results found in Section III and Section IV, we made two tables, TABLE I and TABLE II, respectively. These tables contain the approximate best values of \(\Delta\phi/\Delta\phi_{SNL}\) for the different cases, taken from their respective plots, for all three detection schemes, so that we can go through the results step by step. In Section III.1, we discussed the vacuum and SKS as the inputs of the MZI, i.e., \(|\psi\rangle_{in}=|0\rangle_{1}\otimes|\psi_{SK}\rangle\). We found that, by using the SKS along with the vacuum state as the inputs of the MZI, we can surpass the SNL by a significant amount for the HD scheme (Figs. 5 and 6). Not only for the SKS but also for the Kerr state, the phase sensitivity surpasses the SNL in the HD scheme (Fig. 3). In Section III.2, we discussed the coherent state and SKS as the inputs of the MZI, i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle\). First, we investigated the phase sensitivity of the MZI by choosing the Kerr state with the coherent state as the input, i.e., \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{K}\rangle\). We found that a coherent input with a Kerr state is more useful than a double coherent input in approximately all situations.
After that, we investigated the \(|\psi\rangle_{in}=|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle\) case and found that, in the presence of squeezing, the Kerr medium enhances the phase sensitivity remarkably. An interesting finding was that coherent plus SKS as the inputs gives a better phase sensitivity than coherent plus squeezed vacuum as the inputs. If we look at the performance of the three detection schemes, we find that the HD scheme is dominant in all four cases over the IDD scheme which, in turn, does better than the SID scheme. In Section IV, we investigated the tolerance of the Kerr state against photon loss. For the Kerr state along with the vacuum state, in Fig. 21, we can see that in the case of the HD scheme we can surpass the SNL in the lossy case with \(<20\%\) loss. If we take the SKS along with the vacuum, we can surpass the SNL even for more than \(40\%\) photon loss in the HD scheme (Fig. 22). For the Kerr state along with the coherent state, we found that in all three schemes we can surpass the SNL in the lossy case for \(<30\%\) photon loss (Fig. 23). If we take the SKS along with the coherent state, we can surpass the SNL even for more than \(40\%\) photon loss in the HD scheme, and the IDD scheme gives a phase sensitivity below the SNL for more than \(20\%\) photon loss (Fig. 24). The factor \(\gamma\) quantifies the interaction time of the coherent state with the Kerr medium needed to produce the required Kerr state. From relation (19), we can see that \(\gamma\) is proportional to the Kerr medium length, or the interaction time: a larger \(\gamma\) means a larger medium length and vice-versa. Analytically, we found that for larger values of \(|\beta|\) the phase sensitivity increases while the optimal \(\gamma\) decreases, as we can see in Figs. 5 and 18. This means that, for intense coherent light, a small interaction time is required. In summary, the SKS can be used to improve the phase sensitivity of a MZI. Importantly, we found some alternative states, in place of the squeezed vacuum state, for phase super-sensitivity under both lossless and lossy conditions.

## VI Acknowledgements

DY, GS and PS acknowledge the UGC for the UGC Research Fellowship. DKM acknowledges financial support from the Science & Engineering Research Board (SERB), New Delhi, through the CRG Grant (CRG/2021/005917) and an Incentive Grant under the Institution of Eminence (IoE), Banaras Hindu University, Varanasi, India.

## Appendix A Phase sensitivity under the lossless case

In order to find the phase sensitivity from Eq. (4), we derive the expressions for \(\langle\hat{L}\rangle\), \(\langle\hat{L}^{2}\rangle\) and \(|\partial\langle\hat{L}\rangle/\partial\phi|\) for the different detection schemes. So, for the SID scheme, from Eq. (2) and Eq. (6), we get \[\langle\hat{L}_{sid}\rangle=g_{1}\sin^{2}\left(\frac{\phi}{2}\right)+g_{2}\cos^{2}\left(\frac{\phi}{2}\right)-\frac{1}{2}g_{3}\sin\phi, \tag{20}\] \[\begin{split}\langle\hat{L}_{sid}^{2}\rangle&=g_{1}\sin^{2}\left(\frac{\phi}{2}\right)+g_{2}\cos^{2}\left(\frac{\phi}{2}\right)-\frac{1}{2}g_{3}\sin\phi\\ &+g_{4}\cos^{4}\left(\frac{\phi}{2}\right)+g_{5}\sin^{4}\left(\frac{\phi}{2}\right)+\frac{1}{4}g_{6}\sin^{2}\phi\\ &+g_{7}\sin^{2}\phi-\left(g_{8}\cos^{2}\left(\frac{\phi}{2}\right)+g_{9}\sin^{2}\left(\frac{\phi}{2}\right)\right)\sin\phi,\end{split} \tag{21}\] and the variation of \(\langle\hat{L}_{sid}(\phi)\rangle\) with \(\phi\), \[\left|\frac{\partial\langle\hat{L}_{sid}\rangle}{\partial\phi}\right|=\frac{1}{2}\left|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi\right|. \tag{22}\] For the IDD scheme, from Eqs.
(2), (3) and (7), we get \[\langle\hat{L}_{idd}\rangle=(g_{2}-g_{1})\cos\phi-g_{3}\sin\phi, \tag{23}\] \begin{table} \begin{tabular}{c c c c c c} \(|\psi\rangle_{in}\) & \(|\alpha|\) & \(|\beta|\) & \(\mu\) & \(\Delta\phi_{sid}/\Delta\phi_{SNL}\approx\) & \(\Delta\phi_{idd}/\Delta\phi_{SNL}\approx\) & \(\Delta\phi_{hd}/\Delta\phi_{SNL}\approx\) \\ \hline \(|0\rangle\otimes|\psi_{K}\rangle\) & 0 & 5 & 1 & \(=1\) & \(=1\) & \(\approx 0.85\) \\ & & & 0.8 & \(\approx 1.12\) & \(\approx 1.12\) & \(\approx 1\) \\ & & & 0.6 & \(\approx 1.3\) & \(\approx 1.3\) & \(\approx 1.21\) \\ \hline \(|0\rangle\otimes|\psi_{SK}\rangle\)1 & 0 & 50 & 1 & \(=1\) & \(=1\) & \(\approx 0.1\) \\ & & & 0.8 & \(\approx 1.12\) & \(\approx 1.12\) & \(\approx 0.35\) \\ & & & 0.6 & \(\approx 1.3\) & \(\approx 1.3\) & \(\approx 0.82\) \\ \hline \(|\alpha\rangle\otimes|\psi_{K}\rangle\) & 50 & 2 & 1 & \(\approx 0.67\) & \(\approx 0.72\) & \(\approx 0.74\) \\ & & & 0.8 & \(\approx 0.87\) & \(\approx 0.88\) & \(\approx 0.9\) \\ & & & 0.6 & \(\approx 1.12\) & \(\approx 1.1\) & \(\approx 1.12\) \\ \hline \(|\alpha\rangle\otimes|\psi_{SK}\rangle\)1 & 50 & 50 & 1 & \(\approx 1.6\) & \(\approx 0.35\) & \(\approx 0.06\) \\ & & & 0.8 & \(\approx 1.8\) & \(\approx 0.7\) & \(\approx 0.4\) \\ & & & 0.6 & \(\approx 2.1\) & \(\approx 1\) & \(\approx 0.6\) \\ \end{tabular} \end{table} Table 2: This table contains the best values of \(\Delta\phi/\Delta\phi_{SNL}\) of the all three detection schemes for different cases in lossy condition. Values listed here are approximated values taken from the graphs plotted in the section IV. \begin{table} \begin{tabular}{c c c c c c} \(|\psi\rangle_{in}\) & \(|\alpha|\) & \(|\beta|\) & \(\Delta\phi_{sid}/\Delta\phi_{SNL}\) & \(\Delta\phi_{idd}/\Delta\phi_{SNL}\) & \(\Delta\phi_{hd}/\Delta\phi_{SNL}\approx\) \\ \hline \(|0\rangle\otimes|\psi_{K}\rangle\) & 0 & 2 & \(=1\) & \(=1\) & \(\approx 0.9\) \\ & 0 & 5 & \(=1\) & \(=1\) & \(\approx 0.85\) \\ & 0 & 15 & \(=1\) & \(=1\) & \(\approx 0.85\) \\ \hline \(|0\rangle\otimes|\psi_{SK}\rangle\)1 & 0 & 5 & \(=1\) & \(=1\) & \(\approx 0.15\) \\ & 0 & 50 & \(=1\) & \(=1\) & \(\approx 0.09\) \\ \hline \(|\alpha\rangle\otimes|\psi_{K}\rangle\) & 3 & 2 & \(>1\) & \(=1\) & \(>1\) \\ & 50 & 2 & \(\approx 0.65\) & \(\approx 0.72\) & \(\approx 0.73\) \\ & 3 & 50 & \(\leq 1\) & \(\leq 1\) & \(\leq 1\) \\ & 50 & 50 & \(\approx 0.8\) & \(\approx 0.8\) & \(\approx 0.8\) \\ \hline \(|\alpha\rangle\otimes|\beta\rangle\) & 3 & 2 & \(=1\) & \(=1\) & \(=1\) \\ & 50 & 2 & \(=1\) & \(=1\) & \(=1\) \\ & 3 & 50 & \(=1\) & \(=1\) & \(=1\) \\ & 50 & 50 & \(=1\) & \(=1\) & \(=1\) \\ \hline \(|\alpha\rangle\otimes|\psi_{SK}\rangle\)1 & 3 & 2 & \(\approx 3\) & \(\approx 1.2\) & \(\approx 0.27\) \\ & 50 & 2 & \(\approx 0.6\) & \(\approx 0.2\) & \(\approx 0.18\) \\ & 50 & 50 & \(\approx 1.8\) & \(\approx 0.35\) & \(\approx 0.23\) \\ & 3 & 50 & \(\approx 1\) & \(=1\) & \(\approx 0.085\) \\ \hline \(|\alpha\rangle\otimes|\psi_{SV}\rangle\)1 & 50 & 0 & \(\approx 0.23\) & \(\approx 0.23\) \\ \end{tabular} \end{table} Table 1: This table contains the best values of \(\Delta\phi/\Delta\phi_{SNL}\) of the all three detection schemes for different cases in lossless condition. Values listed here are approximated values taken from the graphs plotted in the section III. 
\[\begin{split}\langle\hat{L}_{idd}^{2}\rangle&=g_{1}+g_{2}+(g_{4}+g_{5})\cos^{2}\phi+g_{6}\sin^{2}\phi\\ &-2g_{7}\cos 2\phi-(g_{8}-g_{9})\sin 2\phi,\end{split} \tag{10}\] and the variation of \(\langle\hat{L}_{idd}(\phi)\rangle\) with \(\phi\), \[\left|\frac{\partial\langle\hat{L}_{idd}\rangle}{\partial\phi}\right|=\left|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi\right|. \tag{11}\] For the HD scheme, from Eqs. (2), (3) and (8), we get \[\begin{split}(\Delta\hat{L}_{hd})^{2}&=\langle\hat{L}_{hd}^{2}\rangle-\langle\hat{L}_{hd}\rangle^{2}=\frac{1}{2}+\cos^{2}\left(\frac{\phi}{2}\right)\\ &\times\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right),\end{split} \tag{12}\] \[\left|\frac{\partial\langle\hat{L}_{hd}\rangle}{\partial\phi}\right|=\frac{1}{\sqrt{2}}\left|Re(e^{i\phi}(\langle\hat{a}_{1}\rangle-i\langle\hat{a}_{2}\rangle))\right|. \tag{13}\] Here, \[\begin{split}&g_{1}=\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle,\ g_{2}=\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle,\ g_{3}=\langle\hat{a}_{1}\hat{a}_{2}^{\dagger}\rangle+\langle\hat{a}_{1}^{\dagger}\hat{a}_{2}\rangle,\\ &g_{4}=\langle\hat{a}_{2}^{\dagger 2}\hat{a}_{2}^{2}\rangle,\ g_{5}=\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{1}^{2}\rangle,\ g_{6}=\langle\hat{a}_{1}^{2}\hat{a}_{2}^{\dagger 2}\rangle+\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{2}^{2}\rangle,\\ &g_{7}=\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle,\ g_{8}=\langle\hat{a}_{1}\hat{a}_{2}^{\dagger 2}\hat{a}_{2}\rangle+\langle\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}^{2}\rangle,\\ &g_{9}=\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}^{2}\hat{a}_{2}^{\dagger}\rangle+\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{1}\hat{a}_{2}\rangle,\ g_{10}=\langle\hat{a}_{1}^{\dagger}\hat{a}_{2}\rangle-\langle\hat{a}_{1}\hat{a}_{2}^{\dagger}\rangle,\\ &g_{11}=\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}^{2}\hat{a}_{2}^{\dagger}\rangle-\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{1}\hat{a}_{2}\rangle,\ g_{12}=\langle\hat{a}_{1}\hat{a}_{2}^{\dagger 2}\hat{a}_{2}\rangle-\langle\hat{a}_{1}^{\dagger}\hat{a}_{2}^{\dagger}\hat{a}_{2}^{2}\rangle.\end{split} \tag{14}\] Appendix B contains the separate expressions of the expectation values of the operators given in Eq. (14).

## Appendix B Expectation values of the operators w.r.t.
state \(|\psi\rangle_{in}=|\alpha\rangle\otimes|\psi_{SK}\rangle\) \[\begin{array}{c}\langle\hat{a}_{2}\rangle=|\beta|c(Ce^{-is}+Se^{i(s+\theta)}),\end{array} \tag{15}\] \[\begin{array}{c}\langle\hat{a}_{2}^{2}\rangle=C^{2}|\beta|^{2}c_{2}e^{-i(2 \gamma+s_{2})}+CS(2|\beta|^{2}+1)e^{i\theta}\\ \qquad\qquad\qquad\qquad\qquad+S^{2}|\beta|^{2}c_{2}e^{i(2\gamma+2\theta+s_{ 2})},\end{array} \tag{16}\] \[\begin{array}{c}\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\rangle=|\alpha\mid^ {2},\end{array} \tag{17}\] \[\begin{array}{c}\langle\hat{a}_{2}^{\dagger}\hat{a}_{2}\rangle=|\beta|^{2}(C^ {2}+S^{2}+2c_{2}CS\cos\left(2\gamma+\theta+s_{2}\right))+S^{2},\end{array} \tag{18}\] \[\begin{array}{c}\langle\hat{a}_{1}\hat{a}_{2}^{\dagger}\rangle+\langle\hat{a }_{1}^{\dagger}\hat{a}_{2}\rangle=2\mid\alpha\mid\mid\beta\mid c\left(C\cos(s)+ S\cos(\theta+s)\right),\end{array} \tag{19}\] \[\begin{array}{c}\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{2}^{2}\rangle=|\beta|^{4}C^{4}+ \left(|\beta|^{4}S^{4}+4|\beta|^{2}S^{4}+2S^{4}\right)\\ \qquad\qquad\qquad\qquad\qquad+2|\beta|^{4}C^{2}S^{2}c_{4}\cos\left(2\theta+ 12\gamma+s_{4}\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+4|\beta|^{4}c_{2}CS(C^{2}+S^{ 2})\cos\left(\theta+6\gamma+s_{2}\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+2CS|\beta|^{2}c_{2}(C^{2}+5S^{ 2})\cos\left(\theta+2\gamma+s_{2}\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\left(4|\beta|^{4}+8|\beta|^{2}+ 1\right)C^{2}S^{2},\end{array} \tag{20}\] \[\begin{array}{c}\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{1}^{2}\rangle=|\alpha|^{4},\end{array} \tag{21}\] \[\begin{array}{c}\langle\hat{a}_{1}^{2}\hat{a}_{2}^{\dagger 2}\rangle+\langle\hat{a }_{1}^{\dagger 2}\hat{a}_{2}^{2}\rangle=2|\alpha|^{2}|\beta|^{2}\left(c_{2}C^{2}\cos \left(2\gamma+s_{2}\right)\right.\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+2CS\cos\theta+c_{2}S^{2} \cos\left(2\theta+2\gamma+s_{2}\right)\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+2|\alpha|^ {2}CS\cos\theta,\end{array} \tag{22}\] \[\begin{array}{c}\langle\hat{a}_{1}^{\dagger}\hat{a}_{1}\hat{a}_{2}^{\dagger 2}\rangle+\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{1}\hat{a}_{2}\rangle=2|\alpha|^{3}|\beta|\left(C\sin s-S\sin \left(\theta+s\right)\right),\end{array} \tag{23}\] \[\begin{array}{c}\langle\hat{a}_{1}\hat{a}_{2}^{\dagger 2}\hat{a}_{2}\rangle-\langle\hat{a }_{1}^{\dagger 2}\hat{a}_{2}\rangle-\langle\hat{a}_{1}^{\dagger 2}\hat{a}_{2}^{\dagger}\hat{a}_{2}^{2}\rangle=\\ 2i|\alpha|\left(|\beta|^{3}c((C^{3}+2CS^{2})\sin\left(2\gamma+s\right)\right. \right.\\ \qquad\qquad\qquad\qquad\qquad\qquad\left.-(S^{3}+2C^{2}S)\sin\left(\theta+2 \gamma+s\right)\right)\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.-|\beta|c((2S^{3}+C^{2}S)\sin \left(\theta+s\right)-3CS^{2}\sin s)\\ \qquad\qquad\qquad\qquad\qquad\left.+|\beta|^{3}c_{3}CS(C\sin\left(\theta+6 \gamma+s_{3}\right)\right.\\ \qquad\qquad\qquad\qquad\qquad\left.-\left.\left.S\sin\left(2\theta+6\gamma+s_{ 3}\right)\right)\right).\end{array} \tag{24}\] Where, \(C=\cosh r,\ S=\sinh r,\ c=e^{|\beta|^{2}(\cos 2\gamma-1)},\ c_{2}=e^{|\beta|^{2}(\cos 4\gamma-1)},\ c_{3}=e^{|\beta|^{2}(\cos 6 \gamma-1)},\ c_{4}=e^{|\beta|^{2}(\cos 8\gamma-1)},\ s=|\beta|^{2}\sin 2\gamma,\ s_{2}=|\beta|^{2}\sin 4\gamma,\ s_{3}=|\beta|^{2}\sin 6 \gamma,\ s_{4}=|\beta|^{2}\sin 8\gamma.\) ## Appendix C Phase sensitivity in lossy condition In order to find the phase sensitivity from Eq. 
(4), we derive the expressions for \(\langle\hat{L}\rangle\), \(\langle\hat{L}^{2}\rangle\) and \(|\partial\langle\hat{L}\rangle/\partial\phi|\) for the different detection schemes. So, for the SID scheme, from Eq. (37) and Eq. (6), we get \[\langle\hat{L}_{sid}\rangle=\mu\eta\left(g_{1}\sin^{2}\left(\frac{\phi}{2}\right)+g_{2}\cos^{2}\left(\frac{\phi}{2}\right)-\frac{1}{2}g_{3}\sin\phi\right), \tag{38}\] \[\begin{split}\langle\hat{L}_{sid}^{2}\rangle&=\langle\hat{L}_{sid}\rangle+\mu^{2}\eta^{2}\left(g_{4}\cos^{4}\left(\frac{\phi}{2}\right)\right.\\ &+g_{5}\sin^{4}\left(\frac{\phi}{2}\right)+g_{7}\sin^{2}\phi+\frac{1}{4}g_{6}\sin^{2}\phi\\ &-\left.\left(g_{8}\cos^{2}\left(\frac{\phi}{2}\right)+g_{9}\sin^{2}\left(\frac{\phi}{2}\right)\right)\sin\phi\right),\end{split}\] and the variation of \(\langle\hat{L}_{sid}(\phi)\rangle\) with \(\phi\), \[\left|\frac{\partial\langle\hat{L}_{sid}\rangle}{\partial\phi}\right|=\frac{\mu\eta}{2}\left|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi\right|. \tag{39}\] For the IDD scheme, from Eqs. (37), (38) and (7), we get \[\langle\hat{L}_{idd}\rangle=\mu\eta\left((g_{2}-g_{1})\cos\phi-g_{3}\sin\phi\right), \tag{40}\] \[\begin{split}\langle\hat{L}_{idd}^{2}\rangle&=\mu\eta(g_{1}+g_{2})+\mu^{2}\eta^{2}((g_{4}+g_{5})\cos^{2}\phi\\ &+g_{6}\sin^{2}\phi-2g_{7}\cos 2\phi-(g_{8}-g_{9})\sin 2\phi),\end{split} \tag{41}\] and the variation of \(\langle\hat{L}_{idd}(\phi)\rangle\) with \(\phi\), \[\left|\frac{\partial\langle\hat{L}_{idd}\rangle}{\partial\phi}\right|=\mu\eta\left|(g_{1}-g_{2})\sin\phi-g_{3}\cos\phi\right|. \tag{42}\] For the HD scheme, from Eqs. (37), (38) and (8), we get \[\begin{split}(\Delta\hat{L}_{hd})^{2}&=\langle\hat{L}_{hd}^{2}\rangle-\langle\hat{L}_{hd}\rangle^{2}=\frac{1}{2}+\mu\eta\cos^{2}\left(\frac{\phi}{2}\right)\\ &\times\left(Re(e^{i\phi}(\Delta\hat{a}_{2})^{2})+(g_{2}-\langle\hat{a}_{2}^{\dagger}\rangle\langle\hat{a}_{2}\rangle)\right),\end{split} \tag{43}\] \[\left|\frac{\partial\langle\hat{L}_{hd}\rangle}{\partial\phi}\right|=\sqrt{\frac{\mu\eta}{2}}\left|Re(e^{i\phi}(\langle\hat{a}_{1}\rangle-i\langle\hat{a}_{2}\rangle))\right|. \tag{44}\]
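To close the loop on the lossy expressions, here is a brief numerical sketch that evaluates Eq. (41) from truncated-Fock-space moments of \(|\alpha\rangle_{1}\otimes|\psi_{SK}\rangle_{2}\) and compares it with the SNL of the (lossless) input photon number. All parameter values are illustrative, and since the losses enter only through the product \(\mu\eta\), a single transmission product is scanned.

```python
import numpy as np
from scipy.linalg import expm

dim = 50
nvec = np.arange(dim)
a = np.diag(np.sqrt(nvec[1:]), 1)

alpha, beta, r, theta, gamma, phi = 2.0, 1.5, 0.4, np.pi, 0.05, 0.0

c = np.zeros(dim, complex); c[0] = 1.0       # coherent |beta>
for k in range(1, dim):
    c[k] = c[k - 1] * beta / np.sqrt(k)
c *= np.exp(-abs(beta) ** 2 / 2)

U_K = np.diag(np.exp(-1j * gamma * nvec * (nvec - 1)))      # Eq. (18)
S = expm(0.5 * (r * np.exp(1j * theta) * a.T @ a.T
                - r * np.exp(-1j * theta) * a @ a))         # Eq. (20)
psi = S @ (U_K @ c)                                         # SKS, Eq. (22)

ev = lambda O: np.vdot(psi, O @ psi)
a2, a2sq, g2 = ev(a), ev(a @ a), ev(a.T @ a).real
var_a2 = a2sq - a2 ** 2                      # (Delta a_2)^2

snl = 1 / np.sqrt(abs(alpha) ** 2 + g2)      # SNL of the total input photons
den = abs(np.real(np.exp(1j * phi) * (alpha - 1j * a2)))
for mu_eta in (1.0, 0.8, 0.6):               # overall transmission mu * eta
    num = (2 * np.cos(phi / 2) ** 2
           * (np.real(np.exp(1j * phi) * var_a2) + (g2 - abs(a2) ** 2))
           + 1 / mu_eta)
    print(mu_eta, np.sqrt(num) / den / snl)  # Eq. (41) relative to the SNL
```

The printed ratio falling below 1 indicates sub-shot-noise operation at the chosen transmission, mirroring the behaviour shown in Figs. 21-24.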
2307.16841
Prospects for Future Experimental Tests of Gravity with Black Hole Imaging: Spherical Symmetry
Astrophysical black holes (BHs) are universally expected to be described by the Kerr metric, a stationary, vacuum solution of general relativity (GR). Indeed, by imaging M87$^\star$ and Sgr A$^\star$ and measuring the size of their shadows, we have substantiated this hypothesis through successful null tests. Here we discuss the potential of upcoming improved imaging observations in constraining deviations of the spacetime geometry from that of a Schwarzschild BH (the nonspinning, vacuum GR solution), with a focus on the photon ring. The photon ring comprises a series of time-delayed, self-similarly nested higher-order images of the accretion flow, and is located close to the boundary of the shadow. In spherical spacetimes, these images are indexed by the number of half-loops executed around the BH by the photons that arrive in them. The delay time offers an independent shadow size estimate, enabling tests of shadow achromaticity, as predicted by GR. The image self-similarity relies on the lensing Lyapunov exponent, which is linked to photon orbit instability near the unstable circular orbit. Notably, this critical exponent, specific to the spacetime, is sensitive to the $rr-$component of the metric, and also offers insights into curvature, beyond the capabilities of currently available shadow size measurements. The Lyapunov time, a characteristic instability timescale, provides yet another probe of metric and curvature. The ratio of the Lyapunov and the delay times also yields the lensing Lyapunov exponent, providing alternative measurement pathways. Remarkably, the width of the first-order image can also serve as a discriminator of the spacetime. Each of these observables, potentially accessible in the near future, offers spacetime constraints that are orthogonal to those of the shadow size, enabling precision tests of GR.
Prashant Kocherlakota, Luciano Rezzolla, Rittick Roy, Maciek Wielgus
2023-07-31T17:03:48Z
http://arxiv.org/abs/2307.16841v2
# Extreme Light Bending in Spherically-Symmetric Black Hole Spacetimes: ###### Abstract Recent images from the Event Horizon Telescope of accreting supermassive black holes (BHs), along with upcoming observations with better sensitivity and angular resolution, offer exciting opportunities to deepen our understanding of spacetime in strong gravitational fields. A significant focus for future BH imaging observations is the direct detection of the "photon ring," a key and novel target. Due to intense gravitational lensing near a BH, light emitted by hot plasma nearby can loop around the BH multiple times before reaching an observer far away. These highly-lensed photons form a narrow band on the observer's sky, known as the photon ring. The photon ring displays intricate structure, consisting of self-similarly nested subrings. In spherically-symmetric spacetimes, these subrings are neatly indexed by the maximum number of half-loops executed around the BH by the photons that arrive in them. Each subring represents an entire "higher-order" image of the horizon-scale accretion flow. Furthermore, this self-similarity is controlled by a single critical exponent linked to the radial (in)stability of photon orbits near the critical (circular) photon orbit, solely determined by the spacetime geometry. However, extracting such information about the spacetime geometry can be challenging because the observed photon ring is also influenced by the structure of the emitting region. To address this, we conducted a comprehensive study by varying (a) a wide range of emission-zone morphology models and (b) families of spacetime metrics. Our findings show that several observables can provide valuable information about the spacetime geometry. Specifically, the lensing exponent can be reliably determined from future ultrahigh-resolution observations, and the width of the first-order photon subring serves as an important discriminator of the spacetime geometry. Although the BH shadow size is also determined by the spacetime geometry, it does not capture all its aspects. Detecting the lensing exponent in the future will grant access also to the \(rr-\)component of the spacetime metric as well as significantly narrow down currently accessible metric-deviation parameter spaces. Additionally, observations of flaring events across different wavelengths might reveal time-delayed secondary images, with the delay time providing a promising new way to independently estimate the BH shadow size. In conclusion, these complementary and diverse observables have the potential to shed light on the fundamental theory of gravity and fields, making valuable contributions to the experimental gravity program. The EHT Collaboration has recently imaged the supermassive compact objects M87\({}^{\star}\)[1; 2; 3; 4; 5; 6] and Sgr A\({}^{\star}\)[7; 8; 9; 10; 11; 12], adding to the mounting evidence indicating the ubiquitous existence of Kerr BHs of general relativity (GR) at the centers of galaxies. Images of both EHT sources reveal a telltale dark region in the center that is surrounded by a bright emission ring, features that are typical in the synthetic images constructed from the simulations of accretion of hot, magnetized plasma onto Kerr BHs, that are used to model the astrophysical conditions of such objects [13; 14; 5; 10]. 
Optical transparency at \(1.3\,\mathrm{mm}\), the EHT observing wavelength for these sources, implies that the observed central dark region is best explained by the presence of a photon shell in the spacetime [12], which is assured to occur in a typical BH spacetime [15; 16]. In the spacetime of a Kerr BH, the photon shell \(\mathrm{S}\) is the union of a set of spherical surfaces (in appropriately chosen coordinates, e.g., Boyer-Lindquist coordinates [17]), each of which admits bound photon orbits [18]. The (exterior; cf. [19]) photon shell of a Kerr BH encloses its horizon and demarcates the fate of photons impinging onto it into those that (a) fall into its interior and eventually end up inside the BH, (b) remain on one of the spheres in \(\mathrm{S}\), or (c) escape to faraway observers, depending on their four-momenta or, equivalently, their impact parameters. Photon orbits of type (b) are of particular interest since they determine the BH shadow boundary curve (or the \(n\!=\!\infty\) critical curve) \(\mathcal{C}_{\infty}\) (see, e.g., [19; 20; 21]) on the observer's image plane. Of these, there are also two planar (equatorial) bound orbits that are fundamentally tied to the symmetries of the spacetime (see, e.g., [22]). When a BH is lit up by a source of emission such as hot inflowing gas, the central intensity depression that appears in the image can typically be related to the BH shadow (the region interior to \(\mathcal{C}_{\infty}\)). Photon orbits that cross the horizon [type (a)] have shorter paths through the spacetime than those that do not [type (c)] and thus pick up less emission from the hot plasma present outside the BH, leading to much smaller intensities on the image plane in the pixels that they arrive in [23; 24; 25; 26; 27; 28; 29; 30; 31]. This does not mean that the intensity maximum in the image coincides exactly with the shadow boundary but that there is a strong association between the two (see, e.g., [24; 28; 29; 30]). Indeed the diameter of the emission ring in the images depends not just on the spacetime geometry of M87\({}^{\star}\) or of Sgr A\({}^{\star}\) but also on the arrangement, absorption, and emissivity of the accreting material in their vicinity [12; 6]. Nevertheless, the set of sizes of the observed bright emission ring and that of the shadow boundary can be empirically related through synthetic images obtained from simulations, thereby allowing for an inference of the shadow size of M87\({}^{\star}\) [6; 32; 33] and of Sgr A\({}^{\star}\) [12] from the measured emission ring diameter [34]. The EHT finds that the shadow sizes of M87\({}^{\star}\) and of Sgr A\({}^{\star}\) are consistent with those of Kerr BHs of their respective masses [6; 12]. Together with gravitational-wave measurements involving stellar-mass BHs [35; 36; 37], the EHT observations demonstrate the success of GR in describing the strong-field gravity near astrophysical BHs. The description above illustrates how the location of the photon shell \(\mathrm{S}\) and the shape of the shadow boundary \(\mathcal{C}_{\infty}\) are determined by the causal structure of the Kerr spacetime, and how the latter can be "measured" approximately. Since \(\mathcal{C}_{\infty}\) is the intersection of the past light cone at the location of an asymptotic static observer with the world-tube of \(\mathrm{S}\) [38; 39], it depends additionally also on the inclination of the observer in general. In standard (cf.
[15]) static and spherically-symmetric spacetimes, which we restrict ourselves to henceforth, the photon shell \(\mathrm{S}\) degenerates to a single spherical surface in the "bulk" of space, and the shadow boundary \(\mathcal{C}_{\infty}\), which is the gravitationally-lensed projection of \(\mathrm{S}\) on the observer's image plane (the "boundary"), is circular and independent of the viewing angle. When photon orbits coincide with the null geodesics of the spacetime metric tensor \(\mathpzc{G}\) [40], \(\mathcal{C}_{\infty}\) is determined solely by \(\mathpzc{G}\), allowing for clean tests of the no-hair theorems [12; 41; 32]. This is a presupposition throughout this work. Furthermore, horizonless ultracompact objects (UCOs) can also possess photon spheres and can naturally also cast shadows (see, e.g., [42; 43; 44; 45; 46; 47; 48; 49]). Therefore, observations of the shadow cast by an astrophysical UCO allow us to potentially distinguish between - and possibly rule out - different types of BHs, naked singularities, wormholes, gravastars, and other exotic objects that may be _a priori_ allowed models (see, e.g., [12; 33; 50; 51; 52]). That this, in turn, can be used to distinguish between candidate theories of classical gravity (and fields) that these UCO models arise as solutions in has also been emphasized there. A concomitant of the underlying \(\mathcal{O}(3)\)-isometry in such spherically-symmetric spacetimes is that all geodesic orbits are spatially-planar. For photon orbits that connect an emitter and observer in particular, the orbital plane is simply defined by the spatial locations of the emitter, the observer, and the center of space. Once the plane of a photon orbit is fixed, the (apparent) impact parameter \(\xi\), which is the ratio of its conserved angular momentum \(p_{\varphi}\) and energy \(|p_{t}|\), single-handedly determines the radius at which the photon appears on the observer's image plane [20; 21]. In particular, the critical impact parameter \(\xi=\xi_{\mathrm{ps}}\) of photons traversing the photon sphere, located at some \(r=r_{\mathrm{ps}}\), corresponds to the radius of the shadow boundary. Photons that appear in a region close to the shadow boundary \(\mathcal{C}_{\infty}\) (\(|\xi-\xi_{\mathrm{ps}}|\ll 1\)) can have orbits that access the region close to the photon sphere \(\delta\mathrm{S}\) (\(|r-r_{\mathrm{ps}}|\ll 1\)). Since photon orbits close on themselves at the photon sphere (all bound photon orbits are also planar in spherically-symmetric spacetimes, i.e., they are simply all circular), we can naturally expect that \(\delta\mathrm{S}\) is a region of strong gravitational lensing [53; 54; 21; 55], i.e., photons that appear in \(\delta\mathcal{C}_{\infty}\) and which access \(\delta\mathrm{S}\) have orbits that differ substantially from a straight line and are bent through large angles. In particular, the total deflection angle \(\not{\Delta}\varphi(\xi)\) experienced by photons diverges logarithmically in the limit of approach to the \(n=\infty\) critical curve, \(\xi\to\xi_{\mathrm{ps}}\), and the slope of this divergence is connected to a single critical exponent [53; 21] that governs the expansion of a meridional null congruence at the photon sphere (see, e.g., [56]). Therefore, this lensing Lyapunov exponent \(\gamma_{\mathrm{ps}}\) is also determined purely by the spacetime metric tensor \(\mathpzc{G}\) and is a new observable that can be used to discriminate between UCOs and to test alternative theories of classical gravity [57; 58; 59].
The region \(\delta\mathcal{C}_{\infty}\) is referred to as the photon ring ([57; 58]; not to be confused with the light ring, which is a bound, planar null geodesic [22]) and has a rich (barring when the emitting region is perfectly spherical [31]) substructure. The photon ring is comprised of a series of discrete - and often overlapping - subrings that are indexed by the number of half-loops executed by the photons that arrive in them. Furthermore, these subrings are organized self-similarly on the image plane, with the spacetime-specific lensing Lyapunov exponent governing this self-similarity. For accreting UCOs such as M87\({}^{\star}\) or Sgr A\({}^{\star}\), their photon subrings would be higher-order images of the emitting region present in the close vicinity. Naturally, the structures of their observable photon rings are determined both by their respective spacetime geometries as well as by the properties of the hot plasma. Recent interest in the theoretical characterization of photon rings has been motivated by the feasibility of detecting them with future radio-interferometric observations through analysis of the measurements of the visibility amplitudes, i.e., the Fourier transforms of the image intensity, at sufficiently high spatial frequencies [58] (see also [60]). Indeed, recent proposals have forwarded the possibility of observing the photon ring through very long baseline interferometry (VLBI) methods using proposed facilities such as the next-generation Event Horizon Telescope (ngEHT; [61]) and next-generation microarcsecond resolution instruments with space-based VLBI baselines [62; 63; 64]. Methods have been proposed to test the consistency of observations with the Kerr metric [62; 65], extract Kerr metric parameters [66; 67], and test deviations from the Schwarzschild metric [59]. The theoretical characterization of photon rings and higher-order images nevertheless remains incomplete; e.g., apart from very few studies [31], geometrically-thin equatorial emission has been assumed (e.g., [65; 66; 67; 68; 69; 70]). In anticipation of detections of the photon ring, our goal here is to understand both qualitatively and quantitatively how its structure depends on both the emission physics and the spacetime geometry. This will help clarify, in turn, what we can learn about the gravitational and non-gravitational aspects of supermassive astrophysical objects undergoing accretion by studying their images. Towards this end, in Sec. I we begin with a discussion of gravitational lensing in standard static and spherically-symmetric spacetimes, of features of strong lensing such as higher-order images in particular, and of the intimately linked notion of the photon shell Lyapunov exponent. In Sec. II we will explore the total range of variations of the observable photon ring caused by varying the morphology of the emitting region in a Schwarzschild BH spacetime. To achieve this, we introduce a simplistic \(3-\)parameter model for the morphology of the emitting region which can be used to interpolate between a geometrically-thin disk and a sphere. We use conical surfaces, pictured in Fig. 1, to generate an axially-symmetric wedge-shaped region - a "conical torus" - with the three parameters being used to set the locations of its inner \(r_{\rm in}\) and outer \(r_{\rm out}\) surfaces as well as its geometrical thickness or half-opening angle \(\vartheta_{1/2}\) (this defines the latitudes of the bounding surfaces).
Qualitatively we will see below that for a thin emission disk (\(\vartheta_{1/2}\approx 0\)) with an inner boundary \(r_{\rm in}\) at the innermost stable circular orbit (ISCO), as in the case of the Novikov-Thorne accretion model [71] which yields an excellent descriptor of quasar spectra [72], the photon subrings are all compact, lie outside the shadow boundary and are spatially-separated on the image plane (the photon ring has "gaps"; see Fig. 4). On the other extreme, for a spherical source of emission (\(\vartheta_{1/2}\approx\pi/2\)) with an inner emission boundary \(r_{\rm in}\) at the event horizon, as in the case of the Bondi-Michel accretion model [73], the subrings straddle the shadow boundary and appear on top of each other. Furthermore, the first-order subring is noncompact. Thus, we can see how different emitting region morphologies in the same spacetime can cast qualitatively different photon rings on the image plane. To understand this more concretely, and to disentangle the effects of the scale height, the location of the inner boundary, and the compactness of the emitting region, as well as of the observer inclination angle \(\iota\), we study the impact of each of these aspects extensively and report quantitative estimates for the maximal variations in the diameters and widths of the photon subrings with varying morphological parameters, to better guide future attempts at detecting them. In Sec. III, we then explore the variation in these characteristic features when allowing the spacetime geometry to also vary. We parametrize the spacetime metric using the Rezzolla-Zhidenko (RZ) scheme [30; 74] and demonstrate here for the first time how future measurements of the Lyapunov exponent, when combined with the already available measurements of the shadow size, can break degeneracies that persist in constraining the spacetime metric of astrophysical BHs. Finally, in Sec. IV we summarize our results and conclude by discussing the potential for future tests of the spacetime geometry of astrophysical UCOs. A note on convention: We will reserve \(\nu\) to denote a frequency throughout; it will never be used as a label for a tensor index. For example, \(I_{\nu}\) will be used to denote specific intensity (\(\rm W\ m^{-2}sr^{-1}Hz^{-1}\)) and not the component of a one-form \(I\). We employ geometrized units throughout, in which \(G=c=1\).

## I. Higher-order images due to gravitational lensing, universal photon ring scaling relations, and the photon sphere Lyapunov exponent(s)

The line element of an arbitrary static and spherically-symmetric spacetime can be expressed in spherical-polar coordinates \(x^{\alpha}=(t,r,\vartheta,\varphi)\) as, \[\mathrm{d}s^{2}=g_{\alpha\beta}\,\mathrm{d}x^{\alpha}\mathrm{d}x^{\beta}=-f\ \mathrm{d}t^{2}+\frac{g}{f}\ \mathrm{d}r^{2}+h\ \mathrm{d}\Omega_{2}^{2}\,, \tag{1}\] where the metric functions \(f,g\), and \(h\) are functions of \(r\) alone, and \(\mathrm{d}\Omega_{2}^{2}=\mathrm{d}\vartheta^{2}+\sin^{2}\vartheta\ \mathrm{d}\varphi^{2}\) is the standard line element on a unit 2-sphere. We will assume reasonably that \(g>0\) everywhere and that \(h>0\) except at the center of space, located at \(h=0\). The metric above describes a BH spacetime if \(f(r)\) admits real, positive zeroes (with \(h>0\)), the largest of which locates the event horizon, which we denote by \(r_{\rm h}\).
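As a minimal numerical sketch of these definitions (assuming, purely for illustration, the Schwarzschild metric functions \(f=1-2M/r\), \(g=1\), \(h=r^{2}\); the photon-sphere condition \((fh^{\prime}-f^{\prime}h)|_{r_{\rm ps}}=0\) used below anticipates the circular-orbit discussion that follows):

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0  # geometrized units, G = c = 1

# Metric functions of eq. (1) for a Schwarzschild BH (illustrative choice).
f = lambda r: 1.0 - 2.0 * M / r
h = lambda r: r**2            # and g(r) = 1 identically

# Event horizon: largest root of f(r) = 0.
r_h = brentq(f, 1.5 * M, 4.0 * M)                       # -> 2 M

# Photon sphere: root of f h' - f' h = 0 (cf. the discussion below).
fp = lambda r: 2.0 * M / r**2   # f'(r)
hp = lambda r: 2.0 * r          # h'(r)
r_ps = brentq(lambda r: f(r) * hp(r) - fp(r) * h(r),
              2.1 * M, 10.0 * M)                        # -> 3 M

# Critical impact parameter (shadow radius), eta_ps = sqrt(h/f) at r_ps.
eta_ps = np.sqrt(h(r_ps) / f(r_ps))                     # -> sqrt(27) M
print(r_h, r_ps, eta_ps)
```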
The Lagrangian \(\mathscr{L}\) describing the motion along a null geodesic \(x^{\alpha}(\lambda)\) is defined as \(2\mathscr{L}:=g_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}\), where \(\dot{x}^{\alpha}=\mathrm{d}x^{\alpha}/\mathrm{d}\lambda=:k^{\alpha}\) is the tangent and \(\lambda\) an affine parameter along it. Of the associated momenta, \(p_{\alpha}=\partial_{\dot{x}^{\alpha}}\mathscr{L}=k_{\alpha}\), there are two that are conserved (due to the Euler-Lagrange equations, \(\dot{p}_{\alpha}=\partial_{\alpha}\mathscr{L}\)), namely the energy \(E=-p_{t}\) and the azimuthal angular momentum \(L=p_{\varphi}\) of the orbit. In addition to the three conserved quantities \(\{2\mathscr{L}(=0),E,L\}\), a fourth constant of the motion exists, namely the (non-negative) Carter constant \(C=p_{\vartheta}^{2}+p_{\varphi}^{2}\csc^{2}\vartheta\). The Carter constant defined in this way is simply the square of the total angular momentum and is associated with a Killing-Yano tensor (see, e.g., [75]), which plays a role similar to that of the familiar Laplace-Runge-Lenz vector in Newtonian gravity [76; 77; 78; 79]. With four dynamical constants, we can separate the geodesic equation and write the tangent to an arbitrary null geodesic as being [80], \[k^{\alpha}=E\left(f^{-1},\pm_{r}g^{-1/2}\sqrt{1-\eta^{2}fh^{-1}},\pm_{\vartheta}h^{-1}\sqrt{\eta^{2}-\xi^{2}\csc^{2}\vartheta},\pm_{\varphi}h^{-1}\xi\csc^{2}\vartheta\right)\,, \tag{2}\] where the indices \(\pm_{r},\pm_{\vartheta}\), and \(\pm_{\varphi}\) correspond to the signs of the radial, polar, and azimuthal velocities respectively, and we have introduced the useful (energy-rescaled, non-negative) quantities \(\xi=|L|/E\) and \(\eta=\sqrt{C}/E\). Orbits with \(\eta=0\) necessarily require \(\xi=0\) as well (see \(k^{\vartheta}\)), i.e., \(\dot{\vartheta}=\ddot{\vartheta}=\dot{\varphi}=\ddot{\varphi}=0\), which correspond to radial null geodesics, i.e., members of the ingoing (\(-_{r}\)) and outgoing (\(+_{r}\)) principal null congruences (PNCs) of these type-D static spacetimes. Orbits with \(\eta=\xi\neq 0\) are equatorial orbits and those with \(\eta\neq 0\) but \(\xi=0\) correspond to meridional photon orbits (constant-\(\varphi\) orbits, corresponding to motion along longitudes). Notice that the spatial projections of all of these photon orbits are planar. In fact, due to spherical symmetry, all null geodesics are spatially-planar. The solution to the relevant Euler equation, \[\frac{\mathrm{d}\varphi}{\mathrm{d}\vartheta}=\frac{k^{\varphi}}{k^{\vartheta}}=\pm_{\vartheta}\pm_{\varphi}\,\frac{(\xi/\eta)\csc^{2}\vartheta}{\sqrt{1-(\xi/\eta)^{2}\csc^{2}\vartheta}}\,, \tag{3}\] is given simply as, \[\varphi(\vartheta)=-\pm_{\vartheta}\pm_{\varphi}\sin^{-1}\left(\cot\vartheta/\chi\right)+\varphi_{0}\,, \tag{4}\] where \(\chi:=\left(\eta^{2}/\xi^{2}-1\right)^{1/2}\) and \(\varphi_{0}\) is an integration constant. We can put this into a more suggestive form as, \[(-\tilde{\chi}\sin\varphi_{0})\sin\vartheta\cos\varphi+(\tilde{\chi}\cos\varphi_{0})\sin\vartheta\sin\varphi+\cos\vartheta=0\,,\] in order to see that this is the equation of a plane passing through the center of space. In this last equation, we have absorbed all the signs into \(\tilde{\chi}:=-\pm_{\vartheta}\pm_{\varphi}\chi\) and clearly orbits (or their sections) that differ only in the sign of \(\tilde{\chi}\) lie in the same plane. For completeness, we note that since the tangents to arbitrary nonnull geodesics differ from eq.
2 only in the radial component (which is then \(\dot{r}=\pm_{r}Eg^{-1/2}\sqrt{1+2\mathscr{L}E^{-2}f-\eta^{2}fh^{-1}}\) instead), all geodesics (and in particular timelike ones, \(2\mathscr{L}=-1\)) are also planar. Finally, all geodesic orbits are simply copies of those that lie in, e.g., the equatorial plane or a meridional plane, to which we restrict our attention henceforth. Due to the strong gravity near compact objects, it is generically possible for photons to move on circular orbits (see, e.g., Refs. [15; 16]), such that \(k^{r}=\dot{k}^{r}=0\), which, of course, correspond to fixed points of the radial motion (see Sec. I.2 below). While there exist static spacetimes admitting multiple such fixed points (see, e.g., Refs. [81; 50; 82]), we restrict our focus here to those that admit a single unstable fixed point, present at \(r=r_{\mathrm{ps}}\), corresponding to the location of the photon sphere. Photons moving on meridional circular orbits have \(\eta=\eta_{\mathrm{ps}}=\sqrt{h(r_{\mathrm{ps}})/f(r_{\mathrm{ps}})}\) (see eq. 46), and only those meridional photon orbits with angular momenta \(\eta\geq\eta_{\mathrm{ps}}\) admit radial turning points at some \(r=r_{\mathrm{tp}}(\eta)\), which can be identified as solutions of \(k^{r}(\eta,r_{\mathrm{tp}}(\eta))=0\).

### Angular Deflection due to Gravitational Lensing and Higher-Order Images

Meridional orbits admit two trivial polar turning points at \(\vartheta=0,\pi\), courtesy of the coordinate system. Treating their polar sections \(\vartheta(\lambda)\) as oriented curves, due to the invariance of the sense of rotation or "polarity" of the orbit around the central object, it is clear to see that the total polar deflection experienced by meridional photons on the photon sphere diverges, \(\not{\Delta}\vartheta_{\mathrm{ps}}=\not{\int}\mathrm{d}\vartheta=E\eta_{\mathrm{ps}}h^{-1}(r_{\mathrm{ps}})\int_{0}^{\infty}\mathrm{d}\lambda\), where \(\not{\int}\) denotes an integral along the worldline of the photon (see, e.g., [68; 75; 83]). Naturally, we can anticipate that the deflection experienced by the photons that approach close to the photon sphere can become arbitrarily large. This total angular deflection \(\not{\Delta}\vartheta\) allows us to conveniently introduce the winding number along individual photon orbits below (eq. 9), as in [57; 21; 53] (see also [84; 15; 85]). The signed change in the polar coordinate \(\not{\Delta}\vartheta^{\pm}:=\vartheta(\lambda=\lambda_{\mathrm{f}})-\vartheta(\lambda=0)\) experienced by a photon moving on a meridional photon orbit (\(\xi=0\)) and reaching an asymptotic observer \(r(\lambda=\lambda_{\mathrm{f}})=\infty\) depends on its angular momentum \(\eta\), the emission radius \(r(\lambda=0)=r_{\mathrm{e}}\), and the initial signs of its radial (\(k^{r}_{\mathrm{e}}=k^{r}(0)\)) and polar (\(k^{\vartheta}_{\mathrm{e}}=k^{\vartheta}(0);\pm\)) velocities. We do not need the magnitudes of the initial coordinate velocities since they are determined by the choice of \(r_{\mathrm{e}}\) and \(\eta\) (eq. 2). This is the (total) angular deflection due to gravitational lensing experienced by such photons and is given heuristically as, \[\not{\Delta}\vartheta=\not{\int}\mathrm{d}\vartheta=\not{\int}\dot{\vartheta}\,\mathrm{d}\lambda=\sum_{p}\left[\int_{p}\left(\dot{\vartheta}/\dot{r}\right)\mathrm{d}r\right]=\sum_{p}\mathrm{I}_{p}\,. \tag{5}\]
In writing the last equality we have split the photon orbit into pieces \(p\) on which \(r(\lambda)\) is bijective and the integrals \(\mathrm{I}_{p}\) are path-independent. We can rewrite the above explicitly to obtain an expression for the total angular deflection as, \[\not{\Delta}\vartheta^{\pm}(\eta,r_{\mathrm{e}})=\begin{cases}\pm\left[\int_{r_{\mathrm{e}}}^{\infty}\eta h^{-1}\mathcal{R}^{-1/2}\,\mathrm{d}r\right]\,,&k^{r}_{\mathrm{e}}>0\,;\;r_{\mathrm{e}}>\{r_{\mathrm{h}}\text{ if }\eta<\eta_{\mathrm{ps}},\;r_{\mathrm{tp}}(\eta)\text{ if }\eta\geq\eta_{\mathrm{ps}}\}\\ \pm\left[\int_{r_{\mathrm{tp}}(\eta)}^{r_{\mathrm{e}}}\eta h^{-1}\mathcal{R}^{-1/2}\,\mathrm{d}r+\int_{r_{\mathrm{tp}}(\eta)}^{\infty}\eta h^{-1}\mathcal{R}^{-1/2}\,\mathrm{d}r\right]\,,&k^{r}_{\mathrm{e}}\leq 0\,;\text{ if }\eta>\eta_{\mathrm{ps}}\text{ and }r_{\mathrm{e}}\geq r_{\mathrm{tp}}(\eta)\end{cases} \tag{6}\] where \(\lambda_{\mathrm{f}}\) is the elapsed affine time (or total path-length; see, e.g., [48; 57]) and \(\mathcal{R}\) is defined as, \[\mathcal{R}(\eta,r):=(k^{r}/E)^{2}=g^{-1}\left[1-\eta^{2}fh^{-1}\right]\,. \tag{7}\] The equation above is an "energy equation," \(\dot{r}^{2}-E^{2}\mathcal{R}=0\), with \(\mathcal{R}\) playing the role of an effective potential for null geodesics. As anticipated, we can see explicitly from eq. 6 that photon orbits with \(\eta=0\) do not experience polar deflections, \(\not{\Delta}\vartheta:=|\not{\Delta}\vartheta^{\pm}|=0\), and are radial orbits. For equatorial orbits, one can simply replace \(\vartheta\) and \(\eta\) by \(\varphi\) and \(\xi\) respectively in the discussion above. We adopt here the convention of defining, without loss of generality (due to spherical symmetry), the inclination of the observer to be zero [86]. This has the simplifying consequence that all photons arriving at this observer move only on meridional planes through the bulk of space. A photon emitted from an azimuthal angle \(\varphi\), with initially positive (\(+\)) or negative (\(-\)) polar velocity \(k^{\vartheta}_{\mathrm{e}}\), appears on the image plane at the polar coordinate \(\psi^{\pm}\) (see, e.g., [19; 20; 21; 69]; see also the rotation parameter \(\delta_{0}\) in [68]), \[\psi^{\pm}(\varphi)=\begin{cases}3\pi/2-\varphi\,,&k_{\rm e}^{\vartheta}<0\\ \pi/2-\varphi\,,&k_{\rm e}^{\vartheta}>0\end{cases}\,. \tag{8}\] The radius on the image plane at which the photon arrives is given by its angular momentum, i.e., \(\eta\) is also the apparent impact parameter of the photon [20], and can be used as the radial coordinate on the image plane. Thus, as an example, the radius of the shadow boundary curve on the image plane is simply the angular momentum of the photon on a circular orbit, \(\eta=\eta_{\rm ps}\). Putting everything together, photons emitted from an initial location \((r_{\rm e},0\leq\vartheta_{\rm e}\leq\pi,0\leq\varphi_{\rm e}<2\pi)\) that reach this observer appear on the image plane at \((\eta,\psi^{\pm}(\varphi_{\rm e}))\), with \(\eta\) determined by eq. 6 where \(\not{\Delta}\theta^{\pm}=-\vartheta_{\rm e}\mod 2\pi\). The modulo piece in the equation above is central to the notion of higher-order images. There exist photons emitted from the same initial spatial location and captured by an observer at the same final spatial location that have orbits differing in total angular deflection, elapsed affine time (or total path-length), and elapsed coordinate time (\(\not{\Delta}t=t(\lambda_{\rm f})-t(0)\)).
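For concreteness, the first branch of eq. (6) is straightforward to evaluate numerically. Below is a minimal sketch (ours, assuming a Schwarzschild BH, for which \(f=1-2M/r\), \(g=1\), \(h=r^{2}\) and \(\mathcal{R}=1-\eta^{2}f/h\)) for a photon emitted radially outward with \(\eta<\eta_{\rm ps}\); the substitution \(u=1/r\) maps the infinite radial range onto a finite one:

```python
import numpy as np
from scipy.integrate import quad

M = 1.0
eta_ps = np.sqrt(27.0) * M   # Schwarzschild critical impact parameter

def deflection_outgoing(eta, r_e):
    """First branch of eq. (6): photon emitted outward (k^r_e > 0) with
    eta < eta_ps from r_e > r_h. With u = 1/r, the integrand
    eta h^{-1} R^{-1/2} dr becomes eta du / sqrt(1 - eta^2 u^2 (1 - 2Mu))
    on the finite range u in [0, 1/r_e]."""
    integrand = lambda u: eta / np.sqrt(1.0 - eta**2 * u**2 * (1.0 - 2.0 * M * u))
    val, _ = quad(integrand, 0.0, 1.0 / r_e, limit=200)
    return val

# Weak bending for small eta; the deflection grows without bound as
# eta -> eta_ps (the logarithmic divergence quantified further below).
for eta in [2.0 * M, 4.0 * M, 0.99 * eta_ps, 0.9999 * eta_ps]:
    print(eta, deflection_outgoing(eta, r_e=2.1 * M))
```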
Photons that make it to our observer, present on the \(z-\)axis, in particular experience angular deflections of, \[\not{\Delta}\theta_{m}^{\pm}=-\vartheta_{\rm e}\pm 2\pi m\,,\ \ 0<\vartheta_{\rm e}<\pi\,, \tag{9}\] where \(m\) is a non-negative integer and is called the winding number (see, e.g., [85; 84; 15]), and each distinct orbit is indexed by the pair \((m,\pm)\). From eq. 8 we can see that of these photons those with initially negative (\(\mathrm{sgn}[k_{\rm e}^{\vartheta}]<0\); \(-\)) and with positive (\(+\)) polar velocities arrive on a single line in the image plane but on opposite sides of the origin. The image plane origin is defined as the location on the image plane from which the member of the ingoing PNC at that event enters the past light cone there. The total angular deflection for this class of orbits increases in the sequence \(\{(0,-),(1,+),(1,-),(2,+),\cdots\}\), taking values in \(\{(0,\pi),(\pi,2\pi),(2\pi,3\pi),(3\pi,4\pi),\cdots\}\) respectively. If we index these sets by \(n=\{0,1,2,3,\cdots\}\), then \(n\) defines the order of the image and we can rewrite eq. 9 as, \[\not{\Delta}\theta_{n}^{\pm}=\pi/2-\vartheta_{\rm e}+(-1)^{n+1}(2n+1)\pi/2\,,\ \ 0<\vartheta_{\rm e}<\pi\,. \tag{10}\] The superscript \(\pm\) in the equation above indicates that \(\not{\Delta}\theta^{\pm}\) is a signed quantity (unlike its unsigned counterpart introduced above, \(\not{\Delta}\vartheta=|\not{\Delta}\theta^{\pm}|\)) and serves as a useful label to track the associated initial polar velocity, which determines the side of the image plane origin at which the photon appears. All of these photons not only arrive at different locations on the image plane of the observer (different \(\not{\Delta}\vartheta\) implies different \(\eta\)) but at different times when emitted from the same event [88; 87; 60]. Therefore, these different photon orbits connect the same initial and final spatial locations but not the same two events in spacetime. Photons forming increasingly higher-order images have concomitantly longer paths and take longer before getting to the observer when emitted from the same event. Furthermore, by extension, we can define the \(n^{\rm th}-\)order image of an extended source of emission as being formed by the set of photons that have index \(n\). Finally, for the case when \(\vartheta_{\rm e}=0\), pairs of photon orbits of orders \(2n+1\) and \(2n+2\) are identical under reflections across the axis of the observer. In fact, due to the associated symmetries of this configuration, a point source present at such a location (called a caustic) does not form just two images but an entire ring, called an Einstein ring or a critical curve (cf. [54]). All of the photons that form this ring on the image plane arrive at precisely the same time when emitted from the same event due to identical orbits. Caustics are defined as locations in the past light cone of the observer that have divergent magnifications and critical curves are the maps of these points on the image plane [54; 38]. There is another (half-)line of caustics at \(\vartheta_{\rm e}=\pi\) and pairs of photon orbits of orders \(2n\) and \(2n+1\) are identical. Motivated by Fig. 2, we introduce the convention of indexing the critical curves by \(n\), where the total gravitational lensing experienced by the photons that form it is given as \(\not{\Delta}\vartheta=n\pi\).
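The bookkeeping of eqs. (9) and (10) is easily mechanized; the helper below (an illustrative snippet of ours, not from the text) reproduces the sequence of deflection bands quoted beneath eq. (9):

```python
import numpy as np

def signed_deflection(n, theta_e):
    """Eq. (10): signed polar deflection for the order-n image of a point
    source at colatitude theta_e (0 < theta_e < pi), observer on the +z axis."""
    return np.pi / 2 - theta_e + (-1)**(n + 1) * (2 * n + 1) * np.pi / 2

# Unsigned deflections fall in (0, pi), (pi, 2 pi), (2 pi, 3 pi), ...
# for n = 0, 1, 2, ..., matching the sets listed above.
theta_e = np.deg2rad(75.0)
for n in range(4):
    d = signed_deflection(n, theta_e)
    print(n, round(d, 3), round(abs(d), 3))
```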
Then odd-order critical curves are formed by the caustic \(\vartheta_{\rm e}=\pi\) whereas the even-order critical curves are formed by the other caustic at \(\vartheta_{\rm e}=0\). With this, it becomes clear that the \(n=\infty\) critical curve coincides with the shadow boundary curve [57]. Figure 1 shows the first four order (\(n=0-3\)) images of emitters located at different radii (\(r_{\rm e}=2M^{+},2.9M,3.1M,6M\)) and at different colatitudes or, equivalently, at different inclinations relative to the asymptotic observer present on the \(+z-\)axis (\(\vartheta=0\)) that collects these photons, i.e., \(\vartheta_{\rm e}=0^{\circ},15^{\circ},30^{\circ},\cdots,180^{\circ}\). We can see how increasingly higher-order images appear increasingly closer to the shadow boundary (left to right panels), independently of the location of the emitter. Therefore, it is also clear that images of arbitrary emitters of increasingly higher orders occupy increasingly narrower regions on the image plane.

Figure 1: _Photon orbits connecting point sources to an observer in a Schwarzschild black hole spacetime_. In the top-left panel, we show a set of photon orbits corresponding to photons emitted from just outside the horizon, \(r_{\rm e}=2M^{+}\), and which reach an asymptotic observer present on the \(+z-\)axis. The different colors indicate that they are emitted from different colatitudes \(\theta_{\rm e}=0^{\circ},15^{\circ},30^{\circ},\cdots,180^{\circ}\). Across the top row, in each panel, lines of the same color represent photon orbits that begin from the same initial spatial positions. Thus, a countably infinite number of photon orbits connect any specific pair of emitter and observer. The order \(n\) of the photon orbit increases from left to right across columns, i.e., the path length of photons with the same line color increases left to right, as do their total angular deflection and the time measured by an asymptotic static observer between emission and detection. Across rows, the radial location of the emitters is varied, \(r_{\rm e}=2M^{+},2.9M,3.1M,6M\). The locations of the event horizon, of the photon shell, and of the innermost stable circular orbit (ISCO) are at \(r_{\rm h}=2M\) (black circle), \(r_{\rm ps}=3M\) (dashed blue circle), and \(r_{\rm isco}=6M\) (dotted green circle). As a rule of thumb, larger light bending occurs for photons with energy-rescaled angular momenta close to the critical value \(\sqrt{C}/E=\sqrt{27}M=\eta_{\rm ps}\), shown as a vertical red line in all panels. These appear close to the shadow boundary on the image plane (\(\eta\approx\eta_{\rm ps}\)) and necessarily access the close vicinity of the photon shell. Finally, it is also clear to see that higher-order images (across each row) become increasingly compact, i.e., they occupy increasingly compact regions on the image plane.

It is worth considering the \(n=0\) image of emitters located at \(r_{\rm e}=6M\) in some more detail (bottom-left panel). Photons that experience increasingly larger angular deflections (emitted from larger colatitudes) initially approach the shadow boundary, consistent with our rule of thumb, from the inside (\(\eta\to\eta_{\rm ps}^{-}\)). However, this trend reverses briefly as photons start to appear outside the shadow boundary. This trend reverses yet again as photons experience even greater angular deflections and appear increasingly closer to the shadow boundary, now from the outside (\(\eta\to\eta_{\rm ps}^{+}\)). This illustrates how photons emitted from the same radial location \(r_{\rm e}\) and which appear at the same radius on the image plane \(\eta\) can differ in the amount of angular deflection they experience due to different initial conditions (emission angular position, sign of initial radial velocity). This map from \(r\mapsto\eta\) is non-bijective only for photons that are emitted from outside the photon sphere \(r>r_{\rm ps}\) due to their orbits permitting radial turning points. This is clear to see from the bottom-right panel of Fig. 2. Furthermore, we emphasize that this is a generic feature of images of all orders. However, the band of emission radii for which this map is non-bijective shrinks exponentially with increasing image order. For an image of order \(n\) this band is bounded from above (\(r_{\rm ps}<r_{\rm e}\leq r_{n;\rm tp}\)) by the radius \(r_{n;\rm tp}\) at which a photon emitted with zero radial velocity (\(k_{\rm e}^{r}=0\)) experiences an angular deflection of exactly \(n\pi\), i.e., \(\not{\Delta}\vartheta(\eta_{n;\rm tp},r_{n;\rm tp})=n\pi\), where \(\eta_{n;\rm tp}\) is a solution of the radial turning point equation, \(k^{r}(\eta,r_{\rm tp}(\eta))=0\) (see the right panel of Fig. 13).

We will, in this work, consider the impact of varying the morphological parameters of the emitting region on its image and, in particular, on the structure of the observed photon ring. Towards this end, and as noted above, we model the emitting region as a conical torus. In the top-left panel of Fig. 2 we show the bounding surfaces of our fiducial torus model (cf. Sec. II.4). In the top-right panel of Fig. 2, we show the spatial orbits of null geodesics in a meridional plane in a Schwarzschild BH spacetime.

Figure 2: _Photon orbits and higher-order images of emitters in a Schwarzschild black hole (BH) spacetime._ The top-left panel shows our simple conical emission torus model. The top-right panel displays a set of photon orbits reaching an observer on the \(+z-\)axis. The event horizon and the photon sphere are shown as black and blue circles respectively. The green and purple lines represent a cross-section of the torus. The bottom-left figure shows the observer's screen image of the torus, with red and blue shading indicating regions collecting \(n=0\) and \(n=1\) photons, corresponding to the first and second photon subrings. The bottom-right panel presents the angular deflection for photons emitted from different radii and with different impact parameters or angular momenta. Photons emitted towards and away from the BH are shown in dashed and solid lines respectively. Photons emitted from the conical torus occupy the darker-shaded regions. The dashed black line in the \(n=0\) region tracks the "inner shadow size" [89]. In all panels, the vertical red lines show the size of the BH shadow.

If we take this meridional plane to be the \(yz-\)plane (\(\varphi=\pi/2\), \(3\pi/2\)), then these photons appear on the image plane Cartesian \(\alpha-\)axis (see eq. 8; see also the bottom-left panel). Photons having larger impact parameters \(\eta\) naturally arrive at larger radii on the image plane; these clearly do not approach the BH closely and deviate only slightly from a straight line, \(\not{\Delta}\vartheta\approx\pi\). Similarly, photons with small impact parameters also undergo nearly no deflection.
On the other hand, photons that appear in the region \(\delta\mathcal{C}_{\infty}\) (\(\eta\approx\eta_{\text{ps}}\)) can be strongly-lensed by the BH, \(\not{\Delta}\vartheta\gg\pi\). This region \(\delta\mathcal{C}_{\infty}\) on the image plane (\(|\bar{\eta}|\ll 1\); \(\bar{\eta}:=\eta/\eta_{\text{ps}}-1\)) is called the photon ring [57; 58]. Photons that are strongly-lensed necessarily access the close vicinity of the photon shell \(\delta\mathrm{S}\) (\(r_{\text{e}}\approx r_{\text{ps}}\)) somewhere along their orbit (cf. also Fig. 1). In the bottom-left panel of Fig. 2, we show the image of the solid conical torus as seen by an observer lying along the \(+z-\)axis, viewing the fiducial emitting region face-on. We have also shown the region on the image plane occupied by the \(n=0\) or direct image as well as the \(n=1\) or first-order image, which collect photons that undergo deflections between \(0\) and \(\pi\) and between \(\pi\) and \(2\pi\) respectively. It is clear to see how a higher-order image is a demagnified (or thinner) version of a lower-order image. Finally, in the bottom-right panel of Fig. 2 (see also [90]), we show the total angular deflection due to gravitational lensing \(\not{\Delta}\vartheta\), or the net change in the polar coordinate, experienced by photons on meridional orbits when emitted from spherical shells of varying radii \(r=r_{\text{e}}\), as given by eq. (6). Those that are emitted in the radially-outward (\(k_{e}^{r}>0\)) and -inward (\(k_{e}^{r}<0\)) directions are shown in dotted and dashed lines respectively. These two lines meet at the angular momentum \(\eta_{\text{tp}}(r_{\text{e}})=\sqrt{h(r_{\text{e}})/f(r_{\text{e}})}\) for which \(r=r_{\text{e}}\) is the orbital radial turning point. This figure illustrates how all the photons emitted from any particular radius \(r=r_{\text{e}}\) in the bulk appear inside the radius \(\eta=\eta_{\text{tp}}(r_{\text{e}})\) on the boundary. While photons emitted from inside the photon sphere (\(r<r_{\text{ps}}\)) always appear inside the shadow boundary (\(\eta<\eta_{\text{ps}}\)), those that are emitted from outside the photon sphere can appear either outside or inside the shadow boundary (see also Fig. 1). This last set of photon orbits warrants further interest: For example, there exist photons that are emitted from a shell of radius \(r_{\text{e}}=6M\) which appear in the region \(\eta<\sqrt{27}M\), all emitted radially outwards (\(k_{e}^{r}>0\)). While in this case they are all only weakly-lensed (\(\not{\Delta}\vartheta<\pi\)) by the gravity of the BH, this is not necessarily true in general: Photons emitted from \(r_{\text{e}}\gtrsim r_{\text{ps}}\) can undergo large deflections and their higher-order images can appear inside the shadow boundary (see the discussion below in Sec. II.2). We direct the reader to Appendix A for an analytical classification of the different kinds of photon orbits that play a role in image formation. Overall, this figure neatly shows the gravitationally-lensed sizes on the observer's image plane (\(\eta\); \(x-\)axis) of emitting sources of various shapes and sizes (\(r_{\text{e}}\)) and also elucidates how a detailed study of photon orbits in the simplest of spacetimes (static and spherically-symmetric) can be richly illuminating.

### Fundamental Universal Scaling Relations in the Photon Ring

Now that we have qualitatively introduced the notion of a photon ring, following Refs.
[58; 68], here we will see how it can be identified quantitatively as a region on the image plane where the total deflection angle increases logarithmically as \(\not{\Delta}\vartheta\propto\ln|\eta-\eta_{\text{ps}}|\propto\ln|\bar{\eta}|\). An analytic approximation for the net angular deflection of photons on orbits that approach the vicinity of the photon sphere in arbitrary static and spherically-symmetric spacetimes was reported in [55]. With the aim of extending the analysis of [55] to characterize all of the aforementioned quantities (elapsed affine time, elapsed coordinate time, total deflection angle), we now introduce a general integral expression (12) that can be used to obtain these quantities by making appropriate choices for the associated integrand. As we will see below, this allows us to easily demonstrate the universal features that photon rings exhibit. Towards this end, we first introduce the path-independent integral (with \(\dot{Q}(\eta,r)\) a regular function), \[\Delta Q(\eta,r_{1},r_{2}):=\int_{r_{1}}^{r_{2}}\dot{Q}(\eta,r)\mathcal{R}^{-1/2}(\eta,r)\ \mathrm{d}r\,. \tag{11}\] Since by definition \(\mathcal{R}(\eta,r_{\text{tp}}(\eta))=0\), all integrals of the type \(\Delta Q(\eta,r_{\text{tp}}(\eta),r_{2})\) have divergent integrands with poles at precisely the same location. Depending on the radial falloff of \(\dot{Q}\), these may or may not be finite, but the dominant contribution assuredly comes from the region close to \(r_{\text{tp}}(\eta)\). We demonstrate in Appendix A how these dominant pieces can be characterized by elliptic functions in general, for arbitrary static and spherically-symmetric spacetimes. The path-independent integral expression above (11) enables the following compact notation for path-dependent integrals along photon orbits, \[\not{\Delta}Q(\eta,r_{\text{e}}):=\not{\int}_{\lambda_{1}}^{\lambda_{2}}\dot{Q}(\eta,r(\lambda))\ \mathrm{d}\lambda=\not{\int}_{r_{\text{e}}}^{\infty}\dot{Q}\ \mathcal{R}^{-1/2}\ \mathrm{d}r \tag{12}\] \[=\begin{cases}\Delta Q(\eta,r_{\text{e}},\infty)\,,&k_{e}^{r}>0\,;\,r_{\text{e}}>\{r_{\text{h}}\ \text{if}\ \eta<\eta_{\text{ps}},\ r_{\text{tp}}(\eta)\ \text{if}\ \eta\geq\eta_{\text{ps}}\}\\ \Delta Q(\eta,r_{\text{tp}}(\eta),r_{\text{e}})+\Delta Q(\eta,r_{\text{tp}}(\eta),\infty)\,,&k_{e}^{r}\leq 0\,;\ \text{if}\ \eta>\eta_{\text{ps}}\ \text{and}\ r_{\text{e}}\geq r_{\text{tp}}(\eta)\end{cases}\,. \tag{13}\] Equations 11 and 12 yield the (unsigned) polar angle shift equation (6) for \(\dot{Q}=|\dot{\vartheta}/E|\). Furthermore, the elapsed coordinate and affine times are simply given by these two equations when \(\dot{Q}=|\dot{t}/E|\) and \(\dot{Q}=|1/E|\) respectively. In terms of a pair of conformal variables, namely the fractional deviations of the impact parameter and the radial coordinate from their critical values respectively, \[\bar{\eta}:=\eta/\eta_{\rm ps}-1\,,\ \ \bar{r}:=r/r_{\rm ps}-1\,, \tag{14}\] it is clear that the limit \(\bar{r}\to 0\) takes us to the photon sphere in the bulk and the limit \(\bar{\eta}\to 0\) sends us to the shadow boundary on the image plane. Photon orbits that terminate on the image plane in the close vicinity of the critical curve (\(|\bar{\eta}|\ll 1\)) and which access the close vicinity of the photon sphere somewhere along the orbit (\(|\bar{r}(\lambda)|\ll 1\) for some \(\lambda\)) experience strong gravitational lensing.
More specifically, these are photons that were either emitted from well inside the photon sphere with an initial positive radial velocity (\(r_{\rm h}<r_{\rm e}\lessneq r_{\rm ps};k_{\rm e}^{r}>0;\bar{\eta}<0\)), from close to the photon sphere or a radial turning point with initially nonnegative radial velocity (\(r_{\rm e}\approx r_{\rm ps}\) or \(r_{\rm tp};k_{\rm e}^{r}\geq 0\)), or from well outside the photon sphere with an initially negative radial velocity (\(r_{\rm e}\gtrsim r_{\rm tp}(\eta);k_{\rm e}^{r}<0;\bar{\eta}>0\)). These correspond to photon orbits of types A, C, and E outlined in Appendix A respectively (see also the related discussion in Sec. II.2). For such orbits, the leading-order behavior in \(\bar{\eta}\) of arbitrary path-integrals \(\Delta Q\) for small \(|\bar{\eta}|\) is given as (see Appendix A), \[\not{\Delta}Q(\bar{\eta},\tilde{x}_{\rm e},\tilde{x}_{\rm o})\approx\not{\Delta}Q_{\rm D}(\bar{\eta},\tilde{x}_{\rm e},\tilde{x}_{\rm o})=\left\{\begin{array}{ll}-\dot{Q}(0,0)\,\frac{\sqrt{g_{\rm ps}}}{\kappa_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}\right]\,,&\mbox{[types A, E]}\\ -\dot{Q}(0,0)\,\frac{\sqrt{g_{\rm ps}}}{2\kappa_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{\rm o}\right]\,,&\tilde{x}_{\rm e}\approx 0\quad\mbox{[type C]}\end{array}\right.\,, \tag{15}\] where the subscript \(\rm ps\) denotes that the function is evaluated at \(r=r_{\rm ps}\) and we have absorbed the dependence on the locations of the emitter (\(r_{\rm e}\)) and the observer (\(r_{\rm o}\)) into \(\tilde{c}=-\ln\left[2\kappa_{\rm ps}^{2}r_{\rm ps}^{2}|\tilde{x}_{\rm e}\tilde{x}_{\rm o}|\right]\) and \(\tilde{c}_{\rm o}=-\ln\left[2\kappa_{\rm ps}^{2}r_{\rm ps}^{2}\tilde{x}_{\rm o}^{2}\right]\). Here \(\tilde{x}\) is yet another conformal radial variable that is adapted to the specific photon orbit (fixed \(\eta\)) under study and is given as [55], \[\tilde{x}:=\begin{cases}1-r_{\rm ps}/r=1-1/(1+\bar{r})\,,&\bar{\eta}<0\\ 1-r_{\rm tp}/r=1-(1+\bar{r}_{\rm tp})/(1+\bar{r})\,,&\bar{\eta}\geq 0\end{cases}\,. \tag{16}\] The constant \(\kappa_{\rm ps}\) is called the null geodesic phase space Lyapunov exponent (see Sec. II.4 below) and is defined in terms of the spacetime metric functions as (see also Appendix A), \[\kappa_{\rm ps}^{2}:=-\frac{1}{2r_{\rm ps}^{2}}\left(\frac{\partial_{\bar{r}}^{2}f_{\rm ps}}{f_{\rm ps}}-\frac{\partial_{\bar{r}}^{2}h_{\rm ps}}{h_{\rm ps}}\right)=-\frac{1}{2}\left(\frac{\partial_{r}^{2}f_{\rm ps}}{f_{\rm ps}}-\frac{\partial_{r}^{2}h_{\rm ps}}{h_{\rm ps}}\right)\,. \tag{17}\] For the Schwarzschild BH spacetime, \(\kappa_{\rm ps}=1/(\sqrt{3}M)\) (see also [57]). Equation 15 presents a powerful closed-form expression for the logarithmically-divergent piece of \(\not{\Delta}Q\) valid for photon orbits that undergo strong gravitational lensing. Furthermore, since we have pulled \(\dot{Q}\) entirely outside the integral in eq.
15, the universal behavior of various important quantities such as the total affine time \(\not{\Delta}\lambda\), the total coordinate time \(\not{\Delta}t\), and the total angular deflection \(\not{\Delta}\vartheta\) becomes apparent, \[\begin{split} E\not{\Delta}\lambda^{\pm}&\approx-\frac{\sqrt{g_{\rm ps}}}{\kappa_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{\lambda}\right]\,,&\dot{Q}=\frac{1}{E}\,;\\ \not{\Delta}t^{\pm}&\approx-\frac{1}{f_{\rm ps}}\frac{\sqrt{g_{\rm ps}}}{\kappa_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{t}\right]=-\frac{\pi\eta_{\rm ps}}{\gamma_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{t}\right]=:-t_{\ell;{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{t}\right]\,,&\dot{Q}=\frac{1}{f}\,;\\ \not{\Delta}\vartheta^{\pm}&\approx\mp\frac{\eta_{\rm ps}}{h_{\rm ps}}\frac{\sqrt{g_{\rm ps}}}{\kappa_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{\vartheta}\right]=:\mp\frac{\pi}{\gamma_{\rm ps}}\left[\ln|\bar{\eta}|+\tilde{c}_{\vartheta}\right]\,,&\dot{Q}=\pm\frac{\eta}{h}\,.\end{split} \tag{18}\] Here the \(\tilde{c}\) are constants that depend on the radial locations of the observer and emitter. We have also introduced the purely spacetime-dependent lensing Lyapunov exponent \(\gamma_{\rm ps}\) (eq. 21) and the Lyapunov time (eq. 51), which we will discuss further below. Thus, the circular null geodesic, \((\eta,r)=(\eta_{\rm ps},r_{\rm ps})\), determined purely by the spacetime geometry, plays a key role in determining the behavior of all of these quantities for photon orbits that arrive in the photon ring \(\delta\mathcal{C}_{\infty}\) and which access the region close to the photon sphere \(\delta\mathrm{S}\). Following [58] we will define the photon ring as the region on the image plane in which the logarithmic divergence of the lensing angle sets in, i.e., when \(\not{\Delta}\vartheta\propto\ln|\bar{\eta}|\). In writing the equation above (18) and henceforth, we restrict to photon orbits of types A and E respectively for brevity, since the analysis to cover type C orbits follows straightforwardly (e.g., we have to account for the factor of \(1/2\)). All qualitative statements however apply to all three cases. We now report in Fig. 3 the deflection angles experienced by photons in a Schwarzschild BH spacetime with different impact parameters that access the close vicinity of the photon sphere in the bulk, computed exactly via eq. 6 and via its approximate version given above in eq. 18. This figure locates the onset of the logarithmic scaling of \(\not{\Delta}\vartheta\) with \(|\bar{\eta}|\), and thus the photon ring on the image plane, as being the region \(|\bar{\eta}|\lesssim 10^{-1}\) for \(\bar{\eta}>0\) and \(|\bar{\eta}|\lesssim 10^{-2}\) for \(\bar{\eta}<0\). We also note that the constant offset between the exact and approximate curves is accounted for by the regular piece \(\Delta Q_{\mathrm{R}}\) (\(\dot{Q}=\eta/h\)), as discussed in Appendix A. From eq.
10 and the universal lensing angle scaling relation (18), it is clear that the successive order images with the same polarity (i.e., both \(+\) or both \(-\)) of any point source seen by any observer satisfy, \[\not{\Delta}\theta^{\pm}_{n+2}-\not{\Delta}\theta^{\pm}_{n}=\pm 2\pi\approx\mp\frac{\eta_{\mathrm{ps}}}{h_{\mathrm{ps}}}\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\ln\left[\frac{\bar{\eta}_{n+2}}{\bar{\eta}_{n}}\right]\,, \tag{19}\] or equivalently, \[\frac{\bar{\eta}_{n+2}}{\bar{\eta}_{n}}\approx\mathrm{e}^{-2\gamma_{\mathrm{ps}}}\,, \tag{20}\] where \[\begin{split}\gamma_{\mathrm{ps}}&:=\pi\,\frac{h_{\mathrm{ps}}}{\eta_{\mathrm{ps}}}\frac{\kappa_{\mathrm{ps}}}{\sqrt{g_{\mathrm{ps}}}}=\pi\sqrt{f_{\mathrm{ps}}g_{\mathrm{ps}}^{-1}h_{\mathrm{ps}}}\,\kappa_{\mathrm{ps}}\\ &=\pi\left[-\frac{f_{\mathrm{ps}}h_{\mathrm{ps}}}{2g_{\mathrm{ps}}}\left(\frac{\partial_{r}^{2}f_{\mathrm{ps}}}{f_{\mathrm{ps}}}-\frac{\partial_{r}^{2}h_{\mathrm{ps}}}{h_{\mathrm{ps}}}\right)\right]^{1/2}\,.\end{split} \tag{21}\] We will refer to this scaling exponent \(\gamma_{\mathrm{ps}}\) as the lensing Lyapunov exponent. Again, this exponent is dependent only on the metric functions and their derivatives evaluated at the photon sphere. For the Schwarzschild BH spacetime, we find \(\gamma_{\mathrm{ps}}=\pi\) (see, e.g., [53; 55]). For discussions on the lensing Lyapunov exponent in Kerr spacetimes, see Refs. [57; 58; 91; 68]. While Ref. [59] evaluated \(\gamma_{\mathrm{ps}}\) numerically for a variety of static and spherically-symmetric spacetimes, Ref. [92] has also recently obtained an analytic expression for the lensing Lyapunov exponent in arbitrary static spacetimes (21). Notice how equation 20 is independent of the locations of the emitter and observer. With this, we can also obtain the time delay between the arrival of the \(n^{\mathrm{th}}\) and \((n+2)^{\mathrm{th}}\) order images on the image plane as being, \[\not{\Delta}t^{\pm}_{n+2}-\not{\Delta}t^{\pm}_{n}\approx-\frac{\pi\eta_{\mathrm{ps}}}{\gamma_{\mathrm{ps}}}\ln\left[\frac{\bar{\eta}_{n+2}}{\bar{\eta}_{n}}\right]\approx 2\pi\eta_{\mathrm{ps}}\,. \tag{22}\] This is a remarkable result: A clean detection of the time delay between higher-order images can yield an independent estimate of the shadow size of the central compact object. This can be achieved, e.g., by analyzing the light curve associated with a hotspot or flaring event in the vicinity of an ultracompact object. Furthermore, this is independent (in GR) of the frequency at which these observations are made. We explore this further below in Sec. I.2.1.

Figure 3: _Logarithmic divergence of the deflection angle with photon angular momentum locates the photon ring on the image plane._ The total angular deflection (\(\not{\Delta}\vartheta\)) experienced by a photon depends on its emission location (\(r_{\mathrm{e}}\)), its apparent impact parameter (\(\eta\)), as well as the sign of its initial radial velocity (\(k_{\rm e}^{r};\pm\)). Photons emitted at the photon sphere in a Schwarzschild BH spacetime (\(r_{\mathrm{e}}=r_{\mathrm{ps}}=3M\)) with critical impact parameter (\(\eta=\eta_{\mathrm{ps}}=3\sqrt{3}M\)) undergo infinite deflection due to gravitational lensing. Photons with similar initial conditions experience large but finite deflections, accessing the region near the photon sphere (\(|\bar{r}|\ll 1\)) along their orbits. The exact angular deflection is shown in solid lines. This can be compared to the approximation obtained in eq. 15, shown here in dashed lines. The logarithmic divergence occurs for small \(|\bar{\eta}|\) values. The left panel shows photons appearing inside the shadow boundary, \(\bar{\eta}<0\), i.e., in the "inner photon ring," whereas the right panel shows photons appearing in the "outer photon ring". The lensing Lyapunov exponent here takes the value \(\gamma_{\mathrm{ps}}=\pi\).
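Eqs. (17), (21), and (22) are straightforward to evaluate for any given metric; the following symbolic sketch (ours, again assuming the Schwarzschild metric functions purely for illustration) recovers the values quoted in the text:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f, g, h = 1 - 2 * M / r, sp.Integer(1), r**2     # Schwarzschild (assumed)

# Photon sphere from f h' - f' h = 0; shadow radius eta_ps = sqrt(h/f).
r_ps = sp.solve(sp.Eq(f * sp.diff(h, r) - sp.diff(f, r) * h, 0), r)[0]  # 3 M
eta_ps = sp.sqrt(h / f).subs(r, r_ps)                                   # 3 sqrt(3) M

# Eq. (17): kappa_ps, the phase-space Lyapunov exponent.
kappa_ps = sp.sqrt(-sp.Rational(1, 2)
                   * (sp.diff(f, r, 2) / f - sp.diff(h, r, 2) / h)).subs(r, r_ps)

# Eq. (21): the lensing Lyapunov exponent; equals pi for Schwarzschild.
gamma_ps = sp.pi * sp.sqrt(f * h / g).subs(r, r_ps) * kappa_ps

# Eq. (22): time delay between images of orders n and n + 2.
delay = 2 * sp.pi * eta_ps

print(sp.simplify(kappa_ps))   # 1/(sqrt(3) M)
print(sp.simplify(gamma_ps))   # pi
print(delay)                   # 6 sqrt(3) pi M
```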
For completeness, we note that the path length of a photon that forms the \((n+2)^{\rm th}\)-order image exceeds that of the one that forms the \(n^{\rm th}\)-order image by, \[E\left(\not{\Delta}\lambda_{n+2}^{\pm}-\not{\Delta}\lambda_{n}^{\pm}\right)\approx-\frac{\sqrt{g_{\rm ps}}}{\kappa_{\rm ps}}\ln\left[\frac{\bar{\eta}_{n+2}}{\bar{\eta}_{n}}\right]\approx 2\pi\sqrt{f_{\rm ps}h_{\rm ps}}=2\pi f_{\rm ps}\eta_{\rm ps}\,. \tag{23}\] We can also explain in full generality how the relationship between the locations of appearance of consecutive order images (opposite polarities) is dependent only on the lensing Lyapunov exponent and the relative inclination of the source and observer, but not on their radial locations. From eqs. 10 and 18, \[\not{\Delta}\theta_{n+1}^{\pm}+\not{\Delta}\theta_{n}^{\mp}=-2\vartheta_{\rm e}+\pi[1\pm 1]\approx\mp\frac{\pi}{\gamma_{\rm ps}}\ln\left[\frac{\bar{\eta}_{n+1}}{\bar{\eta}_{n}}\right]\,, \tag{24}\] we find that \[\frac{\bar{\eta}_{n+1}}{\bar{\eta}_{n}}\approx{\rm e}^{-\gamma_{\rm ps}}\cdot{\rm e}^{\pm\gamma_{\rm ps}(2\vartheta_{\rm e}/\pi-1)}\,. \tag{25}\] We can recover eq. 20 from eq. 25 by seeing that, e.g., \((\not{\Delta}\theta_{n+2}^{+}+\not{\Delta}\theta_{n+1}^{-})-(\not{\Delta}\theta_{n+1}^{-}+\not{\Delta}\theta_{n}^{+})=(-2\vartheta_{\rm e}+2\pi)-(-2\vartheta_{\rm e})\). With this, we can also obtain the time delay and the path-length difference between the arrival of the consecutive order images on the image plane as being, \[\not{\Delta}t_{n+1}^{\pm}-\not{\Delta}t_{n}^{\mp}\approx-\frac{\pi\eta_{\rm ps}}{\gamma_{\rm ps}}\ln\left[\frac{\bar{\eta}_{n+1}}{\bar{\eta}_{n}}\right]\approx\pi\eta_{\rm ps}\left[1\mp\left(\frac{2\vartheta_{\rm e}}{\pi}-1\right)\right]\,, \tag{26}\] \[E\left(\not{\Delta}\lambda_{n+1}^{\pm}-\not{\Delta}\lambda_{n}^{\mp}\right)\approx\pi\sqrt{f_{\rm ps}h_{\rm ps}}\left[1\mp\left(\frac{2\vartheta_{\rm e}}{\pi}-1\right)\right]\,. \tag{27}\] Finally, for an equatorially-located emitter (\(\vartheta_{\rm e}=\pi/2\)) viewed by our observer on the \(z-\)axis ("face-on") in particular, eq. 25 simplifies to (see also [70]), \[\frac{\bar{\eta}_{n+1}}{\bar{\eta}_{n}}\approx{\rm e}^{-\gamma_{\rm ps}}\,. \tag{28}\]

#### I.2.1 Implications for Hotspots

Flaring events associated with Sgr A\({}^{\star}\) are observed across a multitude of wavelengths (see, e.g., [93; 94]). Such sources of compact flux have been modeled in practice using hotspots (see, e.g., [95; 96; 97]).
As a first approximation, if we consider the hotspot to be a point source, and consider the case of one orbiting the central compact object on an equatorial Keplerian orbit of radius \(r=r_{\rm K}\), then the angle between its \(n^{\rm th}\) and \((n+2)^{\rm th}\) order images on the image plane at a particular time \(t\) can then be obtained as, \[\not{\Delta}\psi_{2}^{\pm}(r_{\rm K})=\Delta\psi_{n+2}^{\pm}-\Delta\psi_{n}^{\pm}=-\Omega_{\rm K}(r_{\rm K})2\pi\eta_{\rm ps}=-\left[\frac{\partial_{r}f(r_{\rm K})}{\partial_{r}h(r_{\rm K})}\right]^{1/2}2\pi\eta_{\rm ps}\,, \tag{29}\] where \(\Omega_{\rm K}=u^{\varphi}/u^{t}\) is the angular velocity along the circular timelike geodesic, with \(u^{\alpha}\) its four-velocity. Again, remarkably, this is independent of the inclination of the observer. For an emitter on the innermost stable circular orbit (ISCO; \(r_{\rm K}=6M\)) in a Schwarzschild BH spacetime, \(\Delta\psi_{2}^{\pm}\approx-127.3^{\circ}\). If we consider the case of an observer viewing the orbiting spot from a nearly face-on inclination, from eq. 28 we find the separation angle between consecutive order images on the image plane to be given simply as \[\Delta\psi_{1}^{\pm}=\Delta\psi_{n+1}^{\pm}-\Delta\psi_{n}^{\mp}=-\pi+\Delta\psi_{2}^{\pm}/2\,. \tag{30}\] Therefore, for an emitter located at about five Schwarzschild radii away, \(r_{\rm K}=10M\) [88; 98], we obtain \(\Delta\psi_{2}^{\pm}\approx-59.2^{\circ}\) and \(\Delta\psi_{1}^{\pm}\approx-209.6^{\circ}\). The inclination of the observer used in Refs. [88; 98] was \(\theta_{\rm e}=20^{\circ}\), for which they obtain \(\Delta\psi_{1}\approx+150^{\circ}\), which is well approximated by our simple estimate, demonstrating the power of the universal relations obtained above. As alluded to above, the time delay between the appearance of higher-order images can also be used to obtain information regarding the background spacetime geometry (see, e.g., [99]), and follow-up work to study the feasibility of such prospects will be reported elsewhere. We can understand the time delay more simply as follows. The orbital time \(t_{\rm orb}\) of a photon moving on a circular meridional (i.e., \(\mathrm{d}\varphi=\xi=0\)) orbit is given as, \[t_{\rm orb;ps}=\frac{2\pi}{\Omega_{\rm ps}}=2\pi\eta_{\rm ps}\,, \tag{31}\] where \(\Omega_{\rm ps}\) is its angular velocity and is given as (see eq. 2), \[\Omega_{\rm ps}=\frac{k^{\theta}}{k^{t}}=\left[\frac{\mathcal{G}_{tt}}{\mathcal{G}_{\theta\theta}}\right]\bigg{|}_{r_{\rm ps}}\eta_{\rm ps}=\frac{1}{\eta_{\rm ps}}\,. \tag{32}\] The characteristic time delay \(t_{\rm d;ps}\), which yields an approximate measure of the time elapsed between the appearance of consecutive order images on the image plane (see eq. 26), is then simply the half-orbital time of an orbiting photon, \[t_{\rm d;ps}:=\frac{t_{\rm orb;ps}}{2}=\pi\eta_{\rm ps}\,. \tag{33}\] For a Schwarzschild BH, \(t_{\rm d;ps}=\pi\sqrt{27}M\) (see also eq. 72 of [68]). Thus, a measure of the time delay between successive order images yields an approximate, and independent, direct estimate of the BH shadow size. We note that there is yet another distinct fundamental time scale associated with the photon sphere of a BH, called the Lyapunov time \(t_{\ell;\rm ps}\), which we will discuss below in eq. 51.
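The quoted numbers are straightforward to reproduce. A few illustrative lines (ours; \(M=1\)) evaluating eqs. 29, 30, and 33:

```python
# A quick check (ours; M = 1) of the quoted hotspot numbers via eqs. 29-33.
import numpy as np

M = 1.0
eta_ps = np.sqrt(27.0) * M                      # Schwarzschild shadow radius
Omega_K = lambda r: np.sqrt(M / r**3)           # = [d_r f / d_r h]^(1/2) here

dpsi2 = lambda rK: -Omega_K(rK) * 2.0 * np.pi * eta_ps     # eq. 29
dpsi1 = lambda rK: -np.pi + dpsi2(rK) / 2.0                # eq. 30 (face-on)

print(np.degrees(dpsi2(6.0 * M)))    # ISCO: ~ -127.3 deg
print(np.degrees(dpsi2(10.0 * M)))   # ~ -59.2 deg
print(np.degrees(dpsi1(10.0 * M)))   # ~ -209.6 deg, i.e. +150.4 deg mod 360
print(np.pi * eta_ps)                # t_d;ps = pi sqrt(27) M ~ 16.3 M (eq. 33)
```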
### Universal Scaling Relations of Intensity and Flux Density in the Photon Ring

Neglecting scattering effects, the specific intensity (\(\mathrm{W\,m^{-2}\,sr^{-1}\,Hz^{-1}}\)) at a point \((\eta,\psi)\) on the image plane due to emission from an optically-transparent (negligible absorption) region is determined by integrating the (appropriately simplified) radiative transfer equation as [100; 101] \(I_{\nu}(\eta,\psi)=\int_{0}^{\lambda_{f}}\Gamma^{2}j_{\nu}\,\mathrm{d}\lambda\), where \(\Gamma=\nu_{\mathrm{o}}/\nu_{\mathrm{e}}=k_{\alpha}u_{\mathrm{o}}^{\alpha}/k_{\alpha}u_{\mathrm{e}}^{\alpha}\) is the redshift factor, which accounts for both gravitational and Doppler redshifts, and \(j_{\nu}\) is the monochromatic emission coefficient (\(\mathrm{W\,m^{-3}\,sr^{-1}\,Hz^{-1}}\)). The slash indicates, as usual, that the integral is to be evaluated along the photon orbit \(x^{\mu}(\lambda)\) that terminates on the image plane at \((\eta,\psi)\), and the quantities in the integrand depend on the orbit implicitly as \(\Gamma=\Gamma(\lambda)=\Gamma(\eta,x^{\alpha}(\lambda))\) and \(j_{\nu}=j_{\nu}(\lambda)=j_{\nu}(\eta,\psi,r(\lambda))\). Furthermore, when constructing the intensity profile on the image plane of an asymptotic static observer (\(u_{\mathrm{o}}^{\mu}=\delta_{t}^{\mu}\)), the redshift factor reduces to \(\Gamma=-E/k_{\alpha}u_{\mathrm{e}}^{\alpha}\). Thus, adopting these reasonable simplifications, we can write the specific intensity at a point (\(\eta,0\leq\psi<2\pi\)) on the image plane of an asymptotic static observer as (see also [23; 27; 30; 48; 102]), \[I_{\nu}(\eta,\psi)=\int_{0}^{\lambda_{f}}\Gamma^{2}j_{\nu}\ \mathrm{d}\lambda\,;\quad\Gamma=-E/(k_{\alpha}u_{\mathrm{e}}^{\alpha})\,. \tag{34}\] Since, without loss of generality, we can restrict our considerations to meridional photon orbits (\(\varphi=0\)), the dependence of the integrand (both \(\Gamma\) and \(j_{\nu}\)) on \(\varphi(\lambda)\) is trivial, i.e., it is simply determined by the image plane polar angle \(\psi\). Furthermore, in the absence of (radial) turning points, there is a bijective map between the affine parameter \(\lambda\) and the radial coordinate \(r\) along the orbit, which allows expressing \(t(\lambda)\) and \(\vartheta(\lambda)\) as \(t(\lambda)=t(r)\) and \(\vartheta(\lambda)=\vartheta(r)\) instead. Thus, on sections of the photon orbit with no turning points, we can write \(\Gamma=\Gamma(\eta,\psi,r(\lambda))\) and \(j_{\nu}=j_{\nu}(\eta,\psi,r(\lambda))\). With all this, similar to eq. 6, we can “unslash” the integral in eq. 34 and rewrite it simply as (cf. [30]), \[I_{\nu}(\eta,\psi)=\begin{cases}\int_{r_{\mathrm{h}}^{+}}^{\infty}\Gamma^{2}j_{\nu}\mathcal{R}^{-1/2}\ \mathrm{d}r\,,&\eta<\eta_{\mathrm{ps}}\\ \int_{r_{\mathrm{ps}}^{+}}^{\infty}\Gamma^{2}j_{\nu}\mathcal{R}^{-1/2}\ \mathrm{d}r\,,&\eta=\eta_{\mathrm{ps}}\\ -\int_{\infty}^{r_{\mathrm{tp}}(\eta)}\Gamma^{2}j_{\nu}\mathcal{R}^{-1/2}\ \mathrm{d}r+\int_{r_{\mathrm{tp}}(\eta)}^{\infty}\Gamma^{2}j_{\nu}\mathcal{R}^{-1/2}\ \mathrm{d}r\,,&\eta>\eta_{\mathrm{ps}}\end{cases}\,, \tag{35}\] where the lower limits \(r_{\mathrm{h}}^{+}\) and \(r_{\mathrm{ps}}^{+}\) indicate that the integral is evaluated from just outside the event horizon and from just outside the photon sphere respectively. Written this way, eq. 35 cannot be put in the form of the path-dependent general integral given in eq. 12. Nevertheless, we can still rewrite it in terms of the path-independent general integral defined in eq.
11 with \(\dot{Q}=\Gamma^{2}j_{\nu}\) (compare against eq. 12 of [30]). We are now eminently poised to use the universal relations obtained in eq. 15 to find the leading-order behaviour in \(\bar{\eta}\) of the specific intensity profile in the photon ring. For an emitting region extending over \(r_{\mathrm{in}}\leq r\leq r_{\mathrm{out}}\) (i.e., the emission coefficient \(j_{\nu}\) takes nonzero values only in this range) with an outer boundary outside the photon sphere, \(r_{\mathrm{out}}>r_{\mathrm{ps}}\), we can write, depending on the location of the inner boundary \(r_{\mathrm{in}}\) (via eq. 12), \[I_{\nu}(\bar{\eta},\psi)\approx\begin{cases}-\Gamma^{2}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})j_{\nu}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\left[\ln|\bar{\eta}|+K_{1}(\psi)\right]\,,&r_{\mathrm{in}}\leq r_{\mathrm{ps}}\,;\quad\bar{\eta}>0\\ -\Gamma^{2}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})j_{\nu}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\left[\ln|\bar{\eta}|+K_{2}(\psi)\right]\,,&r_{\mathrm{in}}<r_{\mathrm{ps}}\,;\quad\bar{\eta}<0\\ -\Gamma^{2}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})j_{\nu}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\left[\frac{\ln|\bar{\eta}|}{2}+K_{3}(\psi)\right]\,,&r_{\mathrm{in}}=r_{\mathrm{ps}}\,;\quad\bar{\eta}<0\\ \tilde{K}(\psi)\,,&r_{\mathrm{in}}>r_{\mathrm{ps}}\end{cases}\,, \tag{36}\] where the \(K_{i}(\psi)\) and \(\tilde{K}(\psi)\) are independent of \(\bar{\eta}\), and we will not devote any further attention to these. Therefore, the dependence of the specific intensity profile in the photon ring on the plasma physics is only through the values of the redshift and the emission coefficient evaluated in the plasma frame along photons on bound orbits, as well as through \(K_{i}\) and \(\tilde{K}\). We see from above that in the absence of substantial emission from inside or on the photon sphere, the characteristic logarithmic scaling with \(\bar{\eta}\) is absent: This is interesting to compare against the figures in Refs. [24; 30]. While Sec. 5.1 of [27] presents a result qualitatively similar to eq. 36, the analysis here demonstrates the generalizing power of eq. 12: We are able to obtain _all_ universal characteristics of the photon ring in one stroke. When the emission is sourced by a hot and turbulent plasma that is being accreted onto the UCO, \(j_{\nu}(\lambda)\) is in general a function of time, and computing the intensity profile at a particular time in the frame of the observer via eq. 35 necessitates accounting for the retarded-time state of the plasma (or the retarded emission coefficient; “slow-light” [103]). All of this is already encoded in \(j_{\nu}(\lambda)\), and we are able to bypass a (relatively more) tedious numerical computation, thanks to the analysis presented in Appendix A, in arriving at eq. 36.
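The logarithmic scaling of eq. 36 is easy to watch emerge numerically. The sketch below (ours) evaluates the first branch of eq. 35 in Schwarzschild (\(M=1\)) for static emitters, for which \(\Gamma^{2}\) reduces to \(f(r)\), with an illustrative Gaussian emission coefficient peaked at the photon sphere; the Gaussian profile, its width, and the integration cutoffs are our own choices. The fitted slope of \(I_{\nu}\) against \(\ln|\bar{\eta}|\) then approaches the coefficient \(-\Gamma^{2}j_{\nu}\sqrt{g_{\rm ps}}/\kappa_{\rm ps}\) of eq. 36.

```python
# A sketch (ours) of the photon-ring intensity scaling of eq. 36: first branch
# of eq. 35 in Schwarzschild (M = 1), static emitters (Gamma^2 = f), and a toy
# Gaussian emissivity peaked at the photon sphere (our illustrative choice).
import numpy as np
from scipy.integrate import quad

M, r_ps = 1.0, 3.0
eta_ps = np.sqrt(27.0) * M
f = lambda r: 1.0 - 2.0 * M / r
j = lambda r: np.exp(-0.5 * ((r - r_ps) / 0.5)**2)    # toy emission coefficient
R = lambda r, eta: 1.0 - f(r) * eta**2 / r**2         # radial potential

def I_nu(eta_bar):                    # eta_bar < 0: no radial turning point
    eta = eta_ps * (1.0 + eta_bar)
    integrand = lambda r: f(r) * j(r) / np.sqrt(R(r, eta))
    return quad(integrand, 2.0 + 1e-9, 30.0, points=[r_ps], limit=200)[0]

eta_bars = -np.logspace(-6, -2, 9)
slope = np.polyfit(np.log(np.abs(eta_bars)), [I_nu(e) for e in eta_bars], 1)[0]
# Expected coefficient: -Gamma^2 j sqrt(g_ps)/kappa_ps = -(1/3) sqrt(3) here.
print(f"fitted slope = {slope:.4f}, expected ~ {-np.sqrt(3.0)/3.0:.4f}")
```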
The specific flux or flux density (\(\mathrm{W\,m^{-2}\,Hz^{-1}}\)) on the image plane through a ring bounded by two closed curves \(\eta_{\mathrm{in}}(\psi)\) and \(\eta_{\mathrm{out}}(\psi)\) is then (see, e.g., [70]), \[F_{\nu}=\frac{1}{D^{2}}\int_{0}^{2\pi}\int_{\eta_{\mathrm{in}}(\psi)}^{\eta_{\mathrm{out}}(\psi)}I_{\nu}(\eta,\psi)\;\eta\;\mathrm{d}\eta\;\mathrm{d}\psi=\frac{\eta_{\mathrm{ps}}^{2}}{D^{2}}\int_{0}^{2\pi}\int_{\bar{\eta}_{\mathrm{in}}(\psi)}^{\bar{\eta}_{\mathrm{out}}(\psi)}I_{\nu}(\bar{\eta},\psi)\;[1+\bar{\eta}]\;\mathrm{d}\bar{\eta}\;\mathrm{d}\psi\,, \tag{37}\] where \(D\) is the distance of the ultracompact object from the observer. In particular, when computing the flux density through a photon subring, where the bounding curves are close to each other, \(|\bar{\eta}_{\mathrm{out}}-\bar{\eta}_{\mathrm{in}}|\ll 1\), and are also close to the shadow boundary, \(|\bar{\eta}_{\mathrm{in}}|,|\bar{\eta}_{\mathrm{out}}|\ll 1\), we can use eq. 36 to simplify eq. 37 as, \[F_{\nu}\approx\begin{cases}-\frac{\eta_{\mathrm{ps}}^{2}}{D^{2}}\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\int_{0}^{2\pi}\Gamma^{2}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})j_{\nu}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})\left[K_{i}(\psi)+\ln|\bar{\eta}_{\mathrm{in}}(\psi)|\right]\left[\bar{\eta}_{\mathrm{out}}(\psi)-\bar{\eta}_{\mathrm{in}}(\psi)\right]\mathrm{d}\psi\,,&r_{\mathrm{in}}\leq r_{\mathrm{ps}}\\ \frac{1}{D^{2}}\int_{0}^{2\pi}\tilde{K}(\psi)\left[\bar{\eta}_{\mathrm{out}}(\psi)-\bar{\eta}_{\mathrm{in}}(\psi)\right]\mathrm{d}\psi\,,&r_{\mathrm{in}}>r_{\mathrm{ps}}\end{cases}\,. \tag{38}\] If we define \(\bar{w}(\psi)=\bar{\eta}_{\mathrm{out}}(\psi)-\bar{\eta}_{\mathrm{in}}(\psi)\), then for approximately concentric curves, \(|\mathcal{M}_{\psi}(=\partial_{\psi}\bar{w})|\ll 1\), we can simplify the above as, \[F_{\nu}\approx[J]_{\psi}\;(\bar{\eta}_{\mathrm{out}}-\bar{\eta}_{\mathrm{in}})\,, \tag{39}\] where we have used \([\cdot]_{\psi}\) to denote integration over the image plane polar angle, \[[J]_{\psi}=\begin{cases}-\frac{\eta_{\mathrm{ps}}^{2}}{D^{2}}\frac{\sqrt{g_{\mathrm{ps}}}}{\kappa_{\mathrm{ps}}}\int_{0}^{2\pi}\Gamma^{2}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})j_{\nu}(\eta_{\mathrm{ps}},\psi,r_{\mathrm{ps}})\left[K_{i}(\psi)+\ln|\bar{\eta}_{\mathrm{in}}(\psi)|\right]\mathrm{d}\psi\,,&r_{\mathrm{in}}\leq r_{\mathrm{ps}}\\ \frac{1}{D^{2}}\int_{0}^{2\pi}\tilde{K}(\psi)\;\mathrm{d}\psi\,,&r_{\mathrm{in}}>r_{\mathrm{ps}}\end{cases}\,. \tag{40}\] This last approximation may not be appropriate when considering the shapes of photon subrings cast on the image plane of a highly-inclined observer (cf. Fig. 5 below), and the perpendicular magnification \(\mathcal{M}_{\psi}\) (cf. [53]) can play a key role in determining the subring asymmetry. Now, from eq. 39 we can obtain the following general (i.e., for all \(r_{\mathrm{in}}\)) subring flux density scaling relation, \[\frac{F_{\nu;n+1}}{F_{\nu;n}}\approx\frac{\bar{\eta}_{n+1;\mathrm{out}}-\bar{\eta}_{n+1;\mathrm{in}}}{\bar{\eta}_{n;\mathrm{out}}-\bar{\eta}_{n;\mathrm{in}}}=\frac{\bar{w}_{n+1}}{\bar{w}_{n}}\approx{\rm e}^{-\gamma_{\rm ps}}\cdot{\rm e}^{\pm\gamma_{\rm ps}(2\theta_{\rm e}/\pi-1)}\,. \tag{41}\] We can see from above that in general (independently of the emitting region morphology) the photon subring flux density ratio is simply the ratio of the widths \(w_{n}=\eta_{\rm ps}\bar{w}_{n}\) of the subrings (see eq. 52 below), as noted also in [58]. In the above, we should choose the positive sign (\(+\)) for even \(n\) and the negative sign (\(-\)) for odd \(n\) (see eq. 10 and eq. 25).
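For reference, eq. 41 is compactly expressed as a helper function (ours; with \(\gamma_{\rm ps}=\pi\) for Schwarzschild, and the emitter colatitude \(\theta_{\rm e}\) left free):

```python
# The subring flux-density ratio of eq. 41 as a small helper (ours).
import numpy as np

def flux_ratio(n, theta_e, gamma_ps=np.pi):
    """F_{nu;n+1}/F_{nu;n}: plus sign for even n, minus sign for odd n."""
    sign = +1.0 if n % 2 == 0 else -1.0
    return np.exp(-gamma_ps) * np.exp(sign * gamma_ps * (2.0 * theta_e / np.pi - 1.0))

print(flux_ratio(2, np.pi / 2), np.exp(-np.pi))             # equatorial: ~0.0432
print(flux_ratio(1, np.pi / 3), flux_ratio(2, np.pi / 3))   # alternates with n
```

For equatorial emitters the second exponential is unity and consecutive subrings are dimmer by \(\mathrm{e}^{-\pi}\approx 1/23\); off-equatorial emission makes the ratio alternate with image order.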
### The Fundamental Photon Sphere Lyapunov Exponent from Phase Space

To cleanly identify the motivation behind calling a scaling exponent such as \(\gamma_{\rm ps}\) a Lyapunov exponent, we switch briefly to the phase space picture. The null geodesic Hamiltonian flow equation in a static and spherically-symmetric spacetime is given as, \[\dot{q}^{a}=X_{\mathcal{H}}^{a}:=\Omega_{\Pi}^{ab}\partial_{b}\mathcal{H}=\begin{bmatrix}\mathcal{G}^{rr}\,p_{r}\\ -(\partial_{r}\mathcal{G}^{rr})(p_{r}^{2}/2)-\partial_{r}\mathcal{V}\end{bmatrix}\,, \tag{42}\] where \(q^{a}=(r,p_{r})\) is a vector in phase space (\(a=0,1\)), \(X_{\mathcal{H}}\) is the symplectic Hamiltonian vector field, \(\Omega_{\Pi}\) is the standard symplectic form, and \(\mathcal{H}=\mathcal{G}^{\mu\nu}p_{\mu}p_{\nu}/2=:\mathcal{G}^{rr}\,p_{r}^{2}/2+\mathcal{V}(r)\) is the Hamiltonian for null geodesic motion (see, e.g., [104]). In the last equation we have introduced \(\mathcal{V}(r):=-E^{2}\mathcal{G}_{rr}\mathcal{R}/2\). Fixed points of the flow, \(\dot{q}_{\star}^{a}=0\), occur in phase space when \(X_{\mathcal{H}}(q_{\star}^{a})=0\). With the deviation vector \(\zeta^{a}(\lambda)=q^{a}(\lambda)-q_{\star}^{a}(\lambda)\) in phase space, we can then write the linearized Hamiltonian flow equation at a critical point as (see, e.g., [105; 106]), \[\dot{\zeta}^{a}=(L_{\mathcal{H}\star})^{a}_{\ b}\,\zeta^{b}\,, \tag{43}\] where the linearized Hamiltonian \(L_{\mathcal{H}\star}\) is a constant matrix and is given as, \[(L_{\mathcal{H}\star})^{a}_{\ b}:=\partial_{b}X_{\mathcal{H}}^{a}=\Omega_{\Pi}^{ac}\partial_{c}\partial_{b}\mathcal{H}(q_{\star}^{a})=\begin{bmatrix}\partial_{r}\mathcal{G}^{rr}\cdot p_{r}&\mathcal{G}^{rr}\\ -\left(\partial_{r}^{2}\mathcal{G}^{rr}\cdot(p_{r}^{2}/2)+\partial_{r}^{2}\mathcal{V}\right)&-\partial_{r}\mathcal{G}^{rr}\cdot p_{r}\end{bmatrix}\,. \tag{44}\] Clearly, the linearized Hamiltonian is just the symplectic Hessian of the Hamiltonian at the critical point. In the present context, however, the Hamiltonian is additionally a constant of the motion, i.e., \(\mathcal{H}=0\). This implies \(p_{r}=\pm_{r}E\mathcal{G}_{rr}\sqrt{\mathcal{R}}\), using which we can rewrite the Hamiltonian vector field as (see also [22]), \[X^{a}_{\mathcal{H}}=\begin{bmatrix}\pm_{r}E\sqrt{\mathcal{R}}\\ -(E^{2}/2)\mathcal{G}_{rr}\partial_{r}\mathcal{R}\end{bmatrix}\,. \tag{45}\] Thus, for “standard” static and spherically-symmetric spacetimes (i.e., with only one fixed point), the fixed point \(q^{a}_{\star}=(r_{\rm ps},0)\), which occurs for \(\eta=\eta_{\rm ps}\), satisfies \(\mathcal{R}(r_{\rm ps},\eta_{\rm ps})=0=\partial_{r}\mathcal{R}(r_{\rm ps},\eta_{\rm ps})\), i.e., \[\eta_{\rm ps}=\sqrt{\frac{h_{\rm ps}}{f_{\rm ps}}}\,;\quad\frac{\partial_{r}f(r_{\rm ps})}{f(r_{\rm ps})}=\frac{\partial_{r}h(r_{\rm ps})}{h(r_{\rm ps})}\,. \tag{46}\] Therefore, the critical point in phase space corresponds to the circular null geodesic.
We can now write the linearized Hamiltonian vector field as being given by (see also [56]), \[(L_{\mathcal{H}\star})^{a}_{\ b}=\begin{bmatrix}0&\mathcal{G}^{rr}\\ -\partial_{r}^{2}\mathcal{V}&0\end{bmatrix}=\begin{bmatrix}0&\mathcal{G}^{rr}(r_{\rm ps})\\ E^{2}\mathcal{G}_{rr}(r_{\rm ps})\left(\partial_{r}^{2}\mathcal{R}(\eta_{\rm ps},r_{\rm ps})/2\right)&0\end{bmatrix}=\begin{bmatrix}0&\mathcal{G}^{rr}(r_{\rm ps})\\ E^{2}\mathcal{G}_{rr}(r_{\rm ps})\left(\kappa_{\rm ps}^{2}/g_{\rm ps}\right)&0\end{bmatrix}\,. \tag{47}\] Putting everything together, we have, \[\begin{bmatrix}r_{\rm ps}\dot{\bar{r}}\\ \dot{p}_{r}\end{bmatrix}=\begin{bmatrix}\mathcal{G}^{rr}(r_{\rm ps})\,p_{r}\\ E^{2}\mathcal{G}_{rr}(r_{\rm ps})\left(\kappa_{\rm ps}^{2}/g_{\rm ps}\right)r_{\rm ps}\bar{r}\end{bmatrix}=\begin{bmatrix}\pm_{r}E\sqrt{\mathcal{R}(\eta_{\rm ps},r_{\rm ps}(1+\bar{r}))}\\ E^{2}\kappa_{\rm ps}^{2}\,r_{\rm ps}\bar{r}\end{bmatrix}=\begin{bmatrix}\pm_{r}E\left(\kappa_{\rm ps}/\sqrt{g_{\rm ps}}\right)r_{\rm ps}\bar{r}\\ E^{2}\kappa_{\rm ps}^{2}\,r_{\rm ps}\bar{r}\end{bmatrix}\,.\] The \(0-\)component of the equation above can be used to obtain the solution, \[\bar{r}(\lambda)=\bar{r}(0)\exp\left[\pm_{r}E\left(\kappa_{\rm ps}/\sqrt{g_{\rm ps}}\right)\lambda\right]\,. \tag{48}\] Therefore, \(\hat{\kappa}_{\rm ps}=\kappa_{\rm ps}/\sqrt{g_{\rm ps}}\) is the (fundamental) Lyapunov exponent that governs the evolution of orbits, in phase space, whose initial conditions differ slightly from those of the photon on the bound planar orbit. This affine Lyapunov exponent also determines the stability of the fixed point, i.e., if the real part of \(\kappa_{\rm ps}\) is positive, then the fixed point is an unstable one. Photons emitted with initial conditions close to those of the bound ones (the phase space is also the space of initial data, and by closeness we mean \(|\zeta(0)|\ll 1\)) peel away from the photon sphere exponentially quickly, at a rate determined by \(\hat{\kappa}_{\rm ps}\) in unit affine time (for \(E=1\)). As noted above, for the Schwarzschild BH spacetime, we find that \(\hat{\kappa}_{\rm ps}=1/(\sqrt{3}M)\). Furthermore, we recognize also that the \(1-\)component of the equation above is the Jacobi or geodesic deviation equation, \(\mathrm{d}^{2}\hat{\zeta}^{\alpha}/\mathrm{d}\lambda^{2}=-R^{\alpha}_{\ \mu\rho\nu}k^{\mu}_{\rm ps}\hat{\zeta}^{\rho}k^{\nu}_{\rm ps}\) [107], where \(R^{\alpha}_{\ \mu\rho\nu}\) is the Riemann tensor, \(\hat{\zeta}^{\alpha}\) is an appropriate deviation vector between null geodesics in spacetime, and \(k^{\mu}_{\rm ps}\) is tangent to the circular null geodesic in particular.
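The exponential peel-off of eq. 48 can be seen directly by integrating the outgoing radial flow of eq. 45. The sketch below (ours; Schwarzschild with \(M=E=1\), and our own choices of initial displacement and fitting window) releases a photon just outside the photon sphere with the critical impact parameter and fits the growth rate of \(\ln\bar{r}\), recovering \(\hat{\kappa}_{\rm ps}=1/(\sqrt{3}M)\).

```python
# A sketch (ours): integrate dr/dlambda = E sqrt(R) for a photon released just
# outside the Schwarzschild photon sphere at the critical impact parameter and
# fit the peel-off rate of eq. 48. M = E = 1.
import numpy as np
from scipy.integrate import solve_ivp

M, r_ps = 1.0, 3.0
eta_ps = np.sqrt(27.0) * M
R = lambda r: 1.0 - (1.0 - 2.0 * M / r) * eta_ps**2 / r**2

rhs = lambda lam, y: [np.sqrt(max(R(y[0]), 0.0))]     # outgoing (k^r > 0) branch
sol = solve_ivp(rhs, [0.0, 12.0], [r_ps * (1.0 + 1e-7)],
                dense_output=True, rtol=1e-10, atol=1e-13)

lam = np.linspace(1.0, 10.0, 50)                      # linear-regime window
rbar = sol.sol(lam)[0] / r_ps - 1.0
rate = np.polyfit(lam, np.log(rbar), 1)[0]
print(f"fitted rate = {rate:.5f}, kappa_hat_ps = {1.0/np.sqrt(3.0):.5f}")
```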
The \(r-\)component of the deviation equation is instructive, \[\frac{\mathrm{d}^{2}\hat{\zeta}^{r}}{\mathrm{d}\lambda^{2}}=-\left[R^{r}_{\ trt}\left(k^{t}_{\rm ps}\right)^{2}+R^{r}_{\ \vartheta r\vartheta}\left(k^{\vartheta}_{\rm ps}\right)^{2}\right]\hat{\zeta}^{r}\,. \tag{49}\]

We characterize the emitting region by its radial extent \(r_{\rm in}\leq r\leq r_{\rm out}\), its geometrical half-thickness \(\vartheta_{1/2}\), and the relative inclination \(0\leq\dot{\iota}\leq\pi\) of the observer and this emitting region. This configuration can be equivalently imagined as an emitting region composed of a series of conical surfaces with half-opening angles \(\vartheta\) in \(\pi/2-\vartheta_{1/2}\leq\vartheta\leq\pi/2+\vartheta_{1/2}\) and of radial extent \(r_{\rm in}\leq r\leq r_{\rm out}\), viewed by an observer present at an inclination of \(\dot{\iota}\) relative to the common axis of the cones. In particular, a geometrically-thin Novikov-Thorne (NT; [71]) accretion disk around a Schwarzschild BH is given by the choices \(r_{\rm in}=6M\), \(r_{\rm out}=\infty\), and \(\vartheta_{1/2}=0\). An observer that views this NT disk face-on is located at a relative inclination \(\dot{\iota}=0\). Furthermore, to characterize the morphology of the emitting region of a Bondi-Michel spherical accretion process [73] we can simply set \(r_{\rm in}=2M\), \(r_{\rm out}=\infty\), and \(\vartheta_{1/2}=\pi/2\). Before entering into a detailed but important quantitative discussion on the variation of the characteristics of the photon ring and its component subrings, there are striking qualitative features that are worth emphasizing. When emission from an NT thin-disk around a Schwarzschild BH is viewed face-on, all of the collected emission originates from \(\vartheta_{\rm e}=\pi/2\), and the net angular deflection experienced by all the photons that appear on the observer's screen is given simply as \(\not{\Delta}\vartheta=-\pi/2\mod 2\pi\). With this, by varying over the emission radius \(6M=r_{\rm in}\leq r_{\rm e}<r_{\rm out}=\infty\) and the initial radial velocity of the photons, we can use eq. 6 to construct the \(n^{\rm th}\)-photon subring. The inner \(\eta=\eta_{n;{\rm in}}\) and outer \(\eta=\eta_{n;{\rm out}}\) boundaries of the \(n^{\rm th}\)-photon subring are the gravitationally-lensed locations of the inner \(r_{\rm in}\) and outer \(r_{\rm out}\) boundaries of the emitting region. We show the first four photon subrings in the left panel of Fig. 4 and, since the associated image on the sky is circularly-symmetric, we represent a quarter of each photon subring in each sector. As we go clockwise from the top-left sector (which shows the \(n=1\) subring), the radial scale is zoomed by a factor of \(\approx\mathrm{e}^{\pi}\approx 23\). This already demonstrates how the widths of subsequently higher-order images scale via eq. 18.
To be precise (see, e.g., [55, 58, 69, 70]), \[\frac{w_{n+1}}{w_{n}}=\frac{\eta_{n+1;{\rm out}}-\eta_{n+1;{\rm in}}}{\eta_{n;{\rm out}}-\eta_{n;{\rm in}}}=\frac{\bar{\eta}_{n+1;{\rm out}}-\bar{\eta}_{n+1;{\rm in}}}{\bar{\eta}_{n;{\rm out}}-\bar{\eta}_{n;{\rm in}}}\approx{\rm e}^{-\gamma_{\rm ps}}\,. \tag{52}\] It is clear then that the photon subrings in the image of an NT emission disk around a Schwarzschild BH exhibit self-similar or conformal scaling (in the conformal variable \(\bar{\eta}\)), with nonzero spacing between the subrings. Furthermore, notice how the photon ring lies entirely outside the \(n=\infty\) critical curve (\(\bar{\eta}_{n;{\rm in}}>0\)). That is, while this configuration of the emitting region casts an outer photon ring, an inner photon ring is absent. This is easily seen from Fig. 2: Photons emitted from \(r>6M\) with \(\eta<\eta_{\rm ps}\) (\(\bar{\eta}<0\)) are never lensed by more than \(\pi\) (i.e., \(\not{\Delta}\vartheta<\pi\)) and can therefore never contribute to the formation of an inner photon ring. On the other hand, for a spherical emitting region, sources are located everywhere in \(0\leq\vartheta_{\rm e}\leq\pi\) and \(2M<r_{\rm e}<\infty\). We show the first four subrings for this emitting region in the right panel of Fig. 4. The salient differences in the photon ring structure in this case from the thin disk case are as follows. Due to emission sourced from \(r<r_{\rm ps}\), an inner photon ring is also present in the image. Furthermore, while the subrings still scale self-similarly and this scaling is described by eq. 52, they all overlap. In this way, we see that the qualitative topological structure of the photon ring fundamentally encodes the morphology of the emitting region, and in particular the scale height and the inner boundary of the emitting region. The photon ring region is not always circularly-symmetric as it is, e.g., in Fig. 4. Even when a perfectly axisymmetric emitting region is viewed at an inclination, the shapes of subrings are not perfect annuli on the image plane. To see this, we consider in Fig. 5 the first-order images of one-dimensional emission rings present in the bulk of a Schwarzschild BH spacetime when viewed at different inclinations \(\dot{\iota}\) by an asymptotic observer present on the positive \(z-\)axis. The (spacelike) normal \(\mathrm{n}_{\rm d}\) to the plane of any of these rings lies in the \(yz-\)plane, i.e., it can be expressed in isotropic coordinates (cf. [86]) as \(\mathrm{n}_{\rm d}\propto(\sin\dot{\iota})\partial_{y}+(\cos\dot{\iota})\partial_{z}\). The asymmetry of the \(n=1\) subring is then clearly evident. For any one of these emission rings, an \(n=1\) photon emitted from closest to the observer (positive \(z\), negative \(y\)) is emitted in the negative \(z\) direction, executes about a half-loop around the BH in the bulk, and appears on the image plane on the positive \(y\) axis. Those emitted from farthest away from the observer (negative \(z\), positive \(y\)) appear on the negative \(y\) axis. The first of these two photons undergoes larger angular deflection and thus appears closer to the shadow boundary curve, which is shown as a red circle. This figure demonstrates how for compact emission regions \(r_{\rm out}\lesssim 100M\) (see the dot-dashed lines), the maximum variation in the conformal radius of the \(n=1\) image is at most about \(|\bar{\eta}_{1}|\lesssim 3\) for all inclinations.
Furthermore, for moderate viewing angles \(\dot{\iota}\lesssim 60^{\circ}\) (see the purple lines), even for extremely large emission regions \(r_{\rm out}\lesssim 10^{4}M\) (see the solid purple lines), the maximal variation in the conformal radius of the \(n=1\) image is \(|\bar{\eta}_{1}|\lesssim 1\). Together these give us a rough quantitative sense of the magnitude of the asymmetry and (de)magnification we can expect in the first-order image of an accreting supermassive BH. While the qualitative variations in the subring asymmetry due to variations in the size of the ring in the bulk and the observer viewing angle are clear to see in all of the panels of this figure, we will not enter into a detailed quantitative study of the same in this work. In summary, the sizes and shapes of the photon subrings, which are demagnified and time-delayed images of the emitting region in the vicinity of a BH, depend on the extent and morphology of the emitting region as well as, as we shall see below in Sec. III, on the spacetime geometry of the BH. On the other hand, like the size of the shadow boundary \(\eta_{\rm ps}\), the lensing Lyapunov exponent \(\gamma_{\rm ps}\), which relates different order images, depends purely on the metric and can be used to discern between spacetimes cleanly. This exponent has recently been obtained approximately for various spherically-symmetric BH spacetimes in [59]. In stationary and axisymmetric spacetimes, photon orbits are generically nonplanar, and the number of exponents necessary to describe such strong-lensing features increases with orbital degrees of freedom. For a detailed study of the critical exponents associated with photon orbits in the Kerr BH spacetime, we direct the reader to, e.g., Refs. [58, 62, 68]. Now, to study quantitatively how the sizes and widths of photon subrings can vary in images of Schwarzschild BHs (fixed spacetime geometry) with the emitting region morphology, we consider systematically a sequence of emitting region morphologies. Before entering into that discussion, we will first introduce in Sec. II.1 the useful notion of the photon ring calibration factors, which will capture the variation of the subring characteristics and scaling relations due to the emitting region relative to the purely spacetime-metric dependent quantities. In Sec. II.2 we will consider the variation in the subring characteristics for geometrically-thin (\(\vartheta_{1/2}=0\)) emitting regions viewed face-on (\(\dot{\iota}=0\)) due to varying inner \(r_{\rm in}\) and outer \(r_{\rm out}\) boundaries. In Sec. II.3 we will consider the scenario of thin-disks (\(\vartheta_{1/2}=0\)) viewed at an inclination (\(\dot{\iota}\neq 0\)). Finally, in Sec. II.4 we allow the geometrical thickness (\(\vartheta_{1/2}\neq 0\)) to also vary and analyze how it affects the structure of the photon ring.

### Photon Ring Calibration Factors

As discussed above, the properties of the photon subrings, such as their sizes and widths, depend on material properties or “gastrophysics” such as the morphology of the emitting region, associated plasma emissivity, optical depth, velocity, magnetic fields, etc. Furthermore, since photon subrings are higher-order images of the emitting material on the image plane caused by strong gravitational lensing, the spacetime geometry has a role in shaping the photon ring as well. As we will see below, accessing increasingly higher-order images allows disentangling gravitational effects from other effects with increasing ease.
Figure 4: _Variation in the structure of the photon ring in a Schwarzschild BH spacetime with varying morphology of the emitting region._ In the left panel, we show the structure of the observed photon ring when the emitting source is a geometrically-thin disk with its inner boundary at the ISCO radius \(r_{\rm in}=6M\) and outer boundary at some large radius (\(r_{\rm out}=2\times 10^{4}M\) here), and is viewed “face-on”, i.e., by an observer lying along the axis of the disk. Successful models for quasar spectra invoke precisely such configurations (see, e.g., [72]). In the panel on the right, we show the photon ring cast by a spherically symmetric emitting region that extends all the way down to the horizon, \(r_{\rm in}=2M\). The topological structure of the photon ring region is significantly different in the two cases. In the case of emission from a geometrically-thin disk, the subrings are all of finite width, separated on the image plane, and appear outside the shadow boundary, which is shown as a red line in all sectors. In the case of a spherical emission zone, firstly, the \(n=1\) photon subring is non-compact. This is to be expected since, in this case, it comprises photons that undergo angular deflections anywhere between \(\pi<\not{\Delta}\vartheta<2\pi\), and photons emitted from arbitrarily large radii with concomitantly large impact parameters \(\eta\) easily undergo a total deflection of \(\pi\) (cf. Fig. 2). Furthermore, the subrings are now not spatially separated, i.e., they all overlap, and also straddle the shadow boundary. Finally, due to emission from the line of caustics \(\vartheta_{\rm e}=0,\pi\) in this case, we also see the formation of Einstein rings or critical curves (shown in the bounding dashed and solid lines in this figure). Therefore, the change in geometrical thickness and in the inner boundary of the emission region is responsible for the qualitatively different topological characteristics of the respective photon rings.

Figure 5: _The first-order image of a ring in a Schwarzschild black hole spacetime._ We show in the top-left panel sets of rings of two different radii (\(r\approx 2M\) in dashed and \(r=6M\) in dotted lines) present in the bulk of a Schwarzschild BH spacetime, whose event horizon is shown as a gray sphere. We allow the inclination \(\dot{\iota}\) of the normal \(\mathrm{n_{d}}\) to the plane of the ring, which lies in the \(yz\)–plane, relative to the \(+z\)–axis to take values \(\dot{\iota}=0^{\circ},30^{\circ},45^{\circ},60^{\circ}\), and \(\approx 90^{\circ}\). Naturally, these rings share an axis parallel to the \(x-\)axis, and different line colors indicate different inclinations, which are consistent across all panels. For an asymptotic (\(r=\infty\)) observer present on the \(+z-\)axis, we also depict the gravitationally-lensed first (\(n=1\)) and second (\(n=2\)) order images of these rings (the \(n=2\) images essentially lie on top of each other due to drastic demagnification). We always show in bright green the circular shadow boundary curve, of radius \(\eta_{\mathrm{ps}}=\sqrt{27}M\). In the top-right panel, we show additionally the first-order images of rings that are larger in size (\(r=10^{2}M\) in dot-dashed and \(r=10^{4}M\) in solid lines) to enable comparisons of their shapes and sizes. The bottom-left panel demonstrates how these shapes and sizes change under the Möbius or conformal transformation \((\alpha,\beta)\mapsto(\bar{\alpha},\bar{\beta})\). Finally, in the bottom-right panel, we show the variation in the fractional or conformal radius \(\bar{\eta}_{1}\) with the image plane polar angle \(\psi\) of the \(n=1\) image of a ring of emission. The inset demonstrates how the median (over the polar angle) diameter on the image plane is independent of the inclination of the ring in the bulk. Seen together, we can establish that the images of larger rings in the bulk are concomitantly larger and that the maximal deviation from the shadow boundary curve occurs along the projection of the normal \(\mathrm{n_{d}}\) to the plane of the ring (see the top-right and bottom-left panels). This figure qualitatively demonstrates how accretion disks can cast asymmetric photon subrings when viewed at an inclination.

To quantify the impact of the diversity of gastrophysical effects on photon ring characteristics and to cleanly delineate the influence of gastrophysics from spacetime geometry, we leverage the fruitful vocabulary of calibration factors developed in [12]. The \(\alpha_{1}-\)calibration factor introduced there related the diameter \(d_{\rm m}\) of the emission ring in the image of Sgr A\({}^{*}\) to the diameter of its shadow boundary \(d_{\rm sh}\) as \(\alpha_{1}=\left<d_{\rm m}\right>_{\psi}/\left<d_{\rm sh}\right>_{\psi}\), where we have used \(\left<d\right>_{\psi}\) to indicate the median value of a polar curve \(d(\psi)\) over the image plane polar angle \(0\leq\psi<2\pi\). This calibration factor provides insights into the physical state of the accreting system: Images of accreting BHs for which \(\alpha_{1}-1\) is close to zero are (retarded time) snapshots of the dynamical flow when the largest amount of emission (the emissivity peak) is sourced extremely close to the photon shell in the bulk (see also Refs. [28; 29; 30]). In a similar vein, we now introduce the subring diameter calibration factors \(\alpha_{1;n}\) as \[\alpha_{1;n}:=\left<d_{n}\right>_{\psi}/\left<d_{\rm sh}\right>_{\psi}=1+\left<\bar{\eta}_{n;{\rm out}}\right>_{\psi}\,, \tag{53}\] where \(d_{n}(\psi)=\eta_{n;{\rm out}}(\psi)+\eta_{n;{\rm out}}(\psi+\pi)\) is the diameter of the \(n^{\rm th}\) photon subring, with \((\eta,\psi)=(\eta_{n;{\rm out}}(\psi),\psi)\) describing the outer edge of the order\(-n\) image of the emitting source, or equivalently, the order\(-n\) image of its outer boundary. In writing the above, we have used the fact that in static and spherically-symmetric spacetimes the shadow boundary curve is perfectly circular, \(\left<d_{\rm sh}\right>_{\psi}=2\eta_{\rm ps}\). We use the median here for consistency with [12] but this can be replaced with any characteristic measure of the diameter in principle. As above, if \(\alpha_{1;n}-1\) is small, then the emissivity peak is located close to the photon shell in the bulk. However, as we will establish below, \(\alpha_{1;n}-1\) is typically substantially smaller than \(\alpha_{1}-1\), meaning that a measurement of the diameter of the first subring will yield a more precise inference of the shadow diameter than is currently available.
Furthermore, the fractional variation in the subring diameter due to varying emitting region morphology is simply the difference between the maximum and minimum values that the relevant subring calibration factor takes over the range of morphological parameters, \[\left(\max.\left[\left<d_{n}\right>_{\psi}\right]-\min.\left[\left<d_{n}\right>_{\psi}\right]\right)/\left<d_{\rm sh}\right>_{\psi}=\max.\left[\alpha_{1;n}\right]-\min.\left[\alpha_{1;n}\right]=:\Delta\alpha_{1;n}\,. \tag{54}\] As we shall show below, the fractional subring diameter variation diminishes exponentially with increasing image order as \(\Delta\alpha_{1;n+1}\approx{\rm e}^{-\gamma_{\rm ps}}\Delta\alpha_{1;n}\), in line with our expectation, meaning that the variations in the emitting region morphology become concomitantly suppressed. Combining the two statements above, with increasing order of image, on the one hand, we can obtain increasingly better estimates of the shadow boundary, whereas, on the other, the impact of the gastrophysics on determining the subring diameter becomes increasingly unimportant. Equivalently, by measuring the diameters of increasingly higher-order photon subrings (i.e., with increasing \(n\)), we obtain increasingly accurate (\(\alpha_{1;n}-1\to 0\)) as well as increasingly precise (\(\Delta\alpha_{1;n}\to 0\)) estimates of the shadow boundary diameter \(d_{\rm sh}\), which depends purely on the spacetime geometry. Thus, accessing increasingly higher-order subrings allows for “increasingly direct” measurements of the shadow size of the astrophysical ultracompact object. Finally, due to the approximate scaling relations between the fractional diameters of the subrings (see eqs. 25 and 28), \[(d_{n+1}/d_{\rm sh}-1)/(d_{n}/d_{\rm sh}-1)=\bar{\eta}_{n+1;{\rm out}}/\bar{\eta}_{n;{\rm out}}\approx{\rm e}^{-\gamma_{\rm ps}}\,, \tag{55}\] the calibration factors (53) corresponding to consecutive subrings will also obey an approximate scaling relation, \[(\alpha_{1;n+1}-1)/(\alpha_{1;n}-1)\approx{\rm e}^{-\gamma_{\rm ps}}\,. \tag{56}\] Thus, a measurement of two subring diameters will yield an approximate measurement of the lensing Lyapunov exponent \(\gamma_{\rm ps}\). To better understand, quantitatively, the ability of subring measurements to yield the lensing Lyapunov exponent, we introduce two error functions as follows, \[\mathrm{Err.}(\gamma_{\rm ps};d_{n}):=\left|\ln\left|\frac{\left<\bar{\eta}_{n;{\rm out}}\right>_{\psi}}{\left<\bar{\eta}_{n+1;{\rm out}}\right>_{\psi}}\right|\Big/\gamma_{\rm ps}-1\right|=\left|\ln\left|\frac{\alpha_{1;n}-1}{\alpha_{1;n+1}-1}\right|\Big/\gamma_{\rm ps}-1\right|\,, \tag{57}\] \[\mathrm{Err.}(\gamma_{\rm ps};w_{n}):=\left|\ln\left|\frac{\left<w_{n}\right>_{\psi}}{\left<w_{n+1}\right>_{\psi}}\right|\Big/\gamma_{\rm ps}-1\right|\,, \tag{58}\] where \(w_{n}\) is the width of the \(n^{\rm th}-\)subring (52). The first error function \(\mathrm{Err.}(\gamma_{\rm ps};d)\) measures the ability to extract the lensing Lyapunov exponent from a measurement of the diameters of two consecutive order subrings whereas the second \(\mathrm{Err.}(\gamma_{\rm ps};w)\) can be used to estimate the same when using the widths of two consecutive order subrings instead. We note that with additional knowledge of the inclination of the observer, the scaling relations (eqs. 55 and 52) used to construct the error functions (57) can be modified to reflect the corrections obtained in eq. 25 to further reduce the error in extracting the lensing Lyapunov exponent.
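For concreteness, eqs. 53 and 57 translate directly into code. The helpers below are our own schematic transcription; the outer-edge conformal radii are assumed to be supplied as samples of the polar curve \(\bar{\eta}_{n;{\rm out}}(\psi)\).

```python
# Direct transcriptions (ours, schematic) of eq. 53 and eq. 57 as helpers.
import numpy as np

def alpha1_n(eta_bar_out_samples):
    """Subring diameter calibration factor, eq. 53 (median over psi)."""
    return 1.0 + np.median(eta_bar_out_samples)

def err_gamma_d(alpha_n, alpha_np1, gamma_ps=np.pi):
    """Eq. 57: fractional error in gamma_ps from two subring diameters."""
    return abs(np.log(abs((alpha_n - 1.0) / (alpha_np1 - 1.0))) / gamma_ps - 1.0)

# Perfectly self-similar subrings (eq. 56) recover gamma_ps exactly:
print(err_gamma_d(1.0 + 0.1, 1.0 + 0.1 * np.exp(-np.pi)))   # -> 0.0
```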
### Subring Variation with Varying Boundaries of a Thin-Disk Emitting Region Viewed Face-On

For the configuration of a geometrically-thin source of emission viewed by an observer present at zero inclination (i.e., the observer lies along the spacelike normal \(n_{\rm d}\) to the thin disk, \(n_{\rm d}\propto\partial_{z}\)), the image contains no Einstein rings due to the absence of emission from the line of caustics. All photons are emitted from a colatitude of \(\vartheta_{\rm e}=\pi/2\), and the net angular deflection experienced by the photons forming the \(n^{\rm th}-\)subring on the observer's screen is \(\not{\Delta}\vartheta_{n}=n\pi+\pi/2\) (10). Due to the symmetry of the configuration, the photon subrings on the image plane are all circularly-symmetric. Since this configuration is, in general, composed of equatorial emitters present at different radii outside the horizon, i.e., \((r,\vartheta,\varphi)=(r_{\rm e}>r_{\rm h},\pi/2,0\leq\varphi_{\rm e}<2\pi)\), we analyze first the image formation of arbitrary equatorial emitters. The order\(-n\) image of any such emitter appears on the image plane at \((\eta,\psi)=(\eta_{n}(r_{\rm e},\vartheta_{\rm e}),\psi^{\pm}(\varphi_{\rm e}))\). Due to the trivial relation between the image polar location \(\psi_{\rm e}\) and the emission source azimuthal location \(\varphi_{\rm e}\) (8), analyzing image formation reduces to understanding the relationship between the image radial coordinate \(\eta_{\rm e}\) and the emission source radial coordinate \(r_{\rm e}\). For the order\(-n\) image, this relationship is given by solving the integral equation (6), \[\not{\Delta}\vartheta(\eta_{n;{\rm e}}(r_{\rm e}),r_{\rm e})=(2n+1)\pi/2\,. \tag{59}\] We can see from Fig. 6 how photons that are emitted from inside the photon sphere in the bulk appear always inside the shadow boundary on the image plane, consistent with Fig. 1 as well as the bottom right panel of Fig. 2. Furthermore, \(n\geq 1\) photons emitted from outside the photon sphere typically appear outside the shadow boundary curve. However, and as noted above, this is not always true: Photons emitted from just outside the photon shell can appear inside the shadow boundary. This is clearly demonstrated for the \(n=1\) image by the blue solid line in the right panel here. Conversely, we reiterate that while all of the photons that appear outside the shadow boundary were emitted from outside the photon sphere (see also the left panel of Fig. A1 of [30]), the photons that appear inside the shadow boundary need not have been emitted from inside the photon sphere. Recent work has demonstrated how observing multiple flaring events can be used to tomographically map the spacetime geometry of astrophysical ultracompact objects such as M87\({}^{*}\) or Sgr A\({}^{*}\) (see, e.g., [95]). With such exciting possibilities on the horizon, a holistic qualitative understanding of the different types of photon orbits that participate in image formation can be useful. It is clear that the \(n=1\) photons that appear sufficiently well outside the shadow boundary (\(\bar{\eta}_{1;\mathrm{e}}\gtrsim 0\)) were necessarily emitted from well outside the photon sphere (\(\bar{r}_{\mathrm{e}}\gtrsim 0\)) towards the BH (\(k_{\mathrm{e}}^{r}<0\)). These are what we called \(\mathrm{type}\;\mathrm{E}\) orbits, above and in Appendix A. On the other hand, the \(\mathrm{type}\;\mathrm{A}\) photons appear inside the shadow boundary and were necessarily emitted from well inside the photon sphere in the radially outward direction (\(k_{\mathrm{e}}^{r}>0\)).
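Eq. 59 is straightforward to solve numerically. The sketch below (ours; Schwarzschild with \(M=1\), and all numerical tolerances and brackets our own choices) totals the angular deflection of a photon emitted radially inward from \(r_{\rm e}\) with \(\eta>\eta_{\rm ps}\), through its turning point and out to a distant observer, and bisects for the order\(-n\) image radius. The substitution \(r=r_{\rm tp}+u^{2}\) regularizes the integrable \(\mathcal{R}^{-1/2}\) divergence at the turning point. This covers photons emitted towards the BH (the type E family), i.e., images appearing sufficiently outside the shadow boundary.

```python
# A sketch (ours) of solving eq. 59 for face-on thin-disk image radii in
# Schwarzschild (M = 1) for photons emitted inward with eta > eta_ps.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M = 1.0
eta_ps = np.sqrt(27.0) * M
f = lambda r: 1.0 - 2.0 * M / r
R = lambda r, eta: 1.0 - f(r) * eta**2 / r**2             # radial potential
dtheta_dr = lambda r, eta: eta / (r**2 * np.sqrt(R(r, eta)))

def r_turn(eta):
    """Outermost radial turning point, R = 0, for eta > eta_ps."""
    return brentq(lambda r: R(r, eta), 3.0, 1e4)

def defl_from_turn(eta, r_max):
    """Deflection from r_tp to r_max; r = r_tp + u^2 regularizes R^(-1/2)."""
    rtp = r_turn(eta) + 1e-9          # nudge off the root for numerical safety
    integrand = lambda u: 2.0 * u * dtheta_dr(rtp + u**2, eta)
    return quad(integrand, 0.0, np.sqrt(r_max - rtp), limit=200)[0]

def image_radius(n, r_e, r_obs=1e6):
    """Conformal radius eta_bar of the order-n image of an emitter at r_e."""
    target = (2 * n + 1) * np.pi / 2.0
    total = lambda eta: defl_from_turn(eta, r_e) + defl_from_turn(eta, r_obs)
    eta_max = r_e / np.sqrt(f(r_e)) * (1.0 - 1e-9)        # tangential emission
    eta = brentq(lambda e: total(e) - target, eta_ps * (1.0 + 1e-12), eta_max)
    return eta / eta_ps - 1.0

e1, e2 = image_radius(1, 6.0 * M), image_radius(2, 6.0 * M)
print(e1, e2, e2 / e1, np.exp(-np.pi))   # ratio approaches e^(-pi), cf. eq. 28
```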
Figure 6: _Image locations of equatorial emitters in a Schwarzschild BH spacetime._ In the left panel here, we show the variation in the (conformal) radius \(\bar{\eta}_{1}\) of the first-order image on the image plane with changing distance \(r_{\mathrm{e}}\) of an equatorial emitter from the BH. The shadow boundary on the image plane is at \(\bar{\eta}=0\) and the photon shell is located at \(r=r_{\mathrm{ps}}=3M\). Since the faraway observer is on the \(+z-\)axis, the net deflection experienced by photons forming the order-\(n\) image is \(\not{\Delta}\theta_{n}=n\pi+\pi/2\). Since the conformal radii of consecutive-order images are predicted to follow a scaling relation (28), \(\bar{\eta}_{n+1}=\bar{\eta}_{n}\mathrm{e}^{-\gamma_{\mathrm{ps}}}\), we also show the scaled conformal radii of the \(n=2,3,4\) images. Here, for the Schwarzschild spacetime, the lensing Lyapunov exponent takes value \(\gamma_{\mathrm{ps}}=\pi\). As can be seen clearly from this panel, this scaling relation holds very well even for the \(n=1,2\) images for emitters present close to the BH, \(r_{\mathrm{e}}\lesssim 10M\). The inset zooms the \(y-\)axis by \(\sim\mathrm{e}^{\pi}\approx 23\) to make the size of the \(n=2\) image and its scaling relation with the \(n=3\) image easier to see. In the panel on the right, we zoom in on the shaded regions in the left panel and switch to the conformal bulk radial coordinate, \(\bar{r}=r/r_{\mathrm{ps}}-1\), for the \(x-\)axis. This clearly demonstrates how \(n=1\) photons that are emitted from just outside the photon sphere (\(\bar{r}>0\)) can appear inside the shadow boundary (\(\bar{\eta}_{1}<0\)). The dashed purple line shows the solution to the radial turning point equation \(k^{r}=0=\mathcal{R}(\bar{\eta},\bar{r})\) and the inset plays a similar role to its counterpart. This figure naturally relates the areal radii of the inner \(r_{\mathrm{in}}\) and outer \(r_{\mathrm{out}}\) boundaries of a geometrically-thin accretion disk present outside a Schwarzschild BH to their gravitationally-lensed order\(-n\) conformal radii on the image plane, \(\bar{\eta}_{n;\mathrm{in}}=\bar{\eta}(r_{\mathrm{e}}\!=\!r_{\mathrm{in}})\) and \(\bar{\eta}_{n;\mathrm{out}}=\bar{\eta}(r_{\mathrm{out}})\) respectively. Thus, the observed order\(-n\) diameter of such a thin disk is \(d_{n}=2\eta_{\mathrm{ps}}(1+\bar{\eta}_{n;\mathrm{out}})\) and its width is \(w_{n}=\eta_{\mathrm{ps}}(\bar{\eta}_{n;\mathrm{out}}-\bar{\eta}_{n;\mathrm{in}})\), with \(\eta_{\mathrm{ps}}=3\sqrt{3}M\approx 5.2M\) here.

Furthermore, while all of the photons that appear inside the shadow boundary were emitted with initial positive radial velocity (\(k_{\mathrm{e}}^{r}>0\); dashed lines in Fig. 2), the photons that appear outside the shadow boundary need not have been emitted in the radially-inward direction (see the dotted lines in Fig. 2). This brings us unavoidably to the technical but insightful discussion of the \(\mathrm{type}\;\mathrm{C}\) orbits. We remind the reader that the universal scaling relations for these types of orbits differ from the other (\(\mathrm{A},\mathrm{E}\)) ones (see Appendix A and the discussion in Sec. I.2). We can understand the \(\mathrm{type}\;\mathrm{C}\) orbits more intuitively as follows. The total angular deflection experienced by photons emitted from their radial turning points (i.e., with exactly zero radial velocity at emission), \(\not{\Delta}\theta_{\mathrm{tp}}(\bar{\eta})=\not{\Delta}\theta(\bar{\eta},r_{\mathrm{tp}}(\bar{\eta}))\), increases as photons appear increasingly closer to the shadow boundary \(\bar{\eta}\to 0^{+}\) (see the right panel of Fig. 13). Thus, for this configuration of source and detector, we should be able to find a photon orbit with some angular momentum \(\bar{\eta}=\bar{\eta}_{1;\mathrm{C^{T}}}>0\) such that \(\not{\Delta}\theta_{\mathrm{tp}}(\bar{\eta}_{1;\mathrm{C^{T}}})=3\pi/2\). This is the \(\mathrm{type}\;\mathrm{C}^{\mathrm{T}}\) orbit, and its radius \(\bar{\eta}=\bar{\eta}_{1;\mathrm{C^{T}}}\) on the image plane, located at the intersection of the blue and dashed-purple (the “turning point line”) curves in the right panel of Fig. 6, demarcates the region that collects \(n=1\) photons moving on \(\mathrm{type}\;\mathrm{E}\) orbits (\(\bar{\eta}>\bar{\eta}_{1;\mathrm{C^{T}}}\)) from that which collects all the other types of \(n=1\) photon orbits. If a photon appears in the region \(0<\bar{\eta}<\bar{\eta}_{1;\mathrm{C^{T}}}\) and was emitted from its turning point, it would have been lensed through an angle larger than \(3\pi/2\), and cannot participate in the \(n=1\) image formation. For any \(\bar{\eta}>0\), only photons emitted with initially positive radial velocities undergo smaller angular deflections than those emitted with zero radial velocities. Thus, the photons forming the \(n=1\) image in the region \(0<\bar{\eta}<\bar{\eta}_{1;\mathrm{C^{T}}}\) must have been emitted from outside the photon sphere in the radially outward direction; these are the \(\mathrm{type}\;\mathrm{C}^{+}\) orbits. Furthermore, particularly clearly visible in the right panel of Fig. 6 are the photons that appear on (\(\bar{\eta}=0\)) and just inside (\(\bar{\eta}\lesssim 0\)) the shadow boundary, all of which originated from just outside the photon sphere (also all emitted with positive radial velocities): These are the \(\mathrm{type}\;\mathrm{C}^{0}\) and the \(\mathrm{type}\;\mathrm{C}^{-}\) photons respectively. Photons that were emitted from just inside the photon sphere and which appear inside the shadow boundary are also of \(\mathrm{type}\;\mathrm{C}^{-}\). The blue-shaded region of this panel houses the \(\mathrm{type}\;\mathrm{E}\) and all \(\mathrm{type}\;\mathrm{C}\) orbits whereas the red-shaded region is composed only of the \(\mathrm{type}\;\mathrm{C}^{-}\) and the \(\mathrm{type}\;\mathrm{A}\) photon orbits.

In the previous paragraph, we have discussed the distinctive features of the different \(\mathrm{type}\;\mathrm{C}\) photon orbits that form the \(n=1\) image, namely where they are sourced from in the bulk and with what velocities, and also where they appear on the image plane. We now provide a simplified summary of the organization of photon orbits of all types on the image plane. Photons that form the inner photon ring (\(\bar{\eta}<0\)) have orbits of \(\mathrm{type}\;\mathrm{A}\) or \(\mathrm{C}^{-}\) whereas those that form the outer photon ring can be of \(\mathrm{type}\;\mathrm{C}^{+},\mathrm{C}^{\mathrm{T}}\), or \(\mathrm{E}\), in the sequence of increasing distance from the center of the image plane.
The \(\mathrm{type}\;\mathrm{C}^{0}\) photon appears exactly on the shadow boundary (\(\bar{\eta}=0\)) and demarcates the outer and inner sections of the photon subring. Finally, we note that this organization is generically true for arbitrary higher-order images and also holds qualitatively for arbitrary relative inclinations of the source and the observer and for arbitrarily geometrically-thick sources. Since the configuration of a geometrically-thin disk of emission viewed by an observer lying along the axis of the disk can be parametrized simply by two quantities, namely the location of its inner boundary \(r=r_{\mathrm{in}}\) and that of its outer one \(r=r_{\mathrm{out}}\), Fig. 6 naturally also captures the variation in the locations of the edges of its first four photon subrings, which are perfect annuli \(\eta_{n;\mathrm{in}}\leq\eta\leq\eta_{n;\mathrm{out}}\) on the image plane. Clearly, and as discussed above at length, the inner edge of the order\(-n\) subring lies inside the \(n=\infty\) critical curve (\(\bar{\eta}_{n;\mathrm{in}}<0\)) only when the inner boundary of the emitting region lies inside the photon sphere or outside but very close to it, i.e., \(r_{\mathrm{in}}\lesssim r_{\mathrm{ps}}^{+}\). If the outer boundary of the emitting region lies well outside the photon sphere (\(r_{\mathrm{out}}\gtrsim r_{\mathrm{ps}}\)), the outer edge of the \(n-\)ring lies outside the shadow boundary (\(\bar{\eta}_{n;\mathrm{out}}>0\)). The maximal sets of the equatorial emission photon rings, corresponding to the emission region located anywhere between the event horizon and infinity, are referred to as (equatorial) lensing bands (e.g., [67], also studied by [59]). Due to circular symmetry on the image plane, the median diameter of the \(n^{\mathrm{th}}\)-photon subring is simply given as \(\left<d_{n}\right>_{\psi}=2\eta_{\mathrm{ps}}(1+\left<\bar{\eta}_{n;\mathrm{out}}\right>_{\psi})=2\eta_{\mathrm{ps}}(1+\bar{\eta}_{n;\mathrm{out}})\). Therefore, we can simply read off from the figure the range of values that the \(n=1\) subring diameter can take, for a thin-disk emission source in a Schwarzschild BH spacetime when viewed by an observer face-on. This figure also directly provides the \(n=1\) calibration factor since \(\alpha_{1;1}=1+\bar{\eta}_{1;\mathrm{out}}\) for this configuration, which takes values \(-3.5\times 10^{-2}\lesssim\alpha_{1;1}-1\lesssim 18.7\times 10^{-2}\). The maximal fractional variation in the \(n=1\) subring diameter, \(\left[d_{1}(\infty)-d_{1}(r_{\mathrm{h}})\right]/d_{\mathrm{sh}}=\bar{\eta}_{1}(\infty)-\bar{\eta}_{1}(2M)\), then is simply (54), \(\Delta\alpha_{1;1}=\max.\left[\alpha_{1;1}\right]-\min.\left[\alpha_{1;1}\right]\approx 0.2\) (or equivalently, a \(20\%\) variation). Similarly, for the higher-order subrings we can find the ranges for the calibration factors to be \(-1.6\times 10^{-3}\lesssim\alpha_{1;2}-1\lesssim 6.1\times 10^{-3}\), \(-0.7\times 10^{-4}\lesssim\alpha_{1;3}-1\lesssim 2.6\times 10^{-4}\), and \(-0.3\times 10^{-5}\lesssim\alpha_{1;4}-1\lesssim 1.1\times 10^{-5}\). As promised, we see that \(\Delta\alpha_{1;n+1}\approx\mathrm{e}^{-\gamma_{\mathrm{ps}}}\Delta\alpha_{1;n}\), i.e., these variations are suppressed by a factor of \(\mathrm{e}^{\pi}\approx 23\) per increasing subring order. Changes in the subring widths \(w_{n}=\eta_{n;\mathrm{out}}-\eta_{n;\mathrm{in}}\) naturally follow identical trends.
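The quoted ranges themselves admit a quick consistency check (ours): the upper extremes shrink by roughly \(\mathrm{e}^{-\pi}\) per order, and plugging the \(n=1,2\) extremes into eq. 57 reproduces the error bound reported in the next paragraph.

```python
# A consistency check (ours) of the quoted face-on thin-disk ranges against
# the e^(-gamma_ps) suppression (eq. 56) and the eq. 57 error bound.
import numpy as np

a_minus_1 = {1: (-3.5e-2, 18.7e-2), 2: (-1.6e-3, 6.1e-3),
             3: (-0.7e-4, 2.6e-4), 4: (-0.3e-5, 1.1e-5)}

for n in (1, 2, 3):                       # upper extremes step down by ~e^(-pi)
    print(n, a_minus_1[n][1] * np.exp(-np.pi), a_minus_1[n + 1][1])

err_max = abs(np.log(a_minus_1[1][1] / a_minus_1[2][1]) / np.pi - 1.0)
print(f"Err(gamma_ps; d_1) up to ~ {err_max:.1%}")    # about 9%, as quoted
```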
These bounds on \(\alpha_{1;n}\) serve two simultaneous purposes. First, these constitute useful sanity checks for current and future imaging techniques that extract subring diameters from synthetic images as well as, eventually, from actual images. Second, if we know the emitting region around an ultracompact object to be geometrically-thin, say from modeling the spectral data associated with the source, then we can establish whether or not it is a Schwarzschild BH (a new “weak” null test) by checking the consistency of the measured \(n=1\) photon subring diameter with the range of \(\alpha_{1;1}\) obtained above. Clearly, the detection of each subring yields an independent null test of the spacetime geometry, with the precision of the null test increasing with increasing subring order. Finally, already when using the two lowest-order subring diameters, we report the fractional error in recovering the lensing Lyapunov exponent to be at most about \(\mathrm{Err.}(\gamma_{\mathrm{ps}};d_{1})=9\%\), as discussed in the figure. It is also apparent from the figure that this error decreases rapidly as we access increasingly higher-order subrings. This establishes the goodness of the approximate conformal diameter (28) and width-scaling (52) relations obtained above.

### Subring Variation with Varying Boundaries of a Thin-Disk Emitting Region and Varying Observer Inclination

In Sec. II.2 above, we have considered the image characteristics of a geometrically-thin emission disk viewed by an observer face-on, i.e., their inclination is \(\dot{\iota}=0\) or \(\pi\). We now extend our analysis to the case when the observer is present at an inclination of \(0<\dot{\iota}<\pi\) w.r.t. the space-like normal \(n_{\mathrm{d}}\) to the thin-disk (see also [69]). The image, and consequently the photon ring, is no longer circularly-symmetric. Since the thin-disk can be decomposed into a series of concentric great-circles with radii in the range \(r_{\mathrm{in}}\leq r_{\mathrm{e}}\leq r_{\mathrm{out}}\), we can considerably reduce the problem of its image formation by working with circular rings of emission.

Figure 7: _Variation in the asymmetry of higher-order images due to observer inclination._ Since a thin emission disk can be decomposed into a series of great circles, we focus here on the properties of circular emitters. This also makes understanding images of hotspots that move (temporarily) on nearly circular orbits around a BH intuitive. The top-left panel shows the relation between the colatitudes \(\vartheta_{\rm e}\) and the azimuthal positions \(\varphi_{\rm e}\) of emitters present on a circle for different inclinations \(\dot{\iota}\) of its plane relative to the observer (see Fig. 2). The top-right panel shows the total angular deflection \(\not{\Delta}\vartheta_{n}\) experienced by an order-\(n\) photon emitted by such a circular emitter as a function of its image plane polar angle \(\psi\). This shows how photons that appear on the \(+\alpha-\)axis (i.e., at \(\psi=0\)) undergo the minimal angular deflection for inclinations \(\dot{\iota}<\pi/2\) and vice versa. It is easy to see from Fig. 2 that the normal to the plane of the ring \(n_{\rm d}\) is projected onto the \(-\alpha-\)axis on the image plane, i.e., it lies along \(\psi=\pi\). The bottom-left panel shows the variation of the image radius \(\eta_{1}\) of the \(n=1\) image with the image plane polar angle \(\psi_{1}\) for circular emitters of varying radii and inclinations. The general rule of thumb applies very well: Photons undergoing greater angular deflection appear closer to the shadow boundary (shown as a horizontal red solid line here). Images of circular emitters are asymmetric on the image plane of an inclined observer. The bottom-right panel measures the asymmetry of the \(n=1\) image for circular emitters of varying proper radii \(r\) in the bulk when viewed from different inclinations (cf. also [58]). This panel shows how for circular emitters of small proper radii \(r\), the image asymmetry remains very small, \(\Delta\lambda_{1}\lesssim 0.1\). Independently of size, the \(n=1\) image of a moderately geometrically-thick emitting region [109; 110; 111] present outside a Schwarzschild BH viewed approximately face-on should generically have small asymmetry, \(\Delta\lambda\lesssim 0.15\). These conditions are likely met for M87\({}^{*}\), for which \(\dot{\iota}\approx 17^{\circ}\) [112], and its image asymmetry could help constrain its spin (see also [113]).

Each of these circles can be parametrized using polar coordinates \((\vartheta_{\rm e}(s),\varphi_{\rm e}(s))\), and all we will need to solve the reduced problem is to find the relation \(\vartheta_{\rm e}=\vartheta_{\rm e}(\dot{\iota},\varphi_{\rm e})\). Without loss of generality, due to the rotational freedom of the asymptotic static observer's spatial triad, we set the normal \(n_{\rm d}\) to the plane of the disk/circle to lie in the first quadrant of the \(yz-\)plane (\(y>0;z>0\)), as pictured in the top left panel of Fig. 5. Therefore, in isotropic Cartesian coordinates (cf. [86]), it is explicitly given as \(n_{\rm d}^{\tilde{\mu}}\propto(0,0,\sin\dot{\iota},\cos\dot{\iota})\). Furthermore, since an arbitrary point on a 2-sphere of radius \(r_{\rm e}\) can be represented as \(r_{\rm e}^{\tilde{\mu}}\propto(0,r_{\rm e}\sin\vartheta_{\rm e}\cos\varphi_{\rm e},r_{\rm e}\sin\vartheta_{\rm e}\sin\varphi_{\rm e},r_{\rm e}\cos\vartheta_{\rm e})\), the parametric equation for a great circle inclined relative to an observer on the \(+z-\)axis is given via \(n_{\rm d}\cdot r_{\rm e}=0\) (for \(\dot{\iota}\neq\pi/2\)) as, \[\vartheta_{\rm e}(\dot{\iota},\varphi_{\rm e})=\begin{cases}\begin{cases}-\arctan\left[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}\right]+\pi\,,&0\leq\varphi_{\rm e}<\pi\\ -\arctan\left[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}\right]\,,&\pi\leq\varphi_{\rm e}<2\pi\end{cases}\,,&\left[0<\dot{\iota}<\pi/2\right]\\ \begin{cases}-\arctan\left[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}\right]\,,&0\leq\varphi_{\rm e}<\pi\\ -\arctan\left[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}\right]+\pi\,,&\pi\leq\varphi_{\rm e}<2\pi\end{cases}\,,&\left[\pi/2<\dot{\iota}<\pi\right]\end{cases}\,. \tag{60}\] In writing the above, we have ensured that the emission source colatitudes lie in the principal sheet of the spherical-polar coordinate system, \(0\leq\vartheta_{\rm e}\leq\pi\). When the observer lies in the plane of the disk, \(\dot{\iota}=\pi/2\), most of its image appears on the \(\beta-\)axis. In addition, however, due to emission sources being present at both caustics, the image also contains Einstein rings, or critical curves, of all orders. Photons emitted from \(\varphi=\varphi_{\rm e}\) appear on the image plane at \(\psi_{\rm e}(\varphi_{\rm e})\), with this map determined straightforwardly by eq. 8. This equation makes clear that photons that appear on the image plane at \(\psi=\pi/2\), \(3\pi/2\), i.e., on the image plane \(\beta-\)axis, were emitted from \(\varphi_{\rm e}=\pi,0\) respectively. Now, from eq.
60 we can also see that these photons are emitted from colatitudes of \(\vartheta_{\rm e}=\pi/2\). Thus, the image plane \(\beta-\)axis collects photons that deflect by exactly \(\dot{\Delta}\vartheta_{n}=n\pi+\pi/2\), i.e., by a precise number of "half-loops" around the black hole. The \(n^{\rm th}-\)order images of the inner \(r=r_{\rm in}\) and outer \(r=r_{\rm out}\) boundaries of the thin-disk in the bulk form the inner \(\eta_{n;{\rm in}}(\dot{\iota},\psi_{\rm e})\) and outer \(\eta_{n;{\rm out}}(\dot{\iota},\psi_{\rm e})\) edges of the \(n^{\rm th}-\)order photon subring. These closed curves on the image plane can be obtained by solving the following equations as usual, \[\dot{\Delta}\vartheta\left(\eta_{n;{\rm in}}(\dot{\iota},\psi_{ \rm e}),r_{\rm in}\right) = \dot{\Delta}\vartheta_{n}(\vartheta_{\rm e}(\dot{\iota},\psi_{\rm e }))\,, \tag{61}\] \[\dot{\Delta}\vartheta\left(\eta_{n;{\rm out}}(\dot{\iota},\psi_{ \rm e}),r_{\rm out}\right) = \dot{\Delta}\vartheta_{n}(\vartheta_{\rm e}(\dot{\iota},\psi_{ \rm e}))\,. \tag{62}\] We note that for small relative inclinations \(\dot{\iota}\approx 0\) or \(\dot{\iota}\approx\pi\), the emission is sourced from colatitudes \(\vartheta_{\rm e}(\psi)\approx\pi/2-\dot{\iota}\cos\psi\) or \(\vartheta_{\rm e}(\psi)\approx\pi/2-(\dot{\iota}-\pi)\cos\psi\) respectively. Thus, we recover the approximation described in Sec. III D of [57] for \(\dot{\iota}\approx 0\). To relate the two expressions explicitly, we simply have to rotate the normal to the disk around the \(+z-\)axis appropriately. Equivalently, a standard rotation of the Bardeen Cartesian axes by some appropriate angle also does the trick. The top left panel of Figure 7 shows the polar coordinates of rings of emitters of arbitrary radii for a number of the observer viewing angles \(\dot{\iota}\) in arbitrary static and spherically-symmetric spacetimes (60). For this configuration, the normal to the plane of the ring \(n_{\rm d}\) appears on the image plane as pointing along the negative \(\alpha-\)axis (see Fig. 5). This is therefore the direction on the image plane along which the ring is most stretched, as \(yz-\)plane (\(y>0;z>0\)), as pictured in the top left panel of Fig. 5. Therefore, in isotropic Cartesian coordinates (cf. [86]), it is explicitly given as \(n_{\rm d}^{\tilde{\mu}}\propto(0,0,\sin\dot{\iota},\cos\dot{\iota})\). Furthermore, since an arbitrary point on a 2-sphere of radius \(r_{\rm e}\) can be represented as \(r_{\rm e}^{\tilde{\mu}}\propto(0,r_{\rm e}\sin\vartheta_{\rm e}\cos\varphi_{ \rm e},r_{\rm e}\sin\vartheta_{\rm e}\sin\varphi_{\rm e},r_{\rm e}\cos\vartheta _{\rm e})\), the parametric equation for a great circle inclined relative to an observer on the \(+z-\)axis is given via \(n_{\rm d}\cdot r_{\rm e}=0\) as (for \(i\neq\pi/2\)) as, \[\vartheta_{\rm e}(\dot{\iota},\varphi_{\rm e})=\begin{cases}\begin{cases}- \arctan[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}]+\pi\,,&0\leq\varphi_{\rm e} <\pi\\ -\arctan[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}]\,,&\pi\leq\varphi_{\rm e}<2\pi \end{cases}\,,&\left[0<\dot{\iota}<\pi/2\right],\\ \begin{cases}-\arctan[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}]\,,&0\leq\varphi_{ \rm e}<\pi\\ -\arctan[\cot\dot{\iota}\cdot\csc\varphi_{\rm e}]+\pi\,,&\pi\leq\varphi_{\rm e}<2 \pi\end{cases}\,,&\left[\pi/2<\dot{\iota}<\pi\right],\end{cases} \tag{63}\] In writing the above, we have ensured that the emission source colatitudes lie in the principal sheet of the spherical-polar coordinate system, \(0\leq\vartheta_{\rm e}\leq\pi\). 
When the observer lies in the plane of the disk, \(\dot{\iota}=\pi/2\), most of its image appears on the \(\beta-\)axis. In addition, however, due to emission sources being present at both caustics, the image also contains Einstein rings, or critical curves, of all orders. Photons emitted from \(\varphi=\varphi_{\rm e}\) appear on the image plane at \(\psi_{\rm e}(\varphi_{\rm e})\), with this map determined straightforwardly by eq. 8. This equation makes clear that photons that appear on the image plane at \(\psi=\pi/2\), \(3\pi/2\), i.e., on the image plane \(\beta-\)axis, were emitted from \(\varphi_{\rm e}=\pi,0\) respectively. Now, from eq. 60 we can also see that these photons are emitted from colatitudes of \(\vartheta_{\rm e}=\pi/2\). Thus, the image plane \(\beta-\)axis collects photons that deflect by exactly \(\dot{\Delta}\vartheta_{n}=n\pi+\pi/2\), i.e., by a precise number of "half-loops" around the black hole. The \(n^{\rm th}-\)order images of the inner \(r=r_{\rm in}\) and outer \(r=r_{\rm out}\) boundaries of the thin-disk in the bulk form the inner \(\eta_{n;{\rm in}}(\dot{\iota},\psi_{\rm e})\) and outer \(\eta_{n;{\rm out}}(\dot{\iota},\psi_{\rm e})\) edges of the \(n^{\rm th}-\)order photon subring. These closed curves on the image plane can be obtained by solving the following equations as usual, \[\dot{\Delta}\vartheta\left(\eta_{n;{\rm in}}(\dot{\iota},\psi_{\rm e }),r_{\rm in}\right) = \dot{\Delta}\vartheta_{n}(\vartheta_{\rm e}(\dot{\iota},\psi_{\rm e }))\,, \tag{64}\] \[\dot{\Delta}\vartheta\left(\eta_{n;{\rm out}}(\dot{\iota},\psi_{ \rm e}),r_{\rm out}\right) = \dot{\Delta}\vartheta_{n}(\vartheta_{\rm e}(\dot{\iota},\psi_{\rm e }))\,. \tag{65}\] We note that for small relative inclinations \(\dot{\iota}\approx 0\) or \(\dot{\iota}\approx\pi\), the emission is sourced from colatitudes \(\vartheta_{\rm e}(\psi)\approx\pi/2-\dot{\iota}\cos\psi\) or \(\vartheta_{\rm e}(\psi)\approx\pi/2-(\dot{\iota}-\pi)\cos\psi\) respectively. Thus, we recover the approximation described in Sec. III D of [57] for \(\dot{\iota}\approx 0\). To relate the two expressions explicitly, we simply have to rotate the normal to the disk around the \(+z-\)axis appropriately. Equivalently, a standard rotation of the Bardeen Cartesian axes by some appropriate angle also does the trick. The top left panel of Figure 7 shows the polar coordinates of rings of emitters of arbitrary radii for a number of the observer viewing angles \(\dot{\iota}\) in arbitrary static and spherically-symmetric spacetimes (60). For this configuration, the normal to the plane of the ring \(n_{\rm d}\) appears on the image plane as pointing along the negative \(\alpha-\)axis (see Fig. 5). This is therefore the direction on the image plane along which the ring is most stretched, as \(yz-\)plane (\(y>0;z>0\)), as pictured in the teristics encode the extent of the physical region that sources the observed emission as well as the spacetime geometry, for static and spherically-symmetric spacetimes, and are relatively insensitive to the inclination of the observer. The error in the approximate conformal scaling relations (see eqs. 28 and 52) between the subring diameters (left) and the subring widths (right) is also clear to see from the color bar in the respective panels. Together with our findings in Sec. 
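The colatitude relation (60) is straightforward to evaluate numerically. The following minimal Python sketch implements it and verifies that the resulting points satisfy the defining orthogonality condition \(n_{\rm d}\cdot r_{\rm e}=0\); the inclination and azimuths below are chosen arbitrarily for illustration:

```python
import numpy as np

def emitter_colatitude(inc, phi):
    """Colatitude theta_e(inc, phi_e) of a point on a great circle whose
    normal n_d is inclined at an angle `inc` to the observer's +z axis,
    implementing the piecewise relation (60); assumes inc != pi/2 and
    phi not a multiple of pi."""
    t = -np.arctan(1.0 / (np.tan(inc) * np.sin(phi)))  # -arctan(cot(inc) csc(phi))
    if inc < np.pi / 2:                                # principal-sheet branches
        return t + np.pi if phi < np.pi else t
    return t if phi < np.pi else t + np.pi

# sanity check: such points satisfy n_d . r_e = 0 for n_d = (0, sin(inc), cos(inc))
inc = np.pi / 3                                        # arbitrary inclination
for phi in (0.3, 2.0, 4.0, 5.5):                       # arbitrary azimuths
    th = emitter_colatitude(inc, phi)
    n_dot_r = np.sin(inc) * np.sin(th) * np.sin(phi) + np.cos(inc) * np.cos(th)
    print(f"phi = {phi:.2f}: theta_e = {th:.4f}, n_d.r_e = {n_dot_r:+.1e}")
```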
These subring characteristics encode the extent of the physical region that sources the observed emission as well as the spacetime geometry, for static and spherically-symmetric spacetimes, and are relatively insensitive to the inclination of the observer. The error in the approximate conformal scaling relations (see eqs. 28 and 52) between the subring diameters (left) and the subring widths (right) is also clear to see from the color bar in the respective panels of Fig. 8. Together with our findings in Sec. II.2 above, this figure establishes that, for thin-disk emitting regions, a clean detection of multiple subrings can yield a precise estimate (\(\lesssim 9\%\)) of the lensing Lyapunov exponent, independently of the boundaries of the emitting region and the viewing angle. For reasonably compact thin disks, we get even better estimates of the lensing Lyapunov exponent in general.

Figure 8: _Variation in photon subrings cast by a geometrically-thin disk in a Schwarzschild BH spacetime on the screen of an inclined observer._ In the left panel here we show the change in the (scaled) median conformal radii of the first three (\(n=1-3\)) photon subrings with changing inclination of the observer \(\dot{\iota}\) on the \(x-\)axis and with varying size of the disk \(r_{\rm out}\). The right panel displays the median widths of these subrings with varying inner boundary of the emitting region, \(r_{\rm in}\), on the \(y-\)axis for a fixed large outer boundary, \(r_{\rm out}=2\times 10^{4}M\). Together, these panels show the maximal variation in the median diameters, e.g., \(\langle d_{1}\rangle_{\psi}=\langle 2\eta_{\rm ps}(1+\bar{\eta}_{1})\rangle_{\psi}\), and in the median widths of the subrings cast by a thin emission disk in a Schwarzschild BH spacetime. In each of the panels, as indicated in the text, the solid lines show the variation in the \(n=1\) subring characteristic, while the dashed (\(n=2\)) and dotted (\(n=3\)) lines show the same but for the next pair of higher-order subrings, scaled appropriately by the lensing Lyapunov exponent, \(\gamma_{\rm ps}\) (\(=\pi\) here). The squiggly lines indicate the locations of the event horizon (\(2M\); black), the photon sphere (\(3M\); blue), and the ISCO (\(6M\); green). This figure demonstrates that the deviations from circularity in the shapes of the higher-order images due to the observer's inclination do not impact the median diameter or the median width of the subring. This is reminiscent of the independence of the shadow boundary of the observer inclination \(\dot{\iota}\) in static and spherically-symmetric spacetimes. From the panel on the left, we see that the fractional radius of the \(n=1\) subring always remains small, \(|\left<\bar{\eta}_{1}\right>_{\psi}|<0.2\), independently of the compactness \(r_{\rm out}\) of the emitting region in the bulk, and a measurement of the subring diameter can yield an accurate estimate of the shadow diameter. Here we have introduced the \(n=1\) subring calibration factor \(\alpha_{1;1}\) to enable easy estimation of the increase in accuracy over current estimates of the shadow size (see Fig. 7 of [12]). The right panel estimates the widths of the \(n=1\) subrings to be at most \(\approx 1M\). For more realistic outer boundaries, we can easily infer an upper bound on the subring width from the left panel, \(\langle w_{n}\rangle_{\psi}=\left<\eta_{n;{\rm out}}-\eta_{n;{\rm in}}\right>_{\psi}\lesssim\left<\eta_{\rm ps}\bar{\eta}_{n;{\rm out}}\right>_{\psi}\). Finally, the color bar conveniently shows the error in recovering the lensing Lyapunov exponent from a joint measurement of the first two subrings (cf. eq. 57), when using either their diameters (left) or their widths (right), to be at most about \(10\%\). Subring asymmetry due to observer inclination is discussed in figures 5 and 7.

### Subring Variation with Radial-Size and Geometrical-Thickness of the Emitting Region

Since the ultracompact objects M87\({}^{*}\) and Sgr A\({}^{*}\) host geometrically-thick emitting regions [11; 5], we now analyze the variation in the morphology of the photon ring when the scale-height of the emitting region is nonzero. We will restrict our analysis here to the case of a face-on asymptotic observer (\(\dot{\iota}=0\) or \(\pi\)) for brevity. To enable this study, we parametrize the emitting region using three parameters, two that control the inner \(r_{\rm in}\) and outer \(r_{\rm out}\) boundaries of the emitting region, and a third, \(\vartheta_{1/2}\), that modifies its scale-height. Thus, photons are emitted from the region \(r_{\rm in}\leq r_{\rm e}\leq r_{\rm out}\) and colatitudes of \(\pi/2-\vartheta_{1/2}\leq\vartheta_{\rm e}\leq\pi/2+\vartheta_{1/2}\) (or equivalently from latitudes between \(\pm\vartheta_{1/2}\)). In the top-left panel of Fig. 2, we show the surfaces of such a conical-wedge-disk, with parameters \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2}\}=\{2M,18M,\pi/10\}\). A cross-section of the same is shown in the top-right panel there.

The regions with relatively darker shading in the bottom-right panel of Fig. 2 capture the properties of the photons that form the photon ring (for \(n\geq 1\)) on the image plane for this particular thick-disk configuration in a Schwarzschild BH spacetime. The image morphology of this thick disk is again circularly-symmetric, and higher-order images are annuli that are concentric with the shadow boundary or the \(n=\infty\) critical curve \(\mathscr{C}_{\infty}\). We can locate the inner \(\eta=\eta_{n;\rm in}\) and outer \(\eta=\eta_{n;\rm out}\) edges of the \(n^{\rm th}-\)order image from the following equations,

\[\eta_{n;\rm in} = \min\left\{\eta_{n-{\rm FF};\rm in},\eta_{n-{\rm BF};\rm in},\eta_{n-{\rm tp};\rm in}\right\}\,, \tag{64}\]
\[\eta_{n;\rm out} = \max\left\{\eta_{n-{\rm FF};\rm out},\eta_{n-{\rm BF};\rm out},\eta_{n-{\rm tp};\rm out}\right\}\,,\]

where we have defined \(\eta_{n-{\rm FF};\rm in}=\eta_{n-{\rm FF}}(r_{\rm in})\), \(\eta_{n-{\rm BF};\rm in}=\eta_{n-{\rm BF}}(r_{\rm in})\), \(\eta_{n-{\rm tp};\rm in}=\eta_{n-{\rm tp}}(r_{\rm in})\). These are determined by solving the following integral equations,

\[\hat{\Delta}\vartheta\left(\eta_{n-{\rm FF}},r_{\rm e}\right) = n\pi+\left(\pi/2-\vartheta_{1/2}\right)\,, \tag{65}\]
\[\hat{\Delta}\vartheta\left(\eta_{n-{\rm BF}},r_{\rm e}\right) = n\pi+\left(\pi/2+\vartheta_{1/2}\right)\,,\]
\[\hat{\mathcal{R}}\left(\eta_{n-{\rm tp}},r_{\rm e}\right) = 0\,,\quad\left[\text{defined only when }r_{\rm e}>r_{\rm ps}\right]\]

for \(r_{\rm e}=r_{\rm in}\). The last of these is relevant for eq. 64 only when \(n\pi+(\pi/2-\vartheta_{1/2})\leq\hat{\Delta}\vartheta\left(\eta_{n-{\rm tp}},r_{\rm e}\right)\leq n\pi+(\pi/2+\vartheta_{1/2})\). This analysis applies analogously for the outer boundary of the emitting region, \(r_{\rm e}=r_{\rm out}\). We have used \(n-{\rm FF}\) to denote the (relative) "front face" of the emitting region for the order\(-n\) image and \({\rm BF}\) similarly for the "back face." Without going into any detail, we will simply note here that similar equations (64, 65) can be used to determine the edges of the photon subrings when a thick-disk is viewed from an inclination (cf. eq. 61).
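In practice, solving the edge conditions (61)-(62) and (65) amounts to one-dimensional root-finding in the impact parameter. The following sketch illustrates this; the logarithmic deflection law used here, with Schwarzschild values and an arbitrarily chosen offset constant, is only a toy stand-in for a full numerical geodesic integrator \(\hat{\Delta}\vartheta(\eta,r_{\rm e})\):

```python
import numpy as np
from scipy.optimize import brentq

M, eta_ps, gamma_ps = 1.0, np.sqrt(27.0), np.pi   # Schwarzschild values
c_theta = 1.0                                     # arbitrary illustrative offset

def deflection(eta):
    # toy total bending angle: the logarithmic near-ring approximation,
    # Delta_theta ~ -(pi/gamma_ps) ln(eta/eta_ps - 1) + c_theta, standing in
    # for a full numerical geodesic integrator
    return -(np.pi / gamma_ps) * np.log(eta / eta_ps - 1.0) + c_theta

def subring_edge(target):
    # solve deflection(eta) = target for the image-plane radius eta,
    # in the spirit of eqs. 61-62 and 65
    return brentq(lambda eta: deflection(eta) - target,
                  eta_ps * (1.0 + 1e-12), 10.0 * eta_ps)

for n in (1, 2, 3):
    eta_n = subring_edge(n * np.pi + np.pi / 2)   # face-on thin-disk condition
    print(n, eta_n / eta_ps - 1.0)                # fractional radii shrink by e^-pi
```

The successive fractional radii returned above shrink by the factor \(\mathrm{e}^{-\gamma_{\rm ps}}=\mathrm{e}^{-\pi}\), as expected from the conformal scaling relation.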
As noted above, since the diameter \(d_{n}\) of the \(n^{\rm th}-\)order image is given as \(d_{n}=2\eta_{\rm ps}(1+\bar{\eta}_{n;\rm out})\), the left panel of Fig. 9 captures the variation in the diameters of the first four subrings with varying outer boundary \(r_{\rm out}\) and varying disk-height \(\vartheta_{1/2}\) of the emitting region in the bulk of a Schwarzschild BH spacetime. We allow \(\vartheta_{1/2}\) to take all possible values, \(0\leq\vartheta_{1/2}\leq\pi/2\), and \(r_{\rm out}\) to take a large range of values, \(2M<r_{\rm out}<3\times 10^{3}M\). The line \(\vartheta_{1/2}=0\) shows the same information as Fig. 6. In particular, the solid contour lines show the variation in \(\bar{\eta}_{1}=\alpha_{1;1}-1\) as lying in the range \(-0.01\lesssim\alpha_{1;1}-1<\infty\). The increase in the subring diameter with an increase in the size of the emission zone (\(r_{\rm out}\)) is natural: Smaller emission disks cast smaller images. The increase in the subring diameter with the increasing geometrical thickness of the disk is also to be expected and can be understood broadly as follows. While photons that appear in the first-order image of a geometrically-thin emission disk (\(\vartheta_{1/2}=0\)) all experience a net angular deflection of \(\hat{\Delta}\vartheta=3\pi/2\), the \(n=1\) image of a geometrically-thick emission disk of scale-height \(\vartheta_{1/2}\) additionally contains photons that undergo smaller deflections. Thus, photons emitted from the same outer boundary radial location \(r=r_{\rm out}\) in the thick-disk case can undergo smaller deflection before appearing in the \(n=1\) image as compared to the \(n=1\) image of the thin-disk. Since \(n\geq 1\) photons that are emitted from the same radial location, outside the photon sphere, but which undergo smaller angular deflections typically appear at larger impact parameters (see Fig. 2), the diameter of the photon subring increases with scale-height. For large disks in particular, \(r_{\rm out}\gg 10^{2}M\), the subring diameter changes essentially only with the scale-height of the disk. Therefore, this figure makes clear that, overall, the disk scale-height plays a significant role in determining the sizes of the photon rings. In Sec. 3.2 of [109] (see also Refs. [110, 111]), the effective scale-heights \(h/r\) for hot accretion flows around Kerr BHs were studied extensively and it was found that \(h/r\lesssim 0.2\), which translates to \(\vartheta_{1/2}\lesssim\pi/10\). For such moderate geometrical-thicknesses (to the left of the purple/magenta lines in the panels), developing the logic from Sec. II.2 above, we can now propose a stronger null test of the Schwarzschild BH metric that requires less prior knowledge of the morphology of the emitting region. If an \(n=1\) subring diameter measurement yields an \(\alpha_{1;1}\) value that lies outside the range \(-0.01\lesssim\alpha_{1;1}-1\lesssim 0.3\), then the spacetime geometry is likely not well described by the Schwarzschild metric (if we believe that current state-of-the-art GRMHD simulations provide sufficiently accurate models for these systems). Therefore, for realistic, moderate geometrical-thicknesses, we can obtain an accurate (\(|\alpha_{1;1}-1|\lesssim 0.3\)) and precise (\(\Delta\alpha_{1;1}\lesssim 0.3\)) inference of the size of the shadow \(\eta_{\rm ps}\) from a measurement of the \(n=1\) subring diameter. Notice how a measurement of even the lowest-order subring can yield a "more direct" constraint on the shadow size that is of comparable accuracy to what is currently achievable using \(\alpha_{1}-\)calibration techniques [32, 12].
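In practice, this "weak" null test reduces to a one-line consistency check, as in the following sketch; the measured diameter below is an invented placeholder, and only the quoted range \(-0.01\lesssim\alpha_{1;1}-1\lesssim 0.3\) comes from the analysis above:

```python
import numpy as np

d_sh_schw = 2.0 * np.sqrt(27.0)   # Schwarzschild shadow diameter in units of theta_g = M/D
d1_measured = 11.0                # hypothetical measured n = 1 subring diameter (same units)

alpha_11 = d1_measured / d_sh_schw
passes = -0.01 <= alpha_11 - 1.0 <= 0.3   # quoted range for moderate thicknesses
print(f"alpha_(1;1) - 1 = {alpha_11 - 1.0:+.3f}; consistent with Schwarzschild: {passes}")
```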
Note, however, that for configurations with substantial emission sourced from very large distances from the BH and from close to the line of sight (\(r_{\rm out}\gg 10^{3}M\) and \(\vartheta_{1/2}\to\pi/2\)), the diameter of the \(n=1\) subring may diverge, \(\bar{\eta}_{1;\rm out}\to\infty\). Such measurements could reveal the emitting region to be extremely geometrically-thick or that it has a strong jet component. The feasibility of detecting the \(n^{\rm th}-\)order subring has been discussed, e.g., in [58], and measurements of the characteristics of the \(n=1\) subring appear to be within reach, with longer (space-based) baselines. In the left panel of Fig. 9, we also show in the dashed lines the isocontours of the next pair of subleading-order subring diameters, scaled appropriately by the lensing Lyapunov exponent \(\gamma_{\rm ps}\), as indicated in the text. The variation in the conformal radius of the \(n=2\) subring is \(-0.001\lesssim\bar{\eta}_{2}=\alpha_{1;2}-1\lesssim 0.03\). Thus, while the trends in the \(n=2\) subring diameter with a varying outer boundary of the emission zone (\(y-\)axis) and with varying geometrical-thickness (\(x-\)axis) follow those described above for the \(n=1\) case, higher-order (\(n>1\)) subrings always remain compact on the image plane. Thus, with absolutely no knowledge of the emitting region morphology, a fully agnostic null test of the Schwarzschild BH geometry in the strong-field regime becomes possible. A measurement of the \(n=2\) subring diameter would return an exquisite, unprecedentedly accurate (\(|\alpha_{1;2}-1|\lesssim 3\times 10^{-2}\)) and precise (\(\Delta\alpha_{1;2}\lesssim 3\times 10^{-2}\)) measurement of the shadow size of an astrophysical UCO. Finally, the color bar there indicates the error in obtaining the Schwarzschild value of the lensing Lyapunov exponent, \(\gamma_{\rm ps}=\pi\), for a given set of morphological parameters from a joint measurement of multiple photon subrings. We find that for realistic emission region morphologies a joint measurement of the first and second subring diameters yields the lensing exponent to within \(\lesssim 20\%\). It is worth emphasizing that the left panel of Fig. 9 essentially tells us everything there is to know about the first-order images of an emitter present at an arbitrary location \((r_{\rm e},\vartheta_{\rm e})\) in a Schwarzschild BH spacetime. This can be seen simply by recognizing that we can make the replacement \(\vartheta_{\rm e}={\rm pv}[\pi/2+\vartheta_{1/2}]\) for \(r_{\rm h}<r_{\rm e}<r_{\rm ps}\) and \(\vartheta_{\rm e}={\rm pv}[\pi/2-\vartheta_{1/2}]\) for \(r_{\rm e}>r_{\rm ps}\), where \({\rm pv}[\cdot]\) indicates that we should use the principal value, \(0\leq\vartheta_{\rm e}\leq\pi\). Thus, an \(n=1\) photon emitted from a location \((r_{\rm e},\vartheta_{\rm e},\varphi_{\rm e})\) appears on the image plane at a location \((\bar{\eta}_{1},\pi/2-\varphi_{\rm e})\), with \(\bar{\eta}_{1}\) shown in the left panel of Fig. 9. Reversing this logic, we can identify the initial spatial locations of all of the \(n=1\) photons that appear on the image plane at a particular point \((\bar{\eta}_{1},\psi)\). The \(\bar{\eta}_{1}-\)isocontours can be used to compute a complicated surface \(r_{\rm e}(\vartheta_{\rm e})\) from which photons must be emitted so that they appear at the same location on the image plane. Such computations may have useful applications in efforts to tomographically map out spacetime by observing flaring events near supermassive UCOs [95].
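For concreteness, a joint measurement of the first two subring diameters yields the lensing Lyapunov exponent via the estimator \(\hat{\gamma}_{\rm ps}=\ln[(\alpha_{1;1}-1)/(\alpha_{1;2}-1)]\), which follows from the face-on conformal scaling of the fractional radii; we take this, as an assumption, to be the content of the error measure referenced above (cf. eq. 57). The input diameters in the sketch below are invented for illustration:

```python
import numpy as np

# hypothetical measured n = 1, 2 median subring diameters, in units of theta_g
d1, d2 = 11.0, 10.42
d_sh = 2.0 * np.sqrt(27.0)        # assume a Schwarzschild shadow diameter

alpha_1, alpha_2 = d1 / d_sh, d2 / d_sh
gamma_hat = np.log((alpha_1 - 1.0) / (alpha_2 - 1.0))
print(f"gamma_hat = {gamma_hat:.3f}; Err. = {abs(gamma_hat / np.pi - 1.0):.1%}")
```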
As noted above, another potential (albeit harder; [58]) observable characteristic of a photon subring is its width \(w_{n}\), which is given as \(w_{n}=\eta_{\rm ps}(\bar{\eta}_{n;{\rm out}}-\bar{\eta}_{n;{\rm in}})\). In the right panel of Fig. 9, we show the variation in the width of the \(n=1\) subring with changes in the morphological structure of a geometrically-thick emission disk viewed by an observer on the axis of the disk, for an outer boundary located at \(r_{\rm out}=2\times 10^{4}M\). It is clear from this panel that for moderate geometrical-thicknesses (\(\vartheta_{1/2}\lesssim\pi/10\)), the \(n=1\) subring width can be as large as one gravitational radius, \(w_{1}\approx 1M\) (note, however, that this is due to the extremely large outer boundary \(r_{\rm out}\)). We can also see from the color bars that we should expect similar or lower errors when extracting the lensing Lyapunov exponent when using the widths of the subrings as compared to when using their diameters.

Figure 9: _Photon subring variation with varying sizes and thicknesses of a thick emission disk in a Schwarzschild BH spacetime._ Same as Fig. 8 but for geometrically-thick disks, which we model as a conical torus (see Fig. 2), viewed face-on (\(\dot{\iota}=0\)). The \(x-\)axis now shows the variation in the scale-height or half-opening angle of the disk, i.e., the disk surfaces are at latitudes of \(\pm\vartheta_{1/2}\). We see from the left panel how the \(n=1\) subring diameter is sensitive to variations in the disk scale-height. Indeed, the \(n=1\) subring can become noncompact (\(\bar{\eta}_{1}\to\infty\)) when the emitting region has large scale-height (\(\vartheta_{1/2}\to\pi/2\)) and extends to extremely large radii. Nevertheless, for "realistic" scale-heights (\(\vartheta_{1/2}\lesssim\pi/10\)) and compactnesses (\(r_{\rm out}\lesssim 20M\)) [109, 110], we see that the \(n=1\) subring diameter closely tracks the shadow diameter, \(|\bar{\eta}_{1}|\lesssim 0.2\), similar to Fig. 8. The \(n=2\) subring, on the other hand, always remains compact, and we can read off the maximum variation in its diameter, \(\Delta\alpha_{1;2}\approx 0.7\,\mathrm{e}^{-\pi}\), from the dashed isolines, to be at most about \(3\%\). In the panel on the right, since we have chosen a large outer boundary to report the maximal variation in the subring widths, we find that the variation with the location of the inner boundary is muted. As with the left panel, and unsurprisingly, we find that the geometrical-thickness of the emitting region plays an important role in determining the subring widths. Again, the \(n=1\) subring width can be infinite, whereas the variation in the \(n=2\) width is extremely small, \(\approx 0.03\) to \(0.12M\). For the realistic emitting region morphologies alluded to above, however, the maximal \(n=1\) subring width is approximately \(w_{1}\lesssim\eta_{\rm ps}\bar{\eta}_{1}\approx\sqrt{27}M\times 0.2\approx 1M\). Furthermore, the color bars in each panel show how the lensing Lyapunov exponent can realistically be obtained with an error of \(\lesssim 10\%\) by comparing the \(n=1\) and \(2\) subring diameters (left) or widths (right). The red squares show the morphological parameters for a Novikov-Thorne thin-disk, \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2}\}=\{r_{\rm ISCO},\infty,0\}\), and the cyan circles show the same for a Bondi-Michel spherical emission region, \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2}\}=\{r_{\rm h},\infty,\pi/2\}\). Finally, since radio interferometers are sensitive to angular sizes in practice, we note that a ring of width \(w_{n}\approx 1M\) has an angular thickness of one angular gravitational radius \(\theta_{\rm g}=GM/c^{2}D=M/D\) on the sky. For M87\({}^{*}\) and for Sgr A\({}^{*}\), these have been inferred by the EHT to be \(\theta_{\rm g}=3.8^{+0.4}_{-0.4}\,\mu\)as [1] and \(\theta_{\rm g}=4.8^{+1.4}_{-0.7}\,\mu\)as [7] respectively.

Putting everything from Sec. II together, we find that the median subring diameters and widths in static and spherically-symmetric spacetimes are roughly independent of the observer inclination. Furthermore, both of these characteristics depend acutely on the geometrical thickness of the emitting region: Thicker emitting regions generically lead to larger and wider photon rings. While larger emitting regions obviously cast larger photon subrings on the image plane, subring diameters are fairly insensitive to the exact location of the outer boundary of the emitting region if it extends over a large volume (\(r_{\rm out}\gg 10^{2}M\)). This is summarized succinctly in Fig. 10 for easy access. Finally, the subring widths also depend on the location of the inner boundary of the emitting region.

Figure 10: We summarize here, for easy access, the variation in the \(n=1\) subring radius in the top row with varying sizes \(r_{\rm out}\) of geometrically-thin (left panels) and geometrically-thick (right panels) emission disks. The bottom row shows the variation in the conformal radius, with smaller conformal radii indicating that the subrings lie closer to the shadow boundary, which is determined purely by the spacetime. The panels on the left show the impact of varying observer inclination, whereas the ones on the right focus on varying scale-heights of the emission disk. We highlight the importance of the scale-height in such considerations by choosing identical color bars across columns.

## III Photon Rings in the Images of Static Black Holes and Tests of the Spacetime Metric

We have thus far modeled the impact of varying the emitting region morphology on the observed photon rings for a Schwarzschild BH spacetime rather exhaustively. We will now consider the variation in the characteristics of photon subrings due to variations in the spacetime geometry. We restrict our considerations to BH spacetimes and, to enable a systematic study, we employ a parametrization framework. In particular, we will use here the Rezzolla-Zhidenko (RZ; [74]) parametrization scheme, but our analysis can easily be extended to other popular parametrization frameworks [114; 115]. A number of well-known static BH spacetimes, which arise as solutions in distinct alternative theories of gravity (and fields), have been approximated to very high accuracy using a small number (\(\lesssim\!11\)) of RZ metric deviation, or expansion, parameters [116]. This is possible since the RZ framework exploits the fantastic convergence properties of Padé approximants. The ambit of the RZ framework has also been extended therein to arbitrary static spacetimes (including non-BHs). Furthermore, it has also been demonstrated that, when using it to approximate observables, such as the shadow size, of known solutions, the accuracy required to enable comparisons against EHT measurements can be achieved with even fewer (\(\lesssim\!3\)) RZ parameters [117].
Recently, we have used the RZ metric as a parametric metric and systematically explored the impact of varying the BH spacetime geometry on the observed image, when using a simple Bondi-Michel-like spherical accretion-emission model [30]. When using the RZ metric as a parametric metric, as we shall do below, constraints need to be imposed on the ranges of the theoretically-permissible RZ metric deviation parameters [74], which depend, in general, on the family \(\mathcal{M}\) of RZ metric in use [30]. Since we have already discussed at length the impact of modifying the emission morphology on the properties of the photon ring, here we will use a fiducial configuration of a moderately geometrically-thick disk with morphological parameters \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2},\dot{\iota}\}=\{r_{\rm h},3r_{\rm ISCO},\pi/10,0\}\). Since the inferences we make below vis-a-vis the spacetime geometry may be biased by this choice of fiducial morphology, we also consider the case of Novikov-Thorne thin-disks in Appendix B to allay such concerns.

Following [30], we will consider three classes of two-parameter RZ BH metric families, \(\{\mathcal{M}(a_{0},a_{1}),\mathcal{M}(\epsilon,a_{1}),\mathcal{M}(\epsilon,a_{0})\}\), all with \(-g_{tt}g_{rr}=g(r)=1\) and with \(a_{2}=0\). The metric functions in Ref. [74] are related to the ones used here simply as \(N^{2}(r)=f(r)\) and \(B^{2}(r)=g(r)\). The remaining metric functions for this family of BHs are given as \(h(r)=r^{2}\) and

\[f(r)=1-\frac{2M}{r}+\frac{4a_{0}}{(1+\epsilon)^{2}}\frac{M^{2}}{r^{2}}+\frac{8(\epsilon-a_{0}+a_{1})}{(1+\epsilon)^{3}}\frac{M^{3}}{r^{3}}-\frac{16a_{1}}{(1+\epsilon)^{4}}\frac{M^{4}}{r^{4}}\,, \tag{66}\]

where \(M\) is the ADM mass of the BH spacetime. For the \(\mathcal{M}(a_{0},a_{1})\) family of BH models, we set \(\epsilon=0\), and similarly set \(a_{0}\) or \(a_{1}\) to vanish for the other families. The outermost horizon \(r=r_{\rm h}\) is located at [74],

\[r_{\rm h}:=\frac{2M}{1+\epsilon}\,. \tag{67}\]

We emphasize that the condition that \(r=2M/(1+\epsilon)\), with \(\epsilon>-1\), locates the largest root of \(f(r)\) automatically imposes the aforementioned nontrivial constraints on the RZ parameter space. Furthermore, these spacetimes contain strong curvature singularities at their centers (\(r=0\)) and are regular everywhere else [30]. Of all RZ BHs with the same mass, only those for which \(\epsilon=0\) have the same horizon size as the Schwarzschild BH, and the size of the horizon is controlled solely by the parameter \(\epsilon\). The first post-Newtonian (PN) parameter is determined by both \(\epsilon\) and \(a_{0}\) [30]. Moreover, this "1PN" parameter determines a particular combination of the parametrized post-Newtonian (PPN) parameters, \(\beta_{\rm PPN}-\gamma_{\rm PPN}=2a_{0}/(1+\epsilon)^{2}\). For \(b_{2}=0\) (\(B=1\) assures this), we obtain \(\gamma_{\rm PPN}=1\), in which case constraints obtained by solar system measurements [118] on \(\beta_{\rm PPN}\) can be translated into a constraint on these RZ parameters, \(|2a_{0}/(1+\epsilon)^{3}|\lesssim 10^{-4}\). Thus, finding similar constraints on \(\beta_{\rm PPN}\) via black hole imaging measurements in the strong gravity near supermassive UCOs can help us compare the strength of the obtained constraints across many orders of magnitude in field strength [12]. Finally, the higher-order parameter \(a_{1}\) only affects higher PN coefficients.

Since, as can be expected from the discussion above in Sec. II, the diameters of the photon subrings cast by these moderately-thick emission disks on the observer's sky are tied closely to the shadow diameters of the RZ BHs, whose variations have been reported in Fig. 4 of [30], we focus here, for the sake of brevity, exclusively on their widths instead. In Fig. 11, we show the variation in the widths of the first three subrings with varying metric deviation parameters for the aforementioned fiducial emission region morphological parameters and families of RZ BHs. We uniformly sample the parameter spaces with a resolution of \(0.01\) for the \(\mathcal{M}(a_{0},a_{1})\) RZ BH models (yielding \(\approx\!35000\) different grid points), \(0.01\) for the \(\mathcal{M}(\epsilon,a_{1})\) RZ BH models (\(\approx\!30000\) grid points), and \(0.015\) for the \(\mathcal{M}(\epsilon,a_{0})\) RZ BH models (\(\approx\!13000\) grid points), to ensure convergence. The hatched regions in all panels are disallowed by the EHT \(1\sigma\) shadow size measurement for Sgr A\({}^{*}\) [12]. As noted above, to mitigate biases that may be introduced in our broad inferences by fixing the emitting region morphology, we construct in Appendix B an equivalent figure for the case of Novikov-Thorne thin disks, \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2},\dot{\iota}\}=\{r_{\rm ISCO},\infty,0,0\}\), as well. There, therefore, the inner and outer boundaries of the disks are different, as are the scale-heights. The morphology of the photon ring is also different, since the subrings cast by NT thin-disks lie entirely outside their shadow boundaries in general.

Figure 11: _Variation in subring widths of moderately geometrically-thick emission disks with spacetime geometry for a face-on observer._ The top-left panel shows the variation in the widths of the first photon subring over a BH parameter space for a fixed emission region morphology. The hatched regions, demarcated by the dot-dashed lines, are disallowed by the \(1\sigma\) shadow size measurement of Sgr A\({}^{*}\) that has been inferred by the EHT (see eq. 12 of [12]; see [30] for the M87\({}^{*}\) disallowed band). It is clear that the shadow size measurement generically imposes nontrivial constraints on the BH parameter spaces. However, these extend to infinity in certain directions. This panel demonstrates clearly how, with prior (theoretical) knowledge of the morphology of the emitting region, an additional measurement of the width of the first subring would further reduce the allowed band of BH parameters. In particular, due to the subring width isocontours being nonparallel to the shadow size isocontours, the region of allowed metric parameters can become compact. Across the panels, we show the variation in the (scaled) widths of the first three subrings for emission disks with morphological parameters \(\{r_{\rm in},r_{\rm out},\vartheta_{1/2},\dot{\iota}\}=\{r_{\rm h},3r_{\rm ISCO},\pi/10,0\}\) in solid, dashed, and dotted lines respectively, with changing metric deviation parameters for three different families of Rezzolla-Zhidenko BHs. All of these BHs have the same mass \(M\); \(\epsilon\) exclusively determines the size of the event horizon as \(r_{\rm h}=2M/(1+\epsilon)\), \(a_{0}\) controls the magnitude of deviation of the first post-Newtonian (PN) parameter, and \(a_{1}\) influences only higher-order PN parameters (cf. [30]). The top-right panel shows BHs that have the same horizon size as that of the Schwarzschild BH, \(r_{\rm h}=2M\). The bottom-left panel shows BHs that have different horizon sizes, all satisfying the solar system 1PN constraints (since \(a_{0}=0\)). The bottom-right panel tests for the violation of the 1PN constraints in strong gravity for BHs of varying horizon sizes. In all panels, the Schwarzschild BH is located by the white cross, present at \((0,0)\). Finally, the possibility of inferring the lensing Lyapunov exponent from a joint measurement of multiple subring widths is clear to see from the color bars. We generically find an error of \(\lesssim 10\%\) in extracting the lensing Lyapunov exponent across all BH parameters for this configuration of the emission disk, which is motivated by GRMHD simulations (see, e.g., Ref. [109]). See also the related figure 14.

The variations in the widths of the \(n=1\) subrings across all panels of Fig. 11 fall roughly within the range \(0.3M\lesssim w_{1}\lesssim 2.0M\). From the top-left panel, it appears that the subring widths depend more sensitively on \(a_{0}\) as compared to \(a_{1}\), and increase with decreasing \(a_{0}\). Similarly, from the top-right panel, it appears that the subring widths depend more sensitively on \(\epsilon\) as compared to \(a_{1}\). Finally, from the bottom-left panel, it appears that \(\epsilon\) plays a marginally stronger role than \(a_{0}\) in determining the subring widths. These trends can be used, in principle, to infer constraints on the PN parameters. It is also apparent that for extremely large BHs (\(\epsilon\to-1\); see eq. 67), \(\epsilon\) plays a dominant role, with larger BHs casting wider subrings. For small deviations in the size of the BH event horizon (\(\epsilon\approx 0\)), this trend seems to reverse: Smaller BHs cast wider subrings. These are, of course, only rough trends, and establishing a serious link between the horizon size and the subring widths will require a careful disentangling of the possible confounding effects, including but not limited to (a) the choice of parametrization scheme and (b) the fixing of the emitting region morphological parameters. The latter concern may be somewhat alleviated by comparing against Fig. 14, but establishing such a link rigorously is beyond the scope of the present work. We also show in the bottom row of Fig. 14 the variation in the radii of the photon sphere and of the timelike innermost stable circular orbits for these BH models, which are determined purely by the \(tt-\)component of the metric tensor for static and spherically-symmetric spacetimes in areal-polar coordinates (see, e.g., [116]). As discussed in Fig. 11, if the width of the lowest-order subring can be extracted from black hole imaging observations, combining it with the shadow size measurement can already yield a new and precise null test of the spacetime geometry. Such a metric test would allow us to break persisting degeneracies in such (two-dimensional) BH parameter spaces, which generically remain when using only the shadow size measurement (see, e.g., [51, 30]). Finally, we also establish that it is possible to obtain accurate (with an error \(\lesssim 10\%\)) inferences of the lensing Lyapunov exponent, which is determined purely by the spacetime geometry, by measuring the widths of a pair of subrings. As we shall see below, this sets up yet another test of the spacetime geometry. In perhaps the central figure of this work, we now show in Fig. 12 the variation in the purely metric-dependent observables with varying spacetime geometry.
We define the three deviation parameters as follows,

\[\delta=\frac{d_{\rm sh}}{d_{\rm sh;Schw}}-1=\frac{\eta_{\rm ps}}{\sqrt{27}M}-1\,,\quad\delta_{\gamma}=\frac{\gamma_{\rm ps}}{\gamma_{\rm ps;Schw}}-1=\frac{\gamma_{\rm ps}}{\pi}-1\,,\quad\delta_{t}=\frac{t_{\ell;{\rm ps}}}{t_{\ell;{\rm ps};{\rm Schw}}}-1=\frac{t_{\ell;{\rm ps}}}{\sqrt{27}M}-1\,.\]

The deviation parameter \(\delta\) measures fractional deviations in the shadow diameter of an arbitrary BH from the Schwarzschild value (see, e.g., [12]). This is not to be confused with the rotation parameter in Refs. [62, 68]. The deviation parameters \(\delta_{\gamma}\) and \(\delta_{t}\) capture the fractional deviations in the lensing Lyapunov exponent \(\gamma_{\rm ps}\) and the Lyapunov time \(t_{\ell;{\rm ps}}\) of a BH from the Schwarzschild values. The solid contour lines in Fig. 12 then show the fractional variation in the lensing Lyapunov exponent \(\delta_{\gamma}\), whereas the dashed isocontours show the fractional variation in the shadow size \(\delta\). To avoid confusion, we do not show here the fractional variation in the Lyapunov time \(\delta_{t}\), which can potentially be extracted from lightcurves of hotspots orbiting a UCO, but note that this is an equally promising observable. This will be explored in greater detail elsewhere. The isocontours corresponding to these different observables do not overlap, and a measurement of any pair of these can constitute a null test of the Schwarzschild metric in strong gravity of unprecedented precision. This is especially true when considering the variation in the horizon sizes of nonspinning BHs (see the top-right and bottom-left panels). As noted above, the EHT measurements of the shadow sizes of M87\({}^{*}\) and Sgr A\({}^{*}\) significantly constrain the range of metric deviation parameters, as indicated by the \(\delta=0.05\) and \(-0.14\) lines for the latter (see eq. 12 of [12]). However, these allowed bands of the parameter spaces remain noncompact (see also Refs. [12, 51, 32]). For example, it may be possible for \(a_{0}\) to take unboundedly large values, \(a_{0}\to-\infty\) (\(\mathscr{M}(a_{0},a_{1})\)) or \(a_{0}\to+\infty\) (\(\mathscr{M}(\epsilon,a_{0})\)). This would hold true also for the other metric deviation parameters (\(\epsilon>-1\) always for BHs, however), simply due to the fact that constraining two-dimensional parameter spaces with a single observable is not, in general, possible. An additional measurement of the lensing Lyapunov exponent, which is likely possible with a joint measurement of the first and second subring diameters or widths, can significantly reduce these allowed regions. Indeed, it can render the allowed regions on the (2d) metric deviation parameter spaces compact. Furthermore, if such a region does not contain the Schwarzschild values (\(\epsilon=a_{0}=a_{1}=0\)), then we obtain near-irrefutable evidence of the nonzero spin of the ultracompact object. Alternatively, this could potentially also be interpreted as a precise and accurate smoking-gun signature of a violation of general relativity, as well as of a number of alternative theories of gravity that admit the Schwarzschild BH metric as a solution.
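For concreteness, the sketch below evaluates, for an illustrative \(\mathcal{M}(a_{0},a_{1})\) RZ black hole (the parameter values are arbitrary), the horizon location (67), the photon sphere, the shadow size, and the lensing Lyapunov exponent, and from these the deviation parameters \(\delta\) and \(\delta_{\gamma}\); the expressions used for \(\eta_{\rm ps}\) and \(\gamma_{\rm ps}\) are eqs. 70 and 72 below, specialized to \(-g_{tt}g_{rr}=1\):

```python
import numpy as np
from scipy.optimize import brentq

M, eps, a0, a1 = 1.0, 0.0, 0.1, 0.0   # arbitrary illustrative RZ parameters

def f(r):
    # eq. 66; here -g_tt = f(r) and g_rr = 1/f(r), since g(r) = 1
    x = M / r
    return (1.0 - 2.0 * x + 4.0 * a0 / (1 + eps)**2 * x**2
            + 8.0 * (eps - a0 + a1) / (1 + eps)**3 * x**3
            - 16.0 * a1 / (1 + eps)**4 * x**4)

hf = lambda r: r**2 / f(r)             # -g_thth / g_tt

def d1(fun, r, e=1e-6):                # central first derivative
    return (fun(r + e) - fun(r - e)) / (2.0 * e)

def d2(fun, r, e=1e-4):                # central second derivative
    return (fun(r + e) - 2.0 * fun(r) + fun(r - e)) / e**2

r_h = 2.0 * M / (1.0 + eps)                               # eq. 67
r_ps = brentq(lambda r: d1(hf, r), 1.05 * r_h, 10.0 * M)  # photon sphere
eta_ps = np.sqrt(hf(r_ps))                                # shadow radius (eq. 70)
gamma_ps = np.pi * np.sqrt(0.5 * f(r_ps)**2 * d2(hf, r_ps))  # eq. 72, g_rr = 1/f

print(f"r_ps = {r_ps:.4f} M, delta = {eta_ps / np.sqrt(27.0) / M - 1.0:+.4f}, "
      f"delta_gamma = {gamma_ps / np.pi - 1.0:+.4f}")
```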
Thus, a measurement of the lensing Lyapunov exponent, which is likely possible with large baselines involving space-borne radio telescopes, can yield irrefutable evidence for the Kerr metric as an accurate descriptor of the spacetime geometries of astrophysical ultracompact objects, or provide insight into necessary modifications of general relativity in the strong-field regime. Furthermore, there is a close connection between the Lyapunov exponent and the quasinormal mode frequencies that gravitational wave detectors can access during the ringdown phase of a binary black hole merger.

We have also considered the case of RZ BHs that have identical \(tt-\)metric components to that of a Schwarzschild BH but differ in their \(rr-\)components, i.e. [116, 74],

\[g_{rr}=\left(1-\frac{2M}{r}\right)^{-1}\left[1+\frac{2b_{0}}{r}+\frac{4b_{1}}{r^{2}}\right]^{2}\,. \tag{68}\]

Such BHs have horizons located at \(r_{\rm h}=2M\), photon spheres located at \(r_{\rm ps}=3M\), and shadow sizes of \(\eta_{\rm ps}=\sqrt{27}M\). Due to the shadow size of a static and spherically-symmetric BH being determined solely by the \(tt-\)component of the metric in areal-radial coordinates (see, e.g., [32]), deviations in the \(rr-\)component of the metric due, e.g., to nonzero \(b_{0}\) or \(b_{1}\), remain unconstrained by current EHT measurements. The condition that \(B(r)\) be nonvanishing everywhere (\(r>0\)) imposes the condition that \(b_{1}>b_{0}^{2}/4\). This is necessary to ensure that the proper volume of space inside a finite coordinate radius \(r\) is nonzero everywhere (see, e.g., [30]). As is evident from the bottom-right panel there, inferring the lensing Lyapunov exponent grants us access to a fundamentally new aspect of the spacetime geometry of astrophysical BHs.

Figure 12: _Variation in the relevant purely spacetime-dependent quantities for parametrized BH metrics._ We show here the fractional deviations in the shadow size (\(\eta_{\rm ps}\)) and the lensing Lyapunov exponent (\(\gamma_{\rm ps}\)) from their Schwarzschild values of \(3\sqrt{3}M\) and \(\pi\) in dashed and solid lines respectively for four different families of RZ black holes. The fractional deviation of the shadow boundary has already been inferred (\(1\sigma\)) from the 2017 EHT image of M87\({}^{*}\) to be \(\delta=-0.01^{+0.17}_{-0.17}\) [6, 32] and from the 2017 EHT image of Sgr A\({}^{*}\) to be \(\delta=-0.04^{+0.09}_{-0.10}\) (see eq. 12, or Table 2, of [9]), albeit with disparate methodologies. The lensing Lyapunov exponent can potentially be inferred from future measurements of the diameters or widths of, or flux densities through, the first pair of photon rings cast by a source of emission close to a compact object. This may be possible from images of M87\({}^{*}\) or Sgr A\({}^{*}\) with space-borne radio baselines (see, e.g., [58]). Each of these observables involves a different combination of the metric functions and their derivatives (72), as evidenced by their respective isocontours in these panels, which intersect at unique locations generically in these BH metric deviation parameter spaces. This demonstrates, quite strikingly, how combining measurements of these observables, and potentially others (such as the Lyapunov time (51), which could be inferred from the light curves of infalling compact luminous sources [108]), would yield stringent and ultraprecise tests of general relativity in the strong-field regime. Each of these BH parameter spaces samples a qualitatively different type of metric deviation from the Schwarzschild spacetime (cf. [30]); e.g., while the \(a_{i}-\)parameters control the \(tt-\)component of the metric, the \(b_{i}-\)parameters control its \(rr-\)component. It will also be possible to test the compatibility of alternative static and spherically-symmetric (non-Schwarzschild) BH metrics with future constraints on the lensing Lyapunov exponent [50]. Since these spacetimes can arise as nonempty solutions to the Einstein equations or to the equations of motion of alternative theories of gravity and fields (see, e.g., Refs. [12, 33, 116] for further details), it will also be possible to test alternatives to GR.

## IV Summary and discussion

### Theoretical Underpinning

An emitting source that is present in the vicinity of an ultracompact object (UCO), such as a black hole, can cast multiple images on the celestial plane of an asymptotic, static observer, due to light bending by the UCO. The orbits of photons that connect the spatial locations of a pair of emitters and observers can be ordered by the size of their total path lengths \(\hat{\Delta}l\). Photons with larger path lengths take concomitantly longer times \(\hat{\Delta}t\) to arrive at the observer and also experience greater angular deflections \(\hat{\Delta}\vartheta\), due to gravitational lensing. Since all of the photon orbits that connect any such pair are spatially-planar (in static and spherically-symmetric spacetimes), we can order these by the orbital half-winding index \(n\), which is given as

\[n=\left\lfloor\hat{\Delta}\vartheta/\pi\right\rfloor\,, \tag{69}\]

with \(\left\lfloor\cdot\right\rfloor\) the usual floor function. The photon that undergoes a total lensing of \(0<\hat{\Delta}\vartheta_{0}<\pi\) has the smallest path length through spacetime and arrives earliest on the screen of the observer. This \(n=0\) photon forms the direct, or \(n=0\), image of the emitter and is only weakly-lensed. On the other hand, a higher-order (\(n\geq 1\)) photon undergoes strong gravitational lensing, \(n\pi<\hat{\Delta}\vartheta_{n}<(n+1)\pi\), and executes \(n\) half-loops around the central object. For spherically-symmetric spacetimes, we can define the \(+z-\)axis of the UCO-centered coordinate system as the direction pointing toward the observer, in which case photon orbits that connect emitters present at arbitrary locations in space and this polar observer are all meridional planar (constant-\(\varphi\)) orbits. The radius, or apparent impact parameter, \(\eta\) at which meridional photons arrive on the image plane is determined solely by the ratio of their conserved angular momentum \(p_{\vartheta}\) and energy \(|p_{t}|\), as \(\eta=p_{\vartheta}/|p_{t}|\). The photon sphere \(\mathrm{S}\) is a location in the bulk of space, at \(r=r_{\rm ps}\), that harbors photons moving on (unstable) circular orbits, and the shadow boundary curve (or equivalently the \(n=\infty\) critical curve) \(\mathscr{C}_{\infty}\) is its gravitationally-lensed projection on the image plane, which appears as a perfect circle for such an asymptotic, static observer, at a location \(\eta=\eta_{\rm ps}\).
These locations are determined purely by the spacetime metric and, in arbitrary spherical-polar coordinates, are given as

\[\partial_{r}\left.\left[\frac{g_{\vartheta\vartheta}}{g_{tt}}\right]\right|_{r=r_{\rm ps}}=0\;;\quad\eta_{\rm ps}=\left.\left[-\frac{g_{\vartheta\vartheta}}{g_{tt}}\right]^{1/2}\right|_{r=r_{\rm ps}}\,. \tag{70}\]

The vicinity of the photon sphere \(\delta\mathrm{S}\) (\(|\bar{r}|=|r/r_{\rm ps}-1|\ll 1\)) in the bulk is naturally a region of strong gravitational lensing, which photons forming higher-order (\(n\geq 1\)) images necessarily access before arriving on the image plane close to the shadow boundary, in \(\delta\mathscr{C}_{\infty}\) (\(|\bar{\eta}|=|\eta/\eta_{\rm ps}-1|\ll 1\)). This region \(\delta\mathscr{C}_{\infty}\) on the image plane of an observer is called the photon ring [58]. The total angular deflection experienced by photons that appear in the photon ring and the elapsed time between their emission and detection grow logarithmically respectively as (see Sec. I.2 and Appendix A),

\[\hat{\Delta}\vartheta\approx-\frac{\pi}{\gamma_{\rm ps}}\ln|\bar{\eta}|+\tilde{c}_{\vartheta}\;;\quad\hat{\Delta}t\approx-\frac{\pi\eta_{\rm ps}}{\gamma_{\rm ps}}\ln|\bar{\eta}|+\tilde{c}_{t}\;, \tag{71}\]

where the \(\tilde{c}\) are some constants, and \(\gamma_{\rm ps}\) is the lensing Lyapunov exponent [57; 58]. The Lyapunov time \(t_{\ell;{\rm ps}}\) is yet another, related, fundamental constant that determines the universal late-time luminosity decay of a compact source of emission that is falling into a compact object [108]. Both of these critical exponents are determined purely by the spacetime geometry as (see Secs. I.2, I.4),

\[\gamma_{\rm ps}=\pi\left.\left[\frac{g_{tt}}{2g_{rr}}\,\partial_{r}^{2}\left(\frac{g_{\vartheta\vartheta}}{g_{tt}}\right)\right]^{1/2}\right|_{r=r_{\rm ps}}=\pi\eta_{\rm ps}\kappa_{\rm ps}\left.\left[-\frac{g_{tt}}{g_{rr}}\right]^{1/2}\right|_{r=r_{\rm ps}}\,, \tag{72}\]
\[t_{\ell;{\rm ps}}=\left.\left[-\frac{g_{tt}^{2}}{2g_{rr}g_{\vartheta\vartheta}}\,\partial_{r}^{2}\left(\frac{g_{\vartheta\vartheta}}{g_{tt}}\right)\right]^{-1/2}\right|_{r=r_{\rm ps}}=\frac{\pi\eta_{\rm ps}}{\gamma_{\rm ps}}=\frac{1}{\kappa_{\rm ps}}\left.\left[-\frac{g_{rr}}{g_{tt}}\right]^{1/2}\right|_{r=r_{\rm ps}}\,, \tag{73}\]
\[\kappa_{\rm ps}=\left.\left[-\frac{1}{2g_{rr}g_{\vartheta\vartheta}}\,\partial_{r}^{2}\left(\frac{g_{\vartheta\vartheta}}{g_{tt}}\right)\right]^{1/2}\right|_{r=r_{\rm ps}}\,, \tag{74}\]

and can be written in terms of the (more fundamental) phase space Lyapunov exponent \(\kappa_{\rm ps}\), which determines the divergence of null geodesics at the photon sphere both in phase space as well as in spacetime (see Sec. I.4). That is, \(\kappa_{\rm ps}\) appears in the geodesic deviation equation for a congruence of photon orbits at the photon sphere (49). Furthermore, another equation from the phase space perspective shows how this exponent is also linked with the Raychaudhuri equation; thus, we expect that it characterizes the expansion rate of a meridional null congruence at the photon sphere. In this way, it becomes evident that these exponents measure aspects of the curvature tensor. For the Schwarzschild BH spacetime, these Lyapunov exponents take the values \(\kappa_{\rm ps}=1/(\sqrt{3}M)\), \(\gamma_{\rm ps}=\pi\), and \(t_{\ell;{\rm ps}}=3\sqrt{3}M\).
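As a quick numerical cross-check of eqs. 70 and 72-74, the following sketch evaluates them for the Schwarzschild metric in areal-polar coordinates and recovers the values just quoted:

```python
import numpy as np

M = 1.0
f    = lambda r: 1.0 - 2.0 * M / r   # -g_tt (Schwarzschild, areal coordinates)
g_rr = lambda r: 1.0 / f(r)
h    = lambda r: r**2                # g_thth

ratio = lambda r: -h(r) / f(r)       # g_thth / g_tt

def d2(fun, r, e=1e-4):              # central second derivative
    return (fun(r + e) - 2.0 * fun(r) + fun(r - e)) / e**2

r_ps = 3.0 * M
eta_ps = np.sqrt(h(r_ps) / f(r_ps))                                      # eq. 70
gamma = np.pi * np.sqrt(-f(r_ps) / (2.0 * g_rr(r_ps)) * d2(ratio, r_ps)) # eq. 72
t_ell = np.pi * eta_ps / gamma                                           # eq. 73
kappa = np.sqrt(-d2(ratio, r_ps) / (2.0 * g_rr(r_ps) * h(r_ps)))         # eq. 74

print(eta_ps / (np.sqrt(27.0) * M),      # -> 1
      gamma / np.pi,                     # -> 1
      t_ell / (3.0 * np.sqrt(3.0) * M),  # -> 1
      kappa * np.sqrt(3.0) * M)          # -> 1
```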
These equations (70, 72) are quite insightful: Together they demonstrate how the shadow radius (\(\eta_{\rm ps}\)) is sensitive to the first derivatives of the \(tt-\) and \(\vartheta\vartheta-\)components of the spacetime metric tensor, whereas either Lyapunov exponent (\(\gamma_{\rm ps}\) or \(t_{\ell;\mathrm{ps}}\)) captures aspects of the curvature at the photon sphere in the bulk. While these three gauge-invariant observables feature different combinations of the metric functions and their derivatives, only the lensing Lyapunov exponent and the Lyapunov time depend on the \(rr-\)component. This last statement warrants further emphasis. An arbitrary static and spherically-symmetric metric can be described by two functions \(\{g_{tt},g_{rr}\}\), best seen in areal-radial or "curvature" coordinates, where \(g_{\vartheta\vartheta}\) is fixed to \(g_{\vartheta\vartheta}=r^{2}\) (see, e.g., Refs. [116, 119]). In these coordinates, the locations of the photon sphere in the bulk, as well as of the shadow boundary on the image plane, are determined solely by the \(tt-\)component of the metric tensor and its derivatives (see eq. 70; see also [32]). Thus, conversely, a measurement of the shadow diameter of a static and spherically-symmetric spacetime can be used to impose constraints on its \(tt-\)component in areal-polar coordinates [32, 12]. Similarly, with future measurements of the lensing Lyapunov exponent \(\gamma_{\mathrm{ps}}\), we can expect to obtain new strong-field constraints on the \(rr-\)component of the metric tensor as well. As can be seen from eq. 71, to measure the lensing Lyapunov exponent is to quantify the effects of light-bending, as well as of the time-delays (33), experienced by photons that access the close vicinity \(\delta\mathrm{S}\) of supermassive ultracompact objects. Clearly, such a measurement would be the strong-field analogue of the solar system measurements of light-bending and of Shapiro time-delays. Naturally, from these, we should expect to obtain new strong-field constraints on \(\gamma_{\rm PPN}\), the parametrized post-Newtonian (PPN) parameter that controls deviations of \(g_{rr}\) from its Schwarzschild value (in isotropic coordinates; see, e.g., Ch. 4 of [120] and Sec. 4 of [118]).

### Observational Implications

Now that we have summarized our purely metric-dependent target observables, we discuss how we can infer these from "more direct" observables in practice. For a compact source of emission located at a polar angle \(0<\vartheta_{\rm e}<\pi\), we can express the additional deflection experienced by the order\(-(n+1)\) photon relative to the order\(-n\) one straightforwardly as (10), \(\hat{\Delta}\vartheta_{n+1}-\hat{\Delta}\vartheta_{n}=\pi+(-1)^{n}(\pi-2\vartheta_{\rm e})\), which, along with eq. 71, induces a scaling relation between the (fractional) radial locations of its higher-order images, \(\bar{\eta}_{n}=\eta_{n}/\eta_{\rm ps}-1\), as (see eq. 25),

\[\frac{\bar{\eta}_{n+1}}{\bar{\eta}_{n}}\approx\mathrm{e}^{-\gamma_{\rm ps}}\cdot\mathrm{e}^{\pm\gamma_{\rm ps}(2\vartheta_{\rm e}/\pi-1)}\;, \tag{75}\]

where the upper sign (\(+\)) applies for even orders \(n\) and vice versa. The equation above demonstrates the existence of a conformal scaling symmetry on the image plane of a static and spherically-symmetric spacetime, with the lensing Lyapunov exponent playing the role of the critical scaling exponent here (see also [60]).
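The scaling relation (75) is straightforward to invert for \(\gamma_{\rm ps}\) given a pair of consecutive fractional image radii and the emitter colatitude; the sketch below generates synthetic radii from eq. 75 itself (with arbitrary inputs) and recovers the exponent:

```python
import numpy as np

gamma_true, theta_e = np.pi, np.pi / 3   # arbitrary illustrative inputs
# synthetic fractional radii of the n = 1, 2 images from eq. 75; for odd n the
# lower sign applies, so eta_2/eta_1 = exp(-gamma * 2 theta_e / pi)
eta_1 = 3e-2
eta_2 = eta_1 * np.exp(-gamma_true * 2.0 * theta_e / np.pi)

gamma_hat = -(np.pi / (2.0 * theta_e)) * np.log(eta_2 / eta_1)
print(gamma_hat / np.pi)   # -> 1
```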
Due to the above scaling of image sizes, we can anticipate a similar scaling relation for the flux densities of images of successive orders (see, e.g., [53]). With sufficient knowledge of the inclination of the emitter relative to the observer, \(\vartheta_{\rm e}\), measuring the image plane radii or flux densities of higher-order images can yield an inference of the lensing Lyapunov exponent of the central object. Furthermore, with this scaling relation (75), we can also see from eq. 71 that the time delay between such images obeys a remarkable relation (see eq. 26),

\[\hat{\Delta}t_{n+1}-\hat{\Delta}t_{n}\approx\pi\eta_{\rm ps}[1\mp(2\vartheta_{\rm e}/\pi-1)]=t_{\rm d;ps}[1\mp(2\vartheta_{\rm e}/\pi-1)]\;, \tag{76}\]

where \(t_{\rm d;ps}=\pi\eta_{\rm ps}\) is a characteristic delay time equal to the half-orbital time of a photon moving on a planar bound orbit. Thus, measuring the time delay between the appearance of these images gives us an independent estimate of the shadow size of the UCO. Therefore, detections of higher-order images of a compact source of flux that is present in the vicinity of a supermassive ultracompact object open up interesting new windows into testing gravity and general relativity in the strong-field regime. The emergence of compact sources of flux transiently orbiting the central black hole is likely related to flaring events locally heating the accreting plasma [94, 98]. Such flares are frequently observed in Sgr A\({}^{*}\) across the electromagnetic spectrum [121, 122, 123, 124]. Supermassive compact objects, such as M87\({}^{*}\) or Sgr A\({}^{*}\), can also undergo accretion, in which case the hot accreting plasma present close by can act as an extended source of emission. Such extended sources cast higher-order images, each of which is simply the union of the higher-order images of the individual fluid elements, all of the same order. In general, these higher-order images are time-delayed, gravitationally-lensed projections of the plasma bulk state and appear as rings in \(\delta\mathscr{C}_{\infty}\) on the image plane (thus, photon "subrings"). The characteristic diameters \(\left<d_{n}\right>_{\psi}\) and the widths \(w_{n}\) of consecutive-order subrings obey the following approximate scaling relations (see Sec. II; see also [58]),

\[\frac{\left<d_{n+1}\right>_{\psi}/d_{\rm sh}-1}{\left<d_{n}\right>_{\psi}/d_{\rm sh}-1}=\frac{\alpha_{1;n+1}-1}{\alpha_{1;n}-1}\approx\mathrm{e}^{-\gamma_{\rm ps}}\;;\quad\frac{w_{n+1}}{w_{n}}\approx\mathrm{e}^{-\gamma_{\rm ps}}\;, \tag{77}\]

as anticipated from eq. 75. In the above, we have used \(\left<\cdot\right>_{\psi}\) to denote the characteristic diameter of a closed polar curve \(d(\psi)\) on the image plane, over the range of the image plane polar angle \(0\leq\psi<2\pi\). Here, inspired by [12], we use the median diameter as the characteristic diameter, and in Sec. II.1 introduce the subring diameter calibration factors \(\alpha_{1;n}:=\left<d_{n}\right>_{\psi}/d_{\rm sh}\), which succinctly characterize the impact, specifically, of non-gravitational degrees of freedom, such as the properties of the emitting source, on the sizes of the subrings. This is easily seen: For a given spacetime, since \(d_{\rm sh}\) is fixed, varying, e.g., the plasma velocity, emissivity, opacity, or magnetic field configuration will yield different \(\alpha_{1;n}\) in general.
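Similarly, relation (76) turns a measured delay between consecutive images into an estimate of the shadow radius; in the sketch below, the delay and the emitter colatitude are invented inputs:

```python
import numpy as np

theta_e = np.pi / 3   # assumed-known emitter colatitude (arbitrary here)
dt_21 = 11.0          # hypothetical measured delay Delta t_2 - Delta t_1, in units of M

# eq. 76 with n = 1 (odd), i.e., the lower sign: dt = pi eta_ps (2 theta_e / pi)
eta_ps_hat = dt_21 / (2.0 * theta_e)
print(eta_ps_hat, np.sqrt(27.0))   # compare with the Schwarzschild value sqrt(27) M
```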
It is straightforward also to show that the flux densities \(F_{n}\) through the subrings obey a nice scaling relation (see Sec. I.3; See also Refs. [58, 53]), \[\frac{F_{n+1}}{F_{n}}\approx\frac{w_{n+1}}{w_{n}}\approx\mathrm{e}^{-\gamma_{\mathrm{ps}}}\;. \tag{78}\] Thus, inferring the lensing Lyapunov exponent \(\gamma_{\mathrm{ps}}\) in particular from observations relies on being able to detect a pair of subrings and to measure direct observables such as their diameters, widths, or flux densities. While this is a challenging problem, recent work suggests that this may be achievable [125; 58; 126]. As noted above, the characteristics of photon subrings, such as their diameters and widths, depend not only on the spacetime metric but also on the observer viewing angle, as well as on the properties of the emitting region. To be able to ascertain the effects of gravity, we must naturally have sufficient control over the non-gravitational degrees of freedom. Therefore, to enable an extensive study of the impact of each of these aspects, both separately and when combined, we have employed a simple conical-torus morphological model to characterize the latter. This allows us to vary the locations of the inner \(r_{\rm in}\) and outer \(r_{\rm out}\) boundaries of the emitting region, its geometrical-thickness \(\vartheta_{1/2}\), as well as its inclination \(\iota\) relative to the observer, for any spacetime geometry. By employing such simple semianalytic morphological models, we are able to understand, rather transparently, how the structure of the photon ring changes (i.e., the variations in the subring diameters and widths, as well as the accuracy of the conformal scaling symmetry that relates subrings) with the gastrophysical degrees of freedom. We summarize the strategy adopted here as follows. In Sec. II we fix the spacetime geometry to be given by the Schwarzschild BH metric, and consider the variations in the first four higher-order images of geometrically-thin disks for the case of an observer lying along the axis of the disk in Sec. II.2 and for general viewing angles in Sec. II.3. In Sec. II.4, we explore the impact of varying the geometrical-thickness of the emitting region for an observer present along the pole. Finally, in Sec. III, we adopt a fiducial morphological model for the emitting region, roughly consistent with the output of state-of-the-art GRMHD simulations (see, e.g., [109]), and allow the spacetime geometry to vary instead. In Appendix B, we also report the outcome of such an analysis for geometrically-thin disks in non-Schwarzschild BH spacetimes. While the observed spectra of quasars are well explained by the cold, optically-thick, geometrically-thin Novikov-Thorne accretion disk models [71] that terminate well outside the photon sphere, the accretion flows associated with EHT targets such as M87\({}^{*}\) and Sgr A\({}^{*}\), whose photon rings we can pragmatically hope to detect, have significantly different properties. These are hot, optically-thin at sub-\(\rm mm\) wavelengths, and typically geometrically-thick, and we receive significant emission from close to the photon sphere (see, e.g., Refs. [5; 10; 11; 12]). Thus, while the properties of the subrings cast by geometrically-thin disks provide a number of useful insights [65; 69; 70], it is important to examine the impact of the nonzero scale-height of the emitting region.
Here we have allowed these morphological parameters to take their maximally permissible ranges so we can understand the maximal possible variations of the photon ring, and thus understand the limits of our expectations. We now briefly summarize our qualitative results below and direct the reader to Sec. II for our detailed quantitative findings. We believe these quantitative estimates will be particularly useful as simple sanity checks when developing algorithms aimed at extracting subring characteristics. For an observer lying along the space-like normal to the disk mid-plane (relative inclinations of \(\iota=0\) or \(\pi\); "face-on"), the photon subrings are perfect annuli on the image plane. Unsurprisingly [70], larger disks (larger \(r_{\rm out}\)) cast concomitantly larger subrings (larger \(d_{n}\)) and wider disks (larger \(r_{\rm out}-r_{\rm in}\)) cast wider subrings (larger \(w_{n}\)). Furthermore, thicker disks (larger \(\vartheta_{1/2}\)) generically cast both larger and wider subrings. For observers present at arbitrary inclinations, naturally, the subrings are no longer circular on the image plane. Instead, their outer and inner edges are described by closed polar curves \(\eta_{\rm out}(\psi)\) and \(\eta_{\rm in}(\psi)\) respectively. For geometrically-thin disks, we find in Sec. II.3 the median subring diameter \(\left<d_{n}\right>_{\psi}=2\left<\eta_{n;{\rm out}}\right>_{\psi}=d_{\rm sh}\alpha_{1;n}\) to be independent of the viewing angle (cf. [69]), reminiscent of the shadow diameter \(d_{\rm sh}\) itself, which in static and spherically-symmetric spacetimes is likewise viewing-angle independent. We also discuss how the \(n=1\) subring can become non-compact, and as large as the direct or primary (\(n=0\)) image, if the observer receives substantial emission from large distances (large \(r_{\rm out}\)) along their line-of-sight. This may be possible, e.g., if the accretion flow is very geometrically-thick (nearly spherical), if the observer is lying in the plane of the accretion disk, or if the observer is present along the jet of an active galactic nucleus (the likeliest of these possibilities; See, e.g., [127]). For realistic values of the geometrical-thickness (\(\vartheta_{1/2}\lesssim\pi/10\); See, e.g., [109; 110]) however, we find that the fractional deviation of the \(n=1\) subring diameter \(\alpha_{1;1}-1\) in a Schwarzschild BH spacetime is roughly within \(-0.01\lesssim\alpha_{1;1}-1\lesssim 0.3\). For these scale heights, we find that the width of the first subring would be smaller than about a gravitational radius, \(w_{1}\lesssim 1M\), i.e., the effective angular resolution required to resolve its width, in the best case scenario, is comparable to the angular gravitational radius \(\vartheta_{\rm g}=GM/(c^{2}D)\), where \(D\) is the distance to the compact object. For M87\({}^{*}\) and Sgr A\({}^{*}\), these have been inferred from the 2017 EHT observations to be \(3.8^{+0.4}_{-0.4}\mu\)as [1] and \(4.8^{+1.4}_{-0.7}\mu\)as [7] respectively. Thus, as noted in [58], precise measurements of the diameter and width of the \(n=1\) subrings for these objects may be possible with a high-frequency ground array or with radio dishes in low Earth orbits.
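To translate these width estimates into angular scales on the sky, one can combine them with the angular gravitational radii quoted above; a small sketch follows (the \(\mathrm{e}^{-\pi}\) demagnification again assumes the Schwarzschild value of \(\gamma_{\rm ps}\)):

```python
import numpy as np

theta_g = {"M87*": 3.8, "Sgr A*": 4.8}  # angular gravitational radius (micro-as)
w1_in_M = 1.0                           # w_1 <~ 1 M for realistic scale heights

for source, tg in theta_g.items():
    w1 = w1_in_M * tg                   # n = 1 subring width, micro-arcseconds
    w2 = w1 * np.exp(-np.pi)            # eq. 77 demagnification (Schwarzschild)
    print(f"{source}: w_1 <~ {w1:.1f} uas, w_2 <~ {w2:.2f} uas")
```

The order-of-magnitude gap between \(w_{1}\) and \(w_{2}\) makes concrete why the \(n=1\) subring may be within reach of ground or low-Earth-orbit baselines while the \(n\geq 2\) subrings require substantially longer ones.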
The Rezzolla-Zhidenko (RZ) parametrized BH models can cast wider (and larger) subrings on the image plane: For morphological parameters of the emitting region (and, in particular, its geometrical-thickness) comparable to those obtained for the Schwarzschild BH from GRMHD simulations, and for moderate ranges of the metric deviation parameters (\(|\epsilon,a_{0},a_{1}|\leq 1\)), the subring widths can be larger by a factor of two. Therefore, observations of the first photon subring are not only feasible but can potentially yield stronger constraints on the magnitude of deviations from the Kerr metric. Higher-order (\(n\geq 2\)) subrings always remain compact and are tightly tied to the shadow boundary, independently of the morphology of the emitting region. The fractional deviation of the \(n=2\) subring diameter from the shadow diameter in a Schwarzschild spacetime, in particular, is an order of magnitude smaller than the \(n=1\) diameter variation, \(|\alpha_{1;2}-1|\lesssim 3\times 10^{-2}\). In fact, for realistic scale-heights, this drops by yet another factor of \(10\). Measurements of increasingly higher-order subring diameters yield increasingly accurate and precise estimates of the shadow diameter, with \(|\alpha_{1;n}-1|\) decreasing by about a factor of \(10\) per increase in image order \(n\). This striking independence of the higher-order subring diameters of the size and, more generally, the morphology of the emitting region can be taken to mean that a (feasible; [63; 64; 68; 58]) measurement of the \(n=2\) subring diameter, in particular, will yield the first "direct" measurement of the shadow size of astrophysical ultracompact objects such as M87\({}^{*}\) and Sgr A\({}^{*}\), i.e., with no additional calibration or modeling necessary. We find the maximal widths of the \(n=2\) and \(n=3\) subrings in a Schwarzschild spacetime to be about \(w_{2}\lesssim 0.17M\) and \(w_{3}\lesssim 0.01M\) respectively. Thus, resolving both the diameters and widths of the \(n=2\) and \(n=3\) subrings will likely require even larger baselines [58]. Finally, to quantify our ability to extract, in principle, the lensing Lyapunov exponent from a joint measurement of either the diameters or the widths of a pair of subrings, we introduced in Sec. II.1 two error functions. We find that, for realistic scale heights of accretion flows in a Schwarzschild BH spacetime, a joint measurement of the diameters of the first pair of subrings yields the lensing Lyapunov exponent with an error of \(\lesssim 20\%\). With a joint measurement of their widths instead, this error is similar or smaller. For realistic values of the outer boundary of the emitting region as well (\(r_{\rm out}\approx 20M\)), this error becomes even smaller (\(\lesssim 5\%\); see Fig. 11). Moreover, we also obtain comparably small errors when attempting to obtain the lensing Lyapunov exponent for about \(80,000\) different BH models, and for two different morphological models for the emitting region; for thick disks, see Fig. 11, and for thin disks, see Fig. 14. There are two obvious limitations to our work. First, we have only considered nonspinning spacetimes here. However, complementary work (see, e.g., Refs. [65; 31; 58; 67]) indicates that many of the qualitative features obtained here should carry forward to the case of stationary and axisymmetric spacetimes.
Nevertheless, there are important differences when considering spinning objects: Photon orbits are no longer generically planar, and we should expect the number of independent scaling exponents to increase with the number of orbital degrees of freedom (see, e.g., Refs. [60; 62; 68]). Second, we have not explicitly considered the full variations in the non-gravitational degrees of freedom that are possible. Nonetheless, barring the impact of optical depth (but see also [128; 129]), we do not see this as a significant limitation, due to the broad analysis presented in Secs. I.2, I.3, and Appendix A. In conclusion, the EHT has already demonstrated how tests of the spacetime metric are possible with the inferred shadow sizes of the astrophysical objects M87\({}^{*}\)[6; 32; 33; 51] and Sgr A\({}^{*}\)[12]. The next leap for such tests with black hole imaging would be to detect the time-delayed, self-similarly nested higher-order images cast by the accompanying hot accretion flow, which appears to be feasible [62; 63; 64; 58]. We have demonstrated here how future inferences of the Lyapunov exponents for these supermassive ultracompact objects, along with the (existing) inferences of their shadow sizes, will yield stringent and unprecedented tests of the spacetime geometry, as well as of the underlying theory of gravity and fields. ###### Acknowledgements. It is a pleasure to thank Dominic Chang for several insightful discussions and suggestions. We are also grateful to Koushik Chatterjee, Ramesh Narayan, Michael Johnson, and Alejandro Cruz-Osorio for useful suggestions. PK acknowledges support in part from grants from the Gordon and Betty Moore Foundation and the John Templeton Foundation to the Black Hole Initiative at Harvard University, and from NSF award OISE-1743747. PK and LR acknowledge support from the ERC Advanced Grant 'JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales' (grant no. 884631).
2309.16335
End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks
Background: Atrial fibrillation (AF) is one of the most common cardiac arrhythmias; it affects millions of people each year worldwide and is closely linked to an increased risk of cardiovascular diseases such as stroke and heart failure. Machine learning methods have shown promising results in evaluating the risk of developing atrial fibrillation from the electrocardiogram. We aim to develop and evaluate one such algorithm on the large CODE dataset collected in Brazil. Results: The deep neural network model identified patients without indication of AF in the presented ECG but who will develop AF in the future with an AUC score of 0.845. From our survival model, we find that patients in the high-risk group (i.e. with the probability of a future AF case being greater than 0.7) have a 50% chance of developing AF within 40 weeks, while patients belonging to the minimal-risk group (i.e. with the probability of a future AF case being less than or equal to 0.1) have a more than 85% chance of remaining AF-free up until after seven years. Conclusion: We developed and validated a model for AF risk prediction. If applied in clinical practice, the model has the potential to provide valuable information in decision-making and patient management processes.
Theogene Habineza, Antônio H. Ribeiro, Daniel Gedon, Joachim A. Behar, Antonio Luiz P. Ribeiro, Thomas B. Schön
2023-09-28T10:47:40Z
http://arxiv.org/abs/2309.16335v1
# End-to-end Risk Prediction of Atrial Fibrillation from the 12-Lead ECG by Deep Neural Networks ###### Abstract **Background:** Atrial fibrillation (AF) is one of the most common cardiac arrhythmias; it affects millions of people each year worldwide and is closely linked to an increased risk of cardiovascular diseases such as stroke and heart failure. Machine learning methods have shown promising results in evaluating the risk of developing atrial fibrillation from the electrocardiogram. We aim to develop and evaluate one such algorithm on the large CODE dataset collected in Brazil. **Methods:** We used the CODE cohort to develop and test a model for AF risk prediction for individual patients from the raw ECG recordings without the use of additional digital biomarkers. The cohort is a collection of ECG recordings and annotations by the Telehealth Network of Minas Gerais, in Brazil. A convolutional neural network based on a residual network architecture was implemented to produce class probabilities for the classification of AF. The probabilities were used to develop a Cox proportional hazards model and a Kaplan-Meier model to carry out survival analysis. Hence, our model is able to perform risk prediction for the development of AF in patients without the condition. **Results:** The deep neural network model identified patients without indication of AF in the presented ECG but who will develop AF in the future with an AUC score of 0.845. From our survival model, we find that patients in the high-risk group (i.e. with the probability of a future AF case being greater than 0.7) have a 50% chance of developing AF within 40 weeks, while patients belonging to the minimal-risk group (i.e. with the probability of a future AF case being less than or equal to 0.1) have a more than 85% chance of remaining AF-free up until after seven years. **Conclusion:** We developed and validated a model for AF risk prediction. If applied in clinical practice, the model has the potential to provide valuable information in decision-making and patient management processes. **Keywords:** Atrial fibrillation; Deep neural network; ECG; Risk prediction; Survival analysis ## Introduction Atrial fibrillation (AF) is progressively more common worldwide within an ageing population [1]. It is associated with adverse outcomes such as cognitive impairment and can lead to more severe heart diseases if not treated early. Previous studies have found a close link between AF and increased risk of death [2] and heart-related complications, such as stroke and heart failure [3, 4, 5]. Good assessment of patient risk can allow more frequent monitoring and facilitate early diagnosis. Early detection of the problem might allow anticoagulation treatment to be started, helping to prevent death and disability. The electrocardiogram (ECG) is a convenient, fast, and affordable option used at many hospitals, clinics, and primary and specialised health centres to diagnose many types of cardiovascular diseases. Over the past 50 years, computer-assisted tools have complemented physician interpretation of ECGs. Notably, deep learning has emerged as a promising avenue to enhance automated ECG analysis, showing impressive strides in recent years [6, 7, 8]. Prior studies have predominantly explored the use of deep neural networks (DNNs) to automatically detect AF and other cardiac arrhythmias from standard 12-lead ECGs [9, 10, 11].
This advancement holds valuable implications for clinical decision support, offering auxiliary tools for diagnosing cardiac arrhythmias. However, while achieving consistent diagnoses in patients, even among those with established conditions, is an essential aspect, there remains a parallel need for systems that provide timely and early warnings for patients who may develop AF in the future. Combining the features obtained from DNNs with survival methods is a promising approach for accurate risk prediction. Recent studies explored this approach for the risk prediction of heart diseases [12] and mortality [13, 14]. The risk prediction of AF from the 12-lead ECG has been studied before with different approaches and varying degrees of success. Raghunath et al. [15] used DNNs on a dataset collected over 30 years to directly predict new-onset AF within one year and identified the patients at risk of AF-related stroke among those predicted to be at high risk of impending AF. The authors in [16] focused on predicting future AF incidents and the time to the event, but used a DNN model trained on a different dataset, and their survival analysis spanned a longer period. From our group, Zvuloni et al. [17] performed end-to-end AF risk prediction from the 12-lead ECG but did not go further to implement survival modelling and estimate the time to the AF event. Further, Biton et al. [18] present a model that used digital biomarkers in combination with deep representation learning to predict the risk of AF. Their model uses a random forest classifier including features from a pre-trained DNN, where the weights are kept fixed from a different ECG classification task. The aim of our work is to bridge the gap between these studies. While these previous studies focused either on directly predicting future AF cases within a given time frame or incorporated DNNs trained on disparate datasets for survival modelling, there exists no comprehensive approach that combines the capabilities of DNNs in AF diagnosis with the precision of survival analysis techniques for estimating time-to-event outcomes. In contrast, our approach combines both of these aspects: firstly, by employing an end-to-end trained DNN to assess the risk of AF development, and secondly, by utilizing the DNN's output to construct a time-to-event model that forecasts the occurrence of AF from the date of the ECG examination. We demonstrate the effectiveness of the method, which offers accurate prognostic insights into AF occurrences. Further, we release our implementation code and trained weights to facilitate future studies. ## Methods ### The dataset The model development and testing were conducted using the CODE (Clinical Outcomes in Digital Electrocardiology) dataset [19]. The CODE dataset consists of 2,322,465 12-lead ECG records from 1,558,748 different patients. The ECG records were collected in 811 counties in the state of Minas Gerais, Brazil by a public telehealth system, the Telehealth Network of Minas Gerais (TNMG), between 2010 and 2017. A detailed description of the recordings and the labelling process for each ECG exam of the CODE dataset can be found in [11]. Information about the patients was recorded together with their ECG tracings. The average age of the patients (considering each exam separately) is 53.6 years, with a median of 54 years and a standard deviation of 17.4 years. To analyse the natural history of patients with regard to AF, we identified patients who recorded multiple ECG exams.
The distribution of the number of visits for each patient during a period of eight years is depicted in Supplementary Material Figure S.1. As the figure shows, the majority of patients recorded only a single ECG exam (1,104,588 patients); 285,685 patients performed two visits each, while the remaining 168,475 patients recorded ECG exams more than twice. The number of medical visits undertaken by each patient was taken into consideration when classifying the exams into different classes, as discussed in the problem formulation. The ECG signals are between 7 and 10 seconds long and recorded at sampling frequencies ranging from 300 to 600 Hz. The ECG records were re-sampled at 400 Hz to generate between 2800 and 4000 temporal samples. All ECGs are zero-padded to obtain a uniform size of 4096 samples for each ECG lead, which are then used as input to the convolutional model. The labels for AF in the CODE dataset were extracted from the text report produced by the expert who looked at the ECGs. To improve the quality of the annotations, some exams were reviewed by doctors; in this case, disagreement with the labels produced by the University of Glasgow automatic diagnosis software was used to select exams to be reviewed. The procedure is described in detail in [11]. ### Problem formulation The study considered patients in the CODE database with at least two ECG exams or that have AF. Patients were classified into three groups (NoAF, BaselineAF, FutureAF) according to the presence or absence of a record with the AF condition and whether the record with AF is the baseline or not. The ECG exams from the patients were classified into three different classes, focusing on patients who undertook multiple exams. The classification process, which is illustrated in Figure 1 (Diagram of patient groups and exam categories), is detailed as follows: * _NoAF Class:_ all ECG exams from patients who recorded multiple exams without presenting an AF abnormality. We exclude the last exam for each patient, as well as exams recorded within one week of the last exam. * _WithAF Class:_ all ECG exams that exhibit the AF condition. * _FutureAF Class:_ normal ECG exams from patients who had normal ECG exams at the beginning but who were diagnosed with the AF condition in a follow-up exam. The retained records were made before the patients were first diagnosed with the AF condition. We exclude all subsequent normal exams after the first positive case, and exams made within one week before this case. The one-week threshold was set so we do not have to deal with paroxysmal atrial fibrillation cases, which are brief episodes of atrial fibrillation that usually stop within 24 hours and may last up to a week. We are interested in using predictions of the FutureAF class for predicting the long-term risk of AF; hence, we consider that exams should be separated by at least one week to be considered as follow-up exams. Hence, ECG exams recorded within one week before the first exam with the AF condition were not added to FutureAF. Similarly, exams for which we did not follow the patient for longer than one week were not added to NoAF. We used the remaining exams for developing and testing the model. In the final dataset, 637,514 exams (92.17%) belong to the class _NoAF_; 41,851 (6.05%) to the class _WithAF_; and 12,280 (1.78%) to the class _FutureAF_. This final dataset was split uniformly at random and by patient into a train set, a validation set and a test set.
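As a concrete illustration of the preprocessing and the patient-level split described above, consider the following sketch. This is not the authors' released code: the array layout and the use of scipy/scikit-learn helpers are our assumptions, and the implementation linked at the end of the paper is authoritative.

```python
import numpy as np
from scipy.signal import resample
from sklearn.model_selection import GroupShuffleSplit

def preprocess(ecg, fs_in, fs_out=400, target_len=4096):
    """Resample a (leads, samples) ECG to fs_out Hz and zero-pad to target_len."""
    n_out = int(round(ecg.shape[1] * fs_out / fs_in))
    ecg = resample(ecg, n_out, axis=1)           # re-sample to 400 Hz
    return np.pad(ecg, ((0, 0), (0, target_len - ecg.shape[1])))

def split_by_patient(patient_ids, seed=0):
    """Approximate 60/10/30 split; the fractions apply to patients (groups),
    and all exams of a given patient land in the same partition."""
    idx = np.arange(len(patient_ids))
    gss = GroupShuffleSplit(n_splits=1, train_size=0.6, random_state=seed)
    train, rest = next(gss.split(idx.reshape(-1, 1), groups=patient_ids))
    gss2 = GroupShuffleSplit(n_splits=1, train_size=0.25, random_state=seed)
    val, test = next(gss2.split(rest.reshape(-1, 1), groups=patient_ids[rest]))
    return train, rest[val], rest[test]
```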
60% of the data were allocated for training, 10% for validation and 30% for testing. Splitting the data into train and validation sets as we have done is common for large datasets such as ours because cross-validation becomes computationally expensive [9, 11, 20]. The train-test split was performed so that ECG records belonging to one patient ended up in the same split. ### DNN architecture and training The DNN architecture in this study was based on a deep residual neural network implemented in previous studies [11, 13]. The neural network consists of a convolutional layer followed by five residual blocks and ends with a fully connected (dense) layer that passes its output to a softmax to obtain three class probabilities for the classes NoAF, WithAF and FutureAF, which are defined to add up to one. While the focus is on predicting the class FutureAF from ECG exams with an absence of the AF condition, we kept the exams belonging to the class WithAF to improve the performance of the model. Hence, the developed model also has the capability of conducting automatic AF diagnosis. The DNN model was trained by minimising the average cross-entropy loss using the Adam optimiser [21]. Default parameters were used, with a weight decay of \(5\cdot 10^{-4}\) to regularise the model. As the results obtained in [11, 13] were satisfactory, this study kept most of the selected hyperparameters from these studies, and almost no further hyperparameter tuning was performed. The initial learning rate was \(10^{-3}\) and was reduced by a factor of 10 whenever the validation loss remained without improvement for 7 consecutive epochs. The dropout rate was manually tuned between the values 0.8 and 0.5, with the latter resulting in improved performance. The training was performed until the minimum learning rate of \(10^{-7}\) was reached or for a maximum of 70 epochs. As a form of early stopping, we save the model with the best validation results (i.e. minimum validation loss) during the optimisation process and use it as the final model. Despite the pronounced class imbalance, we abstain from employing strategies like over- or under-sampling to mitigate it. Over-sampling risks overfitting the minority class, while under-sampling discards numerous majority samples. Since our emphasis lies not on threshold-dependent metrics like accuracy, but rather on utilising the resulting class probabilities for the survival model, the class imbalance becomes less influential. ### Model evaluation and metrics After the training process, the performance of the DNN model was evaluated on the test data using classification evaluation metrics: sensitivity, positive predictive value (PPV), specificity, false positive rate, \(F\)-score, the Receiver Operating Characteristic (ROC) curve, the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), the Precision-Recall Curve and the Average Precision (AP) score. This study first evaluated the performance of the model on the task of classifying the three groups, NoAF, WithAF and FutureAF, based on the class probabilities from the DNN model. We plotted the ROC curves, the precision-recall curves and the confusion matrix, and computed the AUC and AP scores for each class. Next, an evaluation of the model considering only the FutureAF class and the NoAF class was performed to assess the ability of the model to distinguish normal exams within the two classes; in other words, to evaluate how the model performs at AF risk prediction for patients without AF. For this task, samples labelled as WithAF were removed.
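Returning to the training procedure described above, the optimisation setup can be summarised in a minimal PyTorch-style sketch. The `resnet1d` constructor, the data loaders, and the `evaluate` helper are placeholders for the architecture of [11, 13] and a validation-loss routine; only the hyperparameters are those stated in the text.

```python
import torch
import torch.nn as nn

model = resnet1d(n_classes=3, dropout=0.5)  # residual 1-D CNN (assumed helper)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1,
                                                       patience=7)
loss_fn = nn.CrossEntropyLoss()             # softmax is applied inside the loss

best_val = float("inf")
for epoch in range(70):                         # at most 70 epochs
    model.train()
    for x, y in train_loader:                   # x: (batch, 12, 4096) ECGs
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    val_loss = evaluate(model, valid_loader)    # mean validation cross-entropy
    scheduler.step(val_loss)                    # lr / 10 after 7 flat epochs
    if val_loss < best_val:                     # keep the best model seen so far
        best_val = val_loss
        torch.save(model.state_dict(), "best_model.pt")
    if optimizer.param_groups[0]["lr"] < 1e-7:  # stop at the minimum learning rate
        break
```

With a trained network in hand, the two-class evaluation proceeds as described next.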
The class probabilities for the NoAF class and for the FutureAF class were normalised for each instance to sum to one. Lastly, a probability threshold that maximises the \(F_{1}\)-score for the NoAF class and the FutureAF class was selected, and the threshold-based metrics, namely sensitivity, PPV, specificity and \(F_{1}\)-score, were computed. The threshold was obtained using the validation set, while all metrics, including the plots, were measured using the test set. ### Time-to-event models This study considers non-parametric and semi-parametric methods for time-to-event prediction. Patients in the test set belonging to the class NoAF (191,665 recordings, 116,255 unique patients) and the class FutureAF (3691 recordings, 2016 unique patients) were considered for the time-to-event prediction. We used the Kaplan-Meier method [22] and Cox proportional hazards (PH) models [23]. The Kaplan-Meier method [22] (also referred to as the product-limit method) is a non-parametric method that provides an empirical estimate of the survival probability at a specific survival time using the actual sequence of event times. Similar to other non-parametric methods, the advantage of the Kaplan-Meier method is that it allows for analysis without distributional assumptions. The Cox PH model [23], on the other hand, allows us to adjust for different covariates and is hence also of interest for the analysis. Cox PH models are the most commonly used semi-parametric models for survival analysis. The model assumes that the covariates have a multiplicative (exponential) influence on the hazard: the log-hazard of an individual is a sum of the population-level baseline log-hazard and a linear function of the corresponding covariates. We provide two analyses for the Cox PH model: in one analysis we adjust the model with age and gender, and in a second analysis we adjust the model with comorbidities in addition to age and gender. We consider 16 variables that were recorded during a patient visit, which include comorbidities, cardiovascular risk factors and cardiovascular drug usage, namely: use of diuretics, beta-blockers, converting enzyme inhibitors, amiodarone, or calcium blockers, obesity, diabetes mellitus, smoking, previous myocardial revascularization, family history of coronary heart disease, previous myocardial infarction, dyslipidemia, chronic kidney disease, chronic lung disease, Chagas disease, and arterial hypertension. The _observation time_ \(T\) is given in weeks. During the development of the Cox PH model, patients were subdivided into four groups according to the following ranges of the probability output of the DNN: \([0,0.1)\); \([0.1,0.4)\), \([0.4,0.7)\) and \([0.7,1.0]\). The study used the first group of patients, having a predicted probability of less than \(0.1\), as a reference and produced hazard ratios for the remaining groups. For the Kaplan-Meier model, patients were grouped according to the same intervals: \([0,0.1)\); \([0.1,0.4)\), \([0.4,0.7)\) and \([0.7,1.0]\). We used the lifelines Python library [24]. ## Results We developed a model to predict whether a patient belongs to the classes NoAF, WithAF or FutureAF. Our results for the classification task are available in the supplementary material. Since our ultimate goal is to predict the risk of a future AF event, we present here the ability of the model to predict the class FutureAF and the results from the survival analysis. ### AF risk prediction and survival analysis The DNN model outputs class probabilities for the three classes.
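The renormalisation and threshold selection described in the evaluation section above amount to only a few lines; in this sketch, `probs` stands for the DNN's output, with columns assumed (on our part) to be ordered as (NoAF, WithAF, FutureAF):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def future_af_score(probs, with_af_col=1):
    """Drop the WithAF column and renormalise the remaining two probabilities."""
    kept = np.delete(probs, with_af_col, axis=1)  # columns: NoAF, FutureAF
    return kept[:, 1] / kept.sum(axis=1)          # P(FutureAF | not WithAF)

def f1_maximising_threshold(y_true, scores):
    """Operating threshold on the validation set that maximises the F1-score."""
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return thresholds[np.argmax(f1[:-1])]         # f1[:-1] aligns with thresholds
```

Applied to the validation set, a procedure of this kind yields the single operating threshold (0.1043, as reported below) used for the threshold-based metrics.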
In a first analysis, we excluded exams from the class WithAF in order to study the ability of the model to distinguish between FutureAF and NoAF. We compute the performance metrics using the probability of FutureAF against that of NoAF. In Table 1 we display the confusion matrix, where the predicted values are compared against the true values. In Figure 2 we show the ROC curve and the AUC-ROC score obtained for this case. The AUC-ROC score was equal to 0.845, which indicates that the model can discriminate between the two classes. Figure 3 displays the PR curves and the calculated average precision (AP) scores. The AP score for the class FutureAF was quite small (\(\text{AP}=0.22\)) and its PR curve had a low area under the curve. This suggests that the model is unable to provide both high sensitivity and high PPV values at once for exams in the class FutureAF. An option for applying the model to the prediction task between the two classes is to select a threshold that maximises the \(F_{1}\)-score, i.e. putting equal weight on both sensitivity and PPV. The threshold was computed using the validation set and was applied to the classification task for both the validation set and the test set. The obtained optimal probability threshold was equal to 0.1043 and the corresponding performance metrics are shown in Table 2. All the metrics consider the class FutureAF as the positive class. The sensitivity and PPV values on the test set are 0.322 and 0.247, respectively. In contrast, the specificity is very high (0.981), which is mainly due to the class imbalance. The class probabilities from the DNN model belonging to the class FutureAF were used to develop survival models. Two Cox PH models were implemented, one adjusted with age and gender, and another adjusted with comorbidities in addition to age and gender. Table 3 shows the hazard ratios of patients whose probabilities for the class FutureAF belong to one of the groups (0.1-0.4], (0.4-0.7] and (0.7-1.0], taking patients in the group (0.0-0.1] as a reference. As the table indicates, moving from a lower probability range to a higher probability range, the hazards leading to AF also increase. Considering the Cox PH model adjusted with age and gender plus comorbidities, the probability range of (0.7-1.0] had the highest hazard ratio, equal to 40.869 (95% CI: 32.83-50.87). \begin{table} \begin{tabular}{l c|c c} & & \multicolumn{2}{c}{**Predicted Value**} \\ & & NoAF & FutureAF \\ \hline **True** & NoAF & 188 606 & 3 059 \\ **Value** & FutureAF & 2 584 & 1 107 \\ \hline \end{tabular} \end{table} Table 1: Confusion matrix. Figure 2: The ROC curves and AUC scores for FutureAF class versus NoAF class. AUC-ROC\(=0.845\) During the model assessment, however, some covariates (the three probability ranges in this case) did not pass the proportional-hazards test, hence rejecting the null hypothesis of proportional hazards. This led the study to use a non-parametric model in order to make further survival analyses. A Kaplan-Meier approach was used to this end. The survival curves that were generated through the Kaplan-Meier estimator are displayed in Figure 4. Note that, in the context of our study, survival time refers to the time-to-event, the event being the development of AF, and not to actual mortality-related survival. Therefore, survival probability refers to the likelihood that no event occurs. The shaded area highlights the 95% confidence interval of the survival probability at different survival times (exponential Greenwood confidence intervals were used [25]).
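The survival modelling just summarised can be sketched with the lifelines library that the authors cite [24]; the DataFrame columns below are assumed names, and only the age/sex-adjusted Cox variant is shown:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# df (assumed layout): T = weeks to AF or censoring, event = 1 if AF observed,
# p_future = DNN probability for FutureAF, plus age and sex covariates
df["group"] = pd.cut(df["p_future"], [0.0, 0.1, 0.4, 0.7, 1.0],
                     include_lowest=True)

cox_df = pd.get_dummies(df[["T", "event", "age", "sex", "group"]],
                        columns=["group"], drop_first=True,  # (0, 0.1] reference
                        dtype=float)
cph = CoxPHFitter()
cph.fit(cox_df, duration_col="T", event_col="event")
print(cph.summary["exp(coef)"])          # hazard ratios, as in Table 3

kmf = KaplanMeierFitter()
for grp, sub in df.groupby("group", observed=True):
    kmf.fit(sub["T"], sub["event"], label=str(grp))
    kmf.plot_survival_function()         # curves with CIs, as in Figure 4
```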
Patients within the lowest risk group maintained survival probabilities greater than 0.8 during the study period of about seven years. The survival probability is reduced at a higher rate moving from patients in a lower probability range to patients in a higher probability range. The median survival times for patients in the probability groups \((0.0-0.1]\), \((0.1-0.4]\), \((0.4-0.7]\) and \((0.7-1.0]\) are infinity, 248, 82 and 40 weeks respectively. The median time without developing AF defines the point in time where on average 50% of the patients in a group would have had the condition. That means, for example, that patients in the first cohort (probability range \((0.0-0.1]\)) have a 50% chance of never developing AF within seven years, while patients in the last cohort (probability range \((0.7-1.0]\)) are 50% likely to develop AF within 40 weeks (less than a year). A table below the survival curve in Figure 4 shows the number of patients at risk, censored patients (i.e. no further follow-up or the event time is beyond the study period) and patients with AF at different time intervals (50 weeks each time interval). Taking the event times 0 and 50 weeks as an example: for patients within the probability range \((0-0.1]\), the number of patients at risk was 129,369 (68%), the censored cases were 60,091, and 794 (0.42%) AF events were recorded after 50 weeks; while for patients within the probability range \((0.7-1.0]\) the number of patients at risk was 61 (33.7%), the censored cases were 26, and 94 (51.9%) AF events were recorded. This again provides an estimate of the time to event for patients in different risk groups. \begin{table} \begin{tabular}{l c c} \hline & Validation & Test (CI 95\%) \\ \hline Sensitivity & 0.315 & 0.322 (\(\pm\) 0.016) \\ PPV & 0.250 & 0.247 (\(\pm\) 0.012) \\ Specificity & 0.982 & 0.981 (\(\pm\) 0.001) \\ F1-score & 0.279 & 0.280 (\(\pm\) 0.012) \\ \hline \end{tabular} \end{table} Table 2: Performance metrics on the task of predicting the class FutureAF versus NoAF. Figure 3: The precision-recall curves and AP scores for FutureAF class versus NoAF class. Recall denotes the sensitivity, and precision denotes the positive predictive value. Figure 4: Survival curves for the different cohorts based on their probability range using the Kaplan-Meier model. \begin{table} \begin{tabular}{l c c c c} \hline \hline Adjusted for: & Probability Group & Hazard Ratio & CI 95\% & P - value \\ \hline & (0.1, 0.4] & 4.060 & 3.77 - 4.37 & \(<0.005\) \\ Age and sex & (0.4, 0.7] & 20.609 & 17.11 - 24.82 & \(<0.005\) \\ & (0.7, 1.0] & 42.339 & 33.99 - 52.74 & \(<0.005\) \\ \hline Age, sex, risk factors & (0.1, 0.4] & 3.995 & 3.71 - 4.30 & \(<0.005\) \\ comorbidities, & (0.4, 0.7] & 20.444 & 16.98 - 24.62 & \(<0.005\) \\ \& drug usage\({}^{*}\) & (0.7, 1.0] & 40.869 & 32.83 - 50.87 & \(<0.005\) \\ \hline \end{tabular} \({}^{*}\)**We adjust for the following comorbidities, cardiovascular risk factors, and drug usage:** use of diuretics, beta-blockers, converting enzyme inhibitors, amiodarone, or calcium blockers, obesity, diabetes mellitus, smoking, previous myocardial revascularization, family history of coronary heart disease, previous myocardial infarction, dyslipidemia, chronic kidney disease, chronic lung disease, Chagas disease, arterial hypertension. \end{table} Table 3: Hazard ratios for different probability groups from the Cox PH model.
## Discussion ### DNN model performance The DNN model produced a good AUC score for the class FutureAF, which suggests its potential at predicting this class. The actual ability to predict the class FutureAF was reflected in the AP score obtained for this class (\(\text{AP}=0.22\)). The low score reveals the difficulty in predicting this class and suggests that there would be many false positive cases (incorrectly predicting the class FutureAF) regardless of the threshold. Regarding the risk prediction task (normal ECG exams in FutureAF vs NoAF), the DNN model produced lower sensitivity and PPV values, as shown in Table 2 (the probability threshold here maximises the \(F_{1}\)-score). However, the specificity was as high as 0.982. This indicates that most of the exams that could be predicted as negative are truly negative and that there would be very few false positive cases. Hence, the information from this prediction task can be of value during a screening of a large population, i.e. one can consider that, among the individuals predicted as negative, approximately 1.8% are at risk of developing AF. ### Survival analysis The survival analysis implemented in this study provided additional and valuable information about the risk level and an estimate of the time to the event of having the AF condition. The Cox PH model produced the hazard ratios for patients belonging to four different probability groups, taking the group with the lowest risk as a reference. The Cox PH model failed the proportional-hazards test; still, it provides insight into the risk level incurred by patients in different groups. As stated in [24], a model that does not meet the proportional hazards assumption can still be useful for performing prediction (e.g. predicting survival times), as opposed to making inferences. Recent work also suggests that virtually all real-world clinical datasets will violate the proportional hazards assumption if sufficiently powered, and that statistical tests for the proportional hazards assumption may be unnecessary [26]. To understand the influence of a class probability group on the survival duration, a Kaplan-Meier model was implemented. The results showed that patients in the highest risk group (FutureAF class probability range of \((0.7-1.0]\)) were approximately 60% likely to develop AF within one year, compared with less than 15% of patients in the minimal risk group (FutureAF class probability range of \((0.0-0.1]\)) that would develop the condition within the complete time span of seven years. These findings demonstrate the ability of the DNN model to predict patients with impending AF conditions and with different risk levels. Compared to the study in [18], which used digital biomarkers from the raw 12-lead ECG, clinical information and features from deep representation learning to make AF risk predictions, our approach learns predictive features directly from the raw ECG signal, precluding the need to extract any biomarkers and thereby simplifying the ECG processing pipeline. It is also worth mentioning that the median survival time obtained in [18] is more than two years for patients in probability group \((0.8-1.0]\).
Even though the methods used to produce the survival curves are different (Cox PH model versus Kaplan-Meier), as are the classifiers used (random forest versus neural network with softmax), their results seem less alarming than the results in this work, where 50% of patients in the probability group \((0.7-1.0]\) are likely to develop AF within 40 weeks (less than one year). This difference in median survival times may thus be attributable to the differences in both the survival modelling and the classification approaches. ### Clinical implications Patients with clinical AF that are not taking anticoagulant medication have an elevated risk of stroke, and the strokes caused by AF are more severe than strokes caused by other causes [27]. AF does not always cause symptoms, and for roughly 20% of the population, stroke is the first manifestation of AF [28]. Thus, there is a lot of interest in detecting cases of AF before the occurrence of a stroke, by systematic screening for asymptomatic AF [29] or, more recently, by the recognition of those in sinus rhythm who will develop AF in the future [9, 17, 18, 30, 31]. Among the risk scores that use clinical variables, the CHARGE-AF risk score is one of the most accurate and well-validated and uses variables readily available in primary care settings [30]. A recent review of risk scores based on clinical variables for the prediction of AF [31] found that 14 different scores are potentially useful, with AUC-ROC values between 0.65 and 0.77 for the general population, with the best results for the CHARGE-AF and MHS scores. Risk scores based on standard 12-lead ECGs are a promising tool considering both practical and technical questions [9, 17, 18]. Reported studies, including ours, showed much higher discrimination capacity, with AUC-ROC values over 0.85. Since ECGs are routinely performed in most subjects at risk, i.e., those older than 60 years, the prediction can be obtained automatically, without the need to input variables into a risk calculator. In this study, we also provide semi-parametric and non-parametric time-to-event models that might help inform doctors of the development of the disease for each group of patients. The model was tested in cases where the disease could be observed up to seven years after the examination, providing a more complete picture for the use of this model in clinical practice. The ability to accurately recognise patients that have a high chance of developing AF may allow the intensified surveillance of those patients, with early recognition of the appearance of AF. In this case, the early institution of anticoagulant treatment could prevent the drastic event of a stroke and change the natural history of this condition. Moreover, new therapies to prevent AF could be developed and used to prevent not only the stroke but potentially the whole set of complications related to the appearance of AF. All these clinical applications of the method deserve to be tested in controlled clinical trials, but preliminary prospective studies confirmed that AI-augmented ECG analysis could be helpful, at least, to recognise those at higher risk of developing AF [32]. ### Limitations One limitation lies in the dataset used for model development and testing.
Many of the patients that were considered as all-time normal (without AF during the whole data collection period) had dropped out of follow-up before the study period ended or had a relatively short time interval between their first and last ECG records. Therefore, it is impossible to tell with certainty whether an individual was at no risk of developing AF within seven years. Censored data are unexceptional in survival analysis; however, for standard supervised learning, an ideal dataset would consist of patients who had recorded ECG exams regularly for the considered study period. Moreover, we do not prove that this approach is better than existing clinical scores such as CHARGE-AF [30]. Similar to a statement in [18], during data selection there was a bias towards individuals who had a cardiac disease or a forthcoming heart condition, since all the patients considered had attended multiple medical visits. The AF label is also solely based on the ECG analysis. This label might contain errors from medical mistakes and from problems in the extraction of the label (see [11] for a more complete discussion of the labelling process). In this way, some FutureAF exams might be AF cases that were previously missed during the ECG analysis. Finally, the model is developed and tested solely on patients from Brazil, and external validation in other cohorts is needed to verify the efficiency of the model in other populations. ## Conclusion This study employed ResNet-based convolutional DNNs for end-to-end AF risk prediction from 12-lead ECG signals. The trained DNN effectively identified ECG signal changes indicative of AF development, facilitating risk prediction and survival analysis. By integrating the DNN probabilities into Cox PH and Kaplan-Meier models, hazard ratios and survival functions were derived, stratifying patients based on risk levels. This model holds promise for clinical application, aiding AF risk stratification and informing clinical decisions. Further validation is imperative to confirm predictive performance. Future research should encompass external validation on diverse datasets, preferably from distinct geographic populations, to assess model usability across different groups. Exploring the model's potential in identifying AF-related stroke risks is another avenue, considering the established AF-stroke connection [4, 5]. Additionally, extending this approach to predict other arrhythmias and cardiovascular diseases is a plausible direction for further development. #### Ethical approval This study complies with all relevant ethical regulations. The CODE Study was approved by the Research Ethics Committee of the Universidade Federal de Minas Gerais, protocol 49368496317.7.0000.5149. Since this is a secondary analysis of anonymized data stored in the TNMG, informed consent was not required by the Research Ethics Committee for the present study. ### Declaration of interests There are no competing interests. #### Funding This research is financially supported by the _Wallenberg AI, Autonomous Systems and Software Program (WASP)_ funded by the Knut and Alice Wallenberg Foundation, and by the _Kjell och Marta Beijer Foundation_. ALPR is supported in part by CNPq (465518/2014-1, 310790/2021-2 and 409604/2022-4) and by FAPEMIG (PPM-00428-17, RED-00081-16 and PPE-00030-21). ALPR received a Google Latin America Research Award scholarship. JB acknowledges the support of the Technion EVPR Fund: Hitman Family Fund and Grant No. ERANET-3-16881 from the Israeli Ministry of Health.
The funders had no role in the study design; collection, analysis, and interpretation of data; writing of the report; or decision to submit the paper for publication. ### Data sharing The DNN model parameters that yield the results in this paper are available at ([https://zenodo.org/record/7038219#.Y9Phl4LMJNw](https://zenodo.org/record/7038219#.Y9Phl4LMJNw)). This should allow the reader to partially reproduce the results from this study. 15% of the CODE cohort (denoted CODE-15%) was also made openly available ([https://doi.org/10.5281/zenodo.4916206](https://doi.org/10.5281/zenodo.4916206)). Researchers affiliated with educational or research institutions may make requests to access the full CODE cohort. Requests should be made to the corresponding author of this paper. They will be forwarded and considered on an individual basis by the Telehealth Network of Minas Gerais. An estimate of the time needed for data access requests to be evaluated is three months. If approved, any data use will be restricted to non-commercial research purposes. The data will only be made available on the execution of appropriate data use agreements. ### Code availability The code for the model training, evaluation and statistical analysis is available at the GitHub repository [https://github.com/mygithub27/af-risk-prediction-by-ecg-dnn](https://github.com/mygithub27/af-risk-prediction-by-ecg-dnn).
2309.06640
REVIS: An Error Visualization Tool for Rust
Rust is a programming language that uses a concept of ownership to guarantee memory safety without the use of a garbage collector. However, some error messages related to ownership can be difficult to understand and fix, particularly those that depend on value lifetimes. To help developers fix lifetime-related errors, we developed REVIS, a VSCode extension that visualizes lifetime-related Rust compiler errors. We describe the design and implementation of the VSCode extension, along with a preliminary evaluation of its efficacy for student learners of Rust. Although the number of participants was too low to enable evaluation of the efficacy of REVIS, we gathered data regarding the prevalence and time to fix the compiler errors that the participants encountered.
Ruochen Wang, Molly Maclaren, Michael Coblenz
2023-09-12T23:15:49Z
http://arxiv.org/abs/2309.06640v1
# REVIS: An Error Visualization Tool for Rust ###### Abstract. Rust is a programming language that uses a concept of ownership to guarantee memory safety without the use of a garbage collector. However, some error messages related to ownership can be difficult to understand and fix, particularly those that depend on value _lifetimes_. To help developers fix lifetime-related errors, we developed REVIS, a VSCode extension that visualizes lifetime-related Rust compiler errors. We describe the design and implementation of the VSCode extension, along with a preliminary evaluation of its efficacy for student learners of Rust. Although the number of participants was too low to enable evaluation of the efficacy of REVIS, we gathered data regarding the prevalence and time to fix the compiler errors that the participants encountered. Rust, program visualization, compiler errors, usability of programming languages
A red triangle in the editor gutter indicates that there is a visualization available on this line; the triangle points downward to indicate that the visualization is currently displayed. In the diagram at the right, the blue region shows the lifetime of the variable x. The red arrow shows the use of x, which is erroneous because it is outside the blue region. The purple arrow shows why the lifetime of x ends: its value was moved to another variable. A short tip is shown below the diagram to explain the cause of the error. REVIS is motivated by the difficulty novices have learning the rules of Rust's type system. Novice Rust programmers often find lifetime-related compile errors difficult to understand, even though the error messages that the Rust compiler produces are generally thought to be excellent (Crichton, 2020; Fulton et al., 2021). Unlike many kinds of errors, lifetime-related errors pertain to spans of code, not just individual lines.
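As a concrete illustration, here is a minimal Rust sketch of our own (not code taken from the paper's figures, and intentionally rejected by the compiler) of the use-after-move error that figure 1 visualizes:

```rust
fn consume(x: String) {
    // The lifetime of `x` begins with the function argument.
    let a = x; // the value of `x` moves to `a`; the lifetime of `x` ends here
    println!("{}", a);
    println!("{}", x); // error[E0382]: `x` is used after its value was moved
}
```

The compiler reports E0382 at the last line; REVIS's diagram presents the same information spatially, as a region for the lifetime of x and arrows for the move and the erroneous use.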
Prior programming tools have little special accommodation for errors that involve multiple regions of code that relate to each other. They are typically able to display decorations for ranges of characters with associated messages, but the decorations may not be clearly related to each other, particularly because multiple errors can occur in one region of code. In addition, the associated messages cannot be displayed together, so developers cannot see the complete information provided by the error messages. REVIS uses disclosure triangles in the gutter to allow programmers to focus on all the information pertaining to individual errors. We conducted a preliminary evaluation of REVIS in a Rust-based compilers course at our university. In our evaluation, we randomly assigned participants to use or not use REVIS. We collected snapshots of each version of their code that was built, enabling us to analyze error occurrence rates and the cost of fixing each error. Although we did not recruit enough participants to enable a quantitative evaluation of REVIS's benefits, we found that four of the twenty most time-intensive errors to fix pertained to borrowing or ownership. This constituted about 3% of the time spent fixing errors, suggesting that making a significant impact on error-fixing time may require additional kinds of tool or educational support. If this fraction generalizes to other programming contexts, it could mean that fixing Rust-specific compiler errors does not present a significant barrier to Rust adoption. Alternatively, addressing lifetime-related errors may be more important in a broader programming context compared with the restricted domain of a compilers course. The contributions of this paper can be summarized as follows:

* We describe REVIS, a novel error visualization tool for Rust lifetime errors. The tool is available in the VSCode Marketplace at https://marketplace.visualstudio.com/items?itemName=weirane.errorviz.
* We conducted a preliminary evaluation of REVIS, observing that lifetime and borrowing errors occur frequently but only consume about 3% of the total error-fixing time in the context of the compilers course in which we did the study.

Figure 1: REVIS's visualization for a use after move error. The function argument x is defined in line 10 and moved to variable a in line 12, so its lifetime is lines 10 – 12. The usage of x on line 15 is an error because it occurs after line 12. The visualization shows that the use of x occurs outside of x's lifetime.

## 2. REVIS Tool

### Supported Errors

The Rust compiler produces many kinds of lifetime-related errors. REVIS focuses on those errors that have all the related information within the same function. REVIS supports eight errors, including use of variables outside of their lifetimes, variable use or borrowing when already mutably borrowed, and variables not living long enough.

### Visualization Design

REVIS's visualizations consist of three types of components: _regions_, _arrows_, and _tips_. Regions are vertical lines that span multiple lines of code, with horizontal end points on each end and descriptive text to the right. They are mainly used to show the lifetimes of variables. In figure 1, the blue component that spans from line 10 to line 12 is a region.
If only one end of the region is relevant to the error, the region will be open and have one horizontal end point and an arrow pointing up or down at the other end, as figure 2 shows. _Arrows_ are horizontal lines with arrowheads pointing to the right. Each arrow is attached to a line of code and indicates a single event, such as a borrow or move, that is relevant to the error. The head of the arrow is always positioned at the center of the line, while the tail can be configured to start at any position. Because the ends of regions are not at the center of a line, allowing arrow tails to start at any position can make the visualizations easier to read. For example, in figure 1, the tail of the purple arrow is positioned to overlap with the end of the blue region. _Tips_ are lines of text at the bottom of the visualization. They explain why the error occurred and may give suggestions on how to fix the error. The components have two severity levels, error and information, matching the severity levels of Rust compiler messages. Error-level components are displayed in red and information-level components are displayed in blue or purple.

### Implementation

Our extension depends on the rust-analyzer extension (Ferrous Systems and contributors, 2023). When a source file is saved, rust-analyzer runs the Rust compiler to obtain compile errors for the current file. This save action triggers our "save diagnostics" hook when the diagnostics change. Inside the hook, we first select the Rust errors with supported error codes from the diagnostics list, and then display right-pointing red triangles for each of them on the corresponding lines so the user knows there is a visualization available. The error code, position, and message of the error are stored in this step for use when generating the visualization. When the cursor is on the line with the triangle, a keyboard shortcut causes our extension to display the visualization and rotates the triangle to indicate that the visualization is shown. Later, to hide the visualization, the user can press the same keyboard shortcut again.

Figure 2. Visualization with an open region. The function returns a closure that captures a local variable. Because the lifetime of the local variable does not cover the lifetime of the closure, there is an error. The region is open at line 15 because it suffices to show that the lifetime of the closure extends beyond the function.

Visualizations are SVG images that consist of the components described in section 2.2. We compute visualizations by parsing the error message according to the error code. Because errors with the same error code have a consistent error message pattern, we extract variable lifetime information and move/borrow actions from the message and use _regions_ or _arrows_ to display them. After the SVG image is computed, we calculate the maximum width of the lines that the image will span and make sure the image does not overlap the code. The image is configured as a decoration of the first line on which it is displayed. When the user zooms the editor view in or out, the image scales with the text so that the visualization stays aligned with the code. REVIS uses keyboard shortcuts to open and close visualizations because VSCode does not yet support mouse clicks on the gutter or on the visualization image (Fragnani, 2023). If the user saves the file again, displayed visualizations are automatically updated.
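For reference, here is a minimal sketch of our own (again intentionally non-compiling, and not the exact code of the paper's figure) of the kind of open-region error that figure 2 visualizes, in which a returned closure borrows a local variable whose lifetime ends when the function returns:

```rust
fn make_closure() -> impl Fn() -> i32 {
    let local = 42; // `local` is dropped when the function returns
    // error[E0373]: closure may outlive the current function,
    // but it borrows `local`, which is owned by the current function
    || local
}
```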
## 3. Evaluation

We ran a user study in a combined graduate-undergraduate compiler construction course during spring quarter 2023. In the course, students were introduced to Rust for the first time and completed several programming assignments over the duration of the class.

### Method

Students who agreed to participate installed a build.rs file, which committed each built version of their code to a remote repository; a commit is made whenever a file is saved and the Rust compiler runs. We collected work for approximately four weeks, starting six weeks into the ten-week term (due to a delay in IRB review). After the study, the participants completed an exit survey regarding their programming experience and their experience using the tool. Students who contributed code and filled out the survey received a $5 gift card. In addition, students could opt in to be randomly assigned to either use REVIS or not; students who opted in received an additional $5 gift card. Our study was approved by our IRB.

### Results

We gathered commit information from six students, five of whom had professional software development experience. Five agreed to random assignment; three actually used REVIS.

_Exit Survey._ Two of the three students assigned to use the tool said it was useful and are likely to use it again, while one was unable to access the visualizations. The participants rated the relative difficulty (on a Likert scale) of REVIS-supported errors. E0597, in which a variable with a borrowed value does not live long enough, had four of seven responses ranging from "Moderately hard" to "Very hard." Other errors, such as E0382, the result of a borrow of a moved variable, were either rated as "Very easy" or were never encountered by five of the seven participants.

Figure 3. Participants who had access to the visualizations were asked to compare the difficulty of fixing compiler errors with and without REVIS to the difficulty in other languages.

_Quantitative Analysis._ We gathered 6334 commits from six students, three of whom used REVIS and three of whom did not. However, only 833 of those commits came from students who used the tool, as some students did not continue to submit their progress throughout the study. We built each version and parsed the error messages and repository timestamps into a SQLite database. In order to determine the time taken to fix distinct compilation errors, we adapted Mesbah et al.'s methodology for determining Active Resolution Cost (ARC) [20]. A _resolution session_ for a given error message is a sequence of consecutive builds \(B_{1},B_{2},\ldots,B_{k}\), where the message first appears in \(B_{1}\) and is resolved in \(B_{k}\). If two consecutive builds are separated by at least 1500 seconds, we assume the student may have taken a break, so we cap the time at 1500 seconds. We simplified the definition of ARC by _not_ excluding build time, due to the small scale of the programming assignment. The ARC for an error with resolution session \(B_{1},B_{2},\ldots,B_{k}\) is defined in equation 1:
\[\text{ARC}\stackrel{\text{def}}{=}\sum_{i=1}^{k-1}\frac{T_{i}}{|E_{i}|}, \tag{1}\]
where \(T_{i}\) is the (capped) time between builds \(B_{i}\) and \(B_{i+1}\) and \(|E_{i}|\) is the number of error messages present in build \(B_{i}\). We categorized the data by error code to count the total number of occurrences, total time spent fixing each error, and average time spent fixing errors by error code across participants. Because individual times are estimated, we refer to them as _costs_ in Table 1.
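A sketch of the ARC computation under the assumptions above (the 1500-second cap, and each interval's time divided by the number of error messages present); the function and variable names are ours, not those of the study's tooling:

```rust
/// Active Resolution Cost (equation 1) for one resolution session.
/// `timestamps` holds the times (in seconds) of builds B_1, ..., B_k;
/// `errors_present[j]` is the number of error messages in the build at
/// `timestamps[j]` (the paper's |E_i|).
fn active_resolution_cost(timestamps: &[u64], errors_present: &[usize]) -> f64 {
    const CAP_SECS: u64 = 1500; // assume a break if builds are farther apart
    timestamps
        .windows(2)
        .zip(errors_present)
        .map(|(pair, &n_errors)| {
            let gap = (pair[1] - pair[0]).min(CAP_SECS); // the capped T_i
            gap as f64 / n_errors as f64
        })
        .sum()
}
```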
\begin{table} \begin{tabular}{l l c c c c} \hline \hline **Error** & **Description** & **\#** & \multicolumn{3}{c}{**Cost**} \\ \cline{4-6} & & & **Total** & **Avg** & **Std. Dev.** \\ \hline E0308 & Expected type did not match the received type & 360 & 26\% & 69s & 146s \\ E0425 & An unresolved name was used & 363 & 22\% & 58s & 156s \\ N/A & Syntax error & 275 & 9.7\% & 33s & 71s \\ E0599 & Method is used on a type which doesn't implement it & 145 & 9.2\% & 60s & 153s \\ E0004 & Compiler cannot guarantee a matching pattern & 63 & 7.3\% & 110s & 137s \\ E0061 & Invalid \# of arguments passed to function & 92 & 4.7\% & 48s & 100s \\ E0277 & Use of type that does not implement some trait & 109 & 3.7\% & 32s & 44s \\ E0063 & Struct's or struct-like enum variant's field not provided & 24 & 2.0\% & 79s & 253s \\ E0433 & An undeclared crate, module, or type was used & 67 & 1.4\% & 19s & 64s \\ E0507 & A borrowed value was moved out & 21 & 1.1\% & 49s & 93s \\ E0609 & Attempted to access a non-existent field in a struct & 27 & 0.99\% & 34s & 39s \\ E0412 & A used type name is not in scope & 48 & 0.96\% & 19s & 31s \\ **E0597** & A value was dropped while it was still borrowed & 7 & 0.88\% & 119s & 51s \\ E0133 & Unsafe code used outside of an unsafe context & 11 & 0.82\% & 70s & 173s \\ E0596 & You tried to mutably borrow a non-mutable variable & 10 & 0.74\% & 69s & 121s \\ **E0382** & Variable used after its contents were moved elsewhere & 16 & 0.66\% & 39s & 27s \\ E0432 & An import was unresolved & 28 & 0.58\% & 20s & 46s \\ E0614 & Dereferenced a variable which cannot be dereferenced & 15 & 0.40\% & 25s & 36s \\ E0282 & Compiler could not infer type; need type annotation & 9 & 0.40\% & 42s & 49s \\ E0369 & Binary operation on a type which doesn't support it & 11 & 0.38\% & 32s & 42s \\ \hline \hline \end{tabular} \end{table} Table 1: Top 20 Error Codes by Active Resolution Cost. The "\#" column counts resolution sessions; the "Total" column shows the fraction of the total cost that was due to fixes for this error. Errors that are supported by REVIS are shown in bold.

Of the 15019 total error messages, 51 were supported by REVIS, and of the 51, only 5 came from students using the tool, across 3 distinct resolution sessions. In the context of this compilers assignment, borrowing and ownership-related errors did not appear as frequently as other errors such as E0425 (an unresolved name was used), which has the most resolution sessions. Unresolved variable name errors being the most frequent compiler error is consistent with Mesbah et al.'s results for Java as well as Seo et al.'s (2014) results for C++. Table 1 shows the number of resolution sessions as well as the total, average, and standard deviation of the calculated Active Resolution Cost by error code. In terms of total cost, E0597 (ranked #13) and E0382 (ranked #16) can both be visualized using REVIS. In addition, E0597 has the #4 highest average time. These two errors make up only 1.5% of the total cost. However, combined with other ownership-related errors not yet supported by REVIS, like E0507 and E0596, they contribute about 3.4% of repair time. Other errors, such as those involving traits, are also unique to Rust; E0277 consumed 3.7% of the total time. Together, these five errors contribute about 7.1% of the total cost.

## 4. Limitations

The preliminary evaluation took place in a compilers course, in which the work may not reflect general-purpose programming, because the compiler was written in a functional style and most of the references were immutable references.
The study began about five weeks into the course, at which point the participants (who were students) already had a certain amount of expertise in Rust. These two factors, combined with the limited set of participants, significantly limit the external validity of the study. Further work is needed to evaluate REVIS in a more general context.

## 5. Discussion

Participants spent only a limited amount of time fixing lifetime-related errors (3.4% of the total error-fixing time). However, we suspect that by the time the study ran, the development challenges that would have resulted in most lifetime-related errors were already resolved, since the overall structure of the compiler was already set. Alternatively, it is possible that the challenges of Rust's type system have an overstated significance in the prior literature.

Figure 4. Log-scale violin plot representing the time taken to resolve errors by error code. Each dot represents a resolution session. Both E0382 and E0597 have a lower bound \(>\) 10 seconds.

In spite of the relatively small amount of time spent fixing lifetime-related errors, borrow errors supported by REVIS appear to take longer to fix than other errors. Tools to make fixing these errors easier may help reduce user frustration and the perception of type system challenges even if they do not make a significant impact on task completion times. One borrowing error, _value dropped while borrowed_, may be much harder to fix than other borrowing errors. This may represent an opportunity for future tool development.

## 6. Future Work

We plan to conduct a larger-scale evaluation of our tool that focuses on a more general setting rather than a classroom setting. We hope to improve REVIS to support more error codes related to lifetimes, including errors that are not confined to a single function. We will need to find a way to relate different parts of the code that are potentially far away from each other. It will also be useful to enrich the messages in the visualizations, as we currently use only the existing output of the Rust compiler to compute the visualizations. For example, in figure 1, we could show the users "'x' moved to 'a'" instead of "'x' moved to another variable." We also want to include an opt-in data collection component in our tool and release it to the public to allow us to conduct a study in the real world. The data collection component will also be able to give users insight into their coding behavior. E0507 (a borrowed value was moved out) and E0596 (tried to mutably borrow a non-mutable variable) may represent opportunities for future work; although these are not lifetime-related errors, they are specific to sequences of ownership operations. A future extension may be able to propose fixes, helping users address these errors more quickly. Alternatively, better educational tools could help users avoid introducing these errors. Since one participant had trouble seeing the visualizations, we may consider opening them by default. In the future we hope VSCode will enable detection of clicks in the gutter.

## 7. Related Work

Several authors have described empirically-evaluated guidelines for writing textual error messages. Denning et al. (2021) found that length, jargon usage, sentence structure, and vocabulary affect textual error message readability. Becker et al. (2021) found in an experiment that error message readability can be subjective and experience-dependent, but generally participants believed that length, tone, and jargon usage affected readability. Becker et al.
(2021) summarized many earlier results, among which is the argument that good error messages can make a large impact on usability for beginners. Our design of REVIS particularly reflects their recommendation _provide context to the error_, since it aligns its visualizations with the relevant erroneous code. It also reflects the recommendation _increase readability_, since the diagrams reduce the amount of additional text needed relative to text-only errors. Existing editor tools around Rust errors have little special support for errors that span multiple lines. Rust-analyzer (Ferrous Systems and contributors, 2023) is a popular Rust language extension for VSCode. As shown in figure 5, the rust-analyzer VSCode extension displays multiple decorations for errors: a red squiggly underline for the erroneous code and gray dotted underlines for code related to the errors. When the developer moves the mouse onto one of the decorations, the hover box that pops up contains a textual description of all the related information as line numbers and text. It cannot display all the hover boxes related to one error together, which is a problem because the various contributing lines can have their own boxes. Also, the hover boxes may contain irrelevant information, such as the type of the variable under the cursor. Finally, they may overlap with the code that caused the error. REVIS always puts its diagrams to the right of the code, avoiding any overlap.

Figure 5. Rust-analyzer hover box displayed for the erroneous "&x" in figure 1

Developers also view command line error messages directly in the terminal. Command line error messages are able to convey all the diagnostics in one place, as shown in figure 6. However, because the descriptions are in a separate window from the editor, users need to move back and forth to relate the error messages to the code in their editor. Sometimes command line output may omit lines of code (lines 13 and 14 are omitted in figure 6), which makes it more difficult to relate the error on the command line to the code in the editor. Also, although command line errors use different colors to indicate related error messages, they do not point out the lifetime ranges of variables. With REVIS, not only can users view the description of their errors next to their code, they also get an intuitive sense of the lifetime range, which helps them understand the error.

Figure 6. Command line error message for the error in figure 1

There are existing visualization frameworks for Rust, such as RustViz (Almeida et al., 2022). RustViz enables instructors to create diagrams for Rust programs to help students understand lifetime-related concepts and rules. RustViz could also be used to show errors. For example, figure 7 shows the same error as in figure 1. Compared to REVIS, RustViz supports a richer visual vocabulary, since it supports creating diagrams for arbitrary sequences of Rust ownership events. REVIS uses a more limited vocabulary with the goal of making it easier to read the specific errors we wanted it to display. Our approach allows us to explain problems in a more highly-customized way for each error message.

## 8. Conclusion

This paper described REVIS, a Visual Studio Code plugin that provides visualizations for lifetime-related error messages emitted by the Rust compiler. We conducted a preliminary evaluation in which we collected snapshots of students' work in a compilers course.
Although we did not have enough participants to assess whether REVIS helped reduce error-fixing time, we observed that most error-fixing time was spent fixing errors that occur in other languages as well. If our data are representative of general Rust development, this could mean that Rust's type system does not impose a significant burden, in terms of time spent fixing errors, compared to other languages.

## Acknowledgments

We thank our study participants for submitting their data. We also thank Roland Rodriguez and Esteban Kuber from Amazon for insights on the future development of REVIS.
2308.00118
A game-theoretic analysis of baccara chemin de fer, II
In a previous paper, we considered several models of the parlor game baccara chemin de fer, including Model B2 (a $2\times2^{484}$ matrix game) and Model B3 (a $2^5\times2^{484}$ matrix game), both of which depend on a positive-integer parameter $d$, the number of decks. The key to solving the game under Model B2 was what we called Foster's algorithm, which applies to additive $2\times2^n$ matrix games. Here "additive" means that the payoffs are additive in the $n$ binary choices that comprise a player II pure strategy. In the present paper, we consider analogous models of the casino game baccara chemin de fer that take into account the $100\,\alpha$ percent commission on Banker (player II) wins, where $0\le\alpha\le1/10$. Thus, the game now depends not just on the discrete parameter $d$ but also on a continuous parameter $\alpha$. Moreover, the game is no longer zero sum. To find all Nash equilibria under Model B2, we generalize Foster's algorithm to additive $2\times2^n$ bimatrix games. We find that, with rare exceptions, the Nash equilibrium is unique. We also obtain a Nash equilibrium under Model B3, based on Model B2 results, but here we are unable to prove uniqueness.
Stewart N. Ethier, Jiyeon Lee
2023-07-31T19:36:27Z
http://arxiv.org/abs/2308.00118v2
# A Game-Theoretic Analysis of _Baccara Chemin de Fer_, II

###### Abstract

In a previous paper, we considered several models of the parlor game _baccara chemin de fer_, including Model B2 (a \(2\times 2^{484}\) matrix game) and Model B3 (a \(2^{5}\times 2^{484}\) matrix game), both of which depend on a positive-integer parameter \(d\), the number of decks. The key to solving the game under Model B2 was what we called Foster's algorithm, which applies to additive \(2\times 2^{n}\) matrix games. Here "additive" means that the payoffs are additive in the \(n\) binary choices that comprise a player II pure strategy. In the present paper, we consider analogous models of the casino game _baccara chemin de fer_ that take into account the \(100\,\alpha\) percent commission on Banker (player II) wins, where \(0\leq\alpha\leq 1/10\). Thus, the game now depends not just on the discrete parameter \(d\) but also on a continuous parameter \(\alpha\). Moreover, the game is no longer zero sum. To find all Nash equilibria under Model B2, we generalize Foster's algorithm to additive \(2\times 2^{n}\) bimatrix games. We find that, with rare exceptions, the Nash equilibrium is unique. We also obtain a Nash equilibrium under Model B3, based on Model B2 results, but here we are unable to prove uniqueness.

**Keywords**: _baccara_; _chemin de fer_; sampling without replacement; bimatrix game; best response; Nash equilibrium; Foster's algorithm

**Classification**: MSC primary 91A05; secondary 91A60

## 1 Introduction

The parlor game _baccara chemin de fer_ was one of the motivating examples that led to the development of noncooperative two-person game theory (Borel, 1924). We can classify game-theoretic models of _baccara_ in two ways. First, according to how the cards are dealt:

* Model A (with replacement). Cards are dealt with replacement from a single deck.
* Model B (without replacement). Cards are dealt without replacement from a \(d\)-deck shoe.

And second, according to the information available to Player and Banker about their own two-card hands:

* Model 1 (Player total, Banker total). Each of Player and Banker sees the total of his own two-card hand but not its composition.
* Model 2 (Player total, Banker composition). Banker sees the composition of his own two-card hand while Player sees only his own total.
* Model 3 (Player composition, Banker composition). Each of Player and Banker sees the composition of his own two-card hand.

(We do not consider the fourth possibility.) Under Model A1 _baccara_ is a \(2\times 2^{88}\) matrix game, which was solved by Kemeny and Snell (1957). Under Model B2 _baccara_ is a \(2\times 2^{484}\) matrix game, which was solved in part by Downton and Lockwood (1975) and in full by Ethier and Gamez (2013). Under Model B3 _baccara_ is a \(2^{5}\times 2^{484}\) matrix game, which was solved in part by Ethier and Gamez (2013). Each of these works was concerned with the parlor game _baccara chemin de fer_, in contrast to the casino game. The rules of the parlor game, which also apply to the casino game, are as in Ethier and Gamez (2013): The role of Banker rotates among the players (counter-clockwise), changing hands after a Banker loss or when Banker chooses to relinquish his role. Banker announces the amount he is willing to risk, and the total amount bet on Player's hand cannot exceed that amount. After a Banker win, all winnings must be added to the bank unless Banker chooses to withdraw. The game is played with six standard 52-card decks mixed together.
Denominations A, 2-9, 10, J, Q, K have values 1, 2-9, 0, 0, 0, 0, respectively, and suits are irrelevant. The total of a hand, comprising two or three cards, is the sum of the values of the cards, modulo 10. In other words, only the final digit of the sum is used to evaluate a hand. Two cards are dealt face down to Player and two face down to Banker, and each looks only at his own hand. The object of the game is to have the higher total (closer to 9) at the end of play. A two-card total of 8 or 9 is a _natural_. If either hand is a natural, the game is over. If neither hand is a natural, Player then has the option of drawing a third card. If he exercises this option, his third card is dealt face up. Next, Banker, observing Player's third card, if any, has the option of drawing a third card. This completes the game, and the higher total wins. Winning bets on Player's hand are paid by Banker at even odds. Losing bets on Player's hand are collected by Banker. Hands of equal total result in a tie or a _push_ (no money is exchanged). Since several players can bet on Player's hand, Player's strategy is restricted. He must draw on a two-card total of 4 or less and stand on a two-card total of 6 or 7. When his two-card total is 5, he is free to stand or draw as he chooses. (The decision is usually made by the player with the largest bet.) Banker, on whose hand no one can bet, has no constraints on his strategy under classical rules. There is one important additional rule in the casino game: The house collects a five percent commission on Banker wins. (This commission has been known to be as high as ten percent; see Villiod (1906).) Thus, our aim in the present paper is to generalize the aforementioned results to allow for a \(100\,\alpha\) percent commission on Banker wins. We will assume that \(0\leq\alpha\leq 1/10\). This makes _baccara chemin de fer_ a bimatrix game instead of a matrix game, one that depends on a positive integer parameter \(d\) (under Model B), the number of decks, as well as a continuous parameter \(\alpha\) (under Model A or B), the commission on Banker wins. In the case of Model A1 all Nash equilibria were identified in an unpublished paper by the authors (Ethier and Lee, 2013), assuming only \(0\leq\alpha<2/5\). Under Model A1 and the present assumption (\(0\leq\alpha\leq 1/10\)), the Nash equilibrium is unique for each \(\alpha\). There are also unimportant additional rules in the casino game. Specifically, in modern casino _baccara chemin de fer_, Banker's strategy is severely restricted. With a few exceptions, these restrictions are benign, but because of the exceptions we ignore them entirely. Ethier and Gamez (2013) studied Models A2, A3, B1, B2, and B3 in the special case \(\alpha=0\). That was part I, and the present paper, with \(0\leq\alpha\leq 1/10\), is part II. To keep the paper from becoming unduly long, we will focus our attention on Models B2 and B3, leaving the simpler models A2, A3, and B1 to the interested reader. The key to solving the parlor game under Model B2 was what we called Foster's algorithm, which applies to additive \(2\times 2^{n}\) matrix games. Foster (1964) called it a computer technique. Here "additive" means that the payoffs are additive in the \(n\) binary choices that comprise a player II pure strategy. In Section 2 we generalize Foster's algorithm to additive \(2\times 2^{n}\) bimatrix games. The generalization is not straightforward. In Section 3 we show that, with rare exceptions, the Nash equilibrium is unique under Model B2. 
Uniqueness is important because it ensures that optimal strategies are unambiguous. The proof of uniqueness is computer assisted, with computations carried out in infinite precision using _Mathematica_. In Section 4 we obtain a Nash equilibrium under Model B3, based on Model B2 results, but here, just as for the parlor game, we are unable to prove uniqueness. ## 2 Two Lemmas for Additive Bimatrix Games A reduction lemma for additive \(m\times 2^{n}\) matrix games was stated by Ethier and Gamez (2013). It had already been used implicitly by Kemeny and Snell (1957), Foster (1964), and Downton and Lockwood (1975). Here we generalize to additive \(m\times 2^{n}\) bimatrix games. **Lemma 1** (Reduction by strict dominance).: _Let \(m\geq 2\) and \(n\geq 1\) and consider an \(m\times 2^{n}\) bimatrix game of the following form. Player I has \(m\) pure strategies, labeled \(0,1,\ldots,m-1\). Player II has \(2^{n}\) pure strategies, labeled by the subsets \(T\subset\{1,2,\ldots,n\}\). For \(u=0,1,\ldots,m-1\), there exist probabilities \(p_{u}(0)\geq 0\), \(p_{u}(1)>0\),..., \(p_{u}(n)>0\) with \(p_{u}(0)+p_{u}(1)+\cdots+p_{u}(n)=1\) together with a real number \(b_{u}(0)\), and for \(l=1,2,\ldots,n\), there exists a real \(m\times 2\) matrix_ \[\begin{pmatrix}b_{0,0}(l)&b_{0,1}(l)\\ b_{1,0}(l)&b_{1,1}(l)\\ \vdots&\vdots\\ b_{m-1,0}(l)&b_{m-1,1}(l)\end{pmatrix}.\] _Assume that the \(m\times 2^{n}\) bimatrix game has player II payoff matrix \(\mathbf{B}\) with \((u,T)\) entry given by_ \[b_{u,T}:=p_{u}(0)b_{u}(0)+\sum_{l\in T^{c}}p_{u}(l)b_{u,0}(l)+\sum_{l\in T}p_{u }(l)b_{u,1}(l)\] _for \(u\in\{0,1,\ldots,m-1\}\) and \(T\subset\{1,2,\ldots,n\}\). Here \(T^{c}:=\{1,2,\ldots,n\}-T\)._ _We define_ \[T_{0} :=\{l\in\{1,2,\ldots,n\}:b_{u,0}(l)>b_{u,1}(l)\text{ for }u=0,1, \ldots,m-1\},\] \[T_{1} :=\{l\in\{1,2,\ldots,n\}:b_{u,0}(l)<b_{u,1}(l)\text{ for }u=0,1, \ldots,m-1\},\] \[T_{*} :=\{1,2,\ldots,n\}-T_{0}-T_{1},\] _and put \(n_{*}:=|T_{*}|\)._ _Then, given \(T\subset\{1,2,\ldots,n\}\), player II's pure strategy \(T\) is strictly dominated unless \(T_{1}\subset T\subset T_{1}\cup T_{*}\). Therefore, the \(m\times 2^{n}\) bimatrix game can be reduced to an \(m\times 2^{n_{*}}\) bimatrix game with no loss of Nash equilibria._ _Remark_.: The game can be thought of as follows. Player I chooses a pure strategy \(u\in\{0,1,\ldots,m-1\}\). Then Nature chooses a random variable \(Z_{u}\) with distribution \(P(Z_{u}=l)=p_{u}(l)\) for \(l=0,1,\ldots,n\). Given that \(Z_{u}=0\), the game is over and player II's conditional expected payoff is \(b_{u}(0)\). If \(Z_{u}\in\{1,2,\ldots,n\}\), then player II observes \(Z_{u}\) (but not \(u\)) and based on this information chooses a "move" \(j\in\{0,1\}\). Given that \(Z_{u}=l\) and player II chooses move \(0\) (resp., move \(1\)), player II's conditional expected payoff is \(b_{u,0}(l)\) (resp., \(b_{u,1}(l)\)). Thus, player II's pure strategies can be identified with subsets \(T\subset\{1,2,\ldots,n\}\), with player II choosing move \(0\) if \(Z_{u}\in T^{c}\) and move \(1\) if \(Z_{u}\in T\). The lemma implies that, regardless of player I's strategy choice, it is optimal for player II to choose move \(0\) if \(Z_{u}\in T_{0}\) and move \(1\) if \(Z_{u}\in T_{1}\). Proof.: Suppose that the condition \(T_{1}\subset T\subset T_{1}\cup T_{*}\) fails. There are two cases. In case 1, there exists \(l_{0}\in T_{1}\) with \(l_{0}\notin T\). Here define \(T^{\prime}:=T\cup\{l_{0}\}\). 
In case 2, there exists \(l_{0}\in T\) with \(l_{0}\notin T_{1}\cup T_{*}\) (so \(l_{0}\in T_{0}\)). Here define \(T^{\prime}:=T-\{l_{0}\}\). Then, for \(u=0,1,\ldots,m-1\), \[b_{u,T^{\prime}} =p_{u}(0)b_{u}(0)+\sum_{l\in(T^{\prime})^{c}}p_{u}(l)b_{u,0}(l)+ \sum_{l\in T^{\prime}}p_{u}(l)b_{u,1}(l)\] \[=p_{u}(0)b_{u}(0)+\sum_{l\in T^{c}}p_{u}(l)b_{u,0}(l)+\sum_{l\in T }p_{u}(l)b_{u,1}(l)\] \[\qquad\qquad\qquad\pm p_{u}(l_{0})(b_{u,1}(l_{0})-b_{u,0}(l_{0}))\] \[>p_{u}(0)b_{u}(0)+\sum_{l\in T^{c}}p_{u}(l)b_{u,0}(l)+\sum_{l\in T}p_{u }(l)b_{u,1}(l)\] \[=b_{u,T},\] where the \(\pm\) sign is a plus sign in case 1 and a minus sign in case 2. This tells us that player II's pure strategy \(T\) is strictly dominated by pure strategy \(T^{\prime}\), as required. Ethier and Gamez (2013) formulated Foster's (1964) algorithm for solving additive \(2\times 2^{n}\) matrix games. Here we generalize that result to additive \(2\times 2^{n}\) bimatrix games. **Lemma 2** (Foster's algorithm).: _Let \(n\geq 1\) and consider a \(2\times 2^{n}\) bimatrix game of the following form. Player I has two pure strategies, labeled \(0\) and \(1\). Player II has \(2^{n}\) pure strategies, labeled by the subsets \(T\subset\{1,2,\ldots,n\}\). For \(u=0,1\), there exist probabilities \(p_{u}(0)\geq 0\), \(p_{u}(1)>0\),..., \(p_{u}(n)>0\) with \(p_{u}(0)+p_{u}(1)+\cdots+p_{u}(n)=1\) together with a real number \(b_{u}(0)\), and for \(l=1,2,\ldots,n\), there exists a real \(2\times 2\) matrix_ \[\begin{pmatrix}b_{0,0}(l)&b_{0,1}(l)\\ b_{1,0}(l)&b_{1,1}(l)\end{pmatrix}.\] _Assume that the \(2\times 2^{n}\) bimatrix game has payoff bimatrix \((\boldsymbol{A},\boldsymbol{B})\) with \((u,T)\) entry given by \((a_{u,T},b_{u,T})\), where \(a_{u,T}\) is an arbitrary real number and_ \[b_{u,T}:=p_{u}(0)b_{u}(0)+\sum_{l\in T^{c}}p_{u}(l)b_{u,0}(l)+\sum_{l\in T}p_{ u}(l)b_{u,1}(l)\] _for \(u\in\{0,1\}\) and \(T\subset\{1,2,\ldots,n\}\). Here \(T^{c}:=\{1,2,\ldots,n\}-T\)._ _We define_ \[T_{0,0} :=\{l\in\{1,2,\ldots,n\}:b_{0,0}(l)>b_{0,1}(l)\text{ and }b_{1,0}(l)>b_{1,1}(l)\},\] \[T_{0,1} :=\{l\in\{1,2,\ldots,n\}:b_{0,0}(l)\geq b_{0,1}(l)\text{ and }b_{1,0}(l)\leq b_{1,1}(l)\] \[\text{with at least one of these two inequalities strict}\},\] \[T_{1,0} :=\{l\in\{1,2,\ldots,n\}:b_{0,0}(l)\leq b_{0,1}(l)\text{ and }b_{1,0}(l)\geq b_{1,1}(l)\] \[\text{with at least one of these two inequalities strict}\},\] \[T_{1,1} :=\{l\in\{1,2,\ldots,n\}:b_{0,0}(l)<b_{0,1}(l)\text{ and }b_{1,0}(l)<b_{1,1}(l)\},\] _and assume that \(T_{0,0}\cup T_{0,1}\cup T_{1,0}\cup T_{1,1}=\{1,2,\ldots,n\}\)._ (a) _If player I uses the mixed strategy \((1-p,p)\) for some \(p\in[0,1]\), then player II's unique best response is the pure strategy_ \[T(p):=T_{1,1}\cup\{l\in T_{0,1}:p(l)<p\}\cup\{l\in T_{1,0}:p(l)>p\}, \tag{1}\] _where_ \[p(l):=\frac{p_{0}(l)[b_{0,1}(l)-b_{0,0}(l)]}{p_{0}(l)[b_{0,1}(l)-b_{0,0}(l)]-p _{1}(l)[b_{1,1}(l)-b_{1,0}(l)]},\] provided \(p\) does not belong to \(\{p(l):l\in T_{0,1}\cup T_{1,0}\}\). If \(p=p(l)\) for exactly one choice of \(l\in T_{0,1}\cup T_{1,0}\), namely \(l^{\prime}\), then player II's set of best responses is the set of mixtures of the two pure strategies \(T(p)\), as in Equation (1), and \(T(p)\cup\{l^{\prime}\}\). 
If \(p=p(l)\) for exactly two choices of \(l\in T_{0,1}\cup T_{1,0}\), namely \(l^{\prime}\) and \(l^{\prime\prime}\), then player II's set of best responses is the set of mixtures of the four pure strategies \(T(p)\), as in Equation (1), \(T(p)\cup\{l^{\prime}\}\), \(T(p)\cup\{l^{\prime\prime}\}\), and \(T(p)\cup\{l^{\prime},l^{\prime\prime}\}\)._ (b) _For each \(p\in[0,1]\) with \(p\notin\{p(l):l\in T_{0,1}\cup T_{1,0}\}\), assume that \(a_{0,T(p)}\neq a_{1,T(p)}\). Assume also that \(a_{0,T(0)}<a_{1,T(0)}\) and \(a_{0,T(1)}>a_{1,T(1)}\). Then every Nash equilibrium \((\boldsymbol{p},\boldsymbol{q})\) must have \(\boldsymbol{p}=(1-p(l),p(l))\) for some \(l\in T_{0,1}\cup T_{1,0}\)._ (c) _Under the assumptions of part_ (b)_, if \(p=p(l)\) for exactly one choice of \(l\in T_{0,1}\cup T_{1,0}\), namely \(l^{\prime}\), then every Nash equilibrium \((\boldsymbol{p},\boldsymbol{q})\) must have \(\boldsymbol{p}=(1-p,p)\) and \(\boldsymbol{q}\) with entries \(1-q\) and \(q\in[0,1]\) at the coordinates corresponding to player II pure strategies \(T(p)\) and \(T(p)\cup\{l^{\prime}\}\) (\(0\)'s elsewhere)_, where_ \[(1-q)a_{0,T(p)}+q\,a_{0,T(p)\cup\{l^{\prime}\}}=(1-q)a_{1,T(p)}+q\,a_{1,T(p)\cup\{l^{\prime}\}}. \tag{2}\] \(\boldsymbol{q}\) _is called an equalizing strategy._ (d) _Under the assumptions of part_ (b)_, if \(p=p(l)\) for exactly two choices of \(l\in T_{0,1}\cup T_{1,0}\), namely \(l^{\prime}\) and \(l^{\prime\prime}\), then every Nash equilibrium \((\boldsymbol{p},\boldsymbol{q})\) must have \(\boldsymbol{p}=(1-p,p)\) and \(\boldsymbol{q}\) with entries \(q,q^{\prime},q^{\prime\prime},q^{\prime\prime\prime}\in[0,1]\) (with \(q+q^{\prime}+q^{\prime\prime}+q^{\prime\prime\prime}=1\)) at the coordinates corresponding to player II pure strategies \(T(p)\), \(T(p)\cup\{l^{\prime}\}\), \(T(p)\cup\{l^{\prime\prime}\}\), and \(T(p)\cup\{l^{\prime},l^{\prime\prime}\}\) (\(0\)'s elsewhere)_, where_ \[q\,a_{0,T(p)}+q^{\prime}a_{0,T(p)\cup\{l^{\prime}\}}+q^{\prime\prime}a_{0,T(p)\cup\{l^{\prime\prime}\}}+q^{\prime\prime\prime}a_{0,T(p)\cup\{l^{\prime},l^{\prime\prime}\}}\] \[\qquad=q\,a_{1,T(p)}+q^{\prime}a_{1,T(p)\cup\{l^{\prime}\}}+q^{\prime\prime}a_{1,T(p)\cup\{l^{\prime\prime}\}}+q^{\prime\prime\prime}a_{1,T(p)\cup\{l^{\prime},l^{\prime\prime}\}}.\] _Again, \(\boldsymbol{q}\) is called an equalizing strategy._ _Remark_.: (a) Lemma 1 implies that every player II pure strategy \(T\) that does not satisfy \(T_{1,1}\subset T\subset T_{1,1}\cup T_{0,1}\cup T_{1,0}\) is strictly dominated. Thus, the \(2\times 2^{n}\) bimatrix game can be reduced to a \(2\times 2^{n_{*}}\) bimatrix game, where \(n_{*}:=|T_{0,1}\cup T_{1,0}|\), with no loss of Nash equilibria. (b) The reason for referring to this lemma as an algorithm is that it gives straightforward conditions for determining all Nash equilibria. These conditions primarily involve checking for equalizing strategies in a limited number of cases. Proof.: (a) For \(T(p)\) to be player II's unique best response, it must be the case that \(T\mapsto(1-p)b_{0,T}+p\,b_{1,T}\) is uniquely maximized at \(T=T(p)\). Now, for arbitrary \(T\) that excludes \(l^{\prime}\), the additivity of player II's payoffs implies that \[(1-p)b_{0,T\cup\{l^{\prime}\}}+p\,b_{1,T\cup\{l^{\prime}\}}>(1-p)b_{0,T}+p\,b_{1,T} \tag{3}\] if and only if \[(1-p)p_{0}(l^{\prime})b_{0,1}(l^{\prime})+p\,p_{1}(l^{\prime})b_{1,1}(l^{\prime})>(1-p)p_{0}(l^{\prime})b_{0,0}(l^{\prime})+p\,p_{1}(l^{\prime})b_{1,0}(l^{\prime}).
\tag{4}\] But Inequality (4) is equivalent to \[(1-p)p_{0}(l^{\prime})[b_{0,1}(l^{\prime})-b_{0,0}(l^{\prime})]+p\,p_{1}(l^{ \prime})[b_{1,1}(l^{\prime})-b_{1,0}(l^{\prime})]>0, \tag{5}\] which holds if and only if \[l^{\prime}\in T_{1,1}\cup\{l\in T_{0,1}:p(l)<p\}\cup\{l\in T_{1,0}:p(l)>p\}=:T(p).\] Similarly, \[(1-p)b_{0,T\cup\{l^{\prime}\}}+p\,b_{1,T\cup\{l^{\prime}\}}<(1-p)b_{0,T}+p\,b_{1,T}\] if and only if \[l^{\prime}\in T_{0,0}\cup\{l\in T_{0,1}:p(l)>p\}\cup\{l\in T_{1,0}:p(l)<p\}. \tag{6}\] If we assume that \(p\notin\{p(l):l\in T_{0,1}\cup T_{1,0}\}\), then Inclusion (6) is equivalent to \(l^{\prime}\in T(p)^{c}\). The first conclusion of part (a) follows. For the second conclusion, notice that Inequalities (3)-(5), with the inequalities replaced by equalities, are equivalent to each other and to \(p(l^{\prime})=p\). This suffices. The third conclusion follows similarly. (b) For \(p\in[0,1]\) with \(p\notin\{p(l):l\in T_{0,1}\cup T_{1,0}\}\), we have seen that the pure strategy \(T(p)\) is the unique best response to \(\boldsymbol{p}=(1-p,p)\). However, for \(0<p<1\), the mixed strategy \(\boldsymbol{p}\) cannot be a best response to the pure strategy \(T(p)\) unless \(a_{0,T(p)}=a_{1,T(p)}\), which has been ruled out. To extend this to \(p=0\) and \(p=1\), we note that neither \((0,T(0))\) nor \((1,T(1))\) is a pure Nash equilibrium, by virtue of the other assumptions of part (b). (c) We assume that \(p=p(l^{\prime})\) for a unique \(l^{\prime}\in T_{0,1}\cup T_{1,0}\). By part (a), any mixture of the pure strategies \(T(p)\) and \(T(p)\cup\{l^{\prime}\}\) will be a best response to the mixed strategy \(\boldsymbol{p}=(1-p,p)\), but at most one such mixture, namely the equalizing strategy that chooses \(T(p)\) with probability \(1-q\) and \(T(p)\cup\{l^{\prime}\}\) with probability \(q\), where \(q\) satisfies Equation (2), will result in a Nash equilibrium. (d) The proof is similar to that of part (c). ## 3 Model B2 In this section we study Model B2. Here cards are dealt without replacement from a \(d\)-deck shoe, and Player sees only the total of his two-card hand, while Banker sees the composition of his two-card hand. Player has a stand-or-draw decision at two-card totals of 5, and Banker has a stand-or-draw decision in \(44\times 11=484\) situations (44 compositions corresponding to Banker totals of 0-7, and 11 Player third-card values, 0-9 and \(\varnothing\)), so _baccara chemin de fer_ is a \(2\times 2^{484}\) bimatrix game. We denote Player's two-card hand by \((X_{1},X_{2})\), where \(0\leq X_{1}\leq X_{2}\leq 9\), Banker's two-card hand by \((Y_{1},Y_{2})\), where \(0\leq Y_{1}\leq Y_{2}\leq 9\), and Player's and Banker's third-card values (possibly \(\varnothing\)) by \(X_{3}\) and \(Y_{3}\). Note, for example, that \(X_{1}\) and \(X_{2}\) are not Player's first- and second-card values; rather, they are the smaller and larger values of Player's first two cards. We define the function \(M\) on the set of nonnegative integers by \[M(i):=\mathrm{mod}(i,10), \tag{7}\] that is, \(M(i)\) is the remainder when \(i\) is divided by \(10\). We define Player's two-card total by \(X:=M(X_{1}+X_{2})\). We further denote by \(G_{0}\) and \(G_{1}\) Banker's profit in the parlor game from standing and from drawing, respectively, assuming a one-unit bet. 
More precisely, for \(v=0\) (Banker stands) and \(v=1\) (Banker draws), \[G_{v}:=\begin{cases}1&\text{if Banker wins},\\ 0&\text{if a tie occurs},\\ -1&\text{if Player wins}.\end{cases}\] We next define the relevant probabilities when cards are dealt without replacement. Let \[p_{4}((i_{1},i_{2}),(j_{1},j_{2})):= (2-\delta_{i_{1},i_{2}})\frac{4d(1+3\delta_{i_{1},0})}{52d}\cdot \frac{4d(1+3\delta_{i_{2},0})-\delta_{i_{2},i_{1}}}{52d-1}\] \[\cdot(2-\delta_{j_{1},j_{2}})\frac{4d(1+3\delta_{j_{1},0})- \delta_{j_{1},i_{1}}-\delta_{j_{1},i_{2}}}{52d-2}\] \[\cdot\frac{4d(1+3\delta_{j_{2},0})-\delta_{j_{2},i_{1}}-\delta_{ j_{2},i_{2}}-\delta_{j_{2},j_{1}}}{52d-3} \tag{8}\] be the probability that Player's two-card hand is \((i_{1},i_{2})\), where \(0\leq i_{1}\leq i_{2}\leq 9\), and Banker's two-card hand is \((j_{1},j_{2})\), where \(0\leq j_{1}\leq j_{2}\leq 9\). To elaborate on this formula, we note that the order of the cards within a two-card hand is irrelevant, so the hand comprising \(i_{1}\) and \(i_{2}\) can be written as \((\min(i_{1},i_{2}),\max(i_{1},i_{2}))\), and the factor \((2-\delta_{i_{1},i_{2}})\) adjusts the probability accordingly. The factors of the form \((1+3\delta_{i_{1},0})\) take into account the fact that cards valued as \(0\) are four times as frequent as cards valued as \(1\), for example. Finally, the terms of the form \(\,-\delta_{i_{2},i_{1}}\) are the effects of previously dealt cards. In practice, the order in which the first four cards are dealt is Player, Banker, Player, Banker. But it is mathematically equivalent, and slightly more convenient, to assume that the order is Player, Player, Banker, Banker. Second, \[p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k):= p_{4}((i_{1},i_{2}),(j_{1},j_{2}))\] \[\cdot\frac{4d(1+3\delta_{k,0})-\delta_{k,i_{1}}-\delta_{k,i_{2}} -\delta_{k,j_{1}}-\delta_{k,j_{2}}}{52d-4} \tag{9}\] is the probability that Player's two-card hand is \((i_{1},i_{2})\), where \(0\leq i_{1}\leq i_{2}\leq 9\) and \(M(i_{1}+i_{2})\leq 7\), Banker's two-card hand is \((j_{1},j_{2})\), where \(0\leq j_{1}\leq j_{2}\leq 9\) and \(M(j_{1}+j_{2})\leq 7\), and the fifth card dealt has value \(k\in\{0,1,\ldots,9\}\). Note that \(\sum_{0\leq k\leq 9}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k)=p_{4}((i_{1},i_{2}),(j_{1},j_ {2}))\). Third, \[p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l):= p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k)\] \[\cdot\frac{4d(1+3\delta_{l,0})-\delta_{l,i_{1}}-\delta_{l,i_{2}} -\delta_{l,j_{1}}-\delta_{l,j_{2}}-\delta_{l,k}}{52d-5} \tag{10}\] is the probability that Player's two-card hand is \((i_{1},i_{2})\), where \(0\leq i_{1}\leq i_{2}\leq 9\) and \(M(i_{1}+i_{2})\leq 7\), Banker's two-card hand is \((j_{1},j_{2})\), where \(0\leq j_{1}\leq j_{2}\leq 9\) and \(M(j_{1}+j_{2})\leq 7\), the fifth card dealt has value \(k\in\{0,1,\ldots,9\}\), and the sixth card dealt has value \(l\in\{0,1,\ldots,9\}\). Note that \(\sum_{0\leq l\leq 9}p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l)=p_{5}((i_{1},i_{2}),(j_{1 },j_{2}),k)\). 
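For concreteness, Equation (8) transcribes directly into code; the following Rust helper is our own hypothetical sketch (names and the floating-point representation are ours):

```rust
/// Kronecker delta as a floating-point factor.
fn delta(a: u32, b: u32) -> f64 {
    if a == b { 1.0 } else { 0.0 }
}

/// Number of cards of value `v` in a fresh d-deck shoe; value 0 is four
/// times as frequent because 10, J, Q, K all count as 0.
fn num_cards(v: u32, d: f64) -> f64 {
    4.0 * d * (1.0 + 3.0 * delta(v, 0))
}

/// Equation (8): probability that Player's two-card hand is (i1, i2) with
/// i1 <= i2 and Banker's is (j1, j2) with j1 <= j2, dealing without
/// replacement in the order Player, Player, Banker, Banker.
fn p4(i1: u32, i2: u32, j1: u32, j2: u32, d: f64) -> f64 {
    let n = 52.0 * d;
    (2.0 - delta(i1, i2))
        * num_cards(i1, d) / n
        * (num_cards(i2, d) - delta(i2, i1)) / (n - 1.0)
        * (2.0 - delta(j1, j2))
        * (num_cards(j1, d) - delta(j1, i1) - delta(j1, i2)) / (n - 2.0)
        * (num_cards(j2, d) - delta(j2, i1) - delta(j2, i2) - delta(j2, j1)) / (n - 3.0)
}
```

Equations (9) and (10) extend \(p_{4}\) in the same way, multiplying by one additional factor for each further card dealt.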
Given a function \(f\) on the set of integers, let us define, for \(u\in\{0,1\}\), \(0\leq j_{1}\leq j_{2}\leq 9\) with \(M(j_{1}+j_{2})\leq 7\), and \(k\in\{0,1,\ldots,9\}\),
\[e_{u,0}((j_{1},j_{2}),k):=\sum_{i=0}^{4+u}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}f(M(j_{1}+j_{2})-M(i+k))\,p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k)\]
\[\bigg{/}\sum_{i=0}^{4+u}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k) \tag{11}\]
and
\[e_{u,1}((j_{1},j_{2}),k):=\sum_{i=0}^{4+u}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}\sum_{l=0}^{9}f(M(j_{1}+j_{2}+l)-M(i+k))\,p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l)\]
\[\bigg{/}\sum_{i=0}^{4+u}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}\sum_{l=0}^{9}p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l), \tag{12}\]
where \(u\in\{0,1\}\) denotes Player's pure strategy (\(u=0\) if Player always stands on two-card totals of \(5\), \(u=1\) if Player always draws on two-card totals of \(5\)). Notice that the denominators of Equation (11) and Equation (12) are equal; we denote their common value by \(p_{u}((j_{1},j_{2}),k)\). We further define, for \(u\in\{0,1\}\) and \(0\leq j_{1}\leq j_{2}\leq 9\) with \(M(j_{1}+j_{2})\leq 7\),
\[e_{u,0}((j_{1},j_{2}),\varnothing):=\sum_{i=5+u}^{7}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}f(M(j_{1}+j_{2})-i)\,p_{4}((i_{1},i_{2}),(j_{1},j_{2}))\]
\[\bigg{/}\sum_{i=5+u}^{7}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}p_{4}((i_{1},i_{2}),(j_{1},j_{2})) \tag{13}\]
and
\[e_{u,1}((j_{1},j_{2}),\varnothing):=\sum_{i=5+u}^{7}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}\sum_{l=0}^{9}f(M(j_{1}+j_{2}+l)-i)\,p_{5}((i_{1},i_{2}),(j_{1},j_{2}),l)\]
\[\bigg{/}\sum_{i=5+u}^{7}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}\sum_{l=0}^{9}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),l), \tag{14}\]
where \(u\in\{0,1\}\) has the same interpretation as above. Notice that the denominators of Equation (13) and Equation (14) are equal; we denote their common value by \(p_{u}((j_{1},j_{2}),\varnothing)\). We turn to Banker's payoffs in the casino game. Let us define
\[f(x):=\operatorname{sgn}(x)-\alpha\,\mathbf{1}_{(0,\infty)}(x)=\begin{cases}1-\alpha&\text{if }x>0,\\ 0&\text{if }x=0,\\ -1&\text{if }x<0.\end{cases} \tag{15}\]
If Banker has two-card hand \((j_{1},j_{2})\), where \(0\leq j_{1}\leq j_{2}\leq 9\) and \(M(j_{1}+j_{2})\leq 7\), and Player's third-card value is \(k\in\{0,1,\ldots,9\}\), then Banker's standing \((v=0)\) and drawing \((v=1)\) expectations are, with \(f\) as in Equation (15),
\[b_{u,v}((j_{1},j_{2}),k)\]
\[\quad:=E[G_{v}-\alpha\,\mathbf{1}_{\{G_{v}=1\}}\mid X\in\{0,1,\ldots,4+u\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=k]\]
\[\quad=E[G_{v}\mid X\in\{0,1,\ldots,4+u\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=k]\]
\[\quad\quad-\alpha\,P(G_{v}=1\mid X\in\{0,1,\ldots,4+u\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=k)\]
\[\quad=e_{u,v}((j_{1},j_{2}),k),\quad u,v\in\{0,1\}, \tag{16}\]
where \(u\) denotes Player's pure strategy (\(u=0\) if Player always stands at two-card totals of \(5\), \(u=1\) if Player always draws at two-card totals of \(5\)). Here \(100\,\alpha\) is the percent commission on Banker wins. Throughout we assume that \(0\leq\alpha\leq 1/10\).
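In the same sketch style, Equation (15) becomes a small helper (ours, not the authors' code), where the argument is Banker's final total minus Player's final total:

```rust
use std::cmp::Ordering;

/// Equation (15): Banker's per-unit payoff, with a 100*alpha percent
/// commission charged on Banker wins only.
fn banker_payoff(x: i32, alpha: f64) -> f64 {
    match x.cmp(&0) {
        Ordering::Greater => 1.0 - alpha, // Banker wins, less the commission
        Ordering::Equal => 0.0,           // tie: no money changes hands
        Ordering::Less => -1.0,           // Player wins
    }
}
```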
If Banker has two-card hand \((j_{1},j_{2})\), where \(0\leq j_{1}\leq j_{2}\leq 9\) and \(M(j_{1}+j_{2})\leq 7\), and Player stands, then Banker's standing \((v=0)\) and drawing \((v=1)\) expectations are, with \(f\) as in Equation (15),
\[b_{u,v}((j_{1},j_{2}),\varnothing)\]
\[\quad:=E[G_{v}-\alpha\,\mathbf{1}_{\{G_{v}=1\}}\mid X\in\{5+u,\ldots,7\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=\varnothing]\]
\[\quad=E[G_{v}\mid X\in\{5+u,\ldots,7\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=\varnothing]\]
\[\quad\quad-\alpha\,P(G_{v}=1\mid X\in\{5+u,\ldots,7\},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=\varnothing)\]
\[\quad=e_{u,v}((j_{1},j_{2}),\varnothing),\quad u,v\in\{0,1\}, \tag{17}\]
where \(u\) denotes Player's pure strategy, as above. We now define the payoff bimatrix \((\boldsymbol{A},\boldsymbol{B})\) to have \((u,T)\) entry \((a_{u,T},b_{u,T})\) for \(u\in\{0,1\}\) and \(T\subset\{(j_{1},j_{2}):0\leq j_{1}\leq j_{2}\leq 9,\,M(j_{1}+j_{2})\leq 7\}\times\{0,1,\ldots,9,\varnothing\}\), where
\[b_{u,T}:=p_{u}(0)b_{u}(0)+\sum_{\begin{subarray}{c}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{subarray}}\sum_{\begin{subarray}{c}k\in\{0,1,\ldots,9,\varnothing\}:\\ ((j_{1},j_{2}),k)\in T^{c}\end{subarray}}p_{u}((j_{1},j_{2}),k)b_{u,0}((j_{1},j_{2}),k)\]
\[+\sum_{\begin{subarray}{c}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{subarray}}\sum_{\begin{subarray}{c}k\in\{0,1,\ldots,9,\varnothing\}:\\ ((j_{1},j_{2}),k)\in T\end{subarray}}p_{u}((j_{1},j_{2}),k)b_{u,1}((j_{1},j_{2}),k),\]
using Equations (16) and (17) and
\[p_{u}(0)b_{u}(0):=-\alpha\sum_{j=8}^{9}\,\sum_{i=0}^{j-1}\,\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=i\end{subarray}}\,\sum_{\begin{subarray}{c}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})=j\end{subarray}}p_{4}((i_{1},i_{2}),(j_{1},j_{2}))\]
\[=-\frac{32\,\alpha\,d^{2}(37120\,d^{2}-4044\,d+109)}{(52\,d)_{4}}, \tag{18}\]
and where \(a_{u,T}:=-b_{u,T}\) with \(\alpha=0\). Here \((z)_{r}:=z(z-1)\cdots(z-r+1)\). Notice that Equation (18) does not depend on \(u\). We want to find all Nash equilibria of the casino game _baccara chemin de fer_ under Model B2, for all positive integers \(d\) and for \(0\leq\alpha\leq 1/10\). We apply Lemma 2, Foster's algorithm. Lemma 1 also applies, reducing the game to \(2\times 2^{20}\), but that is not needed. We demonstrate the method by treating the case \(d=6\) and \(0\leq\alpha\leq 1/10\) in detail. Then we state results for all \(d\). The first step is to derive a preliminary version of Banker's optimal move at each information set for \(\alpha=0\) and for \(\alpha=1/10\). At only three of the \(44\times 11=484\) information sets does Banker's optimal move differ at \(\alpha=0\) and \(\alpha=1/10\). Because \(b_{u,v}((j_{1},j_{2}),k)\) is linear in \(\alpha\), if the optimal move at \(((j_{1},j_{2}),k)\) is the same for \(\alpha=0\) and \(\alpha=1/10\), then it is also the same for \(0\leq\alpha\leq 1/10\). Results are shown in Table 1. The sets \(\{p(l):l\in T_{0,1}\}\) and \(\{p(l):l\in T_{1,0}\}\) of Lemma 2 are the best-response discontinuities. In the present setting, the sets \(T_{0,1}\) and \(T_{1,0}\) depend on \(\alpha\), call them \(T_{0,1}^{\alpha}\) and \(T_{1,0}^{\alpha}\). We call \(\{p((j_{1},j_{2}),k):((j_{1},j_{2}),k)\in T_{0,1}^{0}\cup T_{0,1}^{1/10}\}\), which has 17 elements, and \(\{p((j_{1},j_{2}),k):((j_{1},j_{2}),k)\in T_{1,0}^{0}\cup T_{1,0}^{1/10}\}\), which has three elements, _best-response-discontinuity curves_.
The 20 such curves can be evaluated as follows: \[p((0,3),9) =\frac{471,\!143+1081\,\alpha}{24(49,\!118-22,\!303\,\alpha)},\quad\quad p((7,8),4)=\frac{22,\!301+223,\!099\,\alpha}{103,\!799},\] \[p((1,2),9) =\frac{475,\!514-3219\,\alpha}{24(48,\!930-22,\!303\,\alpha)},\quad\quad p((0,6),\varnothing)=\frac{477,\!191-54,\!732\,\alpha}{12(49,\!377-26,\!957\,\alpha)},\] \[p((4,9),9) =\frac{79,\!051-1643\,\alpha}{4(49,\!117-22,\!302\,\alpha)},\quad\quad p((1,5),\varnothing)=\frac{474,\!840-49,\!249\,\alpha}{24(24,\!298-13,\!265\,\alpha)},\] \[p((5,8),9) =\frac{459,\!978+5089\,\alpha}{24(48,\!334-21,\!947\,\alpha)},\quad\quad p((2,4),\varnothing)=\frac{486,\!444-56,\!617\,\alpha}{108(5486-2995\,\alpha)},\] \[p((6,7),9) =\frac{458,\!114+9555\,\alpha}{2(589,\!498-267,\!671\,\alpha)},\quad\quad p((3,3),\varnothing)=\frac{17(4903-621\,\alpha)}{144(691-377\,\alpha)},\] \[p((2,2),1) =\frac{732,\!517-127,\!942\,\alpha}{24(31,\!070-13,\!467\,\alpha)},\quad\quad p((7,9),\varnothing)=\frac{239,\!771-25,\!800\,\alpha}{42(7027-3824\,\alpha)},\] \[p((6,8),1) =\frac{676,\!141-99,\!462\,\alpha}{24(31,\!442-13,\!465\,\alpha)},\quad\quad p((8,8),\varnothing)=\frac{78,\!837-8300\,\alpha}{112(875-478\,\alpha)},\] \[p((7,7),1) =\frac{7(95,\!873-13,\!162\,\alpha)}{24(31,\!442-13,\!465\,\alpha)},\quad\quad p((1,5),6)=\frac{348,\!662-715,\!139\,\alpha}{24(13,\!068-8627\,\alpha)},\] \[p((0,5),4) =\frac{2(13,\!030+112,\!111\,\alpha)}{102,\!143},\qquad p((2,4),6)=\frac{116,\!325-239,\!444\,\alpha}{32(3272-2191\,\alpha)},\] \[p((6,9),4) =\frac{29,\!485+222,\!912\,\alpha}{103,\!799},\qquad p((3,3),6)=\frac{149,\!704-346,\!703\,\alpha}{288(561-373\,\alpha)}.\] In Figure 1 these 20 best-response-discontinuity curves are graphed simultaneously. Notice that \(p((0,3),9)\), \(p((1,2),9)\), \(p((4,9),9)\), \(p((5,8),9)\), and \(p((6,7),9)\) (red) intersect \(p((0,5),4)\), \(p((6,9),4)\), and \(p((7,8),4)\) (blue); \(p((6,8),1)\) and \(p((7,7),1)\) (green) and \(p((0,6),\varnothing)\), \(p((1,5),\varnothing)\), \(p((2,4),\varnothing)\), \(p((3,3),\varnothing)\), \(p((7,9),\varnothing)\), and \(p((8,8),\varnothing)\) (black) intersect \(p((3,3),6)\) (orange). Thus, there are 23 points of intersection. We notice also that three of the curves are only partially defined, in that they intersect the horizontal line \(p=1\). These are \(p((2,2),1)\) (green) and \(p((1,5),6)\) and \(p((2,4),6)\) (orange). They correspond to the three entries in Table 1 that differ at \(\alpha=0\) and \(\alpha=1/10\). \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Banker's & \multicolumn{11}{c}{Player's third-card value (\(\varnothing\) if Player stands)} \\ total & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \(\varnothing\) \\ \hline \(0,1,2\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) \\ \hline 3 & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{S}\) & \(\ast^{1}\) & \(\mathbf{D}\) \\ \hline 4 & \(\mathbf{S}\) & \(\ast^{2}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{D}\) \\ \hline 5 & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\ast^{3}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{D}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{D}\) \\ \hline 6 & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\ast^{5}\) & \(\mathbf{D}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\ast^{4}\) \\ \hline 7 & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) & \(\mathbf{S}\) \\ \hline \multicolumn{12}{c}{\(\alpha=0\)} \\ \hline \multicolumn{12}{l}{\({}^{1}\)\((3,9)\): \(((0,3),9)\), \(((1,2),9)\), \(((4,9),9)\), \(((5,8),9)\), \(((6,7),9)\)} \\ \multicolumn{12}{l}{\({}^{2}\)\((4,1)\): \(((0,4),1)\), \(((1,3),1)\), \(((5,9),1)\); \(((2,2),1)\); \(((6,8),1)\), \(((7,7),1)\)} \\ \multicolumn{12}{l}{\({}^{3}\)\((5,4)\): \(((0,5),4)\), \(((6,9),4)\), \(((7,8),4)\); \(((1,4),4)\), \(((2,3),4)\)} \\ \multicolumn{12}{l}{\({}^{4}\)\((6,\varnothing)\): \(((0,6),\varnothing)\), \(((1,5),\varnothing)\), \(((2,4),\varnothing)\), \(((3,3),\varnothing)\), \(((7,9),\varnothing)\), \(((8,8),\varnothing)\)} \\ \multicolumn{12}{l}{\({}^{5}\)\((6,6)\): \(((0,6),6)\), \(((7,9),6)\), \(((8,8),6)\); \(((1,5),6)\), \(((2,4),6)\); \(((3,3),6)\)} \\ \hline \end{tabular} \end{table} Table 1: Banker's optimal move (preliminary version) in the casino game _baccara chemin de fer_ under Model B2 with \(d=6\) and with \(\alpha=0\) and \(\alpha=1/10\), indicated by S (stand) or D (draw), except in the 20 cases indicated by S/D (stand if Player always stands at two-card totals of 5, draw if Player always draws at two-card totals of 5) or D/S (draw if Player always stands at two-card totals of 5, stand if Player always draws at two-card totals of 5) in which it depends on Player's pure strategy. We are now ready to identify the cases that must be checked for equalizing strategies. If \(r\) is the number of best-response-discontinuity curves and \(s\) is the number of points of intersection of these curves, then there are \(r+2s\) \(\alpha\)-intervals and \(s\) \(\alpha\)-values that must be checked for equalizing strategies. When \(d=6\) we have seen that \(r=20\) and \(s=23\), hence there are 66 \(\alpha\)-intervals and 23 \(\alpha\)-values that require attention. We have summarized these 89 cases in Tables 2 and 3. Let us provide more detail on Table 2. For each best-response-discontinuity curve, if it is intersected by \(m\) other best-response-discontinuity curves, that divides the interval \([0,1/10]\) into \(m+1\) subintervals, each of which contributes a row to Table 2. The Banker strategy for a given row can be deduced from Lemma 2. Let us consider row 44. The Banker strategy DDDDD-SSS-DDD-MSSSSD-DDD, together with Table 1, allows us to evaluate Player's \(2\times 2\) payoff matrix, which is \[\boldsymbol{A}=\begin{array}{cc}&\begin{array}{cc}\text{B: S at }((0,6),\varnothing)&\text{B: D at }((0,6),\varnothing)\end{array}\\ \begin{array}{c}\text{P: S at }5\\ \text{P: D at }5\end{array}&\left(\begin{array}{cc}-\frac{22,721,165,499}{1,525,814,595,305}&-\frac{3,606,648,223}{305,162,919,061}\\ -\frac{2,716,895,133}{217,973,513,615}&-\frac{20,151,297,323}{1,525,814,595,305}\end{array}\right)\!,\end{array}\] and solving \[\begin{pmatrix}1&-1\end{pmatrix}\boldsymbol{A}\begin{pmatrix}1-q\\ q\end{pmatrix}=0\] yields the equalizing strategy \[q=77,\!143,\!741/121,\!269,\!912. 
\tag{19}\] For row 45, Banker's strategy differs from that in row 44 only at \(((3,3),6)\) and we obtain \[\boldsymbol{A}=\begin{array}{cc}&\begin{array}{cc}\text{B: S at }((0,6),\varnothing)&\text{B: D at }((0,6),\varnothing)\end{array}\\ \begin{array}{c}\text{P: S at }5\\ \text{P: D at }5\end{array}&\left(\begin{array}{cc}-\frac{22,707,392,731}{1,525,814,595,305}&-\frac{18,019,468,347}{1,525,814,595,305}\\ -\frac{19,019,357,419}{1,525,814,595,305}&-\frac{20,152,388,811}{1,525,814,595,305}\end{array}\right)\!,\end{array}\] which yields the equalizing strategy \[q=76,\!834,\!069/121,\!269,\!912. \tag{20}\] Next we provide more detail on Table 3. In row 18, corresponding to the intersection of \(p((0,6),\varnothing)\) and \(p((3,3),6)\), which occurs at \[\alpha_{0}:=\frac{16,\!145,\!999,\!279-\sqrt{226,\!436,\!619,\!657,\!206,\!227,\!489}}{17,\!712,\!223,\!814}\approx 0.0620017,\] the Banker strategy DDDDD-SSS-DDD-MSSSSD-DDM, together with Table 1, allows us to evaluate Player's \(2\times 4\) payoff matrix, which is \[\text{B: SS}\quad\text{B: SD}\quad\text{B: DS}\quad\text{B: DD}\] \[\begin{array}{c}\text{P: S at }5\\ \text{P: D at }5\end{array}\left(\begin{array}{cccc}-\frac{22,707,392,731}{1,525,814,595,305}&-\frac{22,721,165,499}{1,525,814,595,305}&-\frac{18,019,468,347}{1,525,814,595,305}&-\frac{3,606,648,223}{305,162,919,061}\\ -\frac{19,019,357,419}{1,525,814,595,305}&-\frac{2,716,895,133}{217,973,513,615}&-\frac{20,152,388,811}{1,525,814,595,305}&-\frac{20,151,297,323}{1,525,814,595,305}\end{array}\right),\] where, for example, the Banker strategy SD means S at \(((0,6),\varnothing)\) and D at \(((3,3),6)\). There are exactly four equalizing strategies with supports of size two, namely \[(1-q,0,q,0),\qquad q=76,\!834,\!069/121,\!269,\!912, \tag{21}\] \[(1-q,0,0,q),\qquad q=76,\!834,\!069/120,\!960,\!240, \tag{22}\] \[(0,1-q,q,0),\qquad q=77,\!143,\!741/121,\!579,\!584, \tag{23}\] \[(0,1-q,0,q),\qquad q=77,\!143,\!741/121,\!269,\!912. \tag{24}\] Figure 1: The 20 best-response-discontinuity curves for Model B2 and \(d=6\) graphed simultaneously as functions of \(\alpha\in[0,1/10]\) with range restricted to \([0,1]\). There are 23 points of intersection. No two curves of the same color intersect each other. The labels on the red, blue, and black curves are listed from largest \(p\) to smallest \(p\). For example, \(p((6,9),4)>p((0,5),4)>p((7,8),4)\). \begin{table} \begin{tabular}{c c c c} \hline \hline case & Player & approximate & Banker strategy at \\ & \(p\) & \(\alpha\) interval & \((3,9)\), \((4,1)\), \((5,4)\), \((6,\varnothing)\), \((6,6)\) \\ \hline [MISSING_PAGE_POST] \\ \hline \hline \end{tabular} \end{table} Table 2: The \(66\) cases that must be checked for an equalizing strategy, under Model B2 with \(d=6\) and \(0\leq\alpha\leq 1/10\). 
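The computations in rows 44 and 45 can be reproduced with exact rational arithmetic. The following sketch (helper names are ours) solves \((1,-1)\boldsymbol{A}(1-q,q)^{\mathsf{T}}=0\) for the row-44 matrix, recovering Equation (19), and evaluates \(\alpha_{0}\) to the quoted precision.

```python
from fractions import Fraction as F
from decimal import Decimal, getcontext

# Player's 2x2 payoff matrix from row 44 of Table 2.
A = [[F(-22_721_165_499, 1_525_814_595_305), F(-3_606_648_223, 305_162_919_061)],
     [F(-2_716_895_133, 217_973_513_615), F(-20_151_297_323, 1_525_814_595_305)]]

def equalizing_q(A):
    # Solve (1, -1) A (1-q, q)^T = 0 for q.
    c0 = A[0][0] - A[1][0]  # coefficient of 1-q
    c1 = A[0][1] - A[1][1]  # coefficient of q
    return c0 / (c0 - c1)

assert equalizing_q(A) == F(77_143_741, 121_269_912)  # Equation (19)

# The intersection point alpha_0 of p((0,6),empty) and p((3,3),6):
getcontext().prec = 40
alpha0 = (Decimal(16_145_999_279)
          - Decimal(226_436_619_657_206_227_489).sqrt()) / Decimal(17_712_223_814)
assert str(alpha0).startswith("0.0620017")
```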
To summarize then, if \(\alpha\neq\alpha_{0}\), then there is a unique Nash equilibrium \((\mathbf{p},\mathbf{q})=((1-p,p),(1-q,q))\), with \[p=p((0,6),\varnothing)=\frac{477,\!191-54,\!732\,\alpha}{12(49,\!377-26,\!957\,\alpha)} \tag{25}\] and with \(q\) as in Equation (19) if \(0\leq\alpha<\alpha_{0}\) and \(q\) as in Equation (20) if \(\alpha_{0}<\alpha\leq 1/10\). If \(\alpha=\alpha_{0}\), uniqueness fails. Nash equilibria include \((\mathbf{p},\mathbf{q})=((1-p,p),\mathbf{q})\), with \(p\) as in Equation (25) and \(\mathbf{q}\) as in Equations (21)-(24). Moreover, any mixture of these four Nash equilibria is a Nash equilibrium. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{case} & Player & approximate & Banker strategy at \\ & \(p\) & \(\alpha\) interval & \((3,9)\), \((4,1)\), \((5,4)\), \((6,\varnothing)\), \((6,6)\) \\ \hline [MISSING_PAGE_POST] \\ \hline \hline \end{tabular} \end{table} Table 2: (_continued_). Notice that the Nash equilibrium with \(p\) as in Equation (25) and \(\mathbf{q}\) as in Equation (21) coincides with the one from row 45 of Table 2, and the Nash equilibrium with \(p\) as in Equation (25) and \(\mathbf{q}\) as in Equation (24) coincides with the one from row 44 of Table 2. The two others, with \(p\) as in Equation (25) and \(\mathbf{q}\) as in Equation (22) or Equation (23), are new. The next step is to verify the three conditions in part (b) of Lemma 2. The first condition is easy because the work has already been done in checking for equalizing strategies. Consider \([0,1/10]\times[0,1]\) minus the union of the 20 best-response-discontinuity curves, as shown in Figure 1. It is the union of 43 disjoint connected open regions. 
The best response \(T^{\alpha}(p)\) is constant on each of these regions, so we can see that the entries of \(\mathbf{A}\) corresponding to column \(T^{\alpha}(p)\) have already been computed in analyzing the 66 cases of Table 2. The second condition is easiest because the strategy is the same for \(p=0\) and all \(\alpha\). (The case \(d=1\) is an exception, and it can be checked separately.) \begin{table} \begin{tabular}{c c c c} \hline \hline case & intersecting curves & approx. \(\alpha\) & \begin{tabular}{c} Banker strategy at \\ \((3,9)\), \((4,1)\), \((5,4)\), \((6,\varnothing)\), \((6,6)\) \\ \end{tabular} \\ \hline [MISSING_PAGE_POST] \\ \hline \hline \end{tabular} \end{table} Table 3: The 23 intersections that must be checked for an equalizing strategy, under Model B2 with \(d=6\) and \(0\leq\alpha\leq 1/10\). The meaning of the Banker strategies is as in Table 2. Only in case 18 are there equalizing strategies. The third condition is a little more involved because of the three best-response-discontinuity curves that intersect \(p=1\). They divide \([0,1/10]\) into four intervals, and the third condition can be confirmed for each. This completes the analysis of the case \(d=6\). Statistics for other values of \(d\) are shown in Table 4. Next, we summarize results under Model B2 for all \(d\geq 1\). See Table 5. First, all Nash equilibria \((\boldsymbol{p},\boldsymbol{q})=((1-p,p),\boldsymbol{q})\) have the same \(p\), namely \[p=p((0,6),\varnothing)=\frac{(8\,d-1)(12\,d-1)(24\,d-1)-2\,\alpha\,d(128\,d^{2}-8\,d+1)}{2\,d(1408\,d^{2}-220\,d+9)-2\,\alpha\,d(768\,d^{2}-116\,d+5)}, \tag{26}\] which generalizes Equation (25). Table 6 indicates the strategies on which Banker mixes, with drawing probability \(q\). For \(d=1\), \[q=290,\!383/450,\!072\text{ if }\alpha\in[0,\alpha_{1}), \tag{27}\] \[q=288,\!499/450,\!072\text{ if }\alpha\in(\alpha_{1},\alpha_{2}), \tag{28}\] \[q=40,\!811/64,\!296\text{ if }\alpha\in(\alpha_{2},1/10]. \tag{29}\] For \(d=2\), \[q=2,\!591,\!845/4,\!119,\!192\text{ if }\alpha\in[0,\alpha_{3}), \tag{30}\] \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \(d\) & (a) & (b) & (c) & (d) & (e) & \(d\) & (a) & (b) & (c) & (d) & (e) \\ \hline 1 & 26 & 13 & 4 & 52 & 2 & 13 & 28 & 28 & 9 & 84 & 0 \\ 2 & 23 & 30 & 3 & 83 & 1 & 14 & 28 & 27 & 9 & 82 & 0 \\ 3 & 22 & 9 & 4 & 40 & 0 & 15–16 & 28 & 30 & 7 & 88 & 0 \\ 4 & 21 & 19 & 4 & 59 & 1 & 17 & 28 & 35 & 7 & 98 & 0 \\ 5 & 21 & 24 & 4 & 69 & 1 & 18 & 28 & 35 & 6 & 98 & 0 \\ 6 & 20 & 23 & 3 & 66 & 1 & 19 & 28 & 36 & 6 & 100 & 0 \\ 7 & 26 & 24 & 9 & 74 & 1 & 20 & 28 & 44 & 6 & 116 & 0 \\ 8 & 28 & 34 & 10 & 96 & 2 & 21–37 & 28 & 46 & 6 & 120 & 0 \\ 9 & 28 & 39 & 9 & 106 & 2 & 38–44 & 28 & 41 & 6 & 110 & 0 \\ 10 & 28 & 28 & 8 & 84 & 2 & 45 & 28 & 38 & 6 & 104 & 0 \\ 11 & 28 & 31 & 9 & 90 & 2 & 46–76 & 28 & 36 & 6 & 100 & 0 \\ 12 & 28 & 26 & 9 & 80 & 0 & \(\geq 77\) & 28 & 37 & 6 & 102 & 0 \\ \hline \hline \end{tabular} \end{table} Table 4: Dependence on \(d\) of various quantities associated with the casino game under Model B2 with \(0\leq\alpha\leq 1/10\). Column (a) contains the number of best-response-discontinuity curves; column (b) contains the number of points of intersection of these curves; column (c) contains the number of these curves that intersect \(p=1\) or \(p=0\); column (d) contains the number of \(\alpha\)-intervals that must be checked for equalizing strategies; and column (e) contains the number of \(\alpha\)-values at which the Nash equilibrium is nonunique. 
\[q=872,\!479/1,\!373,\!064\ \text{if}\ \alpha\in(\alpha_{3},1/10]. \tag{31}\] For \(d=3\), \(q\) is as in Equation (33) if \(\alpha\in[0,1/10]\). For \(4\leq d\leq 7\), \[q=\frac{368,\!640\,d^{4}-68,\!624\,d^{3}-2168\,d^{2}+981\,d-48}{8\,d(52\,d-5)(1408\,d^{2}-220\,d+9)}\ \text{if}\ \alpha\in[0,\alpha_{0}), \tag{32}\] \[q=\frac{367,\!104\,d^{4}-68,\!000\,d^{3}-2228\,d^{2}+981\,d-48}{8\,d(52\,d-5)(1408\,d^{2}-220\,d+9)}\ \text{if}\ \alpha\in(\alpha_{0},1/10]. \tag{33}\] For \(d=8,9\), \[q=\frac{367,\!616\,d^{4}-67,\!728\,d^{3}-2416\,d^{2}+1015\,d-51}{8\,d(52\,d-5)(1408\,d^{2}-220\,d+9)}\ \text{if}\ \alpha\in[0,\alpha_{4}), \tag{34}\] \(q\) is as in Equation (32) if \(\alpha\in(\alpha_{4},\alpha_{0})\), and \(q\) is as in Equation (33) if \(\alpha\in(\alpha_{0},1/10]\). For \(d=10,11\), \[q=\frac{366,\!592\,d^{4}-67,\!344\,d^{3}-2456\,d^{2}+1017\,d-51}{8\,d(52\,d-5)(1408\,d^{2}-220\,d+9)}\ \text{if}\ \alpha\in[0,\alpha_{5}), \tag{35}\] \(q\) is as in Equation (34) if \(\alpha\in(\alpha_{5},\alpha_{0})\), and \[q=\frac{366,\!080\,d^{4}-67,\!104\,d^{3}-2476\,d^{2}+1015\,d-51}{8\,d(52\,d-5)(1408\,d^{2}-220\,d+9)}\ \text{if}\ \alpha\in(\alpha_{0},1/10]. \tag{36}\] Finally, for \(d\geq 12\), \(q\) is as in Equation (35) if \(\alpha\in[0,1/10]\). \begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Banker's & \multicolumn{11}{c}{Player's third-card value (\(\varnothing\) if Player stands)} \\ total & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & \(\varnothing\) \\ \hline \(0,1,2\) & **D** & **D** & **D** & **D** & **D** & **D** & **D** & **D** & **D** & **D** & **D** \\ \hline \(3\) & **D** & **D** & **D** & **D** & **D** & **D** & **D** & **D** & S & \(*\) & **D** \\ \hline \(4\) & S & \(*\) & **D** & **D** & **D** & **D** & **D** & **D** & S & S & **D** \\ \hline \(5\) & S & S & S & S & \(*\) & **D** & **D** & **D** & S & S & **D** \\ \hline \(6\) & S & S & S & S & S & S & \(*\) & **D** & S & S & \(*\) \\ \hline \(7\) & S & S & S & S & S & S & S & S & S & S & S \\ \hline \end{tabular} \end{table} Table 5: Banker's optimal move in the casino game _baccara chemin de fer_ under Model B2 (or B3) for all \(d\geq 1\) and \(0\leq\alpha\leq 1/10\), indicated by S (stand) or D (draw). For the five asterisks, see Table 6 (or Table 7). We can obtain the uniqueness of the Nash equilibrium for each \(d=1,2,\ldots,76\). For \(d\geq 77\), we observe that the best-response-discontinuity curves are ordered in a way that does not depend on \(d\). The six curves corresponding to \((4,1)\) intersect the six partial curves corresponding to \((6,6)\), and \(p((1,5),6)\) intersects \(p((2,4),6)\). Thus, there are \(37\) points of intersection for all \(d\geq 77\). With this information we can apply Foster's algorithm with a variable \(d\) to get the desired uniqueness. At each of the exceptional points \(\alpha_{0}\) (\(4\leq d\leq 11\)), \(\alpha_{1}\) and \(\alpha_{2}\) (\(d=1\)), \(\alpha_{3}\) (\(d=2\)), \(\alpha_{4}\) (\(d=8,9\)), and \(\alpha_{5}\) (\(d=10,11\)), there are exactly four Nash equilibria with Banker equilibrium strategy having support size \(2\), just as we saw in the case \(d=6\). We leave the evaluation of the various mixing probabilities to the reader. We have established the following theorem. **Theorem 1**.: _Consider the casino game baccara chemin de fer under Model B2 with \(d\) a positive integer and \(0\leq\alpha\leq 1/10\). With rare exceptions, there is a unique Nash equilibrium. 
Player's equilibrium strategy is to draw at \(5\) with probability as in Equation (26). Banker's equilibrium strategy is as in Tables 5 and 6. The number of exceptions is two if \(d\in\{1,8,9,10,11\}\), one if \(d\in\{2,4,5,6,7\}\), and none otherwise. For each of these exceptional values of \(\alpha\), there are four Banker equilibrium strategies of support size \(2\)._ Let us briefly compare the Nash equilibrium of the casino game (Theorem 1) with that of the parlor game (Ethier and Gamez, 2013), under Model B2 in both cases. We also compare them in the limit as \(d\to\infty\). In the casino game, Player's mixing probability (i.e., Player's probability of drawing at two-card totals of \(5\)) is as in Equation (26), which depends explicitly on \(d\) and \(\alpha\). Banker's mixing probability (i.e., Banker's probability of drawing at \(((0,6),\varnothing)\)) depends on \(d\) and is a step function in \(\alpha\) with zero, one, or two discontinuities (zero, hence no \(\alpha\) dependence, if \(d=3\) or \(d\geq 12\)). In the limit as \(d\to\infty\), Player's mixing probability converges to \[p=\frac{9-\alpha}{11-6\,\alpha}, \tag{37}\] while Banker's mixing probability converges to \(179/286\). It follows that Banker's limiting probability of drawing at \((6,\varnothing)\), including \(((0,6),\varnothing)\) and \(((8,8),\varnothing)\), is \[q=\frac{1}{2}\,\frac{179}{286}+\frac{1}{16}=\frac{859}{2288}, \tag{38}\] and we recognize Equations (37) and (38) as the parameters of the Model A1 Nash equilibrium. In the parlor game, the results of the preceding paragraph apply with \(\alpha=0\). ## 4 Model B3 In this section we study Model B3. Here cards are dealt without replacement from a \(d\)-deck shoe, and each of Player and Banker sees the composition of his own two-card hand. Player has a stand-or-draw decision in the five situations corresponding to a two-card total of \(5\), and Banker has a stand-or-draw decision in \(44\times 11=484\) situations (\(44\) compositions corresponding to Banker totals of \(0\)-\(7\), and \(11\) Player third-card values, \(0\)-\(9\) and \(\varnothing\)), so _baccara chemin de fer_ is a \(2^{5}\times 2^{484}\) bimatrix game. The \(2^{5}\) pure strategies of Player can be labeled by the numbers \(0\)-\(31\) in binary form. For example, strategy \(19=(10011)_{2}\) denotes the Player pure strategy of drawing at \((0,5)\), standing at \((1,4)\), standing at \((2,3)\), drawing at \((6,9)\), and drawing at \((7,8)\). More generally, for each \(u\in\{0,1,\ldots,31\}\), write \(u=16u_{1}+8u_{2}+4u_{3}+2u_{4}+u_{5}=(u_{1}u_{2}u_{3}u_{4}u_{5})_{2}\), where \(u_{1},u_{2},u_{3},u_{4},u_{5}\in\{0,1\}\) and define \(S_{u}\) to be the set of two-card hands at which Player, using pure strategy \(u\), draws: \[S_{u}:=\{(i_{1},i_{2}): 0\leq i_{1}\leq i_{2}\leq 9,\,M(i_{1}+i_{2})\leq 4\text{ or }(i_{1},i_{2})=(0,5)\text{ if }u_{1}=1\] \[\text{ or }(i_{1},i_{2})=(1,4)\text{ if }u_{2}=1\text{ or }(i_{1},i_{2})=(2,3)\text{ if }u_{3}=1\] \[\text{ or }(i_{1},i_{2})=(6,9)\text{ if }u_{4}=1\text{ or }(i_{1},i_{2})=(7,8)\text{ if }u_{5}=1\}.\] The complement of \(S_{u}\) with respect to \(\{(i_{1},i_{2}):0\leq i_{1}\leq i_{2}\leq 9,\,M(i_{1}+i_{2})\leq 7\}\), written \(S_{u}^{c}\), is the set of two-card hands at which Player, using pure strategy \(u\), stands. The random variables \((X_{1},X_{2})\), \((Y_{1},Y_{2})\), \(X_{3}\), \(Y_{3}\), \(G_{0}\), and \(G_{1}\), as well as the function \(M\), have the same meanings as in Section 3. We continue to use Equations (8)-(10). 
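The binary labelling of Player's pure strategies is easy to mechanize. The following sketch (the helper names are ours) decodes a label \(u\in\{0,1,\ldots,31\}\) into its draw/stand pattern at the five two-card hands totaling \(5\).

```python
HANDS = [(0, 5), (1, 4), (2, 3), (6, 9), (7, 8)]

def decode_player(u):
    # u = (u1 u2 u3 u4 u5)_2; a bit equal to 1 means "draw" at the
    # corresponding hand, in the order listed in HANDS.
    bits = format(u, "05b")
    return {hand: ("D" if b == "1" else "S") for hand, b in zip(HANDS, bits)}

# Strategy 19 = (10011)_2: draw at (0,5), stand at (1,4) and (2,3),
# draw at (6,9) and (7,8) -- the pattern DSSDD used below.
assert decode_player(19) == {(0, 5): "D", (1, 4): "S", (2, 3): "S",
                             (6, 9): "D", (7, 8): "D"}
```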
Given a function \(f\) on the set of integers, let us define, by analogy with Equations (11) and (12), for \(u\in\{0,1,\ldots,31\}\), \(0\leq j_{1}\leq j_{2}\leq 9\) with \(M(j_{1}+j_{2})\leq 7\), and \(k\in\{0,1,\ldots,9\}\), \[e_{u,0}((j_{1},j_{2}),k):=\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}\end{subarray}}f(M(j_{1}+j_{2})-M(i_{1}+i_{2}+k))\,p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k)\] \[\bigg{/}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}\end{subarray}}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k) \tag{39}\] and \[e_{u,1}((j_{1},j_{2}),k):=\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}\end{subarray}}\sum_{l=0}^{9}f(M(j_{1}+j_{2}+l)-M(i_{1}+i_{2}+k))\,p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l)\] \[\bigg{/}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}\end{subarray}}\sum_{l=0}^{9}p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l). \tag{40}\] Notice that the denominators of Equation (39) and Equation (40) are equal; we denote their common value by \(p_{u}((j_{1},j_{2}),k)\). We further define, for \(u\in\{0,1,\ldots,31\}\) and \(0\leq j_{1}\leq j_{2}\leq 9\) with \(M(j_{1}+j_{2})\leq 7\), \[e_{u,0}((j_{1},j_{2}),\varnothing):=\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}^{c}\end{subarray}}f(M(j_{1}+j_{2})-M(i_{1}+i_{2}))\,p_{4}((i_{1},i_{2}),(j_{1},j_{2}))\] \[\bigg{/}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}^{c}\end{subarray}}p_{4}((i_{1},i_{2}),(j_{1},j_{2})) \tag{41}\] and \[e_{u,1}((j_{1},j_{2}),\varnothing):=\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}^{c}\end{subarray}}\sum_{l=0}^{9}f(M(j_{1}+j_{2}+l)-M(i_{1}+i_{2}))\,p_{5}((i_{1},i_{2}),(j_{1},j_{2}),l)\] \[\bigg{/}\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ (i_{1},i_{2})\in S_{u}^{c}\end{subarray}}\sum_{l=0}^{9}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),l). \tag{42}\] Notice that the denominators of Equation (41) and Equation (42) are equal; we denote their common value by \(p_{u}((j_{1},j_{2}),\varnothing)\). By analogy with Equations (16) and (17), Banker's standing \((v=0)\) and drawing \((v=1)\) expectations are, with \(f\) as in Equation (15), \[b_{u,v}((j_{1},j_{2}),k):=E[G_{v}-\alpha\,\mathbf{1}_{\{G_{v}=1\}}\mid(X_{1},X_{2})\in S_{u},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=k]=e_{u,v}((j_{1},j_{2}),k) \tag{43}\] and, when Player stands, \[b_{u,v}((j_{1},j_{2}),\varnothing):=E[G_{v}-\alpha\,\mathbf{1}_{\{G_{v}=1\}}\mid(X_{1},X_{2})\in S_{u}^{c},\,(Y_{1},Y_{2})=(j_{1},j_{2}),\,X_{3}=\varnothing]=e_{u,v}((j_{1},j_{2}),\varnothing), \tag{44}\] for \(u\in\{0,1,\ldots,31\}\) and \(v\in\{0,1\}\). We now define the payoff bimatrix \((\boldsymbol{A},\boldsymbol{B})\) to have \((u,T)\) entry \((a_{u,T},b_{u,T})\) for \(u\in\{0,1,\ldots,31\}\) and \(T\subset\{(j_{1},j_{2}):0\leq j_{1}\leq j_{2}\leq 9,\,M(j_{1}+j_{2})\leq 7\}\times\{0,1,\ldots,9,\varnothing\}\), where \[b_{u,T}:=p_{u}(0)b_{u}(0)+\sum_{\begin{subarray}{c}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{subarray}}\sum_{\begin{subarray}{c}k\in\{0,1,\ldots,9,\varnothing\}:\\ ((j_{1},j_{2}),k)\in T^{c}\end{subarray}}p_{u}((j_{1},j_{2}),k)b_{u,0}((j_{1},j_{2}),k)\] \[+\sum_{\begin{subarray}{c}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{subarray}}\sum_{\begin{subarray}{c}k\in\{0,1,\ldots,9,\varnothing\}:\\ 
((j_{1},j_{2}),k)\in T\end{subarray}}p_{u}((j_{1},j_{2}),k)b_{u,1}((j_{1},j_{2}),k),\] using Equations (43) and (44) and recalling Equation (18), and where \(a_{u,T}:=-b_{u,T}\) with \(\alpha=0\). We want to find a Nash equilibrium of the casino game _baccara chemin de fer_ under Model B3, for all positive integers \(d\) and for \(0\leq\alpha\leq 1/10\). We demonstrate the method by treating the case \(d=6\) and \(0\leq\alpha\leq 1/10\) in detail. Then we state results for all \(d\). Let \(d=6\). We begin by applying Lemma 1, both with \(\alpha=0\) and \(\alpha=1/10\), reducing the game to \(2^{5}\times 2^{20}\), where \(20\) refers to the same \(20\) information sets we identified in Model B2. In fact, if we attempt to derive the analogue of Table 1 under Model B3, we find that it is identical to Table 1. But here an S entry, for example, means that S is optimal versus each of Player's \(2^{5}\) pure strategies. An S/D entry, for example, means that S is optimal versus Player's pure strategy SSSSS (\(u=0\)) and D is optimal versus Player's pure strategy DDDDD (\(u=31\)). We recall that, under Model B2, the support of Banker's unique equilibrium strategy comprises the two pure strategies \[\text{DDDDD-SSS-DDD-SSSSSD-DDD}\quad\text{and}\quad\text{DDDDD-SSS-DDD-DSSSSD-DDD} \tag{45}\] at \(((0,3),9)\), \(((1,2),9)\), \(((4,9),9)\), \(((5,8),9)\), \(((6,7),9)\); at \(((2,2),1)\), \(((6,8),1)\), \(((7,7),1)\); at \(((0,5),4)\), \(((6,9),4)\), \(((7,8),4)\); at \(((0,6),\varnothing)\), \(((1,5),\varnothing)\), \(((2,4),\varnothing)\), \(((3,3),\varnothing)\), \(((7,9),\varnothing)\), \(((8,8),\varnothing)\); and at \(((1,5),6)\), \(((2,4),6)\), \(((3,3),6)\). (This assumes \(0\leq\alpha<\alpha_{0}\). For \(\alpha_{0}<\alpha\leq 1/10\), there is one change: D at \(((3,3),6)\) is changed to S.) The key idea is quite simple. We consider the \(2^{5}\times 2\) bimatrix game obtained from Model B3 by restricting Banker's pure strategies to these two alternatives. Reversing the roles of Player and Banker, we then have a \(2\times 2^{5}\) bimatrix game to which Foster's algorithm (Lemma 2) applies. The resulting Nash equilibrium yields a candidate for a Nash equilibrium under Model B3, which we can then, we hope, confirm. (The method fails for \(d=1\), which must be treated separately. It works otherwise, but an additional \(\alpha\)-interval appears when \(d=2\), \(9\), or \(12\).) To apply Lemma 2, we will have to redefine our notation. Temporarily, Banker is player I and Player is player II. Let \(V_{0},V_{1}\subset\{(j_{1},j_{2}):0\leq j_{1}\leq j_{2}\leq 9,\,M(j_{1}+j_{2})\leq 7\}\times\{0,1,\ldots,9,\varnothing\}\) correspond to the two pure strategies in Display (45); specifically, \(V_{0}\) and \(V_{1}\) are the collections of information sets at which Banker draws. 
Given a function \(f\) on the set of integers, let us define, for \(u\in\{0,1\}\) and \(0\leq i_{1}\leq i_{2}\leq 9\) with \(M(i_{1}+i_{2})\leq 7\), \[e_{u,0}(i_{1},i_{2})\] \[:=\bigg{[}\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9: \\ M(j_{1}+j_{2})\leq 7,\\ ((j_{1},j_{2}),\varnothing)\in V_{u}^{c}\end{smallmatrix}}f(M(j_{1}+j_{2})-M(i_ {1}+i_{2}))p_{4}((i_{1},i_{2}),(j_{1},j_{2}))\] \[+\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7,\\ ((j_{1},j_{2}),\varnothing)\in V_{u}\end{smallmatrix}}\sum_{l=0}^{9}f(M(j_{1}+ j_{2}+l)-M(i_{1}+i_{2}))p_{5}((i_{1},i_{2}),(j_{1},j_{2}),l)\bigg{]}\] \[\bigg{/}\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{smallmatrix}}p_{4}((i_{1},i_{2}),(j_{1},j_{2})) \tag{46}\] and \[e_{u,1}(i_{1},i_{2})\] \[:=\bigg{[}\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{smallmatrix}}\sum_{\begin{smallmatrix}0\leq k\leq 9:\\ ((j_{1},j_{2}),k)\in V_{u}^{c}\end{smallmatrix}}f(M(j_{1}+j_{2})-M(i_{1}+i_{2}+ k))\] \[\cdot p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k)\] \[+\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{smallmatrix}}\sum_{\begin{smallmatrix}0\leq k\leq 9:\\ ((j_{1},j_{2}),k)\in V_{u}\end{smallmatrix}}\sum_{l=0}^{9}f(M(j_{1}+j_{2}+l)-M( i_{1}+i_{2}+k))\] \[\cdot p_{6}((i_{1},i_{2}),(j_{1},j_{2}),k,l)\bigg{]}\] \[\bigg{/}\sum_{\begin{smallmatrix}0\leq j_{1}\leq j_{2}\leq 9:\\ M(j_{1}+j_{2})\leq 7\end{smallmatrix}}\sum_{k=0}^{9}p_{5}((i_{1},i_{2}),(j_{1},j_{2}),k). \tag{47}\] Notice that the denominators of Equation (46) and Equation (47) are equal; we denote the common value by \(p_{u}(i_{1},i_{2})\), which does not actually depend on \(u\). If Player has two-card hand \((i_{1},i_{2})\), where \(0\leq i_{1}\leq i_{2}\leq 9\) and \(M(i_{1}+i_{2})\leq 7\), Banker's expectation when Player stands (\(v=0\)) or draws (\(v=1\)) is, with \(f\) as in Equation (15), \[a_{u,v}^{*}(i_{1},i_{2}):=e_{u,v}(i_{1},i_{2}),\quad u,v\in\{0,1\},\] where \(u\in\{0,1\}\) denotes Banker's pure strategy. For convenience, this definition ignores the rule that Player has a choice only with a two-card total of \(5\). We can now define the \(2\times 2^{5}\) payoff bimatrix \((\boldsymbol{A}^{*},\boldsymbol{B}^{*})\) to have \((u,T)\) entry \((a_{u,T}^{*},b_{u,T}^{*})\) for \(u\in\{0,1\}\) and \(T\subset\{(0,5),(1,4),(2,3),(6,9),(7,8)\}\), where \[a_{u,T}^{*}=-\frac{32\,\alpha\,d^{2}(37,\!120\,d^{2}-4044\,d+109)}{(52 \,d)_{4}}\] \[\qquad\qquad+\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9: \\ M(i_{1}+i_{2})=6,7\end{subarray}}p_{u}(i_{1},i_{2})a_{u,0}^{*}(i_{1},i_{2})+\sum_ {\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ 0\leq M(i_{1}+i_{2})\leq 4\end{subarray}}p_{u}(i_{1},i_{2})a_{u,1}^{*}(i_{1},i_{ 2})\] \[\qquad\qquad+\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9: \\ M(i_{1}+i_{2})=5,\\ (i_{1},i_{2})\in T^{c}\end{subarray}}p_{u}(i_{1},i_{2})a_{u,0}^{*}(i_{1},i_{ 2})+\sum_{\begin{subarray}{c}0\leq i_{1}\leq i_{2}\leq 9:\\ M(i_{1}+i_{2})=5,\\ (i_{1},i_{2})\in T\end{subarray}}p_{u}(i_{1},i_{2})a_{u,1}^{*}(i_{1},i_{2}),\] and where \(b_{u,T}^{*}:=-a_{u,T}^{*}\) with \(\alpha=0\). Note that \(\mathbf{B}^{*}\) is additive, so this game fits into the framework of Lemma 2. 
We find that \(T_{1,0}=\{(0,5),(1,4),(2,3),(6,9),(7,8)\}\) and the best-response discontinuities, which do not depend on \(\alpha\), satisfy \[0<p^{*}(2,3)<p^{*}(1,4)<p^{*}(0,5)<p^{*}(7,8)<p^{*}(6,9)<1.\] This leads to a Player equilibrium mixed strategy of DMSDD at \((0,5)\), \((1,4)\), \((2,3)\), \((6,9)\), and \((7,8)\), drawing at \((1,4)\) with equalizing probability \[q^{*}=\frac{35,\!003+186,\!672\,\alpha}{576(130-71\,\alpha)}. \tag{48}\] Banker's equilibrium mixed strategy is DDDDD-SSS-DDD-MSSSSD-DDD at \((3,9)\), \((4,1)\), \((5,4)\), \((6,\varnothing)\), and \((6,6)\), together with Table 1, where Banker draws at \(((0,6),\varnothing)\) with probability \[p^{*}=p^{*}(1,4)=18,\!885,\!571/36,\!781,\!056. \tag{49}\] Recall that this was derived from Banker's equilibrium mixed strategy under Model B2 when \(0\leq\alpha<\alpha_{0}\). For \(\alpha_{0}<\alpha\leq 1/10\), we obtain the same \(q^{*}\) as in Equation (48), but now Banker's equilibrium mixed strategy is DDDDD-SSS-DDD-MSSSSD-DDS at \((3,9)\), \((4,1)\), \((5,4)\), \((6,\varnothing)\), and \((6,6)\), together with Table 1, where Banker draws at \(((0,6),\varnothing)\) with probability \[p^{*}=p^{*}(1,4)=18,\!792,\!835/36,\!781,\!056. \tag{50}\] In both cases we have found Nash equilibria of the \(2^{5}\times 2\) bimatrix game obtained by restricting Banker to two specific pure strategies, those that arise from Model B2. We now return to regarding Player as player I and Banker as player II, so we redefine \[p=\frac{35,\!003+186,\!672\,\alpha}{576(130-71\,\alpha)} \tag{51}\] and \(q\) as in Equations (49) and (50), resulting in two versions of \((\mathbf{p},\mathbf{q})\) that we hope to confirm as Nash equilibria for the \(2^{5}\times 2^{20}\) bimatrix game of Model B3 under suitable conditions on \(\alpha\). Let \((\mathbf{A},\mathbf{B})\) be the \(2^{5}\times 2^{20}\) payoff bimatrix. Let \(\mathbf{p}\) be the Player mixed strategy with \(1-p\) and \(p\) (as in Equation (51)) at entries \(19=(10011)_{2}\) and \(27=(11011)_{2}\) of 0-31 (0s elsewhere). Let \(\mathbf{q}\) be the Banker mixed strategy with \(1-q\) and \(q\) (as in Equation (49)) at entries 1,019,407 = \((11111000111000001111)_{2}\) and 1,019,663 = \((11111000111100001111)_{2}\) of 0-1,048,575 (0s elsewhere). Then \((\mathbf{p},\mathbf{q})\) is a Nash equilibrium of \((\mathbf{A},\mathbf{B})\) if and only if \[\text{entries 19 and 27 (of 0--31) of $\mathbf{A}\mathbf{q}^{\mathsf{T}}$ are equal and maximal,} \tag{52}\] \[\text{entries 1,019,407 and 1,019,663 (of 0--1,048,575) of $\mathbf{p}\mathbf{B}$ are equal and maximal.} \tag{53}\] Now Condition (52) is automatic by virtue of how \((\mathbf{p},\mathbf{q})\) was chosen, so it remains to verify Condition (53), which concerns only rows 19 and 27 (of 0-31) of \(\mathbf{B}\). Let \(\mathbf{B}^{\circ}\) be the \(2\times 2^{20}\) submatrix of \(\mathbf{B}\) comprising rows 19 and 27, so that Condition (53) is equivalent to \[\text{entries 1,019,407 and 1,019,663 (of 0--1,048,575) of $(1-p,p)\mathbf{B}^{\circ}$ are equal and maximal.}\] We apply Lemma 2 once again, this time to \((\mathbf{A}^{\circ},\mathbf{B}^{\circ})\), where \(\mathbf{A}^{\circ}\) is the \(2\times 2^{20}\) submatrix of \(\mathbf{A}\) comprising rows 19 and 27 (of 0-31). 
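The 20-bit Banker labels can be checked the same way. In the sketch below, the bits are read most-significant-first in the grouping used above (five sets at \((3,9)\), three at \((4,1)\), three at \((5,4)\), six at \((6,\varnothing)\), three at \((6,6)\)); flipping the bit for \(((0,6),\varnothing)\), the leading bit of the \((6,\varnothing)\) group, adds \(2^{8}=256\) to the label.

```python
label = int("11111000111000001111", 2)
assert label == 1_019_407

# Groups, most significant bits first:
# (3,9): 11111, (4,1): 000, (5,4): 111, (6,empty): 000001, (6,6): 111.
# The bit for ((0,6), empty) is bit 8 (value 2**8 = 256) counting
# from the least-significant end.
assert label + 256 == 1_019_663 == int("11111000111100001111", 2)
```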
We find that \(T_{0,1}^{\alpha}=\{((0,6),\varnothing),((1,5),\varnothing),((2,4),\varnothing),((3,3),\varnothing),((7,9),\varnothing),((8,8),\varnothing)\}\), for both \(\alpha=0\) and \(\alpha=1/10\), while \(T_{1,0}^{0}=\varnothing\) and \(T_{1,0}^{1/10}=\{((3,3),6)\}\). See Figure 2 for the best-response-discontinuity curves. Furthermore, \(p((0,6),\varnothing)\) and \(p((3,3),6)\) intersect when \(\alpha\) is \[\beta_{0}:=\frac{\text{84,325,687}-\sqrt{\text{6,246,646,053,635,809}}}{\text{92,945,476}}\approx 0.0569147.\] This is enough to conclude that there is a Nash equilibrium for \(0\leq\alpha<\beta_{0}\) and another for \(\beta_{0}<\alpha\leq 1/10\). Both have the same \(\mathbf{p}\), namely a \((1-p,p)\) mixture of DSSDD and DDSDD (rows 19 and 27 of 0-31), with \(p\) as in Equation (51). But the mixture \(\mathbf{q}\) differs in the two cases. The first is a \((1-q,q)\) mixture of DDDDD-SSS-DDD-SSSSSD-DDD and DDDDD-SSS-DDD-DSSSSD-DDD (columns 1,019,407 and 1,019,663 of 0-1,048,575), together with Table 1, with \(q\) as in Equation (49). The second is a \((1-q,q)\) mixture of DDDDD-SSS-DDD-SSSSSD-DDS and DDDDD-SSS-DDD-DSSSSD-DDS (columns 1,019,406 and 1,019,662 of 0-1,048,575), together with Table 1, with \(q\) as in Equation (50). Finally, just as in Model B2, we obtain multiple Nash equilibria when \(\alpha=\beta_{0}\). Player strategy DMSDD and Banker strategy DDDDD-SSS-DDD-MSSSSD-DDM, together with Table 1, allow us to evaluate Player's \(2\times 4\) payoff matrix, which is \[\text{B: SS}\quad\text{B: SD}\quad\text{B: DS}\quad\text{B: DD}\] \[\begin{array}{c}\text{P: S at }(1,4)\\ \text{P: D at }(1,4)\end{array}\left(\begin{array}{cccc}-\frac{3,953,411,487}{305,162,919,061}&-\frac{19,769,569,403}{1,525,814,595,305}&-\frac{19,423,187,963}{1,525,814,595,305}&-\frac{1,765,972,721}{138,710,417,755}\\ -\frac{3,878,240,147}{305,162,919,061}&-\frac{1,939,185,798}{1,525,814,595,305}&-\frac{19,782,952,383}{1,525,814,595,305}&-\frac{19,783,609,631}{1,525,814,595,305}\end{array}\right)\!,\] where, for example, the Banker strategy SD means S at \(((0,6),\varnothing)\) and D at \(((3,3),6)\). Here there are exactly four equalizing strategies with supports of size two, namely \[(1-q,0,q,0),\qquad q=18,\!792,\!835/36,\!781,\!056,\] \[(1-q,0,0,q),\qquad q=3,\!758,\!567/7,\!337,\!664,\] \[(0,1-q,q,0),\qquad q=18,\!885,\!571/36,\!873,\!792,\] \[(0,1-q,0,q),\qquad q=18,\!885,\!571/36,\!781,\!056.\] This completes the description of the Nash equilibria under Model B3 when \(d=6\). Next, we summarize results under Model B3 for all \(d\geq 1\). First, there is a Nash equilibrium \((\boldsymbol{p},\boldsymbol{q})\) with Player strategy DMSDD at \((0,5)\), \((1,4)\), \((2,3)\), \((6,9)\), and \((7,8)\), drawing at \((1,4)\) with probability \[p=p((0,6),\varnothing)=\frac{(12\,d-1)(16\,d^{2}-14\,d+1)+8\,\alpha\,d(112\,d^{2}-24\,d+1)}{32\,d^{2}(11\,d-1)-16\,d^{2}\,\alpha(12\,d-1)} \tag{54}\] if \(d\geq 2\), and \[p=p((8,8),\varnothing)=\frac{4+203\,\alpha}{2(38-21\,\alpha)} \tag{55}\] if \(d=1\). Figure 2: The seven best-response-discontinuity curves for Model B3 (restricted to rows 19 and 27 of 0–31) and \(d=6\) graphed simultaneously as functions of \(\alpha\in[0,1/10]\) with range restricted to \([0,1]\). There are six points of intersection. Table 7 indicates the strategies on which Banker mixes, with drawing probability \(q\). For \(d=1\), \[q=4519/10,\!716\text{ if }\alpha\in[0,\beta_{1}), \tag{56}\] \[q=3991/10,\!716\text{ if }\alpha\in(\beta_{1},1/10]. 
\tag{57}\] For \(d=2\), \[q=17,\!431/64,\!512\text{ if }\alpha\in[0,\beta_{2}), \tag{58}\] \[q=192,\!637/709,\!632\text{ if }\alpha\in(\beta_{2},\beta_{3}), \tag{59}\] \[q=65,\!407/236,\!544\text{ if }\alpha\in(\beta_{3},1/10]. \tag{60}\] For \(d=3\), \(q\) is as in Equation (62) if \(\alpha\in[0,1/10]\). For \(4\leq d\leq 7\), \[q=\frac{92,\!160\,d^{4}-120,\!128\,d^{3}+26,\!336\,d^{2}-2000\,d+47}{256\,d^{2}(11\,d-1)(52\,d-5)}\text{ if }\alpha\in[0,\beta_{0}), \tag{61}\] \[q=\frac{91,\!776\,d^{4}-119,\!968\,d^{3}+26,\!320\,d^{2}-2000\,d+47}{256\,d^{2}(11\,d-1)(52\,d-5)}\text{ if }\alpha\in(\beta_{0},1/10]. \tag{62}\] For \(d=8\), \[q=\frac{91,\!904\,d^{4}-119,\!680\,d^{3}+26,\!064\,d^{2}-1932\,d+41}{256\,d^{2}(11\,d-1)(52\,d-5)}\text{ if }\alpha\in[0,\beta_{4}), \tag{63}\] \(q\) is as in Equation (61) if \(\alpha\in(\beta_{4},\beta_{0})\), and \(q\) is as in Equation (62) if \(\alpha\in(\beta_{0},1/10]\). For \(d=9\), \[q=\frac{91,\!648\,d^{4}-119,\!488\,d^{3}+26,\!032\,d^{2}-1932\,d+41}{256\,d^{2}(11\,d-1)(52\,d-5)}\text{ if }\alpha\in[0,\beta_{5}), \tag{64}\] \(q\) is as in Equation (63) if \(\alpha\in(\beta_{5},\beta_{4})\), \(q\) is as in Equation (61) if \(\alpha\in(\beta_{4},\beta_{0})\), and \(q\) is as in Equation (62) if \(\alpha\in(\beta_{0},1/10]\). For \(d=10,11\), \(q\) is as in Equation (64) if \(\alpha\in[0,\beta_{5})\), \(q\) is as in Equation (63) if \(\alpha\in(\beta_{5},\beta_{0})\), and \[q=\frac{91,\!520\,d^{4}-119,\!520\,d^{3}+26,\!048\,d^{2}-1932\,d+41}{256\,d^{2}(11\,d-1)(52\,d-5)}\text{ if }\alpha\in(\beta_{0},1/10]. \tag{65}\] For \(d=12\), \(q\) is as in Equation (64) if \(\alpha\in[0,\beta_{0})\) and \[q=1,\!689,\!974,\!681/2,\!989,\!264,\!896\text{ if }\alpha\in(\beta_{0},1/10]. \tag{66}\] Finally, if \(d\geq 13\), \(q\) is as in Equation (64) if \(\alpha\in[0,1/10]\). At each of the exceptional points \(\beta_{0}\) (\(4\leq d\leq 12\)), \(\beta_{1}\) (\(d=1\)), \(\beta_{2}\) and \(\beta_{3}\) (\(d=2\)), \(\beta_{4}\) (\(d=8,9\)), and \(\beta_{5}\) (\(d=9,10,11\)), there are exactly four Nash equilibria with Banker equilibrium strategy having support size 2, just as we saw in the case \(d=6\). We leave the evaluation of the various mixing probabilities to the reader. We have established the following theorem. **Theorem 2**.: _Consider the casino game baccara chemin de fer under Model B3 with \(d\) a positive integer and \(0\leq\alpha\leq 1/10\). There is a Nash equilibrium of the following form. Player's equilibrium strategy is to draw at \((0,5)\), \((6,9)\), and \((7,8)\), stand at \((2,3)\), and mix at \((1,4)\), drawing with probability as in Equation (54) if \(d\geq 2\), and with probability as in Equation (55) if \(d=1\). Banker's equilibrium strategy is as in Tables 5 and 7. For certain values of \(\alpha\) (namely, \(\beta_{0},\beta_{1},\ldots,\beta_{5}\) of Table 7), multiple Nash equilibria are known to exist. The number of such \(\alpha\)-values is three if \(d=9\), two if \(d\in\{2,8,10,11\}\), one if \(d\in\{1,4,5,6,7,12\}\), and none otherwise. For each of these \(\alpha\)-values, Player's equilibrium strategy is as above and Banker has four equilibrium strategies of support size \(2\)._ Let us briefly compare the Nash equilibrium of the casino game (Theorem 2) with that of the parlor game (Ethier and Gamez, 2013), under Model B3 in both cases. We also compare them in the limit as \(d\to\infty\). 
In the casino game, Player's mixing probability (i.e., Player's probability of drawing at \((1,4)\)) is as in Equation (54), which depends explicitly on \(d\) and \(\alpha\). Banker's mixing probability (i.e., Banker's probability of drawing at \(((0,6),\varnothing)\)) depends on \(d\) and is a step function in \(\alpha\) with zero, one, two, or three discontinuities (zero, hence no \(\alpha\) dependence, if \(d=3\) or \(d\geq 13\)). In the limit as \(d\to\infty\), Player's mixing probability converges to \(2(3+14\,\alpha)/(11-6\,\alpha)\), while Banker's mixing probability converges to \(179/286\). It follows that Player's limiting probability of drawing at two-card totals of \(5\), including \((0,5),(1,4)\), \((6,9)\), and \((7,8)\), is \[p=\frac{1}{2}+\frac{1}{8}\,\frac{2(3+14\,\alpha)}{11-6\,\alpha}+\frac{1}{8}+ \frac{1}{8}=\frac{9-\alpha}{11-6\,\alpha}, \tag{67}\] and Banker's limiting probability of drawing at \((6,\varnothing)\), including \(((0,6),\varnothing)\) and \(((8,8),\varnothing)\), is \[q=\frac{1}{2}\,\frac{179}{286}+\frac{1}{16}=\frac{859}{2288}, \tag{68}\] and we recognize Equations (67) and (68) as the parameters of the Model A1 Nash equilibrium. In the parlor game, the results of the preceding paragraph apply with \(\alpha=0\). ## 5 Summary _Baccara chemin de fer_ is a classical card game to which game theory applies. Six models have been proposed; they are obtained by combining either Model A (cards are dealt with replacement) or Model B (cards are dealt without replacement from a \(d\)-deck shoe) with one of Model 1 (Player and Banker see two-card totals), Model 2 (Player sees two-card totals, Banker sees two-card compositions), or Model 3 (Player and Banker see two-card compositions). It is further assumed that there is a \(100\,\alpha\) percent commission on Banker wins, where \(0\leq\alpha\leq 1/10\). The special case \(\alpha=0\) was studied by Ethier and Gamez (2013). We emphasize Models B2 and B3 in this paper. Foster's algorithm, extended to additive \(2\times 2^{n}\) bimatrix games, can be applied to Model B2 in a straightforward way, and we obtain, with rare exceptions, a unique Nash equilibrium. In Model B3 we identify a Nash equilibrium but cannot prove uniqueness. Here we have a \(2^{5}\times 2^{484}\) bimatrix game, which can be reduced to \(2^{5}\times 2^{n_{d}}\), where \(20\leq n_{d}\leq 28\). We guess that Banker's equilibrium strategy has the same support as it has under Model B2. We find the Nash equilibrium of the resulting \(2^{5}\times 2\) bimatrix game using Foster's algorithm, and finally confirm that this leads to a Nash equilibrium of the full game \((2^{5}\times 2^{n_{d}})\) by applying Foster's algorithm to the appropriate \(2\times 2^{n_{d}}\) bimatrix game. The method fails only when \(d=1\). We notice that Player's equilibrium strategy has support independent of \(d\) and \(\alpha\), which simplifies matters.
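The limiting computation in Equation (67) is easy to verify with exact rational arithmetic. The sketch below (function name ours) uses the weights \(1/2,1/8,1/8,1/8\) exactly as displayed in Equation (67) and checks the identity at a few sample commission rates.

```python
from fractions import Fraction as F

def limiting_player_draw_prob(alpha):
    # Equation (67): Player draws at (0,5), (6,9), (7,8), stands at (2,3),
    # and mixes at (1,4) with limiting probability 2(3+14a)/(11-6a).
    mix = F(2) * (3 + 14 * alpha) / (11 - 6 * alpha)
    return F(1, 2) + F(1, 8) * mix + F(1, 8) + F(1, 8)

for alpha in (F(0), F(1, 20), F(1, 10)):
    assert limiting_player_draw_prob(alpha) == (9 - alpha) / (11 - 6 * alpha)
```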
2309.03738
On the Iwasawa invariants of Artin representations
We study Iwasawa invariants associated to Selmer groups of Artin representations, and criteria for the vanishing of the associated algebraic Iwasawa invariants. The conditions obtained can be used to study natural distribution questions in this context.
Aditya Karnataki, Anwesh Ray
2023-09-07T14:26:18Z
http://arxiv.org/abs/2309.03738v1
# On the Iwasawa invariants of Artin representations ###### Abstract. We study Iwasawa invariants associated to Selmer groups of Artin representations, and criteria for the vanishing of the associated algebraic Iwasawa invariants. The conditions obtained can be used to study natural distribution questions in this context. Key words and phrases:Iwasawa theory, Artin representations, Euler characteristic, distribution questions 2020 Mathematics Subject Classification: 11R23, 11R34, 11F80 (Primary) ## 1. Introduction Let \(p\) be an odd prime number and \(\mathbb{Z}_{p}\) denote the ring of \(p\)-adic integers. Let \(K\) be a number field and fix an algebraic closure \(\bar{K}\) of \(K\). A \(\mathbb{Z}_{p}\)-extension of \(K\) is an infinite Galois extension \(K_{\infty}/K\) such that the Galois group \(\operatorname{Gal}(K_{\infty}/K)\) is isomorphic to \(\mathbb{Z}_{p}\) as a topological group. Let \(K_{n}\subset K_{\infty}\) be the extension of \(K\) for which \(\operatorname{Gal}(K_{n}/K)\simeq\mathbb{Z}/p^{n}\mathbb{Z}\), and let \(h_{p}(K_{n})\) denote the order of the \(p\)-primary part of the class group of \(K_{n}\). Writing \(h_{p}(K_{n})=p^{e_{n}}\), Iwasawa [16] showed that for all large enough values of \(n\), \[e_{n}=p^{n}\mu+n\lambda+\nu,\] where \(\mu,\lambda\in\mathbb{Z}_{\geq 0}\) and \(\nu\in\mathbb{Z}\) are invariants that depend on the extension \(K_{\infty}/K\) and not on \(n\). The results of Iwasawa motivated Mazur [14] to study the growth properties of the \(p\)-primary Selmer groups of \(p\)-ordinary abelian varieties in \(\mathbb{Z}_{p}\)-extensions. Mazur's results were later extended to very general classes of ordinary Galois representations by Greenberg, cf. [11]. Another class of representations that is natural to consider is that of Artin representations. Let \(K/\mathbb{Q}\) be a finite and totally imaginary Galois extension with Galois group \(\Delta:=\operatorname{Gal}(K/\mathbb{Q})\), and let \(\rho:\Delta\to\operatorname{GL}_{d}(\bar{\mathbb{Q}})\) be an irreducible Artin representation. Let \(p\) be an odd prime number, let \(\sigma_{p}:\bar{\mathbb{Q}}\hookrightarrow\bar{\mathbb{Q}}_{p}\) be an embedding, and via \(\sigma_{p}\) view \(\rho\) as a representation \(\rho:\Delta\to\operatorname{GL}_{d}(\bar{\mathbb{Q}}_{p})\). Let \(v\) denote an archimedean prime of \(K\), and set \(K_{v}\) to denote the \(v\)-adic completion of \(K\). We shall identify \(\operatorname{Gal}(K_{v}/\mathbb{R})\) with a subgroup \(\Delta_{v}\) of \(\Delta\). Set \(d^{+}\) to denote the multiplicity of the trivial character in \(\rho_{|\Delta_{v}}\) and observe that this number is well defined and independent of the choice of the archimedean prime \(v\). When \(K\) is totally real, we find that \(d^{+}=d\). Let \(\mathfrak{p}\) be a prime of \(K\) that lies above \(p\), and let \(\Delta_{\mathfrak{p}}\subset\Delta\) be the decomposition group at \(\mathfrak{p}\). Following Greenberg and Vatsal [11], we make the following assumptions. **Assumption 1.1**.: _With respect to notation above, assume that_ 1. \(p\) _does not divide_ \([K:\mathbb{Q}]\)_,_ 2. \(d^{+}=1\)_,_ 3. _there exists a_ \(1\)_-dimensional representation_ \(\epsilon_{\mathfrak{p}}\) _of_ \(\Delta_{\mathfrak{p}}\) _that occurs with multiplicity_ \(1\) _in_ \(\rho_{|\Delta_{\mathfrak{p}}}\)_._ In the special case \(d=2\) and \(d^{+}=1\), such representations are expected to arise from Hecke eigenforms of weight \(1\) on \(\Gamma_{1}(N)\), where \(N\) is the Artin conductor of \(\rho\). 
The conjecture has been settled in various special cases, cf. [1, 10] and references therein. The choice of the character \(\epsilon_{\mathfrak{p}}\) plays a role in the definition of the Selmer groups associated to \(\rho\). Take \(\mathcal{K}\) to be the completion of \(K\) at \(\mathfrak{p}\). There is a natural isomorphism \(\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_{p})\simeq\Delta_{\mathfrak{p}}\), and set \(\epsilon\) to denote the composite \[\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_{p})\xrightarrow{\sim}\Delta_{\mathfrak{p}}\xrightarrow{\epsilon_{\mathfrak{p}}}\bar{\mathbb{Q}}^{\times}.\] Let \(\chi\) denote the character associated to \(\rho\) and \(\mathbb{Q}(\chi)\) denote the field generated by the values of \(\chi\). Let \(\mathcal{F}\) be the field generated by \(\mathbb{Q}_{p}\), the values of \(\chi\), and the values of \(\epsilon_{\mathfrak{p}}\). We regard \(\rho\) as a representation on a vector space \(V\) defined over \(\mathcal{F}\) and note that \(\dim_{\mathcal{F}}V=d\). Let \(V^{\epsilon_{\mathfrak{p}}}\) be the maximal \(\mathcal{F}\)-subspace of \(V\) on which \(\Delta_{\mathfrak{p}}\) acts by \(\epsilon_{\mathfrak{p}}\). By hypothesis, \(\dim_{\mathcal{F}}V^{\epsilon_{\mathfrak{p}}}=1\). Let \(\mathcal{O}\) be the valuation ring in \(\mathcal{F}\) and \(\varpi\) be its uniformizer. We assume that \(\rho\) is irreducible, and therefore, there is a Galois stable \(\mathcal{O}\)-lattice \(T\subset V\). This lattice is uniquely determined up to scaling by a constant. We consider the divisible Galois module \(D:=V/T\) and let \(D^{\epsilon_{\mathfrak{p}}}\) be the image of \(V^{\epsilon_{\mathfrak{p}}}\) in \(D\). Let \(\mathbb{Q}_{\infty}\) denote the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). Let \(\mathbb{Q}_{n}\subset\mathbb{Q}_{\infty}\) be the subextension such that \(\operatorname{Gal}(\mathbb{Q}_{n}/\mathbb{Q})\simeq\mathbb{Z}/p^{n}\mathbb{Z}\). We set \[S_{\chi,\epsilon}(\mathbb{Q}_{n}):=\ker\left\{H^{1}(\mathbb{Q}_{n},D)\longrightarrow\prod_{v\nmid p}H^{1}(\mathbb{Q}_{n,v}^{\mathrm{nr}},D)\times H^{1}(\mathbb{Q}_{n,\eta_{p}}^{\mathrm{nr}},D/D^{\epsilon_{\mathfrak{p}}})\right\},\] where \(\eta_{p}\) is the unique prime that lies above \(p\). **Theorem 1.2** (Greenberg and Vatsal).: _Suppose that Assumption 1.1 holds. Then, the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{n})\) is finite for all \(n\in\mathbb{Z}_{\geq 0}\)._ Proof.: The above result is [1, Proposition 3.1]. **Definition 1.3**.: _Define the Selmer group over \(\mathbb{Q}_{\infty}\) to be the direct limit with respect to restriction maps_ \[S_{\chi,\epsilon}(\mathbb{Q}_{\infty}):=\varinjlim_{n}S_{\chi,\epsilon}(\mathbb{Q}_{n}).\] Greenberg and Vatsal show that the Selmer group is cofinitely generated and cotorsion over the Iwasawa algebra, which is a formal power series ring over \(\mathcal{O}\) in one variable. Leveraging the results of Greenberg and Vatsal, we study the Euler characteristic formula associated to these Selmer groups and use it to obtain explicit conditions for the vanishing of the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\). For instance, we are able to prove that there is an explicit relationship between the vanishing of the Selmer group and the \(p\)_-rationality_ of \(K\) (in the sense of [12]). **Theorem 1.4** (Theorem 3.11).: _Assume that_ 1. _the conditions of Assumption_ 1.1 _are satisfied,_ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_,_ 3. \(K\) _is_ \(p\)_-rational,_ 4. 
\(p\) _does not divide the class number of_ \(K\)_._ _Then, \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)._ These criteria are illustrated in two special cases, namely that of \(2\)-dimensional irreducible Artin representations (cf. Theorem 3.12) and \(3\)-dimensional icosahedral Artin representations of \(A_{5}\) (cf. Theorem 3.13). Theorem 3.9 gives a more refined criterion that is equivalent to the vanishing of \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\). The relationship between \(p\)-rationality and the Iwasawa invariants of number fields has been studied by Hajir and Maire, cf. [11]. For simplicity, we then specialize our discussion to \(2\)-dimensional Artin representations of dihedral type. Let \(L\) be an imaginary quadratic field, let \(\zeta:\mathrm{G}_{L}\to\bar{\mathbb{Q}}^{\times}\) be a character, and let \(\rho=\mathrm{Ind}_{L}^{\mathbb{Q}}\zeta:\mathrm{G}_{\mathbb{Q}}\to\mathrm{GL}_{2}(\bar{\mathbb{Q}})\) be the associated Artin representation. Set \(\zeta^{\prime}\) to be the conjugate character defined by setting \(\zeta^{\prime}(x)=\zeta(cxc^{-1})\), where \(c\) denotes complex conjugation. Assume that \(\zeta^{\prime}=\zeta^{-1}\), so that \(\rho\) is of dihedral type. Let \(S(\rho)\) be the set of primes \(p\) such that 1. \(p\) is odd and \(p\nmid[K:\mathbb{Q}]\), 2. \(p\) splits in \(L\), \(p\mathcal{O}_{L}=\pi\pi^{*}\), and \(\pi\) is inert in \(K/L\). It is easy to see that if \(\mathfrak{p}|p\) is a prime of \(K\), then the conditions of Assumption 1.1 are satisfied. Let \(T(\rho)\) be the set of primes \(p\in S(\rho)\) such that the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) vanishes for all of the primes \(\mathfrak{p}\) that lie above \(p\). Set \(T^{\prime}(\rho):=S(\rho)\backslash T(\rho)\). In section 4, we apply our results to study the following natural question. **Question 1.5**.: _What can be said about the densities of the sets of primes in \(T(\rho)\) and \(T^{\prime}(\rho)\)?_ A conjecture of Gras predicts that any number field \(K\) is \(p\)-rational at all but finitely many primes \(p\). This has conjectural implications for the vanishing, at all but finitely many primes, of the Selmer groups attached to a compatible family of Artin representations. In the case when \(K/\mathbb{Q}\) is an \(S_{3}\)-extension, we prove certain unconditional results by leveraging a result of Maire and Rougnant [11] on the \(p\)-rationality of \(S_{3}\)-extensions of \(\mathbb{Q}\). **Theorem 1.6** (Theorem 4.10).: _Let \(K/\mathbb{Q}\) be an imaginary \(S_{3}\) extension and \(\rho\) be a \(2\)-dimensional Artin representation that factors through \(\mathrm{Gal}(K/\mathbb{Q})\). Then, for some constant \(c>0\),_ \[\#\{p\leq x\mid p\in T(\rho)\}\geq c\log x.\] The result above is certainly weaker than what one is led to conjecture; however, it illustrates the effectiveness of Theorem 3.11. Throughout, we motivate our results by drawing upon analogues from the classical Iwasawa theory of class groups. ### Organization Including the introduction, the article consists of four sections. In section 2 we review the classical Iwasawa theory of class groups and Artin representations. We review results of Federer and Gross [10], which give an explicit relationship between \(p\)-adic regulators and Iwasawa invariants. We end this section by reviewing some results of Greenberg and Vatsal [10] on the Iwasawa theory of Artin representations. In section 3, we discuss the notion of the Euler characteristic of a cofinitely generated and cotorsion module over the Iwasawa algebra. 
In section 4, we formulate and study some natural distribution questions, which serve to illustrate our results. ## 2. Iwasawa theory of class groups and Artin representations ### Classical Iwasawa theory and Iwasawa invariants We review the classical Iwasawa theory over the cyclotomic \(\mathbb{Z}_{p}\)-extension of a number field. The reader may refer to [26] for a more comprehensive treatment. Throughout this section, we shall set \(K\) to denote a number field and \(p\) an odd prime number. Fix an algebraic closure \(\bar{K}\) of \(K\). All algebraic extensions considered in this article shall implicitly be assumed to be contained in \(\bar{K}\). Let \(\operatorname{Cl}(K)\) denote the class group of \(K\) and \(\operatorname{Cl}_{p}(K)\) its \(p\)-primary part. For \(n\geq 1\), let \(K(\mu_{p^{n}})\) be the number field extension of \(K\) obtained by adjoining the \(p^{n}\)-th roots of unity to \(K\). Denote by \(K(\mu_{p^{\infty}})\) the union of all extensions \(K(\mu_{p^{n}})\). There is a unique \(\mathbb{Z}_{p}\)-extension \(K_{\infty}/K\) which is contained in \(K(\mu_{p^{\infty}})\), which is referred to as the _cyclotomic \(\mathbb{Z}_{p}\)-extension_ of \(K\). For \(n\geq 1\), set \(K_{n}/K\) to be the sub-extension of \(K_{\infty}/K\) for which \(\operatorname{Gal}(K_{n}/K)\) is isomorphic to \(\mathbb{Z}/p^{n}\mathbb{Z}\). Setting \(K_{0}:=K\), we refer to \(K_{n}\) as the \(n\)_-th layer_. Set \(\Gamma:=\operatorname{Gal}(K_{\infty}/K)\) and \(\Gamma_{n}:=\operatorname{Gal}(K_{\infty}/K_{n})\); we identify \(\operatorname{Gal}(K_{n}/K)\) with \(\Gamma/\Gamma_{n}\). For \(n\in\mathbb{Z}_{\geq 0}\), denote by \(H_{p}(K_{n})\) the \(p\)-Hilbert class field of \(K_{n}\). In other words, \(H_{p}(K_{n})\) is the maximal unramified abelian \(p\)-extension of \(K_{n}\) in \(\bar{K}\). Set \(X_{n}\) to denote the Galois group \(\operatorname{Gal}(H_{p}(K_{n})/K_{n})\), and identify \(X_{n}\) with the maximal \(p\)-primary quotient of \(\operatorname{Cl}(K_{n})\). Thus, \([H_{p}(K_{n}):K_{n}]\) is equal to \(\#\operatorname{Cl}_{p}(K_{n})\). Since the primes above \(p\) are totally ramified in \(K_{\infty}\), it follows that \(H_{p}(K_{n})\cap K_{\infty}=K_{n}\), and thus, there are surjective maps \(X_{m}\to X_{n}\) for all \(m\geq n\). Taking \(X_{\infty}\) to be the inverse limit \(\varprojlim_{n}X_{n}\), we find that \(X_{\infty}\) is both a \(\mathbb{Z}_{p}\)-module as well as a module over \(\Gamma\). On the other hand, letting \(H_{p}(K_{\infty})\) denote the maximal unramified abelian pro-\(p\) extension of \(K_{\infty}\), we identify \(X_{\infty}\) with the Galois group \(\operatorname{Gal}(H_{p}(K_{\infty})/K_{\infty})\). In order to better study the algebraic structure of \(X_{\infty}\), it proves fruitful to view \(X_{\infty}\) as a module over a completed group ring known as the Iwasawa algebra. This algebra is defined as follows \[\Lambda:=\mathbb{Z}_{p}\llbracket\Gamma\rrbracket=\varprojlim_{n}\mathbb{Z}_{p}[\Gamma/\Gamma_{n}].\] Choosing a topological generator \(\gamma\in\Gamma\), we identify \(\Lambda\) with the formal power series ring \(\mathbb{Z}_{p}\llbracket T\rrbracket\), by setting \(T:=(\gamma-1)\). As a \(\Lambda\)-module, \(X_{\infty}\) is finitely generated and torsion [26, chapter 13]. Let \(\mathcal{O}\) be a valuation ring with residue characteristic \(p\), and let \(\varpi\) be a uniformizer of \(\mathcal{O}\). Then, the Iwasawa algebra over \(\mathcal{O}\) is defined by extending coefficients to \(\mathcal{O}\), as follows \(\Lambda_{\mathcal{O}}:=\Lambda\otimes_{\mathbb{Z}_{p}}\mathcal{O}\).
The Iwasawa algebra \(\Lambda_{\mathcal{O}}\) is a local ring with maximal ideal \(\mathfrak{m}=(\varpi,T)\). A polynomial \(f(T)\in\mathcal{O}\llbracket T\rrbracket\) is said to be _distinguished_ if it is a monic polynomial whose non-leading coefficients are divisible by \(\varpi\). The Weierstrass preparation theorem states that any nonzero power series \(f(T)\) decomposes into a product \[f(T)=\varpi^{\mu}\times g(T)\times u(T),\] where \(\mu\in\mathbb{Z}_{\geq 0}\), \(g(T)\) is a distinguished polynomial and \(u(T)\) is a unit in \(\Lambda_{\mathcal{O}}\). The \(\mu\)-invariant of \(f(T)\) is the power of \(\varpi\) above, and the \(\lambda\)-invariant is the degree of the distinguished polynomial \(g(T)\). The prime ideals of height \(1\) are the principal ideals \((\varpi)\) and \((g(T))\), where \(g(T)\) is an irreducible distinguished polynomial. Given a finitely generated and torsion \(\Lambda_{\mathcal{O}}\)-module \(M\), there is a homomorphism of \(\Lambda_{\mathcal{O}}\)-modules \[M\longrightarrow\left(\bigoplus_{i=1}^{s}\frac{\mathcal{O}\llbracket T\rrbracket}{(\varpi^{\mu_{i}})}\right)\oplus\left(\bigoplus_{j=1}^{t}\frac{\mathcal{O}\llbracket T\rrbracket}{(f_{j}(T)^{\lambda_{j}})}\right), \tag{2.1}\] with finite kernel and cokernel. Here, \(\mu_{i},\lambda_{j}\geq 0\), and \(f_{j}(T)\) are irreducible distinguished polynomials. For further details, we refer to [20, Theorem 3.12]. **Definition 2.1**.: _The \(\mu\)-invariant \(\mu_{p}(M)\) is the sum of the entries \(\sum_{i=1}^{s}\mu_{i}\) if \(s>0\), and is set to be \(0\) if \(s=0\). On the other hand, the \(\lambda\)-invariant \(\lambda_{p}(M)\) is defined to be \(\sum_{j=1}^{t}\lambda_{j}\deg(f_{j})\) if \(t>0\), and defined to be \(0\) if \(t=0\). The characteristic element \(f_{M}(T)\) is the product_ \[f_{M}(T):=\prod_{i}\varpi^{\mu_{i}}\times\prod_{j}f_{j}(T)^{\lambda_{j}}.\] We remark that the \(\mu\)-invariant and \(\lambda\)-invariant of \(f_{M}(T)\) are \(\mu_{p}(M)\) and \(\lambda_{p}(M)\) respectively. **Proposition 2.2**.: _Suppose that \(M\) is a finitely generated and torsion \(\Lambda_{\mathcal{O}}\)-module. Then, the following assertions hold,_ 1. \(\mu_{p}(M)=0\) _if and only if_ \(M\) _is finitely generated as an_ \(\mathcal{O}\)_-module. In this case,_ \(\lambda_{p}(M)\) _is the_ \(\mathcal{O}\)_-rank of_ \(M\)_._ 2. _Letting_ \(r_{p}(M)\) _denote the order of vanishing of_ \(f_{M}\) _at_ \(0\)_, we have that_ \[\lambda_{p}(M)\geq r_{p}(M).\] 3. _Write_ \(f_{M}(T)\) _as_ \[f_{M}(T)=a_{r}T^{r}+a_{r+1}T^{r+1}+\cdots+a_{\lambda}T^{\lambda},\] _where_ \(r=r_{p}(M)\) _and_ \(\lambda=\lambda_{p}(M)\)_. The_ \(\mu\)_-invariant_ \(\mu_{p}(M)=0\) _if and only if there is a coefficient_ \(a_{i}\) _not divisible by_ \(\varpi\)_._ 4. _We have that_ \(\varpi^{\mu}\|a_{\lambda}\) _and that_ \(\varpi^{\mu+1}\mid a_{i}\) _for all_ \(i<\lambda\)_._ Proof.: The results are easy observations that follow from the structural homomorphism (2.1) and the definition of the Iwasawa invariants. **Remark 2.3**.: _Let \(\mathcal{O}^{\prime}\) be a valuation ring that is a finite extension of \(\mathcal{O}\), and \(e\) be its ramification index. Set \(M_{\mathcal{O}^{\prime}}:=M\otimes_{\mathcal{O}}\mathcal{O}^{\prime}\) and regard \(M_{\mathcal{O}^{\prime}}\) as a module over \(\Lambda_{\mathcal{O}^{\prime}}\). Then, it is easy to see that_ \[\mu_{p}(M_{\mathcal{O}^{\prime}})=e\mu_{p}(M)\text{ and }\lambda_{p}(M_{\mathcal{O}^{\prime}})=\lambda_{p}(M).\] We denote the \(\mu\)-invariant (resp.
\(\lambda\)-invariant) of \(X_{\infty}\) by \(\mu_{p}(K)\) (resp. \(\lambda_{p}(K)\)). In this setting, \(\mathcal{O}:=\mathbb{Z}_{p}\). Iwasawa proved that for all large enough values of \(n\), there is an invariant \(\nu_{p}(K)\in\mathbb{Z}\) for which \[\log_{p}\left(\#\operatorname{Cl}_{p}(K_{n})\right)=p^{n}\mu_{p}(K)+n\lambda_{p}(K)+\nu_{p}(K),\] cf. [20, Theorem 13.13]. Moreover, Iwasawa conjectured that \(\mu_{p}(K)=0\) for all number fields \(K\), cf. [20]. For abelian extensions \(K/\mathbb{Q}\) the conjecture has been proven by Ferrero and Washington [19]. ### The leading coefficient of the characteristic series Let \(K\) be a CM field with totally real subfield \(K^{+}\). The Galois group \(\operatorname{Gal}(K/K^{+})\) acts on \(X_{\infty}\). Let \(\tau\) be the generator of \(\operatorname{Gal}(K/K^{+})\) and set \[X_{\infty}^{-}:=\{x\in X_{\infty}\mid\tau(x)=-x\}.\] Then, \(X_{\infty}^{-}\) is a \(\Lambda\)-module whose \(\mu\)-invariant (resp. \(\lambda\)-invariant) is denoted \(\mu_{p}^{-}(K)\) (resp. \(\lambda_{p}^{-}(K)\)). Let \(f_{K}^{-}(T)\) be the characteristic element associated to \(X_{\infty}^{-}\). Take \(I\) to be the set of primes of \(K^{+}\) that lie above \(p\) and split in \(K\), and set \(r_{p,K}:=\#I\). Let \(S\) be the set of places of \(K\) dividing \(p\) and \(\infty\). Let \(U=U_{K}\) be the group of \(S\)-units of \(K\) and \(M=M_{K}\) be the free abelian group of divisors supported on \(S\). Given a \(\operatorname{Gal}(K/K^{+})\)-module \(N\), let \[N^{-}:=\{n\in N\mid\tau(n)=-n\}.\] For a ring \(R\), we set \(RN^{-}\) to denote the extension of scalars \(N^{-}\otimes R\). It is easy to see that both \(U^{-}\) and \(M^{-}\) are free abelian groups of rank \(r_{p,K}\). The map \[\phi:U^{-}\to M^{-}\] is defined by setting \[\phi(x):=\sum_{v\mid p}\operatorname{ord}_{p}\left(\operatorname{Norm}_{K_{v}/\mathbb{Q}_{p}}x\right)\cdot v.\] The induced map \[\phi:\mathbb{Q}U^{-}\to\mathbb{Q}M^{-}\] is an isomorphism (cf. [13, Proposition 1.4]). The inverse of \(\phi\) can be described as follows. For each prime \(v\in I\), we choose a prime \(\widetilde{v}|v\) of \(K\). Note that \(\mathbb{Q}M^{-}\) has a basis \[\mathcal{B}=\{\widetilde{v}-\tau(\widetilde{v})\mid v\in I\}.\] Let \(\mathfrak{p}\) be the prime ideal associated to \(v\), and \(h\) be a positive integer such that \(\mathfrak{p}^{h}=(\alpha)\) is principal. Both \(\alpha\) and \(\tau(\alpha)\) are elements of \(U\), and \(\alpha/\tau(\alpha)\in U^{-}\). Setting \(f_{v}\) to denote the residue class degree of \(v\), take \[\phi^{-1}\left(\widetilde{v}-\tau(\widetilde{v})\right):=\frac{1}{hf_{v}}\otimes\left(\alpha/\tau(\alpha)\right).\] Then, \(\phi^{-1}\) is the inverse to \(\phi\). Define the homomorphism \[\lambda:U^{-}\to\mathbb{Q}_{p}M^{-}\] by setting \[\lambda(y):=\sum_{v\mid p}\log_{p}\left(N_{K_{v}/\mathbb{Q}_{p}}(y)\right)\cdot v.\] Composing \(\phi^{-1}\) with \(\lambda\), we obtain an endomorphism \[\lambda\phi^{-1}:\mathbb{Q}_{p}M^{-}\to\mathbb{Q}_{p}M^{-}.\] **Definition 2.4**.: _With notation as above, define the regulator \(\operatorname{Reg}_{p}(K)\) as follows_ \[\operatorname{Reg}_{p}(K):=\det\left(\lambda\phi^{-1}\mid\mathbb{Q}_{p}M^{-}\right),\] _where it is understood that \(\operatorname{Reg}_{p}(K):=1\) when \(r_{p,K}=0\)._ **Theorem 2.5** (Iwasawa, Greenberg).: _With respect to notation above, \(f_{K}^{-}(T)\) is divisible by \(T^{r_{p,K}}\)._ Proof.: We refer to works of Iwasawa [12] and Greenberg [10] for the proof of the statement.
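The invariants of Definition 2.1 are easy to read off in practice. The following short Python sketch (an illustration added here for concreteness, not part of the original exposition) does this for a characteristic polynomial with integer coefficients: by the Weierstrass preparation theorem, \(\mu_{p}\) is the minimal \(p\)-adic valuation among the coefficients, \(\lambda_{p}\) is the least index at which this minimum is attained, and the quantity \(r_{p}\) of Proposition 2.2 is the least index with nonzero coefficient.

```python
def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def iwasawa_data(coeffs, p):
    """Given integer coefficients a_0, a_1, ... of a characteristic
    polynomial f(T) = sum_i a_i T^i over Z_p (not all zero), return
    (mu, lam, r): mu is the minimal valuation v_p(a_i), lam is the
    least index attaining it, and r is the order of vanishing at T = 0."""
    vals = {i: vp(a, p) for i, a in enumerate(coeffs) if a != 0}
    mu = min(vals.values())
    lam = min(i for i, v in vals.items() if v == mu)
    r = min(vals)
    return mu, lam, r

# f(T) = 9T + 6T^2 + T^3 = T(T + 3)^2 over Z_3: mu = 0, lambda = 3, r = 1
print(iwasawa_data([0, 9, 6, 1], 3))  # -> (0, 3, 1)
```

In particular, the situation \(\lambda_{p}=r_{p}\) with leading coefficient \(a_{r}\) a unit, which appears in Corollary 2.7 below, corresponds to the output satisfying mu = 0 and lam = r.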
As a consequence of Theorem 2.5, one may write \[f_{K}^{-}(T)=a_{r}T^{r}+a_{r+1}T^{r+1}+\cdots+a_{\lambda}T^{\lambda},\] where \(r=r_{p,K}\) and \(\lambda:=\lambda_{p}^{-}(K)\). We note that as a consequence, \[\lambda_{p}^{-}(K)\geq r_{p,K}.\] For \(a,b\in\mathbb{Q}_{p}\), we write \(a\sim b\) to mean that \(a=ub\) for some \(u\in\mathbb{Z}_{p}^{\times}\). Let \(h_{K}\) (resp. \(h_{K}^{-}\)) denote the class number (resp. the order of the minus part of the class group) of \(K\). Let \(w_{K(\mu_{p})}\) be the number of roots of unity contained in \(K(\mu_{p})\). **Theorem 2.6** (Federer-Gross).: _With respect to notation above, the following are equivalent_ 1. \(\operatorname{Reg}_{p}(K)\neq 0\)_,_ 2. \(a_{r}\neq 0\)_._ _Assuming that these equivalent conditions hold, we have that_ \[a_{r}\sim\frac{h_{K}^{-}\left(\prod_{v\in I}f_{v}\right)\operatorname{Reg}_{p}(K)}{w_{K(\mu_{p})}^{r_{p,K}}}.\] Proof.: The above result is [10, Proposition 3.9]. Suppose that \(p\nmid f_{v}\) for all \(v\in I\), and \(p\nmid h_{K}\). Then, the map \(\phi^{-1}\) is defined over \(\mathbb{Z}_{p}\) and it is easy to see that \(\operatorname{Reg}_{p}(K)\) is divisible by \(p^{r_{p,K}}\). The normalized regulator is defined as follows \[\mathcal{R}_{p}(K):=\frac{\operatorname{Reg}_{p}(K)}{p^{r_{p,K}}}.\] **Corollary 2.7**.: _Let \(K\) be a CM field for which_ 1. \(\mathcal{R}_{p}(K)\neq 0\)_,_ 2. \(w_{K(\mu_{p})}\sim p\)_,_ 3. \(p\nmid f_{v}\) _for all_ \(v\in I\)_,_ 4. \(p\nmid h_{K}\)_._ _Then, the following are equivalent_ 1. \(\mu_{p}^{-}(K)=0\) _and_ \(\lambda_{p}^{-}(K)=r_{p,K}\)_,_ 2. \(a_{r}\) _is a unit in_ \(\mathbb{Z}_{p}\)_,_ 3. \(p\nmid\mathcal{R}_{p}(K)\)_._ Proof.: With respect to above notation, write \(f_{K}^{-}(T)=T^{r_{p,K}}g_{K}^{-}(T)\), where \(a_{r}=g_{K}^{-}(0)\). Then, \(a_{r}\) is a unit in \(\mathbb{Z}_{p}\) if and only if \(g_{K}^{-}(T)\) is a unit in \(\Lambda\). This implies that \[\mu_{p}^{-}(K)=0\text{ and }\lambda_{p}^{-}(K)=r_{p,K}.\] Conversely, if \[\mu_{p}^{-}(K)=0\text{ and }\lambda_{p}^{-}(K)=r_{p,K},\] then the factorization \(f_{K}^{-}(T)=T^{r_{p,K}}g_{K}^{-}(T)\) implies that \(g_{K}^{-}(T)\) must be a unit in \(\Lambda\). Thus, we find that (1) and (2) are equivalent. It follows from our assumptions that \(a_{r}\sim\mathcal{R}_{p}(K)\). Therefore, \(a_{r}\) is a unit in \(\mathbb{Z}_{p}\) if and only if \(p\nmid\mathcal{R}_{p}(K)\). This shows that the conditions (2) and (3) are equivalent. When \(K\) is an imaginary quadratic field, \(X_{\infty}=X_{\infty}^{-}\) and thus \(\mu_{p}(K)=\mu_{p}^{-}(K)\) and \(\lambda_{p}(K)=\lambda_{p}^{-}(K)\). In this setting, we find that \[r_{p,K}:=\begin{cases}&1\text{ if }p\text{ splits in }K;\\ &0\text{ if }p\text{ is inert or ramified in }K.\end{cases}\] Note that \(\mu_{p}(K)=0\) by the aforementioned result of Ferrero and Washington [11]. **Corollary 2.8**.: _Let \(K\) be an imaginary quadratic field and \(p\) be an odd prime number. Then, the following assertions hold._ 1. _Suppose that_ \(p\) _is inert in_ \(K\)_. Then,_ \(\lambda_{p}(K)=0\) _if and only if_ \(p\nmid h_{K}\)_._ 2. _Suppose that_ \(p\) _splits in_ \(K\)_. Then,_ \(\lambda_{p}(K)=1\) _if and only if_ \(p\nmid h_{K}\) _and_ \(p\nmid\mathcal{R}_{p}(K)\)_._ Proof.: The result is a special case of Corollary 2.7. Let \(K\) be an imaginary quadratic field and \(p\) be an odd prime which splits in \(K\).
In this case \(\lambda_{p}(K)\geq 1\), and Corollary 2.8 asserts that, when \(p\nmid h_{K}\), \(\lambda_{p}(K)=1\) if and only if \(\mathcal{R}_{p}(K)\) is a unit in \(\mathbb{Z}_{p}\). The analysis of this condition leads to the statement of Gold's criterion. **Theorem 2.9** (Gold's criterion).: _Let \(K\) be an imaginary quadratic field and \(p\) be an odd prime number which splits in \(K\). Assume that \(p\nmid h_{K}\). Let \(\mathfrak{p}|p\) be a prime ideal, and \(r\) be a positive integer not divisible by \(p\), such that \(\mathfrak{p}^{r}\) is principal. Let \(\alpha\in\mathcal{O}_{K}\) be a generator of \(\mathfrak{p}^{r}\). Setting \(\bar{\mathfrak{p}}\) to denote the complex conjugate of \(\mathfrak{p}\), the following conditions are equivalent_ 1. \(\lambda_{p}(K)>1\)_,_ 2. \(\alpha^{p-1}\equiv 1\left(\mod\bar{\mathfrak{p}}^{2}\right)\)_._ Proof.: The result is a consequence of [1, Theorems 3 and 4], and can also be seen to follow from Corollary 2.8, cf. [13, proof of Proposition 2.1]. ### Artin representations We briefly discuss the results of Greenberg and Vatsal [12] on the Iwasawa theory of Selmer groups associated to Artin representations. Let \(K/\mathbb{Q}\) be a finite Galois extension with Galois group \(\Delta:=\operatorname{Gal}(K/\mathbb{Q})\). Fix an irreducible Artin representation of dimension \(d>1\) \[\rho:\Delta\to\operatorname{GL}_{d}(\bar{\mathbb{Q}})\] and let \[S_{\chi,\epsilon}(\mathbb{Q}_{\infty}):=\varinjlim_{n}S_{\chi,\epsilon}(\mathbb{Q}_{n})\] denote the Selmer group over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). Note that \(S_{\chi,\epsilon}(\mathbb{Q}_{n})\) (resp. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\)) is an \(\mathcal{O}[\Gamma/\Gamma_{n}]\)-module (resp. \(\Lambda_{\mathcal{O}}\)-module). It is easy to see that the restriction map \[H^{1}(\mathbb{Q}_{n},D)\to H^{1}(\mathbb{Q}_{\infty},D)^{\Gamma_{n}}\] induces a map between Selmer groups \[\iota_{n}:S_{\chi,\epsilon}(\mathbb{Q}_{n})\to S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma_{n}}.\] The following control theorem shows that this map is injective with cokernel independent of \(n\). **Theorem 2.10** (Greenberg and Vatsal - Control theorem).: _Suppose that \(\chi\) is nontrivial. Then, \(\iota_{n}\) fits into a short exact sequence of \(\Gamma/\Gamma_{n}\)-modules_ \[0\to S_{\chi,\epsilon}(\mathbb{Q}_{n})\to S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma_{n}}\to H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})\to 0.\] Proof.: The above result is [13, Proposition 4.1]. We note that \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})\simeq(\mathcal{F}/\mathcal{O})^{t}\), where \(t\) is the multiplicity of the trivial representation of \(\Delta_{\mathfrak{p}}\) in \(V/V^{\epsilon_{\mathfrak{p}}}\). Here, the action of \(\Gamma/\Gamma_{n}\) on \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})\) is trivial. Let \(\mathcal{D}\) be a discrete \(p\)-primary \(\Lambda_{\mathcal{O}}\)-module. We take \(\mathcal{D}^{\vee}\) to be the Pontryagin dual \(\mathcal{D}^{\vee}:=\operatorname{Hom}\left(\mathcal{D},\mathbb{Q}_{p}/\mathbb{Z}_{p}\right)\). Say that \(\mathcal{D}\) is cofinitely generated (resp. cotorsion) as a \(\Lambda_{\mathcal{O}}\)-module, if \(\mathcal{D}^{\vee}\) is finitely generated (resp. torsion) as a \(\Lambda_{\mathcal{O}}\)-module. We set \(X_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) to be the Pontryagin dual of \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\).
**Theorem 2.11** (Greenberg-Vatsal).: _With respect to notation above, \(X_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) is a finitely generated and torsion \(\Lambda_{\mathcal{O}}\)-module that contains no non-trivial finite \(\Lambda_{\mathcal{O}}\)-submodules._ Proof.: We refer to [13, Propositions 4.5 and 4.7] for the proof of the above result. Set \(\mu_{\chi,\epsilon}\) and \(\lambda_{\chi,\epsilon}\) to denote the \(\mu\) and \(\lambda\)-invariants of \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\vee}\) respectively. The characteristic series of \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\vee}\) is denoted \(f_{\chi,\epsilon}(T)\). ## 3. The Euler characteristic In this section, we introduce the notion of the Euler characteristic associated to a cofinitely generated and cotorsion module over \(\Lambda_{\mathcal{O}}\), and give conditions for it to be well defined. For the Selmer group associated to an Artin representation, the Euler characteristic is shown to equal the cardinality of \(S_{\chi,\epsilon}(\mathbb{Q})\). The results presented in this section have consequences for the study of the Iwasawa \(\mu\) and \(\lambda\)-invariants associated to an Artin representation. ### Definition and properties of the Euler characteristic Let \(\mathcal{D}\) be a cofinitely generated and cotorsion module over \(\Lambda_{\mathcal{O}}\). Since \(\Gamma\) is procyclic of cohomological dimension \(1\), we find that \(H^{i}(\Gamma,\mathcal{D})=0\) for \(i\geq 2\). Also note that \(H^{1}(\Gamma,\mathcal{D})\) is identified with the module of coinvariants \(\mathcal{D}_{\Gamma}\). The Euler characteristic encodes information about the valuation of the leading term of the characteristic series. For an Artin representation that satisfies the conditions of Assumption 1.1, we discuss conditions for the Euler characteristic to be well defined, and give an explicit formula for it. The Euler characteristic formula in this context can be viewed as an analogue of the result of Federer and Gross. **Proposition 3.1**.: _Let \(\mathcal{D}\) be a cofinitely generated and cotorsion \(\Lambda_{\mathcal{O}}\)-module, then,_ 1. \(\operatorname{corank}_{\mathbb{Z}_{p}}H^{0}(\Gamma,\mathcal{D})=\operatorname{corank}_{\mathbb{Z}_{p}}H^{1}(\Gamma,\mathcal{D})\)_._ 2. _The module_ \(H^{0}(\Gamma,\mathcal{D})\) _is finite if and only if_ \(H^{1}(\Gamma,\mathcal{D})\) _is finite._ Proof.: Since \(\mathcal{D}\) is cofinitely generated as a \(\Lambda_{\mathcal{O}}\)-module, both \(\mathcal{D}^{\Gamma}\) and \(\mathcal{D}_{\Gamma}\) are cofinitely generated as \(\mathbb{Z}_{p}\)-modules. Part (1) follows from [13, Theorem 1.1], and part (2) follows from (1). **Definition 3.2**.: _Let \(\mathcal{D}\) be a cofinitely generated and cotorsion \(\Lambda_{\mathcal{O}}\)-module. Then, we say that the Euler characteristic of \(\mathcal{D}\) is well defined if \(H^{0}(\Gamma,\mathcal{D})\) (or equivalently \(H^{1}(\Gamma,\mathcal{D})\)) is finite.
Assuming that this is the case, the Euler characteristic is defined as follows_ \[\chi(\Gamma,\mathcal{D}):=\prod_{i\geq 0}\#H^{i}(\Gamma,\mathcal{D})^{(-1)^{i}}=\frac{\#H^{0}(\Gamma,\mathcal{D})}{\#H^{1}(\Gamma,\mathcal{D})}.\] Recall from Definition 2.1 that \(f_{\mathcal{D}^{\vee}}(T)\) is the characteristic element of \(\mathcal{D}^{\vee}\) and that \(\mu_{p}(\mathcal{D}^{\vee})\) (resp. \(\lambda_{p}(\mathcal{D}^{\vee})\)) is the \(\mu\)-invariant (resp. \(\lambda\)-invariant). For the ease of notation, we set \[f_{\mathcal{D}}(T):=f_{\mathcal{D}^{\vee}}(T),\ \mu_{p}(\mathcal{D}):=\mu_{p}(\mathcal{D}^{\vee})\text{ and }\lambda_{p}(\mathcal{D}):=\lambda_{p}(\mathcal{D}^{\vee}).\] From the expansion \[f_{\mathcal{D}}(T)=\sum_{i=0}^{\lambda}a_{i}T^{i},\] we note that \(\lambda=\lambda_{p}(\mathcal{D})\). Moreover, \(\mu_{p}(\mathcal{D})=0\) if and only if \(\varpi\nmid a_{i}\) for at least one coefficient \(a_{i}\). In particular, we have that \(\varpi\nmid f_{\mathcal{D}}(0)\) if and only if \(\mu_{p}(\mathcal{D})=\lambda_{p}(\mathcal{D})=0\). Fix an absolute value \(|\cdot|_{\varpi}\) on \(\mathcal{O}\) normalized by setting \[|\varpi|_{\varpi}:=[\mathcal{O}:\varpi\mathcal{O}]^{-1}.\] **Proposition 3.3**.: _Let \(\mathcal{D}\) be a cofinitely generated and cotorsion \(\Lambda_{\mathcal{O}}\)-module. Then, with respect to the above notation, the following conditions are equivalent_ 1. \(f_{\mathcal{D}}(0)\neq 0\)_,_ 2. _the Euler characteristic_ \(\chi(\Gamma,\mathcal{D})\) _is well defined._ _Moreover, if the above equivalent conditions are satisfied, then,_ \[\chi(\Gamma,\mathcal{D})=|f_{\mathcal{D}}(0)|_{\varpi}^{-1}.\] Proof.: It is easy to see that if \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) are cofinitely generated \(\Lambda_{\mathcal{O}}\)-modules that are pseudo-isomorphic, then \(\mathcal{D}^{\Gamma}\) is finite if and only if \((\mathcal{D}^{\prime})^{\Gamma}\) is finite. Therefore, \(\chi(\Gamma,\mathcal{D})\) is well defined if and only if \(\chi(\Gamma,\mathcal{D}^{\prime})\) is well defined. The characteristic series is determined up to pseudo-isomorphism, and therefore, \(f_{\mathcal{D}}(T)=f_{\mathcal{D}^{\prime}}(T)\). We assume without loss of generality that \[\mathcal{D}^{\vee}=\left(\bigoplus_{i=1}^{s}\frac{\mathcal{O}[\![T]\!]}{(\varpi^{\mu_{i}})}\right)\oplus\left(\bigoplus_{j=1}^{t}\frac{\mathcal{O}[\![T]\!]}{(f_{j}(T)^{\lambda_{j}})}\right).\] We identify \(\left(\mathcal{D}^{\Gamma}\right)^{\vee}\) with \[\left(\mathcal{D}^{\vee}\right)_{\Gamma}=\mathcal{D}^{\vee}/T\mathcal{D}^{\vee}\simeq\left(\bigoplus_{i=1}^{s}\mathcal{O}/(\varpi^{\mu_{i}})\right)\oplus\left(\bigoplus_{j=1}^{t}\mathcal{O}/(f_{j}(0)^{\lambda_{j}})\right).\] Since \[f_{\mathcal{D}}(0)=\prod_{i}\varpi^{\mu_{i}}\times\prod_{j}f_{j}(0)^{\lambda_{j}},\] we deduce that \[\#H^{0}(\Gamma,\mathcal{D})=\#H^{1}(\Gamma,\mathcal{D}^{\vee})=\#\left(\mathcal{D}^{\vee}\right)_{\Gamma}\] is finite if and only if \(f_{\mathcal{D}}(0)\) is non-zero. Therefore, the Euler characteristic is well defined if and only if \(f_{\mathcal{D}}(0)\) is non-zero. Assuming that \(f_{\mathcal{D}}(0)\neq 0\), we calculate the Euler characteristic.
First, we note that the previous argument implies that \[\#H^{0}(\Gamma,\mathcal{D})=|f_{\mathcal{D}}(0)|_{\varpi}^{-1}.\] On the other hand, we find that \[\#H^{1}(\Gamma,\mathcal{D})=\#H^{0}(\Gamma,\mathcal{D}^{\vee})=\#\left(\mathcal{D}^{\vee}\right)^{\Gamma}.\] Identify \((\mathcal{D}^{\vee})^{\Gamma}\) with the kernel of the multiplication by \(T\) endomorphism of \(\mathcal{D}^{\vee}\). Since \(f_{\mathcal{D}}(0)\) is assumed to be non-zero, it follows that none of the terms \(f_{j}(T)\) are divisible by \(T\). It follows from this that the multiplication by \(T\) map is injective and hence, \((\mathcal{D}^{\vee})^{\Gamma}=0\). Thus it has been shown that \[\chi(\Gamma,\mathcal{D})=|f_{\mathcal{D}}(0)|_{\varpi}^{-1}.\] **Corollary 3.4**.: _Suppose that the Euler characteristic of \(\mathcal{D}\) is well defined. Then the following conditions are equivalent_ 1. \(\chi(\Gamma,\mathcal{D})=1\)_,_ 2. \(\mu_{p}(\mathcal{D})=0\) _and_ \(\lambda_{p}(\mathcal{D})=0\)_._ Proof.: It is easy to see that \(\mu_{p}(\mathcal{D})=0\) and \(\lambda_{p}(\mathcal{D})=0\) if and only if \(f_{\mathcal{D}}(T)\) is a unit in \(\Lambda_{\mathcal{O}}\). Thus, the condition (2) is equivalent to the condition that \(\varpi\nmid f_{\mathcal{D}}(0)\). According to Proposition 3.3, \[\chi(\Gamma,\mathcal{D})=|f_{\mathcal{D}}(0)|_{\varpi}^{-1}.\] Therefore \(\chi(\Gamma,\mathcal{D})=1\) if and only if \(f_{\mathcal{D}}(0)\) is a unit in \(\mathcal{O}\). ### Calculating the Euler characteristic Let \(\rho\) be an Artin representation with character \(\chi\) and set \(\epsilon\) to be the chosen character with respect to which the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) is defined. We shall assume throughout that Assumption 1.1 is satisfied. **Proposition 3.5**.: _With respect to above notation, the following conditions are equivalent_ 1. _the Euler characteristic_ \(\chi(\Gamma,S_{\chi,\epsilon}(\mathbb{Q}_{\infty}))\) _is defined,_ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_._ Proof.: The Euler characteristic is defined if and only if \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma}\) is finite. By the control theorem, we have the short exact sequence \[0\to S_{\chi,\epsilon}(\mathbb{Q})\to S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma}\to H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})\to 0,\] where \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})\) is a cofree \(\mathcal{O}\)-module. According to Theorem 1.2, the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q})\) is finite. Therefore, the Euler characteristic is well defined if and only if \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\). **Theorem 3.6**.: _Let \(\rho\) be an Artin representation for which_ 1. _the conditions of Assumption_ 1.1 _are satisfied,_ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_._ _Then, the Euler characteristic is given by_ \[\chi(\Gamma,S_{\chi,\epsilon}(\mathbb{Q}_{\infty}))=\#S_{\chi,\epsilon}(\mathbb{Q}).\] Proof.: According to Proposition 3.5, the Euler characteristic is well defined. We are to calculate the orders of the finite abelian groups \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma}\) and \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})_{\Gamma}\). According to Theorem 2.11, \(X_{\chi,\epsilon}(\mathbb{Q}_{\infty}):=S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\vee}\) does not contain any nontrivial finite \(\Lambda_{\mathcal{O}}\)-submodules.
Since the Euler characteristic is well defined, \(X_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma}\) is finite, and hence is zero. In other words, \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})_{\Gamma}=0\). On the other hand, it follows from Theorem 2.10 that \[S_{\chi,\epsilon}(\mathbb{Q}_{\infty})^{\Gamma}=S_{\chi,\epsilon}(\mathbb{Q}).\] Therefore, we find that \(\chi(\Gamma,S_{\chi,\epsilon}(\mathbb{Q}_{\infty}))=\#S_{\chi,\epsilon}(\mathbb{Q})\). **Corollary 3.7**.: _Let \(\rho\) be an Artin representation and \(\epsilon:\operatorname{Gal}(\mathcal{K}/\mathbb{Q}_{p})\to\mathcal{O}^{\times}\) be a character for which the following assumptions are satisfied._ 1. _The conditions of Assumption_ 1.1 _are satisfied._ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_._ _Then, the following conditions are equivalent_ 1. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)_,_ 2. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) _is finite,_ 3. \(\mu_{\chi,\epsilon}=\lambda_{\chi,\epsilon}=0\)_,_ 4. \(\chi(\Gamma,S_{\chi,\epsilon}(\mathbb{Q}_{\infty}))=1\)_,_ 5. \(S_{\chi,\epsilon}(\mathbb{Q})=0\)_._ Proof.: It is clear that (1) implies (2). On the other hand, by Theorem 2.11, \(X_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) contains no nontrivial finite \(\Lambda_{\mathcal{O}}\)-submodules. Hence, if \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) is finite, it must be equal to \(0\). Therefore, (1) and (2) are equivalent. We note that a \(\Lambda_{\mathcal{O}}\)-module \(M\) is finite if and only if it is pseudo-isomorphic to the trivial module. On the other hand, \(M\sim 0\) if and only if the \(\mu\) and \(\lambda\)-invariants of \(M\) are \(0\). Therefore (2) and (3) are equivalent. The equivalence of (3) and (4) follows from Corollary 3.4, and the equivalence of (4) and (5) follows from Theorem 3.6. ### The structure of the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q})\) In this subsection, we analyze the structure of \(S_{\chi,\epsilon}(\mathbb{Q})\) and establish conditions for the vanishing of \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\). We set \(d(\chi)\) to denote \(d\) and write \(d(\chi)=d^{+}(\chi)+d^{-}(\chi)\), reflecting the action of the archimedean places on the representation. At each prime \(\mathfrak{p}|p\), let \(\mathcal{U}_{\mathfrak{p}}\) denote the principal units at \(\mathfrak{p}\), and set \(\mathcal{U}_{p}\) to denote the product \(\prod_{\mathfrak{p}|p}\mathcal{U}_{\mathfrak{p}}\). The decomposition group \(\Delta_{\mathfrak{p}}\) naturally acts on \(\mathcal{U}_{\mathfrak{p}}\), and \(\mathcal{U}_{p}\) is identified with the induced representation \(\operatorname{Ind}_{\Delta_{\mathfrak{p}}}^{\Delta}\mathcal{U}_{\mathfrak{p}}\). Let \(U_{K}\) be the group of units of \(\mathcal{O}_{K}\) that are principal units at all primes \(\mathfrak{p}|p\). The diagonal inclusion map \[U_{K}\hookrightarrow\mathcal{U}_{p}\] is a \(\Delta\)-equivariant map, and induces a \(\mathbb{Z}_{p}\)-linear map \[\lambda_{p}:U_{K}\otimes\mathbb{Z}_{p}\to\mathcal{U}_{p}.\] We recall that the character \(\epsilon\) arises from a choice of prime \(\mathfrak{p}|p\) and character \(\epsilon_{\mathfrak{p}}:\Delta_{\mathfrak{p}}\to\mathcal{O}^{\times}\). Let \(\mathfrak{p}^{\prime}\) be any other prime above \(p\), and write \(\mathfrak{p}^{\prime}=\delta(\mathfrak{p})\) for \(\delta\in\Delta\). Then conjugation by \(\delta\) gives an isomorphism \(c_{\delta}:\Delta_{\mathfrak{p}^{\prime}}\xrightarrow{\sim}\Delta_{\mathfrak{p}}\).
We set \(\epsilon_{\mathfrak{p}^{\prime}}\) to denote the composite \[\Delta_{\mathfrak{p}^{\prime}}\xrightarrow{c_{\delta}}\Delta_{\mathfrak{p}}\xrightarrow{\epsilon_{\mathfrak{p}}}\mathcal{O}^{\times}.\] Thus for each prime \(\mathfrak{p}|p\), we have made a choice of character \(\epsilon_{\mathfrak{p}}\) of \(\Delta_{\mathfrak{p}}\). For \(\mathfrak{p}|p\), setting \(\mathcal{U}_{\mathfrak{p},\mathcal{O}}:=\mathcal{U}_{\mathfrak{p}}\otimes_{\mathbb{Z}_{p}}\mathcal{O}\), we have a decomposition of \(\mathcal{O}[\Delta_{\mathfrak{p}}]\)-modules, \[\mathcal{U}_{\mathfrak{p},\mathcal{O}}=\mathcal{U}_{\mathfrak{p},\mathcal{O}}^{\epsilon_{\mathfrak{p}}}\times\mathcal{V}_{\mathfrak{p},\mathcal{O}},\] where the action of \(\Delta_{\mathfrak{p}}\) on \(\mathcal{U}_{\mathfrak{p},\mathcal{O}}^{\epsilon_{\mathfrak{p}}}\) is via \(\epsilon_{\mathfrak{p}}\). We use the following notation \[\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}:=\prod_{\mathfrak{p}|p}\mathcal{U}_{\mathfrak{p},\mathcal{O}}^{\epsilon_{\mathfrak{p}}},\text{ and }\mathcal{V}_{p,\mathcal{O}}=\prod_{\mathfrak{p}|p}\mathcal{V}_{\mathfrak{p},\mathcal{O}}.\] Then, we find that \[\mathcal{U}_{p,\mathcal{O}}=\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\times\mathcal{V}_{p,\mathcal{O}}.\] The above decomposition is one of \(\mathcal{O}[\Delta]\)-modules. In the same way, \(\bar{U}_{p,\mathcal{O}}:=\bar{U}_{p}\otimes_{\mathbb{Z}_{p}}\mathcal{O}\), where \(\bar{U}_{p}\) denotes the closure of the image of \(U_{K}\otimes\mathbb{Z}_{p}\) in \(\mathcal{U}_{p}\), decomposes into a product \[\bar{U}_{p,\mathcal{O}}=\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\times\bar{V}_{p,\mathcal{O}}.\] Let \(\left(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\) (resp. \(\left(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\)) denote the \(\chi\)-isotypic component of \(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\) (resp. \(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\)). Greenberg and Vatsal show that \(\left(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\) has finite index in \(\left(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\), provided the assertions of Assumption 1.1 are satisfied. **Proposition 3.8**.: _With respect to notation above, suppose that Assumption 1.1 is satisfied. Then, the following assertions hold_ 1. \(\left(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\) _has finite index in_ \(\left(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\)_,_ 2. _there is a short exact sequence of_ \(\mathcal{O}\)_-modules_ \[0\to H_{\mathrm{nr}}^{1}(\mathbb{Q},D)\to S_{\chi,\epsilon}(\mathbb{Q})\to\mathrm{Hom}_{\mathcal{O}[\Delta]}\left(\left(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}/\left(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi},D\right)\to 0.\] _Here,_ \(H_{\mathrm{nr}}^{1}(\mathbb{Q},D)\) _is the subgroup of_ \(H^{1}(\mathbb{Q},D)\) _consisting of cohomology classes that are unramified at all primes._ Proof.: The result follows from the proof of [13, Proposition 3.1]. **Theorem 3.9**.: _Assume that_ 1. _the conditions of Assumption_ 1.1 _are satisfied,_ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_._ _Then, the following conditions are equivalent_ 1. \(H_{\mathrm{nr}}^{1}(\mathbb{Q},D)=0\) _and_ \(\left(\mathcal{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}=\left(\bar{U}_{p,\mathcal{O}}^{[\epsilon]}\right)^{\chi}\)_,_ 2. \(\mu_{\chi,\epsilon}=\lambda_{\chi,\epsilon}=0\)_,_ 3. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)_._ Proof.: The result follows as a direct consequence of Corollary 3.7 and Proposition 3.8. **Lemma 3.10**.: _With respect to notation above, assume that \(p\nmid\#\operatorname{Cl}(K)\)._
_Then, we find that \(H^{1}_{\operatorname{nr}}(\mathbb{Q},D)=0\)._ Proof.: Since \(p\nmid\#\Delta\), we find that \(H^{1}(\Delta,D^{\operatorname{G}_{K}})=0\), and thus from the inflation-restriction sequence, \(H^{1}(\mathbb{Q},D)\) injects into \(\operatorname{Hom}\left(\operatorname{G}_{K}^{\operatorname{ab}},D\right)^{\Delta}\). Let \(H_{K}\) be the Hilbert class field of \(K\). We find that \(H^{1}_{\operatorname{nr}}(\mathbb{Q},D)\) injects into \(\operatorname{Hom}(\operatorname{Gal}(H_{K}/K),D)^{\Delta}\). Since \(p\nmid\#\operatorname{Cl}(K)\), we find that \[\operatorname{Hom}(\operatorname{Gal}(H_{K}/K),D)=0,\] and hence, \(H^{1}_{\operatorname{nr}}(\mathbb{Q},D)=0\). **Theorem 3.11**.: _Assume that_ 1. _the conditions of Assumption_ 1.1 _are satisfied,_ 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\)_,_ 3. \(K\) _is_ \(p\)_-rational,_ 4. \(p\nmid\#\operatorname{Cl}(K)\)_._ _Then, \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)._ Proof.: Since \(p\nmid\#\operatorname{Cl}(K)\), it follows from Lemma 3.10 that \(H^{1}_{\operatorname{nr}}(\mathbb{Q},D)=0\). Let \(G_{p}\) be the Galois group of the maximal pro-\(p\) extension of \(K\) which is unramified outside \(p\). Since \(p\nmid\#\operatorname{Cl}(K)\), we find that \(G^{\operatorname{ab}}_{p}\) is isomorphic to \(\mathcal{U}_{p}/\bar{U}_{p}\). On the other hand, \(K\) is \(p\)-rational if and only if the Leopoldt conjecture is true at \(p\) and there is no \(p\)-torsion in \(G^{\operatorname{ab}}_{p}\) (cf. [10] or [11]). This implies that \[\left(\mathcal{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}/\left(\bar{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}=0.\] The result then follows from Theorem 3.9. ### Special cases In this subsection, we consider certain special cases. First, we consider some \(2\)-dimensional Artin representations of dihedral type. Let \((L,\zeta)\) be a pair, where \(L\) is an imaginary quadratic extension of \(\mathbb{Q}\) and \(\zeta:\operatorname{G}_{L}\to\bar{\mathbb{Q}}^{\times}\) is a character. Consider the \(2\)-dimensional Artin representation \[\rho=\rho_{(L,\zeta)}:=\operatorname{Ind}_{\operatorname{G}_{L}}^{\operatorname{G}_{\mathbb{Q}}}\zeta.\] When restricted to \(L\), the representation \(\rho\) decomposes into a direct sum of characters \[\rho_{|\operatorname{G}_{L}}=\left(\begin{array}{cc}\zeta&\\ &\zeta^{\prime}\end{array}\right).\] Here, \(\zeta^{\prime}\) is given as follows \[\zeta^{\prime}(x)=\zeta(cxc^{-1}),\] where \(c\) denotes complex conjugation. We remark that the representation \(\rho\) is a self-dual representation of dihedral type if and only if \(\zeta^{\prime}=\zeta^{-1}\). Let \(p\) be an odd prime number which splits in \(L\) as a product \(p\mathcal{O}_{L}=\mathfrak{p}\mathfrak{p}^{*}\). We set \(\epsilon_{\mathfrak{p}}:=\zeta_{|\operatorname{G}_{\mathfrak{p}}}\) and note that \(\epsilon_{\mathfrak{p}^{*}}=\zeta^{\prime}_{|\operatorname{G}_{\mathfrak{p}^{*}}}\). The field \(K=L(\zeta,\zeta^{\prime})\) is the extension of \(L\) generated by \(\zeta\) and \(\zeta^{\prime}\). Choose an extension \(\mathcal{F}/\mathbb{Q}_{p}\) containing \(\mathbb{Q}_{p}(\chi,\epsilon_{\mathfrak{p}})\), set \(\mathcal{O}\) to denote its valuation ring, and let \(D\) be the associated \(\mathcal{O}\)-divisible module. With respect to such a choice, we let \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) be the associated Selmer group over the cyclotomic \(\mathbb{Z}_{p}\)-extension. **Theorem 3.12**.: _With respect to notation above, assume that the following conditions hold_ 1.
\(\rho\) _is irreducible,_ 2. _the order of_ \(\zeta\) _is coprime to_ \(p\)_,_ 3. \(\epsilon_{\mathfrak{p}}\) _is nontrivial and_ \(\epsilon_{\mathfrak{p}}\neq\epsilon_{\mathfrak{p}^{*}}\)_._ _Then, the following conditions are equivalent_ 1. \(H^{1}_{\mathrm{nr}}(\mathbb{Q},D)=0\) _and_ \(\left(\mathcal{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}=\left(\bar{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}\)_,_ 2. \(\mu_{\chi,\epsilon}=\lambda_{\chi,\epsilon}=0\)_,_ 3. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)_._ _If \(p\nmid h_{K}\) and \(K\) is \(p\)-rational, then the above conditions are satisfied._ Proof.: The result follows from Theorem 3.9 once we show that 1. the conditions of Assumption 1.1 are satisfied, 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\). Consider the conditions for Assumption 1.1 to hold. 1. The field \(K\) is the extension of \(L\) cut out by \(\zeta\) and \(\zeta^{\prime}\). Since the order of \(\zeta\) is assumed to be coprime to \(p\), the same holds for \(\zeta^{\prime}\). Since \(p\) is odd, it follows that \(p\nmid[K:\mathbb{Q}]\). 2. Since \(L\) is an imaginary quadratic field, complex conjugation interchanges \(\zeta\) and \(\zeta^{\prime}\), and hence acts via the matrix \(\left(\begin{array}{cc}&1\\ 1&\end{array}\right)\), whose eigenvalues are \(1\) and \(-1\) respectively. Therefore, \(d^{+}=d^{-}=1\). 3. The third condition follows from the assumption that \(\epsilon_{\mathfrak{p}}\) is nontrivial and \(\epsilon_{\mathfrak{p}}\neq\epsilon_{\mathfrak{p}^{*}}\). Finally, since \(\epsilon_{\mathfrak{p}^{*}}\) is nontrivial, we have \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\). By the proof of Theorem 3.11, if \(p\nmid h_{K}\) and \(K\) is \(p\)-rational, then \(H^{1}_{\mathrm{nr}}(\mathbb{Q},D)=0\) and \(\left(\mathcal{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}=\left(\bar{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}\), so that the above conditions are satisfied. Next, we consider the group of rotational symmetries of the icosahedron. This group is simply \(A_{5}\). An orthogonal set is a set of \(6\) points, such that three pairwise orthogonal lines can be drawn between pairs of them. The midpoints of the \(30\) edges of an icosahedron can be partitioned into \(5\) orthogonal sets such that these sets are permuted by \(A_{5}\). Let \(g\in A_{5}\) be an element that corresponds to rotation of the icosahedron with axis \((x,y,z)\) and angle \(\theta\). Then the corresponding matrix is \[\left(\begin{array}{ccc}\cos\theta+(1-\cos\theta)x^{2}&(1-\cos\theta)xy-z\sin\theta&(1-\cos\theta)xz+y\sin\theta\\ (1-\cos\theta)xy+z\sin\theta&\cos\theta+(1-\cos\theta)y^{2}&(1-\cos\theta)yz-x\sin\theta\\ (1-\cos\theta)xz-y\sin\theta&(1-\cos\theta)yz+x\sin\theta&\cos\theta+(1-\cos\theta)z^{2}\end{array}\right).\] This gives a representation \[r:A_{5}\rightarrow\mathrm{GL}_{3}(\bar{\mathbb{Q}}).\] Let \(K/\mathbb{Q}\) be a Galois extension with Galois group \(\mathrm{Gal}(K/\mathbb{Q})\) isomorphic to \(A_{5}\), and \(\varrho\) be the composite \[\varrho:\mathrm{G}_{\mathbb{Q}}\rightarrow\mathrm{Gal}(K/\mathbb{Q})\xrightarrow{\sim}A_{5}\xrightarrow{r}\mathrm{GL}_{3}(\bar{\mathbb{Q}}),\] where the initial map is the quotient map. Any \(5\)-cycle corresponds to a rotation by an angle of \(\frac{2\pi}{5}\) or \(\frac{4\pi}{5}\), and therefore has eigenvalues \(e^{\pm\frac{2\pi i}{5}},1\) or \(e^{\pm\frac{4\pi i}{5}},1\); in either case the nontrivial eigenvalues have order \(5\). Let \(g\in A_{5}\) be a \(5\)-cycle and \(D\) be the group generated by \(g\), and \(L:=K^{D}\) be the field fixed by \(D\).
Let \(p\geq 7\) be a prime which is split in \(L\) and is inert in \(K/L\). Then, the restriction to the decomposition group at \(p\) is of the form \(\operatorname{diag}\left(\alpha_{p},\alpha_{p}^{-1},1\right)\), where \(\alpha_{p}\) is an unramified character of order \(5\). Let \(\psi:\operatorname{G}_{\mathbb{Q}}\to\mathcal{O}^{\times}\) be an even character which is ramified at \(p\), and set \(\rho:=\varrho\otimes\psi\). **Theorem 3.13**.: _With respect to the above notation, the following conditions are equivalent_ 1. \(H^{1}_{\operatorname{nr}}(\mathbb{Q},D)=0\) _and_ \(\left(\mathcal{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}=\left(\bar{U}^{[\epsilon]}_{p,\mathcal{O}}\right)^{\chi}\)_,_ 2. \(\mu_{\chi,\epsilon}=\lambda_{\chi,\epsilon}=0\)_,_ 3. \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})=0\)_._ _If \(p\nmid h_{K}\) and \(K\) is \(p\)-rational, then the above conditions are satisfied._ Proof.: As in the proof of Theorem 3.12, the result follows from Theorem 3.9 once we show that 1. the conditions of Assumption 1.1 are satisfied, 2. \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\). Let us verify the conditions of Assumption 1.1. 1. Since it is assumed that \(p\geq 7\), it follows that \(p\nmid\#A_{5}\), and hence, \(p\nmid[K:\mathbb{Q}]\). 2. Let \(v\) be an archimedean prime, and \(\sigma_{v}\) be a generator of the decomposition group at \(v\). Then, \(\rho(\sigma_{v})=\varrho(\sigma_{v})\psi(\sigma_{v})=\varrho(\sigma_{v})\). Since \(\varrho\) only gives rise to rotational symmetries and \(\varrho(\sigma_{v})\) has order \(2\), it must correspond to the rotation with \(\theta=\pi\). This means that it is conjugate to \(\operatorname{diag}(1,-1,-1)\), and hence, \(d^{+}=1\). 3. For the third condition, it has been arranged that \(\varrho_{|\Delta_{\mathfrak{p}}}=\operatorname{diag}(\alpha_{p},\alpha_{p}^{-1},1)\), where \(\alpha_{p}\) is an unramified character of order \(5\), and hence, \(\epsilon_{\mathfrak{p}}:=\alpha_{p}\psi\) is a ramified character which occurs with multiplicity \(1\). Finally, we note that \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\) since the characters \(\alpha_{p}^{-1}\psi\) and \(\psi\) are nontrivial. This is because \(\psi\) is ramified at \(p\), while \(\alpha_{p}\) is not. ## 4. Distribution questions for Artin representations ### Iwasawa theory of class groups In this section, we introduce and study some natural distribution questions for Iwasawa invariants of class groups. These discussions serve to motivate similar questions for Artin representations, which we discuss in the next subsection. Given a number field \(K\) and a prime number \(p\), the Iwasawa invariants \(\mu_{p}(K)\) and \(\lambda_{p}(K)\) are natural invariants to consider, as they govern the growth of the \(p\)-primary parts of the class groups along the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(K\). One considers the following question. **Question 4.1**.: _Given an imaginary quadratic field \(K/\mathbb{Q}\), how do \(\mu_{p}(K)\) and \(\lambda_{p}(K)\) vary as \(p\to\infty\)? In other words, for a given value \(\lambda\), what can be said about the upper and lower densities of the set of primes \(p\) for which_ \[\lambda_{p}(K)=\lambda.\] We call the above set \(\mathcal{F}_{\lambda}\), and its upper (resp. lower) density \(\bar{\mathfrak{d}}_{\lambda}\) (resp. \(\underline{\mathfrak{d}}_{\lambda}\)). We note that if \(p\) is inert or ramified in \(K\) and \(p\nmid h_{K}\), then \(\operatorname{Cl}_{p}(K_{n})=0\) for all \(n\). In particular, \(\lambda_{p}(K)=0\).
This implies in particular that \(\underline{\mathfrak{d}}_{0}\geq\frac{1}{2}\). On the other hand, if \(p\) splits in \(K\), then \(r_{p,K}\geq 1\) and thus \(\lambda_{p}(K)\geq 1\). This implies that \(\mathfrak{d}_{0}=\frac{1}{2}\). Let \(p\) be a prime which splits in \(K\) and for which \(p\nmid h_{K}\). The former condition is satisfied by half of the primes and the latter condition is satisfied for all but finitely many primes. The following result is a corollary to Gold's criterion. **Corollary 4.2**.: _Let \(p\) be an odd prime number which splits into \(\mathfrak{pp}^{*}\) in \(\mathcal{O}_{K}\). Suppose that \(r>1\) is an integer not divisible by \(p\) and such that \(\mathfrak{p}^{r}=(\alpha)\). Then, the following conditions are equivalent_ 1. \(\lambda_{p}(K)>1\)_,_ 2. \(\operatorname{Tr}(\alpha)^{p-1}\equiv 1\mod p^{2}\)_._ Indeed, we find that for each number \(a_{0}\in[1,p-1]\), there is precisely one congruence class \(a\) modulo \(p^{2}\) which is congruent to \(a_{0}\) modulo \(p\) and satisfies \(a^{p-1}\equiv 1\mod p^{2}\). Therefore, the probability that an integer \(a\in\mathbb{Z}/p^{2}\mathbb{Z}\) satisfies the congruence \(a^{p-1}\equiv 1\mod p^{2}\) is \(\frac{p-1}{p^{2}}=\frac{1}{p}-\frac{1}{p^{2}}\). Let \(N_{K}(X)\) be the number of primes \(p\leq X\) such that \(p\) is split in \(K\) and \(\lambda_{p}(K)>1\). Thus, the heuristic suggests that \[N_{K}(X)\sim\sum_{p\in\Omega^{\prime}(X)}\left(\frac{1}{p}-\frac{1}{p^{2}}\right)\sim\frac{1}{2}\log\log X,\] where \(\Omega^{\prime}(X)\) is the set of primes \(p\leq X\) that split in \(K\). This leads us to the following expectation. **Conjecture 4.3**.: _With notation as above, \(N_{K}(X)=\frac{1}{2}\log\log X+O(1)\)._ In particular, the set of primes \(p\) for which \(\lambda_{p}(K)>1\) is predicted to be infinite of density \(0\), and thus the conjecture predicts that \(\mathfrak{d}_{1}=\frac{1}{2}\). Horie [10] has proven the infinitude of primes \(p\) such that \(\lambda_{p}(K)>1\), and Jochnowitz [11], on the other hand, the infinitude of primes \(p\) for which \(\lambda_{p}(K)=1\). On the other hand, one can fix a prime and study the variation of \(\lambda_{p}(K)\) as \(K\) ranges over all imaginary quadratic fields. We separate this problem into two cases, namely that of imaginary quadratic fields in which \(p\) is inert, and those in which \(p\) splits. For the primes \(p\) that are inert in \(K\), Ellenberg, Jain and Venkatesh [1] make the following prediction based on random matrix heuristics. **Conjecture 4.4** (Ellenberg, Jain, Venkatesh).: _Amongst all imaginary quadratic fields \(K\) in which \(p\) is inert, the proportion for which \(\lambda_{p}(K)=r\) is equal to_ \[p^{-r}\prod_{t>r}\left(1-p^{-t}\right).\] We note that \(\lambda_{p}(K)=0\) if and only if \(p\nmid h_{K}\). The probability that \(p\nmid h_{K}\) is, according to the Cohen-Lenstra heuristic, predicted to be equal to \[\prod_{t>0}\left(1-p^{-t}\right).\] ### Artin representations Given an Artin representation \(\rho\), we are interested in understanding the variation of the \(\mu\) and \(\lambda\)-invariants as \(p\) ranges over all prime numbers. We specialize our discussion to odd \(2\)-dimensional Artin representations of dihedral type \[\rho=\operatorname{Ind}_{\operatorname{G}_{L}}^{\operatorname{G}_{\mathbb{Q}}}\zeta,\] where \(L\) is an imaginary quadratic field and \(\zeta:\mathrm{G}_{L}\to\bar{\mathbb{Q}}^{\times}\) is a character.
Let \(\zeta^{\prime}\) be the character defined by \(\zeta^{\prime}(\sigma)=\zeta(c\sigma c^{-1})\), where \(c\in\mathrm{G}_{\mathbb{Q}}\) denotes complex conjugation. Since it is assumed that \(\rho\) is of dihedral type, it follows that \(\zeta^{\prime}=\zeta^{-1}\) and the extension \(K\) is simply the extension of \(L\) that is fixed by the kernel of \(\zeta\). Furthermore, assume that \(2\nmid[K:L]\). Let \(S(\rho)\) be the set of primes \(p\) such that 1. \(p\) is odd and \(p\nmid[K:\mathbb{Q}]\), 2. \(p\) splits in \(L\), \(p\mathcal{O}_{L}=\pi\pi^{*}\), and \(\pi\) is inert in \(K/L\). For \(p\in S(\rho)\), it is then clear from the assumptions that Assumption 1.1 holds, and that \(H^{0}(\Delta_{\mathfrak{p}},D/D^{\epsilon_{\mathfrak{p}}})=0\) for any of the primes \(\mathfrak{p}|p\) of \(K\) and the character \(\epsilon_{\mathfrak{p}}=\zeta_{|\Delta_{\mathfrak{p}}}\). Let \(\epsilon\) be the character associated to this choice of \(\mathfrak{p}\) and \(\epsilon_{\mathfrak{p}}\). **Definition 4.5**.: _Let \(T(\rho)\) be the set of primes \(p\in S(\rho)\) such that the Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\) vanishes for all of the primes \(\mathfrak{p}\) that lie above \(p\). Set \(T^{\prime}(\rho):=S(\rho)\backslash T(\rho)\)._ We note that Theorem 3.11 implies that if \(p\in S(\rho)\) is such that \(K\) is \(p\)-rational, then \(p\in T(\rho)\). Let us recall some heuristics on the \(p\)-rationality condition. The \(\mathbb{Z}_{p}\)-rank of \(\mathcal{U}_{p}\) is \(n=[K:\mathbb{Q}]=r_{1}+2r_{2}\). On the other hand, the rank of \(U_{K}\) is \(k:=r_{1}+r_{2}-1\). The probability that a random matrix with \(n\) columns and \(k\) rows over \(\mathbb{Z}_{p}\) does not have full rank is \[\Pr_{k,n}:=1-\frac{\prod_{i=0}^{k-1}\left(p^{n}-p^{i}\right)}{p^{nk}}\leq\frac{1}{p^{n-k+1}}+\frac{1}{p^{n-k+2}}+\cdots+\frac{1}{p^{n}}.\] Therefore, the expected number of primes \(p\) at which \(K\) is not \(p\)-rational is finite, since according to this heuristic, \[\sum_{p}\Pr_{k,n}\leq\sum_{p}\left(\frac{1}{p^{r_{2}+2}}+\frac{1}{p^{r_{2}+3}}+\cdots+\frac{1}{p^{n}}\right)<\zeta(r_{2}+2)+\zeta(r_{2}+3)+\cdots+\zeta(n)<\infty.\] This is indeed a conjecture due to Gras. **Conjecture 4.6** (Gras).: _Let \(K\) be a number field. Then, for all sufficiently large primes \(p\), the field \(K\) is \(p\)-rational._ This leads us to make the following conjecture. **Conjecture 4.7**.: _With respect to above notation, the set of primes \(T^{\prime}(\rho)\) is finite. Thus, there are only finitely many pairs \((\mathfrak{p},\epsilon)\), where \(\mathfrak{p}\) is a prime above \(p\in S(\rho)\) and \(\epsilon:\mathrm{G}_{L_{\mathfrak{p}}}\to\bar{\mathbb{Q}}_{p}^{\times}\) is a character, such that the associated Selmer group \(S_{\chi,\epsilon}(\mathbb{Q}_{\infty})\neq 0\)._ There is a relationship between Gras' conjecture and the generalized abc-conjecture, stated below, cf. [20]. **Conjecture 4.8** (Generalized abc-conjecture).: _Let \(K\) be a number field and \(I\) an ideal in \(\mathcal{O}_{K}\); the radical of \(I\) is defined as follows_ \[\mathrm{Rad}(I):=\prod_{\mathfrak{p}|I}N(\mathfrak{p}),\] _where the product is over all prime ideals \(\mathfrak{p}\) dividing \(I\), and \(N(\mathfrak{p}):=\#\left(\mathcal{O}_{K}/\mathfrak{p}\right)\) is the norm of \(\mathfrak{p}\).
The generalized abc conjecture predicts that for any \(\epsilon>0\), there exists a constant \(C_{K,\epsilon}>0\) such that_ \[\prod_{v}\max\{|a|_{v},|b|_{v},|c|_{v}\}\leq C_{K,\epsilon}\left(\operatorname{Rad}(abc)\right)^{1+\epsilon},\] _holds for all non-zero \(a,b,c\in\mathcal{O}_{K}\) such that \(a+b=c\)._ Let us recall a recent result of Maire and Rougnant, cf. [13, Theorem A]. **Theorem 4.9** (Maire-Rougnant).: _Let \(K/\mathbb{Q}\) be an imaginary \(S_{3}\) extension. Then, the generalized abc-conjecture for \(K\) implies that there is a constant \(c>0\) such that_ \[\#\{p\leq x\mid K\text{ is $p$-rational}\}\geq c\log x.\] The above result has implications for the vanishing of Iwasawa modules (and invariants). **Theorem 4.10**.: _Let \(K/\mathbb{Q}\) be an imaginary \(S_{3}\) extension and \(\rho\) be a \(2\)-dimensional Artin representation that factors through \(\operatorname{Gal}(K/\mathbb{Q})\). Then, there is a constant \(c>0\) such that_ \[\#\{p\leq x\mid p\in T(\rho)\}\geq c\log x.\] Proof.: Theorem 3.11 implies that if \(p\in S(\rho)\) is such that \(K\) is \(p\)-rational, then \(p\in T(\rho)\). The result thus follows.
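As a concrete illustration of the discussion in section 4.1, the heuristic count behind Conjecture 4.3 can be checked numerically. The following Python sketch (added here for illustration; it is not part of the original text) applies the trace form of Gold's criterion (Corollary 4.2) to \(K=\mathbb{Q}(i)\), where \(h_{K}=1\): for a split prime \(p=a^{2}+b^{2}\), the ideal \(\mathfrak{p}^{2}\) is generated by \(\alpha=(a+bi)^{2}\), so one may take \(r=2\) and \(\operatorname{Tr}(\alpha)=2(a^{2}-b^{2})\).

```python
from math import isqrt
from sympy import primerange

def two_squares(p):
    """Write a prime p = 1 (mod 4) as a^2 + b^2 by brute force."""
    for a in range(1, isqrt(p) + 1):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return a, b

def lambda_exceeds_one(p):
    """Corollary 4.2 for K = Q(i) with alpha = (a+bi)^2, r = 2:
    lambda_p(K) > 1 iff Tr(alpha)^(p-1) = 1 (mod p^2)."""
    a, b = two_squares(p)
    return pow(2 * (a * a - b * b), p - 1, p * p) == 1

X = 10**5
anomalous = [p for p in primerange(3, X) if p % 4 == 1 and lambda_exceeds_one(p)]
print(len(anomalous), anomalous)
```

By the heuristic above, the number of such primes up to \(X\) is expected to grow only like \(\frac{1}{2}\log\log X\), so the printed list should be very short.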
2309.12635
Population synthesis of exoplanets accounting for orbital variations due to stellar evolution
In this paper, the evolution of exoplanet orbits at the late stages of stellar evolution is studied by the method of population synthesis. The evolution of stars is traced from the Main Sequence stage to the white dwarf stage. The MESA package is used to calculate evolutionary tracks. The statistics of absorbed, ejected, and surviving planets by the time of the transformation of parent stars into white dwarfs are calculated taking into account the change in the rate of star formation in the Galaxy over the entire time of its existence. Planets around stars in the range of initial masses 1-8 $M_\odot$ are considered since less massive stars do not have time to leave the Main Sequence during the lifetime of the Galaxy, and more massive ones do not lead to the formation of white dwarfs. It is shown that with the initial $a$~--~$M_\mathrm{pl}$ distribution of planets adopted in this work, most (about 60\%) of the planets born from stars in the mass range under study are absorbed by their parent stars at the giant stage. A small fraction of the planets (less than one percent) are ejected from their systems because of the mass loss due to the stellar wind. The estimated number of ejected planets with masses ranging from 0.04 Earth masses to 13 Jupiter masses in the Milky Way is approximately equal to 300 million.
A. S. Andriushin, S. B. Popov
2023-09-22T06:07:41Z
http://arxiv.org/abs/2309.12635v1
# Population synthesis of exoplanets accounting for orbital variations due to stellar evolution ###### Abstract In this work, the evolution of exoplanet orbits at the late stages of stellar evolution is studied by the method of population synthesis. The evolution of stars is traced from the Main Sequence stage to the white dwarf stage. The MESA package is used to calculate evolutionary tracks. The statistics of absorbed, ejected, and surviving planets by the time of the transformation of parent stars into white dwarfs are calculated taking into account the change in the rate of star formation in the Galaxy over the entire time of its existence. Planets around stars in the range of initial masses 1-8 \(M_{\odot}\) are considered since less massive stars do not have time to leave the Main Sequence during the lifetime of the Galaxy, and more massive ones do not lead to the formation of white dwarfs. It is shown that with the initial \(a\) - \(M_{\rm pl}\) distribution of planets adopted in this work, most (about 60%) of the planets born from stars in the mass range under study are absorbed by their parent stars at the giant stage. A small fraction of the planets (less than one percent) are ejected from their systems because of the mass loss due to the stellar wind. The estimated number of ejected planets with masses ranging from 0.04 Earth masses to 13 Jupiter masses in the Milky Way is approximately equal to 300 million. ## 1 Introduction About three decades have passed since the discovery of the first exoplanets [1], [2]. During this time, the number of confirmed extrasolar planets discovered with the help of such instruments as Kepler, HARPS, HIRES, TESS, etc., has exceeded 4300. Of these, more than a hundred are planets around evolved stars: red giants and subgiants. The statistics of the detection of planets around white dwarfs are more modest. We can mention a planet around the star WD 0806-661 [3], a recently discovered candidate around WD 1856+534 [4], as well as objects in binary systems "white dwarf - Main Sequence star" (NN Ser, Gliese 86). However, there are many more examples (of the order of 1000) of the detection of "planetary debris" around white dwarfs and in their atmospheres; these are products of the destruction of planets and/or asteroids. Such conclusions can be made from the analysis of the observed chemical composition of the atmospheres of the dwarfs and from observations of their circumstellar disks of dust and rock fragments [5], [6]. Thus, it can be considered an established fact that objects of planetary masses can not only remain in the system after the star has shed its envelope at the later stages of evolution but can also end up in low orbits around a compact object. This makes it relevant to analyze the properties of planets in the late stages of evolution and their previous history. In order to adequately interpret the growing amount of data on exoplanets around evolved stars, and to be able to judge from these data what kind of evolution the observed planetary system has undergone, it is necessary to theoretically understand the processes that determine the evolution of planetary systems, including those stages when the parent star leaves the Main Sequence (MS). A model of the evolution of planetary systems under the influence of the evolution of their parent stars would make it possible to draw conclusions about the past of these systems in relation to the discovered and observed planets around evolved stars.
In addition, it is desirable that the model also has a predictive potential for planetary systems around MS stars. Over the past 10 years, a lot of studies have been devoted to modeling the evolution of planetary systems of stars after the MS stage. Key results and unresolved issues are discussed, for example, in the review [7]. The evolution of planetary systems in the late stages of a star's life occurs under the influence of various factors and at different levels, depending on the semimajor axis of the orbit and the mass of the substellar object (for example, a planet or an asteroid) during the star's life on the MS, and on how strongly this object can later, after the star leaves the MS, be influenced by factors such as mass loss by the parent star, tidal effects in a "star-planet" system, radiation (the Yarkovsky effect, the YORP effect), and magnetic fields. The impact of these factors can manifest itself both in a change in the orbit of a substellar object and in a change in its physical parameters (mass and size, temperature, composition of the surface and atmosphere, etc.). The impact can be so strong that the object will be ejected from the system, or it may happen that at the giant stage the parent star absorbs it and it ceases to exist. In this regard, it is worth mentioning that in addition to the already mentioned examples of planets around white dwarfs and giant stars, there are also examples of free-floating planets: WISE \(J085510.83\)-\(071442.5\) [8], SDSS J1110+0116 [9], PSO J318.5-22 [10] and others. The discovery of a free-floating terrestrial-mass planet [11] also deserves special mention. The number of discovered free-floating planets is growing, and among them there may be those that became unbound after they were ejected from their parent planetary systems as a result of the mass loss of a star due to a strong stellar wind. The final fate of planetary systems is determined not only by stellar evolution but also by the initial parameters of the planets. There is a large number of modern studies devoted to the theory of planet formation and the modeling of planetary systems (see a review in [12]). Along with a detailed study of individual systems (for example, the Solar System) or the development of details of various stages of the formation of planets and the evolution of their orbits, an important place is occupied by the construction of population models, which at a coarser level include the processes of formation and evolution of objects in a wide range of initial parameters. The population synthesis of exoplanets is discussed, for example, in the works of Christoph Mordasini, Yann Alibert et al. [15], [17], [18], [16]. In our article, we actively use the results of these studies. The aim of this work is to model planetary orbits accounting for the evolution of a star after the MS stage, as well as to calculate the statistics of absorbed, ejected from the system, and surviving planets by the time their parent star turns into a white dwarf. When calculating the properties of the Galactic population of planets, we take into account the history of star formation in the Milky Way. In section 2 we present the model that underlies our population synthesis, describe the initial distributions of planets over masses and orbits used in our simulation, as well as the evolutionary models of stars used in the work. Then, in section 3 we briefly describe the code for population synthesis written in the MatLab package.
Section 4 is devoted to the results of our study. In section 5 we discuss our results and approach. The final section briefly summarizes the main results of this study. ## 2 Model The population synthesis and modeling of the evolution of exoplanetary systems carried out in this work are based on modern approaches to formation of planetary systems and stellar evolution, as well as on a simple model that links the evolution of a star and changes in orbits of the planets. This simple model does not take into account possible orbital variations resulting from interplanetary gravitational interactions. As for binary and multiple star systems, the model is suitable only for that fraction of them for which the distance between the parent star and the planet significantly exceeds the distance to the second star in the system (for binary systems, where a planet orbits at a large distance from a pair of stars close to each other, the model does not work since in this case the evolutionary tracks of stars due to mutual influence in many cases will be different from those used in this study). ### Model of the orbital evolution The problem of orbital evolution due to isotropic mass loss by a central more massive body is well known. Changes in the semimajor axis of the orbit, eccentricity, and true anomaly with time are generally described by the following differential equations (see [13], [14]): \[\frac{da}{dt}=a\frac{1+e^{2}+2e\cos(f)}{1-e^{2}}\frac{\dot{M}_{\rm tot}}{M_{ \rm tot}} \tag{1}\] \[\frac{de}{dt}=\left(e+\cos(f)\right)\frac{\dot{M}_{\rm tot}}{M_{\rm tot}} \tag{2}\] \[\frac{df}{dt}=-\frac{\sin(f)}{e}\frac{\dot{M}_{\rm tot}}{M_{\rm tot}}+\frac{n( 1+e\cos(f))^{2}}{(1-e^{2})^{3/2}}, \tag{3}\] where \(f\) - is the true anomaly of the orbit, \(a\) - its semi-major axis, \(e\) -- eccentricity, \(\dot{M}_{\rm tot}\) - mass loss rate in the system, which in our case is connected only with the stellar wind from the parent star (\(\dot{M}_{\rm tot}\equiv\dot{M}_{\star}\)), \(M_{\rm tot}\) - total mass of the star - planet system (for our systems \(M_{\rm tot}\approx M_{\star}\), where \(M_{\star}\) - is the stellar mass), \(n\) - mean motion (\(n=2\pi\sqrt{M_{\rm tot}/a^{3}}\)). The system of equations is complemented with equations for the orbital inclination \(i\), ascending node longitude \(\Omega\), periapsis longitude \(\varpi\), and periapsis argument \(\omega\)[13]: \[\frac{di}{dt}=\frac{d\Omega}{dt}=0,\] \[\frac{d\varpi}{dt}=\frac{d\omega}{dt}=\frac{\sin(f)}{e}\frac{\dot{M}_{\rm tot }}{M_{\rm tot}}.\] The system of these equations does not have a complete analytical solution, but there are mass loss regimes under which an analytical solution is available. We are interested in one of these regimes. To classify them a dimensionless mass loss parameter \(\psi\) is introduced. It is defined as follows: \[\psi=\frac{1}{2\pi}\left(\frac{a}{1{\rm au}}\right)^{3/2}\left(\frac{M_{\star }}{M_{\odot}}\right)^{-3/2}\frac{\dot{M}_{\star}}{M_{\odot}\,{\rm yr}^{-1}}. \tag{4}\] For \(\psi\ll 1\) an adiabatic regime takes place. Then the evolution of the orbit is slow and can be described by a simple analytical formula: \[a(\Delta t)=a_{\rm in}\left(1-\frac{\Delta t\dot{M}_{\star}}{M_{\rm tot}} \right)^{-1}. 
\tag{5}\] Here \(\Delta t\) is a duration of the evolutionary stage, \(a(\Delta t)\) - semimajor axis at the end of the evolutionary stage, \(a_{\rm in}\) - semimajor axis at the beginning of the evolutionary stage, \(M_{\rm tot}\) - current total mass of the star-planet system (the planet mass is considered to be constant). The stellar mass evolution is calculated as follows: \[M_{\star}(\Delta t)=M_{\rm in}-\Delta t\dot{M}_{\star}, \tag{6}\] where \(M_{\star}(\Delta t)\) is the stellar mass at the end of the evolutionary stage, \(M_{\rm in}\) - stellar mass at the beginning of the evolutionary stage. For cases where \(\psi\) approaches unity (more precisely \(\psi\)>0.1) we numerically solve a system of four differential equations, three of which are given above, and the fourth describes the evolution of the mass loss rate (see section 3). ### Initial distributions of planetary parameters The key point in our modeling is the choice of initial distributions of planetary parameters. At the moment, they are not well known, so one can use different approaches to specify them. For example, as a basis for the distribution of exoplanets by masses and semimajor axes, one can use the data from one of the catalogs of confirmed exoplanets. However, modern observational data are burdened by various selection effects. Therefore, we decided to use the results of theoretical modeling of planetary systems. In recent years, population models of the formation of planetary systems have been significantly advanced. In our simulation, when creating the initial distribution of planets on the plane "semimajor axis of the planet's orbit - planetary mass" (\(a\) - \(M_{\rm pl}\)), we rely on the results from the article by Alibert et al. [17]. In this paper, the authors calculated distributions over masses and semimajor axes at the end of the rapid initial dynamical evolution of planetary systems. The emphasis is on the fact that the calculations were carried out taking into account the interactions between planetary embryos and planets. The initial orbits of the embryos ranged from 0.1 to 20 AU, the initial masses were 0.01 Earth mass. The mass of the central star was taken equal to one solar mass. The metallicity of the star was chosen randomly from the metallicities of the stars in the CORALIE list of objects. The inner radius of the disk was taken to be 0.05 AU, the mass of the disks was in the range from 0.01 to 0.03 \(M_{\odot}\), and the surface density at a distance of 5.2 AU - from 0 to 10 g/cm\({}^{2}\) with a long "tail" of distribution up to 50 g/cm\({}^{2}\). Following the approach used by Popkov and Popov [20] we approximate the \(a\) - \(M_{\rm pl}\) diagram presented in [17] by several groups (see Fig. 2). Each of the groups I, IV - VI is approximated by a two-dimensional log-normal distribution, which consists of two one-dimensional distributions: \[p(x)=\frac{1}{x\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(\ln(x)-\zeta)^{2}}{2 \sigma^{2}}\right), \tag{7}\] where \(\sigma\) and \(\zeta\) are parameters of the distributions, see table 1. Group II is approximated by a bivariate Gauss distribution of the following form: \[\begin{split} p(x,y)=\frac{1}{2\pi xy\sigma_{x}\sigma_{y}\sqrt{1- \rho^{2}}}\times\\ \times\exp\left(-\frac{\phi^{2}/\sigma_{x}^{2}+\psi^{2}/\sigma_{y }^{2}-2\rho\psi\phi/\sigma_{x}\sigma_{y}}{2(1-\rho^{2})}\right),\end{split} \tag{8}\] where \(\phi=\lg(x)-\zeta_{x}\), \(\psi=\lg(y)-\zeta_{y}\), and \(\sigma_{x},\sigma_{y},\zeta_{x},\zeta_{y},\rho\) are parameters given in table 1. 
Group III is approximated by a two-dimensional uniform (in a logarithmic scale) distribution. Since the paper [20] focuses on those planets that can merge with their stars, i.e. on planets situated relatively close to their stars, the authors limit their modeling to the listed six groups of planets, which describe most of the distribution obtained by Alibert et al. in the \(a\) - \(M_{\rm pl}\) plane. Analysis of Fig. 5 in [17] allows us to state that it has one more small group of planets (in our Fig. 2 this is group VIII) -- objects that finally appear at wide orbits as a result of gravitational interaction with other bodies in multiplanetary systems at early stages of their lives. According to [17], the fraction of such planets in the population considered by us is slightly less than 1%. This group is described by a two-dimensional "triangular" distribution uniform in a logarithmic scale, in which the "hypotenuse" is given by a straight line connecting the points with coordinates (lg(20), lg(0.1)) and (lg(700), lg(1200)), and the "legs" are defined in Table 1.

Figure 1: Distribution of confirmed exoplanets in the plane "semimajor axis of the planetary orbit - planetary mass" according to the catalog exoplanet.eu. The figure shows data for approximately 1700 planets, including those for which only the lower limit \(M\sin(i)\) is known as a mass. Planets around pulsars and white dwarfs are not included.

We also added a group that is not presented in [17]. These are the planets formed as a result of fragmentation of a self-gravitating protoplanetary disk. In describing this group, we follow the study by Forgan et al. [19]. Approximating the mass distribution of circumstellar disks by a normal distribution on a logarithmic scale, as was done in [18] (Fig. 4 and Table 2 in that paper and Fig. 3 here), and assuming that this distribution is typical for the entire range of stellar masses in our work, we calculate the fraction of planetary systems with planets formed as a result of self-gravity of disk fragments. It should be noted that several assumptions are made in these calculations based on the results from [19]. Following this work, firstly, such planets can form only in disks with masses in the range from 0.125 to 0.4 stellar masses (lighter disks are unlikely to be self-gravitating, and heavier ones accrete onto the star very quickly). Secondly, the average number of disk fragments that can become planets in a given system was assumed to be equal to four. Finally, on average, about 40% of these fragments survive. Taking into account these assumptions, the fraction \(\delta\) of planets of this group in the population was calculated using the following equation: \[\delta=0.4\times 4\times\frac{\int_{\lg(0.125)}^{\lg(0.4)}\exp\left(-\frac{(x+1.66)^{2}}{2\times 0.6^{2}}\right)/\sqrt{2\pi\times 0.6^{2}}\,dx}{\int_{\lg(0.001)}^{\lg(1)}\exp\left(-\frac{(x+1.66)^{2}}{2\times 0.6^{2}}\right)/\sqrt{2\pi\times 0.6^{2}}\,dx}\approx 0.1395. \tag{9}\] When choosing functions for the distributions of this group of planets, we used the data presented in Fig. 3 and Fig. 7 in [19] as a guide. The distribution of semimajor axes is approximated by a function similar to the Maxwell distribution: \[f(x)=\sqrt{\frac{2}{\pi}}\,\frac{x^{2}}{\kappa^{3}}\exp\left(-\frac{x^{2}}{2\kappa^{2}}\right).\] The parameter \(\kappa\) is given in Table 1. The mass distribution is taken to be log-normal.
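For illustration, here is a brief sketch (ours, in Python rather than the MatLab of the actual code) of drawing initial \((a, M_{\rm pl})\) pairs for two of these groups; the parameter values are copied from Table 1, while the sampling code itself is our assumption about one possible implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_group_II(n):
    """Bivariate Gaussian in lg(a), lg(M) with correlation rho (Eq. 8)."""
    zeta = np.array([np.log10(0.5), 0.0])       # lg-means: a [AU], M [Earth masses]
    sig_a, sig_m, rho = 0.25, 0.45, -0.8
    cov = [[sig_a**2, rho * sig_a * sig_m], [rho * sig_a * sig_m, sig_m**2]]
    lg = rng.multivariate_normal(zeta, cov, size=n)
    return 10.0**lg[:, 0], 10.0**lg[:, 1]        # back to linear units

def sample_group_VII_orbits(n, kappa=40.0):
    """Maxwellian orbit distribution of the disk-fragmentation group (VII)."""
    # A Maxwell distribution is the norm of a 3D isotropic Gaussian of scale kappa.
    return np.linalg.norm(rng.normal(0.0, kappa, size=(n, 3)), axis=1)

a2, m2 = sample_group_II(10000)     # semimajor axes and masses, group II
a7 = sample_group_VII_orbits(1000)  # semimajor axes, group VII
```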
The fraction of each of the first six groups of planets is calculated according to [20] (and corrected accordingly to account for the existence of two more groups). The percentage of all groups is given in the fourth column of Table 1. Regardless of which of the groups a planet belongs to, for the entire population the lower and upper limits on the mass are determined. The lower limit is 0.04 Earth masses. The upper one is 13 Jupiter masses (i.e., about 4120 Earth masses). The lower limit roughly corresponds to the mass of the least massive planet in the Solar system (we also note that at the moment only three exoplanets with masses below this limit are known, see e.g. exoplanet.eu). The upper limit is related to the lower limit on the mass of brown dwarfs. For numerical calculations of planetary orbits for large values of the \(\psi\) parameter, the values of eccentricity and true anomaly are required. For the entire population, we take the initial value of the true anomaly \(f=0\), and the distribution of the initial eccentricity is set to be uniform in the range \(0.01<e<0.1\). In our opinion, observational and/or theoretical data do not allow one to specify the distribution of this parameter with sufficient accuracy. Fixing the initial eccentricity at \(e=0.1\) and \(e=0.01\) led to the following change in the key results: the number of absorbed planets changed at the level of \(\lesssim 1\%\), the number of ejected planets -- at the level of \(\lesssim 0.04\%\).

### Evolutionary tracks

For calculations of the evolutionary tracks we use the MESA package (Modules for Experiments in Stellar Astrophysics, Release 10398) [21]. Evolutionary tracks of stars with metallicity \(Z=0.02\) are constructed for the following initial masses: from 1 to 2.6 solar masses with a step of 0.1 \(M_{\odot}\), then for masses 2.8 \(M_{\odot}\), 3 \(M_{\odot}\) and 3.25 \(M_{\odot}\), and finally, for larger masses with a step of 0.25 \(M_{\odot}\) up to 8 solar masses.

Figure 2: Initial distribution of planets by masses and semimajor axes of the orbit (15,000 planets are shown).

Figure 3: Mass distribution of circumstellar disks according to [18]. Distribution parameters: \(\sigma=0.6,\mu=-1.66\). The mass interval in which planets can form in a self-gravitating disk is noted.

Tracks for less massive stars were not used in the simulation, because the red giant stage in these tracks is reached in a time exceeding the age of the Galaxy (see the subsection "White dwarf masses" in the Discussion). For calculations of the mass loss rate at the red giant stage (Red Giant Branch -- RGB), we apply the Reimers formula [22]: \[\frac{\dot{M}_{\star}}{M_{\odot}\,{\rm yr}^{-1}}=4\times 10^{-13}\times\eta_{R}\times\left(\frac{L}{L_{\odot}}\right)\left(\frac{R}{R_{\odot}}\right)\left(\frac{M_{\star}}{M_{\odot}}\right)^{-1}. \tag{10}\] Here \(L\) is the current luminosity of the star, \(R\) is its current radius, and \(M_{\star}\) is its current mass (all variables are taken in solar units). \(\eta_{R}\) is a free parameter: for stars with initial masses up to 3 solar masses its values are set in the range 0.1-0.7 for both the RGB and the asymptotic giant branch (AGB); for more massive stars we use the same range for the RGB, but for the AGB we take \(\eta_{R}\) in the range from 0.5 to 7.
To calculate the mass loss rate at the AGB stage, we use the Blocker equation [22]: \[\frac{\dot{M}_{\star}}{M_{\odot}\,{\rm yr}^{-1}}=4\times 10^{-13}\times\eta_{\rm R}\left(\frac{L}{L_{\odot}}\right)\left(\frac{R}{R_{\odot}}\right)\left(\frac{M_{\star}}{M_{\odot}}\right)^{-1}\times 4.83\times 10^{-9}\times\left(\frac{L}{L_{\odot}}\right)^{2.7}\left(\frac{M_{\star}}{M_{\odot}}\right)^{-2.1}. \tag{11}\]

\begin{table}
\begin{tabular}{||c|c|c|c||} \hline Group & Distribution & Parameters & Fraction of planets \\ \hline \hline I & 2D Log-normal & \(\zeta_{a}=\ln(0.5)\), \(\zeta_{M}=\ln(500)\), \(\sigma_{a}=0.9\), \(\sigma_{M}=1\) & 6.72\% \\ \hline II & Bivariate Gauss & \(\zeta_{a}=\lg(0.5)\), \(\zeta_{M}=0\), \(\sigma_{a}=0.25\), \(\sigma_{M}=0.45\), \(\rho=-0.8\) & 46.78\% \\ \hline III & Uniform in logarithm & \(\lg(a_{\min})=-0.7\), \(\lg(M_{\min})=-1.39\), \(\lg(a_{\max})=1.3\), \(\lg(M_{\max})=1.6\) & 5.19\% \\ \hline IV & 2D Log-normal & \(\zeta_{a}=\ln(0.2)\), \(\zeta_{M}=\ln(0.4)\), \(\sigma_{a}=0.5\), \(\sigma_{M}=0.8\) & 17.61\% \\ \hline V & 2D Log-normal & \(\zeta_{a}=\ln(0.045)\), \(\zeta_{M}=\ln(0.7)\), \(\sigma_{a}=0.2\), \(\sigma_{M}=0.8\) & 6.04\% \\ \hline VI & 2D Log-normal & \(\zeta_{a}=\ln(0.06)\), \(\zeta_{M}=\ln(12)\), \(\sigma_{a}=0.05\), \(\sigma_{M}=0.5\) & 2.72\% \\ \hline VII & Normal in logarithm (for masses), Maxwellian (for orbits) & \(\zeta_{M}=23\,m_{\rm Jup}\), \(\sigma_{M}=20\,m_{\rm Jup}\), \(\kappa=40\) & 13.95\% \\ \hline VIII & Triangular, uniform in logarithm & \(\lg(a_{\min})=\lg(20)\), \(\lg(M_{\min})=\lg(0.04)\), \(\lg(a_{\max})=\lg(700)\), \(\lg(M_{\max})=\lg(1200)\) & 1\% \\ \hline \end{tabular}
\end{table} Table 1: Groups of initial distributions of planets in the plane \(a\) - \(M_{\rm pl}\) and their parameters (units of measurement -- AU and Earth's masses, unless otherwise indicated).

All the tracks used for modeling have been calculated up to the white dwarf stage. The criterion for the onset of this stage is the drop in luminosity (due to cooling) below the critical value (see Fig. 4) after the end of the mass loss. Models of stars with an initial mass greater than 3 solar masses have also been brought to the white dwarf stage. In this case, the asymptotic giant branch in the models of massive stars used in this work ends with the stage of thermal bursts (Thermal Pulse Asymptotic Giant Branch -- TPAGB), which is limited to one or two short-term increases in luminosity (and mass loss rate). During them, a large part of the helium-hydrogen shell of the star is lost (see Figs. 18 and 19 in the Appendix), after which the track "turns" towards an increase in the effective temperature, and the star passes to the planetary nebula stage, where mass loss also occurs (several tenths of \(M_{\odot}\) are lost then). Evolutionary tracks for stars with an initial mass less than \(3M_{\odot}\) start at the pre-main sequence stage. For more massive stars tracks start at the MS. Results of calculations of the stellar evolutionary track for each initial mass include a group of profile files, each of which describes the stellar structure at the corresponding moment of time, and a history file containing information about changes along the track in the main parameters of the star. The parameters whose changes are recorded in the history include the current age of the star, its effective temperature, its luminosity, mass, radius, mass loss rate, hydrogen and helium content in the center of the star, and many others.
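To make the two wind prescriptions of eqs. (10)-(11) concrete, here is a minimal numerical sketch (ours, not part of the authors' MatLab code), with all quantities in solar units as in the formulas above:

```python
def reimers_rate(L, R, M, eta_R=0.5):
    """Reimers mass-loss rate, eq. (10), in Msun/yr; L, R, M in solar units."""
    return 4e-13 * eta_R * L * R / M

def bloecker_rate(L, R, M, eta_R=0.5):
    """Blocker AGB rate, eq. (11): the Reimers rate times 4.83e-9 * L^2.7 * M^-2.1."""
    return reimers_rate(L, R, M, eta_R) * 4.83e-9 * L**2.7 * M**-2.1

# Example: a low-mass star near the RGB tip (L ~ 2000 Lsun, R ~ 150 Rsun, M ~ 0.8 Msun)
print(reimers_rate(2000.0, 150.0, 0.8))   # ~7.5e-8 Msun/yr
```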
The history files of the tracks, which we calculated in MESA, contain information about changes in the main parameters of the star over numerous time intervals along the track. The number of these time points varies from \(\sim 1200\) up to \(30000\) for different stellar masses. Most of these intervals correspond to the evolution of the star after the MS, including the red giant and asymptotic giant branches. From the parameters contained in the history files, for the population synthesis we directly need the mass loss rate, the radius of the star, and the corresponding age of the star, or the duration of the current stage of evolution. The main procedure of our population synthesis is a calculation of the planetary orbit at each evolutionary stage of the parent star. In this regard, one of the tasks is not to overload the program code with too much computation due to a large number of evolutionary stages. Another task is to ensure, when excluding some "superfluous" evolutionary stages, that the remaining stages and the corresponding mass loss rates give the same (within the error) final mass of the star (i.e., the white dwarf mass) as obtained in the MESA calculation. The third task is not to exclude as "superfluous" the stage when the radius of the star reaches its current maximum (because a growing star can swallow a nearby planet). Solving these problems and working with the dependencies of the mass loss rate on time and of the stellar radius on time obtained from the data of the history files, we make truncated versions of the tracks containing from 30 to 170 evolutionary stages, depending on the mass of the star. The longest ones are those where the largest number of thermal flashes occurs at the TPAGB stage (an example of a fragment of such a track is shown in Fig. 17 in the Appendix). The masses of white dwarfs obtained from the truncated tracks are systematically larger than the masses from the original tracks, but by less than 1%. The first stage in each of the tracks turned out to be the longest. This is the MS stage. The rate of mass loss on the MS is calculated as the average of the loss rates at the beginning and at the end of the MS stage. We consider the zero-age MS as the moment when the central content of hydrogen has decreased by one hundredth compared to its initial value. The end of the MS is the moment when the central content of hydrogen becomes less than one hundredth of the initial one. In all truncated tracks, for each of the stages its duration, the average rate of mass loss at this stage, and the radius of the star at the end of the stage are prescribed. In some tracks of massive stars, there are stages whose duration does not exceed several years. This is done in cases when the rate of mass loss is very high (higher than \(10^{-4}M_{\odot}\) yr\({}^{-1}\), see Fig. 18 in the Appendix).

## 3 Population synthesis. The code

The code is implemented in the MatLab package. At the first step, we assign the total number of "star-planet" pairs as one of the constants. This value determines the number of repetitions in the cycle of the procedures described below, as well as the number of planets of each of the groups in the \(a\) - \(M_{\rm pl}\) plane.
Figure 4: Hertzsprung-Russell diagram. Evolutionary tracks from the MS stage to the white dwarf stage are shown. The initial masses are indicated.

Each "star-planet" pair in the code randomly gets values of the semimajor axis of the planet's orbit, the mass of the planet, and the mass of the star. This is done by the pseudo-random number generator built into MatLab in accordance with the described initial distributions. The initial distribution of stellar masses is given by the Salpeter function \(dN/dM\sim M^{-2.3}\) [23], [24]. The generated masses of stars lie in the range from 1 to 8 \(M_{\odot}\). The initial eccentricity and the true anomaly of the planetary orbit are also defined. These values are chosen to be the same for all systems. Also, at the first step, for each system a bin in the star formation history ("age group") is determined, as well as the maximum possible age of the star corresponding to this group. To do this, the entire history of star formation in the Galaxy is divided into several stages (see Fig. 5) with different rates of star formation, following the approximation in Fig. 1 in [25]. It is assumed that throughout the history of star formation, the initial mass distribution of stars is given by the Salpeter function. For each stage, we calculate the ratio of the total mass of stars formed during this time interval to the total mass of stars in the Galaxy in the range from 1 to 8 solar masses. The latter is equal to \(19.55\times 10^{9}M_{\odot}\), since the total initial mass of all stars in the Galaxy, formed during its lifetime, is assumed to be equal to \(50\times 10^{9}M_{\odot}\), see [25]. This ratio determines the range of random numbers corresponding to stars formed at a given stage in the history of star formation. Next, using a pseudo-random number generator, we get a value that determines the "age group" of the star. Then, again using the pseudo-random number generator, as well as conditional operators, we define the maximum possible age of the star. Then the coefficient \(N_{\rm planets}\) is calculated, which determines the number of planets around the star. The formula for calculating this coefficient is taken from [20]: \[N_{\rm planets}=\begin{cases}(M_{\star}/M_{\odot})^{1.2}\times N_{\rm planet,sun}&\text{if}\,M_{\star}<1.5M_{\odot};\\ 10,&\text{if}\,M_{\star}>1.5M_{\odot}.\end{cases} \tag{12}\] Here \(N_{\rm planet,sun}=8\) is the number of planets in the Solar system. This coefficient gives the average number of planets around a star; it is used as an additional factor in obtaining the final distributions of planets (see eq. 13). At the next step, the mass of the star in the current system is compared with the masses of the stars for which the evolutionary tracks are built, and for further work the model with the closest mass is selected and the corresponding truncated track file is read. That is, the mass distribution is binned according to the selected values of the initial track masses calculated in MESA.
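The two random ingredients just described can be illustrated with a short sketch (ours, in Python rather than the MatLab of the actual code): inverse-transform sampling of the Salpeter function on the 1-8 \(M_{\odot}\) interval, and the planet-number coefficient of eq. (12):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_salpeter(n, m_lo=1.0, m_hi=8.0, alpha=2.3):
    """Inverse-CDF sampling of a power-law IMF dN/dM ~ M^-alpha on [m_lo, m_hi]."""
    u = rng.uniform(size=n)
    p = 1.0 - alpha
    return (m_lo**p + u * (m_hi**p - m_lo**p)) ** (1.0 / p)

def n_planets(m_star, n_planet_sun=8):
    """Average number of planets around a star of mass m_star [Msun], eq. (12)."""
    return m_star**1.2 * n_planet_sun if m_star < 1.5 else 10.0

masses = sample_salpeter(500000)                    # one star per "star-planet" pair
weights = np.array([n_planets(m) for m in masses])  # used later as the factor in eq. (13)
```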
The value of the semimajor axis of the orbit and the value of the mass of the star at the first evolutionary stage of the track are assigned equal to the initial values of the orbit and the mass of the star. Next, the \(\psi\) parameter is calculated. If its value is less than 0.1, then the value of the semimajor axis of the orbit and the mass of the star at the end of this evolutionary stage are determined by the formulas (5) and (6), and the eccentricity and the true anomaly are not changed. If \(\psi\geq 0.1\), then the system of equations (1)-(3) with the condition \(\dot{M}_{\star}=\rm const\) is solved numerically with the Runge-Kutta method (RK4); the constant in this condition is determined by the value of the mass loss rate read from the file at this stage. The grid spacing is selected based on the duration of the evolutionary stage: the minimum spacing of 0.1 yr is chosen for very short stages and for stages when \(\psi>3\). For stages longer than 100 years, a spacing of 5 years is used; for stages longer than 1000 years -- 50 years; for all others -- 1 year. After solving the specified system of equations, for the next evolutionary stage, in addition to new values of the stellar mass and the semimajor axis of the planetary orbit, we define new values of the eccentricity and true anomaly. Before moving on to the next evolutionary stage, the current value of the orbit is compared with the current radius of the star, and the current age, calculated as the sum of all past evolutionary stages, is compared with the maximum possible age. If the planet's pericentric distance becomes equal to the current stellar radius, or the stellar radius becomes larger than the planet's orbit, then the counter of absorbed planets is increased by one. The number of evolutionary stages in the track determines the number of repetitions of the calculation of the current mass of the star. It also determines the number of repetitions of the calculation of the value of the semimajor axis of the planet's orbit, if the value of \(\psi\) does not exceed 0.1. The calculations stop if the current age of the star at some stage reaches or exceeds the final age determined initially for it.

Figure 5: The history of star formation in the Galaxy used in the simulation. The bin width is 50 million years. The number of stars in the sample is 500000. The normalization is made in such a way that the mass of all stars formed in the Galaxy in the range from 1 to 8 solar masses is equal to \(19.55\times 10^{9}M_{\odot}\).

If the eccentricity reaches the value \(e\geq 0.998\) (in the middle or at the end of an evolutionary stage), then the planet is considered to have escaped; the evolution of the orbital elements is suspended (at all subsequent stages, until the end of the system's evolution, their values remain fixed at the moment when the eccentricity exceeded 0.998). The mass of the star continues to be calculated in accordance with the loss rates both at the current and at each of the following evolutionary stages using eq. (6). As a test, we varied the critical value of the eccentricity in the range from 0.99 to 0.999; this did not lead to a significant change in the number of runaway (and absorbed) planets. For a critical value of \(e>0.999\), we encounter numerical instability in the code. The values of the elements of the orbit, as well as the values of the mass and radius of the star, are saved at the end of each evolutionary stage (i.e., their values are also available after the completion of the entire program). If the eccentricity turns out to be less than zero at the current grid step, the stage is re-calculated with a reduced time step (down to 0.01 yr); in this case, we again numerically solve the system of equations (1)-(3).
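Putting the two regimes together, the per-stage update can be sketched as follows (our Python illustration of the scheme described above, not the authors' MatLab code; a single fixed RK4 step h stands in for the duration-dependent grid spacing, and mdot is the positive loss rate as in eq. (6)):

```python
import numpy as np

def psi(a, m, mdot):
    """Mass-loss parameter of eq. (4); a [AU], m [Msun], mdot [Msun/yr]."""
    return a**1.5 * m**-1.5 * mdot / (2.0 * np.pi)

def rhs(t, y, m_in, mdot):
    """Right-hand sides of eqs. (1)-(3) for a constant loss rate mdot."""
    a, e, f = y
    m = m_in - mdot * t                              # eq. (6)
    x = mdot / m
    n = 2.0 * np.pi * np.sqrt(m / a**3)              # mean motion [rad/yr]
    da = a * (1.0 + e**2 + 2.0 * e * np.cos(f)) / (1.0 - e**2) * x
    de = (e + np.cos(f)) * x
    df = -np.sin(f) / e * x + n * (1.0 + e * np.cos(f))**2 / (1.0 - e**2)**1.5
    return np.array([da, de, df])

def evolve_stage(a, e, f, m_in, mdot, dt, h=1.0):
    """One evolutionary stage of duration dt [yr]: adiabatic or RK4 branch."""
    if psi(a, m_in, mdot) < 0.1:                     # adiabatic: eqs. (5)-(6)
        return a / (1.0 - dt * mdot / m_in), e, f, m_in - dt * mdot
    y, t = np.array([a, e, f]), 0.0
    while t < dt and y[1] < 0.998:                   # e >= 0.998 marks an escape
        k1 = rhs(t, y, m_in, mdot)
        k2 = rhs(t + h/2, y + h/2 * k1, m_in, mdot)
        k3 = rhs(t + h/2, y + h/2 * k2, m_in, mdot)
        k4 = rhs(t + h, y + h * k3, m_in, mdot)
        y, t = y + h/6 * (k1 + 2*k2 + 2*k3 + k4), t + h
    return y[0], y[1], y[2], m_in - mdot * min(t, dt)
```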
For the final estimate of the Galactic population of escaped and absorbed planets, and to obtain the final distributions of surviving planets over orbits and eccentricity (for the range of initial stellar masses stated above), we use the coefficient \(N_{\rm planets}\) from eq. 12 and the coefficient \(k\) (see eq. 14). So the desired number \(N\) corresponding to certain characteristics of the planets (for example, escaped or absorbed, etc.) in the Galaxy is determined as follows: \[N=k\frac{N_{\rm calc}\sum_{n=1}^{N_{\rm tot}}N_{\rm planets,n}}{N_{\rm tot}} \tag{13}\] where \(N_{\rm calc}\) is the number of planets with the corresponding characteristics in the results of modeling, \(N_{\rm tot}\) is the total number of "star-planet" pairs in the modeling (it is equal to 500000 in all our runs). The coefficient \(k\) is calculated as: \[k=\frac{\int_{1}^{8}M^{-2.3}dM}{N_{\rm tot}}\times\frac{M_{\rm Gal}}{\int_{0.1}^{0.5}M^{-0.3}dM+\int_{0.5}^{150}M^{-1.3}dM}, \tag{14}\] where \(M_{\rm Gal}\) is the total mass of all the stars in the Galaxy (\(50\times 10^{9}M_{\odot}\), see [25]) and \(N_{\rm tot}=500000\), as in eq. 13. For these parameters, we obtain the following value: \(k\approx 17851\).

## 4 Results

### Orbit distribution. Escaped planets

The modeling shows that about 60.2% of the population of 500,000 planets turned out to be absorbed by parent stars at the giant stages (RGB and AGB), and about 0.3% left their planetary systems and became free-floating planets. Using eqs. 13 and 14 and based on the obtained statistics of runaway planets, we estimate their number in the Galaxy. The obtained values are in the range of about 278-297 million. Moreover, in those ranges of stellar masses that are not covered in the code, according to our assumption, stars practically do not produce runaway planets -- either due to insufficient mass loss and stellar wind in the case of small stellar masses, or due to the short lifetime of the circumstellar disk in the case of massive stars with a powerful radiation flux [34]. Of course, a certain number of planets can leave their systems due to dynamical interaction with other objects, but such a channel is not considered in our study. For the surviving planets around the stars that have gone through all the evolutionary stages and have become white dwarfs, the minimum values of the semimajor axis of the orbit are observed for planets from groups I-IV of the \(a\) - \(M_{\rm pl}\) distribution (the smallest value, about 1.036 AU, is for a planet from group II with an initial semimajor axis of about 0.538 AU and an initial eccentricity \(e\approx 0.01\), which changed little during the life of the star). The maximal orbits are obtained for planets of groups VII and VIII; the runaway planets also appear only in groups VII and VIII. Most of the escaped planets have initial orbits close to 100 AU, see Fig. 7. No planets of groups V and VI (see also Fig. 2) survive around the stars that managed to evolve to the white dwarf stage, see Fig. 6. They are absorbed by the expanded envelopes of their parent stars at the giant stage. This is due to the fact that by the time they turn into red giants, stars manage to lose only such a small fraction of their mass that the orbits of the planets increase little, and the planets of the indicated groups situated close to the star are absorbed; the least massive stars shed a larger fraction of their mass on the RGB than on the AGB, but their radii also increase more significantly at this stage.

Figure 6: Modeled distribution of planets by mass and semimajor axis at the final stage of evolution. Top: for the stars of the population that evolved to the white dwarf (WD) stage; bottom: for all stars. The "+" signs mark the planets around the stars that have not reached the WD stage. In order not to clutter up the figure, just 100000 points are shown, i.e. 20% of the population considered in the modeling.
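The normalization behind these Galactic numbers, eq. (14), can be checked numerically; the short sketch below (ours) reproduces a value close to the quoted \(k\approx 17851\), with the small residual difference plausibly coming from continuity constants in the low-mass part of the IMF:

```python
# Evaluating eq. (14) directly (our check; exact bounds and continuity
# constants of the IMF may differ slightly from the authors' implementation).
from scipy.integrate import quad

N_tot, M_gal = 500000, 50e9
num, _ = quad(lambda m: m**-2.3, 1.0, 8.0)      # stars formed with 1-8 Msun
den1, _ = quad(lambda m: m**-0.3, 0.1, 0.5)     # mass-weighted IMF, low-mass part
den2, _ = quad(lambda m: m**-1.3, 0.5, 150.0)   # mass-weighted IMF, high-mass part
k = (num / N_tot) * (M_gal / (den1 + den2))
print(k)   # ~1.8e4, close to the quoted k ~ 17851
```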
It is also worth noting that among the population of stars that did not have time to evolve to the white dwarf stage, there are also those that swallowed up their planets. A significant group of planets are moved to highly eccentric orbits (Fig. 8). Since the formal criterion for a planet to leave its parent system is to reach \(e=0.998\), among the surviving planets there are several examples with an eccentricity \(e>0.99\) and an orbit of more than \(10^{5}\) AU, and a couple of planets have orbits even larger than a parsec. It is clear that such planets can be considered as bound only according to the formal criteria indicated above; taking into account, for example, Galactic tides, they should be classified as "escaped". However, in the statistics presented here, they are not included in the number of escapes. This is justified, as the number of planets with orbits larger than \(10^{4}\) AU, but which did not formally leave their parent systems, turned out to be relatively small -- about \(0.03\pm 0.003\%\) of the considered population, and in terms of the Galaxy -- about 30 million planets. The distribution of the remaining (surviving) planets by orbits around white dwarfs is shown in Figure 9. In particular, it shows the presence of a local maximum in the distribution of the number of planets in the region of 100-200 AU. This maximum is associated with the presence of a fairly large group of planets with large initial values of the semimajor axis of the orbit (see also Fig. 2, Fig. 6 and Table 1).

### Future of the Earth

Using the code described in this study, we also perform modeling for the Earth-Sun system (the current age of the Sun is taken as 4.58 billion years). In our model, by the time the Sun turns into a white dwarf with a mass of about 0.52 \(M_{\odot}\), the Earth will not be absorbed by the star at the red giant stage and will have a semimajor axis of about 1.922 AU (Fig. 10). However, there are studies that show that the Earth will be absorbed by the Sun when the latter is at the red giant stage. Thus, according to calculations presented in the paper by Schroeder and Smith [32], the Earth will be swallowed up due to tidal effects, which are not taken into account in our work. However, judging by the maximum value of the radius of the Sun in our model, even accounting for the tides would not lead to the absorption of the Earth.

Figure 7: Distribution of the initial orbits of runaway planets. The bin width is 20 AU. The number of objects is normalized to the parameters of the Galaxy.

Figure 8: Distribution of survived planets by the eccentricity for host stars that evolved to the white dwarf stage. The bin width is 0.05. The number of objects is normalized to the parameters of the Galaxy.

Figure 9: Final distribution of planets by the semimajor axis of orbit for stars that have evolved to the white dwarf stage. Left: the bin width is 20 AU. Right: the bin width is 200 AU. The number of objects is normalized to the parameters of the Galaxy.
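The 1.922 AU figure above is consistent with the adiabatic limit of eqs. (5)-(6), in which the product \(a\,M_{\star}\) is conserved; a one-line estimate with the quoted masses (our check, ignoring any brief non-adiabatic episodes) gives
\[a_{\rm fin}\approx a_{\rm in}\,\frac{M_{\rm in}}{M_{\rm fin}}=1\,{\rm AU}\times\frac{1.0\,M_{\odot}}{0.52\,M_{\odot}}\approx 1.92\,{\rm AU},\]
in good agreement with the value obtained from the full track.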
The maximum value of the radius of a star with an initial mass of one solar mass that we obtained from the MESA tracks is smaller than the value given in the paper [32]: 255 \(R_{\odot}\), or about 1.188 AU, there, versus 185 \(R_{\odot}\), or about 0.844 AU, in our model. It should be noted, however, that the evolutionary model for the Sun in the mentioned work was obtained for the metallicity \(Z=0.0188\), which is closer to the real solar metallicity than the value of \(Z=0.02\) used in all tracks in our simulation. We also note that in addition to the radii, the time that the Sun lives before it reaches the peak of the giant stage also differs (in the model by Schroeder and Smith this occurs approximately 20 million years earlier; compare Fig. 10 in our work and Fig. 1 in [32]).

### Escaped planets and planets around massive stars and giants

There are currently very few observational examples of planets around stars with masses of 3 or more solar masses in exoplanet databases. Confirmed examples include the following planets: o UMa b, Hip 79098 (AB) b, HD 17092 b, HD 13189 b, HD 119445, \(\nu\) Oph b and c, BD+20 2457 b and c (Footnotes 1, 2). Moreover, the latter four are more likely to be brown dwarfs than planets. There are studies and observational programs devoted to the search for planets around giant stars; in one such survey, in which more than a hundred stars more massive than 2.7 \(M_{\odot}\) were observed, no planet was discovered. Discussing these results, the authors suggest that the conditions in the protoplanetary disks around stars with initial masses above 2.5-3 \(M_{\odot}\) are such that, in principle, giant planets cannot form there. At the same time, there are other studies devoted to the planets around stars with masses in the range from 6 to 8 solar masses [34], showing the theoretical possibility of the survival of planets around such stars.

Footnote 1: exoplanet.eu/catalog/tyc_8998-760-1_b

Footnote 2: [https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/mph-tblView?app=ExoTbls&config=PS](https://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/mph-tblView?app=ExoTbls&config=PS)

Massive stars are of interest to us because they lose a sufficient amount of mass during their evolution prior to the white dwarf stage, and achieve, at the corresponding stages of their evolution, such rates of mass loss as can lead to the loss of planets (see Fig. 11). In our simulation, the least massive star which ejected its planet due to mass loss by the end of evolution has an initial mass of 2.6 \(M_{\odot}\) (the initial orbit of this planet, belonging to group VIII of the \(a\) - \(M_{\rm pl}\) distribution, is \(a_{\rm in}\approx 663\) AU). For the escaped planets from group VII of the \(a\) - \(M_{\rm pl}\) distribution, the least massive stars were those with masses starting from 3.5 \(M_{\odot}\). Since the star is losing its mass, observational examples of interest may be not only stars more massive than 3 \(M_{\odot}\), but also planets of slightly less massive stars that are at the giant stage and have already lost some of their mass. There are many more examples of planets around giants than around stars more massive than 3 solar masses -- about 150 objects, discovered in most cases by variations in radial velocity. Most of these planets have orbits with semimajor axes less than 5 AU (Fig. 12). Examples of planets with a semimajor axis larger than 10 AU around giant stars have not been discovered yet (although there are several examples among subgiants: TYC 8998-760-1 b, \(\kappa\) And b, 51 Eri b; see Footnotes 3-5).
Footnote 3: [http://exoplanet.eu/catalog/tyc_8998-760-1_b](http://exoplanet.eu/catalog/tyc_8998-760-1_b)

Footnote 4: [http://exoplanet.eu/catalog/kappa_and_b](http://exoplanet.eu/catalog/kappa_and_b)

Footnote 5: [http://exoplanet.eu/catalog/51_eri_b](http://exoplanet.eu/catalog/51_eri_b)

Our simulation shows that, on average, for escaped planets with small initial orbits the total mass ejected by the star should obviously be larger than for the planets initially remote from their stars. The closest of the escaped planets in our simulations has the initial orbit \(a_{\rm in}\approx 52\) AU and eccentricity close to \(e=0.1\). It was ejected by a star with the initial mass of 7.5 \(M_{\odot}\). Thus, one can conclude that, according to the observational data from exoplanet catalogs, a candidate for future escaped planets cannot be found among the already discovered planets. However, there are several examples of objects with high eccentricity among the confirmed planets of giant stars: HD 76920 b has an eccentricity \(e=0.856\) and the semimajor axis \(a=1.15\) AU, HD 75458 b has \(e=0.713\) and \(a=1.275\) AU, HD 238914 b has \(e=0.56\) and \(a=5.7\) AU, HD 102272 c has \(e=0.68\) and \(a=1.57\) AU, HD 14067 b has \(e=0.533\) and \(a=3.4\) AU, Hip 97233 b has \(e=0.61\) and \(a=2.55\) AU, HD 1690 b has \(e=0.64\) and \(a=1.3\) AU, Kepler-432 c has \(e=0.64\) and \(a=1.188\) AU, BD+48 740 b has \(e=0.76\) and \(a=1.7\) AU (Footnote 6). Among the indicated systems there are those where the star has a mass of about 1.5 and even \(\sim 2\)\(M_{\odot}\). Depending on the rate at which these stars will lose most of this mass, the eccentricity of the planetary orbits in the future may theoretically turn out to be larger than unity, i.e. the planets will no longer be bound to their host stars. If we consider not only planets around evolved stars but also around MS stars, then we can also find candidates for future escaped planets.

Figure 10: The result of calculation of the Earth orbital evolution (the asterisk symbols, upper curve) and the evolution of the solar radius (lower curve) using the MESA track for a star with the initial mass of 1 \(M_{\odot}\).

Figure 11: Distribution of the initial masses of host stars whose planets were ejected from the parent systems. The bin width is 0.5 \(M_{\odot}\). The number of objects is normalized to the parameters of the Galaxy.

Footnote 6: [http://exoplanet.eu/catalog](http://exoplanet.eu/catalog)

## 5 Discussion

### White dwarf masses

In the simulation, 82-83% of the stars in the considered population manage to reach the white dwarf stage. The resulting mass distribution of white dwarfs is shown in Fig. 13. The most massive white dwarf obtained in our calculations has a mass of about 1.16 \(M_{\odot}\), and the lightest -- 0.519 \(M_{\odot}\). The comparison with modern theoretical and observational data on white dwarfs shows a relatively good correspondence between our results and the data, if we take into account that our modeling does not include the evolution of low-mass stars of low metallicity (compare Fig. 13 with the data from [29], [30] and [31]). As mentioned in subsection 2.3 ("Evolutionary tracks"), the lifetime of stars with masses below 1 \(M_{\odot}\) and metallicity \(Z=0.02\) before they become white dwarfs exceeds the lifetime of the Galaxy, according to MESA calculations.
Thus, for a star with an initial mass of 0.9 \(M_{\odot}\), the lifetime on the MS is about 13.3 Gyr, and for such a star it takes about another 4.5 Gyr to reach the peak of the giant branch. A star with a mass of 0.95 \(M_{\odot}\) lives on the MS for approximately 10.6 Gyr, and another 4 Gyr passes before it reaches its maximum radius on the giant branch. We compare our results obtained in MESA with the results of calculations of evolutionary tracks carried out by different scientific groups. In particular, the article [28] presents evolutionary models of low-mass stars. In this study the following MS lifetimes for stars with metallicity \(Z=0.02\) and an initial mass of 0.8 and 0.9 solar masses are obtained: 22.7 and 14 Gyr, respectively. For a star with the initial mass 0.95 \(M_{\odot}\), a comparison was made with the PARSEC tracks by the Padova group [35]. After 13.7 - 13.8 Gyr the central hydrogen abundance of a star of the indicated initial mass manages to fall by more than two orders of magnitude (which can indicate the end of the MS stage). But the star is still far from even reaching the stage of a giant, not to mention a white dwarf. Thus, comparison with modern advanced calculations of stellar evolution allows us to consider our choice of the mass interval of the white dwarfs' progenitors as appropriate. Figure 12: Distribution of exoplanets in the \(a\) – \(M_{\rm pl}\) plane around giants and subgiants according to the exoplanet.eu catalog. Figure 13: Calculated white dwarf mass distribution for the stars with initial masses in the interval 1-8 \(M_{\odot}\). The bin width is 0.1 \(M_{\odot}\). The number of objects is normalized to the parameters of the Galaxy. ### Evolutionary tracks Since the radii of stars and their mass loss rates at different stages of evolution obtained using the MESA package play a decisive role in the final statistics of the planets in our modeling, they deserve discussion and comparison with known calculations. We have already discussed the differences for stars with the mass 1 \(M_{\odot}\) above. For more massive stars comparison of our results with others is complicated due to the presence of the TPAGB stage where the radius increases repeatedly over \(n\) pulsations of the star, and not all the tracks are calculated to the end of TPAGB. Table 2 provides comparisons for some initial masses. Among the tracks constructed in MESA and used in the modeling, the maximum radius is achieved by a star with an initial mass of 6 \(M_{\odot}\). It is \(\approx\) 2.3 AU. Judging by our comparison, the maximum radii of the models obtained in MESA are smaller than in the SSE models. Many of the available PARSEC tracks do not reach the end of the TPAGB stage, thus comparison is not possible. As for the mass loss rates, in MESA for the tracks of the most massive of the considered stars (from 3.5 \(M_{\odot}\) up to 8 \(M_{\odot}\)) we obtained very large values at certain very short stages of evolution. This does not correspond to the existing observations. The largest value -- \(10^{-2.2}\)\(M_{\odot}\) yr\({}^{-1},-\) is obtained for models with initial masses 6.0 and 7.5\(M_{\odot}\). In such cases, a star loses mass at this rate over a period of about fifty years. Note, that our models for stars with masses in the mentioned interval evolve from AGB to white dwarf stage losing their envelopes almost without thermal pulses (see Figs. 19, 18), which is also not typical according to the observations. 
Still, in the case of massive stars, it is important for the fate of the planets that these stars lose much more than half of their mass during their evolution after the onset of the MS. Due to this, it becomes possible that the planets are no longer bound to their parent star. Calculations of stellar evolution with mass loss rates close to ours (\(4\div 7\times 10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\)) at the end of the AGB stage and during the ejection of a planetary nebula can be found in [36, 37]. The estimates of the stellar "superwind" for some of the observed OH/IR stars [38, 39, 41, 40] reach up to \(\sim 10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\). Note, that the famous Blocker's equation (see eq. (11) above) was proposed in the context of a discussion of high mass loss rates from Mira Ceti type stars and OH/IR stars. ### Model development One way to improve the model is to consider the heterogeneity of the chemical composition of the stars in the population. As already indicated, all evolutionary tracks of stars in MESA are calculated for an initial metallicity of \(Z=0.02\). Focusing on modern models of the chemical evolution of the Milky Way, it is necessary to reflect the heterogeneity of the metallicity of the stellar population of the thin and thick disks and, possibly, the bulge of the Galaxy [43, 42]. To do this, it is necessary to supplement the track grid by calculating the evolution of stars with metallicity \(Z\approx 0.005\), corresponding to the peak of the distribution for thick disk stars (see Fig. 3 in [42]) and, possibly, metallicity \(Z\approx 0.04\) for bulge stars (Fig. 5 in [43]). An important factor determining the statistics of planets obtained in the work is the assumptions made about their initial distribution in the \(a\) - \(M_{\rm pl}\) plane. This distribution may be very different from the one used here. It is also important that the same distribution is used for different stellar masses. This is a significant simplification, which is made due to the lack of population calculation data differentiated by masses. Apparently, the corresponding data will appear in the nearest years (this is evidenced by the first works of a large cycle, that the Bern group [44, 45, 46] began to publish). Also, a very significant parameter for the statistics obtained in our study is the stellar mass loss rate. It should be noted that the decisive factor that prompted us to work with this grid of tracks calculated in MESA was the successful calculation of evolution up to the white dwarf stage and we obtained the final masses of stellar remnants directly. Thus, we did not have to resort to third-party sources or approximation formulas connecting the initial and final masses of stars in order to determine the mass of the star at the end of evolution for each track. If it is possible to obtain evolutionary tracks brought to the white dwarf stage with a more convincing evolution of mass loss rates, stellar radius, and other physical characteristics of stars then they will be used to improve the model. In the current code, we do not take into account the influence of tides. We can try to estimate how the final statistics of absorbed and surviving planets will change, based on the obtained distributions and on data on the tidal absorption of planets by giant stars. However, calculations of tidal interaction still suffer from a number of uncertainties. Tidal effects can affect the results of calculations for planets in close orbits around white dwarfs. 
In our simulation, it turned out that at the end of evolution (at the white dwarf stage) there is a fairly large number of surviving planets in orbits close to their stars (Fig. 14): within 2 AU -- about 3.7 billion, which is less than one fifth of all surviving planets (about 18%), and within 4 AU -- about 30%.

\begin{table}
\begin{tabular}{|l c c c|} \hline Initial mass & PARSEC & SSE & MESA \\ \hline \(2M_{\odot}\) & 1.147 AU & 1.869 AU & 1.536 AU \\ \(5M_{\odot}\) & 2.16 AU & 4.98 AU & 1.899 AU \\ \(6M_{\odot}\) & 3.034 AU* & 5.97 AU & 2.252 AU \\ \hline \end{tabular} * By the time the TPAGB stage begins in a star with the mass \(6M_{\odot}\), the radius is 2.95 AU.
\end{table} Table 2: Comparison of the maximum stellar radii in different models.

However, the distribution of the masses of these planets shows that only a small fraction of the surviving planets close to their stars -- planets from group I -- have Jovian-scale masses. Among the other surviving planets close to their parent stars, the average masses are about 1-5 Earth masses in group III (planets more massive than 0.15 Jupiter masses are not found in the results of the calculations) and less than one Earth mass in group II (the most massive are also about 0.15 Jupiter masses). Such results give reason to believe that taking into account tidal effects _with other assumptions unchanged_ will increase the fraction of absorbed planets by no more than a few percent compared to our results, since the role of tides is more important for massive planets. Finally, there are a number of poorly known parameters associated with the general normalization of the number of planets. For example, we used a specific type of dependency presented in equation (12). Most likely, in the future, for example, after increasing the statistics of known planets supplemented by the _Gaia_ and _PLATO_ satellite data, it will be possible to specify the number of planets in different types of systems with greater accuracy. For now, we have used a simplified form of the dependence of the number of planets on stellar mass, which leads to some systematic uncertainty in the total number of absorbed, surviving, and ejected planets.

## 6 Conclusion

In this paper, we presented population synthesis modeling of the properties of planets at the late stages of stellar evolution. Using the MESA package, we model the evolution of stars from the Main Sequence stage to the formation of a white dwarf. We calculate the statistics of planets with different fates: absorbed, ejected from the system, and surviving by the time their parent stars transform into white dwarfs. We demonstrate that for the initial distributions of planets in the plane \(a\) - \(M_{\rm pl}\) accepted in the work, the majority (\(\sim\)60%) of planets born around stars in the mass range from 1 \(M_{\odot}\) up to 8 \(M_{\odot}\) are absorbed by their parent stars at the giant stage. A small fraction of planets (\(\sim\)0.3%) are ejected from their systems due to the mass loss of their host stars. We estimate the number of escaped planets with masses in the range from 0.04 Earth masses to 13 Jupiter masses in the Galaxy; it amounts to about 300 million objects. We thank the anonymous reviewer for useful comments that helped to improve the manuscript. The work is partially supported by the Interdisciplinary Scientific and Educational School of Lomonosov Moscow State University "Fundamental and Applied Space Research". _Translated by the authors_
2309.12972
License Plate Recognition Based On Multi-Angle View Model
In the realm of research, the detection/recognition of text within images/videos captured by cameras constitutes a highly challenging problem for researchers. Despite certain advancements achieving high accuracy, current methods still require substantial improvements to be applicable in practical scenarios. Diverging from text detection in images/videos, this paper addresses the issue of text detection within license plates by amalgamating multiple frames of distinct perspectives. For each viewpoint, the proposed method extracts descriptive features characterizing the text components of the license plate, specifically corner points and area. Concretely, we present three viewpoints: view-1, view-2, and view-3, to identify the nearest neighboring components facilitating the restoration of text components from the same license plate line based on estimations of similarity levels and distance metrics. Subsequently, we employ the CnOCR method for text recognition within license plates. Experimental results on the self-collected dataset (PTITPlates), comprising pairs of images in various scenarios, and the publicly available Stanford Cars Dataset, demonstrate the superiority of the proposed method over existing approaches.
Dat Tran-Anh, Khanh Linh Tran, Hoai-Nam Vu
2023-09-22T16:12:45Z
http://arxiv.org/abs/2309.12972v1
# License Plate Recognition Based on Multi-Angle View Model

###### Abstract

In the realm of research, the detection/recognition of text within images/videos captured by cameras constitutes a highly challenging problem for researchers. Despite certain advancements achieving high accuracy, current methods still require substantial improvements to be applicable in practical scenarios. Diverging from text detection in images/videos, this paper addresses the issue of text detection within license plates by amalgamating multiple frames of distinct perspectives. For each viewpoint, the proposed method extracts descriptive features characterizing the text components of the license plate, specifically corner points and area. Concretely, we present three viewpoints: view-1, view-2, and view-3, to identify the nearest neighboring components facilitating the restoration of text components from the same license plate line based on estimations of similarity levels and distance metrics. Subsequently, we employ the CnOCR method for text recognition within license plates. Experimental results on the self-collected dataset (PTITPlates), comprising pairs of images in various scenarios, and the publicly available Stanford Cars Dataset, demonstrate the superiority of the proposed method over existing approaches.

deep learning, license plate recognition and detection.

## I Introduction

In recent decades, the traffic situation has become significantly more complex due to the global population increase [1]. Intelligent Transportation Systems (ITS) have been developed as a solution to the global traffic issue. To deploy ITS models, the management and automated recognition of vehicle license plates are considered crucial components. Figure 1 illustrates the basic flow of license plate recognition software. License Plate Recognition (LPR) using smart camera systems typically involves four steps: first, the conversion of camera images into a format suitable for computer processing; next, the identification of regions of interest within the monitored camera image; subsequently, the recognition of characters on the license plate; and finally, the presentation of the license plate recognition results [2]. The traditional approach views a vehicle license plate as a region of interest and recognizes the characters as a sequence, followed by comparing two sequences to identify the vehicle. In the research study by Vaishnav et al. [3], the authors proposed a system that utilizes optical character recognition techniques and compares characters with stored templates to obtain specific information about the license plate. Comparing license plate numbers yields accurate results; however, this method is effective only when the license plate is clearly displayed in the image. If the license plate is obscured or not securely attached to the registered vehicle, inaccurate results may be obtained. These issues can be addressed by utilizing additional features of the license plate for comparison. Unlike the traditional approach, deep learning models based on multi-layered architectures can learn license plate characteristics at different levels. These deep learning models take raw images (without feature extraction) as direct input. Most deep learning methods for license plate recognition learn the plate features within the model [14, 5]. Kessentini et al. [6] designed a two-stream architecture: (i) stream 1 processes input vehicle features; (ii) stream 2 processes input license plate features.
However, this method only performs license plate recognition from a single viewing angle. Recently, with the rapid development of widely distributed camera systems, multi-angle data collection has become feasible. Consequently, license plate recognition systems can benefit from this multi-angle data. Different viewing angles of the license plate provide the opportunity to extract diverse features, which are useful for recognition. In this study, we apply the YOLOv8 architecture, take license plates from multiple viewing angles as input, and propose a deep learning model for accurate license plate recognition in various real-world situations. ## II Related Work License plate recognition is divided into two main stages: (1) the license plate detection stage and (2) the license plate recognition stage. ### _License plate detection stage._ Recently, computer vision-based methods for license plate detection have garnered significant attention in Intelligent Transportation System (ITS) applications. Achieving highly accurate license plate detection is a fundamental component of traffic monitoring aimed at increasing safety and automation [7]. A comprehensive survey evaluating license plate detection systems is presented in [8]. With the recent strong advancements of deep learning algorithms in various image processing and pattern recognition domains [9, 10, 11, 12], single-camera object detection systems based on Convolutional Neural Networks (CNNs) have been investigated [13, 14]. However, these single-camera systems might not be able to detect partially obscured license plates in congested traffic contexts. An alternative approach to overcome this challenge is to employ multi-camera systems and integrate information from each independent camera stream [15, 16]. Mukhija et al. [17] proposed a method based on wavelet transform and Empirical Mode Decomposition (EMD) for license plate localization in images, addressing real-world challenges such as lighting variations, complex backgrounds, and changes in surroundings. MASK-RCNN [18] introduced a simplified, flexible, and popular segmentation framework that can create masks for potential objects and accurately segment targets. The YOLO model and its upgraded versions [19] consider object detection as a regression task, enabling efficient object detection with high accuracy and fast speed. Deep learning models and architectures based on YOLO are increasingly popular in the research community. Therefore, YOLOv8 is utilized in this paper as the framework for the license plate detection component in our system. ### _License plate recognition stage._ Some license plate recognition systems are designed to segment characters before recognizing them. Segmentation methods can be categorized into connected-component analysis [20], projection profile analysis [21], prior character knowledge [22], contour analysis around characters [23], and combinations of these methods [24]. It becomes evident that accurately classifying all characters within a license plate is challenging when the character segmentation component performs poorly. Consequently, some researchers focus on proposing reliable character segmentation methods for license plate recognition. Meanwhile, other studies concentrate on suggesting license plate recognition methods without character segmentation, transforming the problem into a sequence labeling task [25]. Leveraging the strengths of improved YOLO models, license plate characters have been segmented and recognized in [26]. 
The accuracy of this segmentation-based approach hinges on the character segmentation performance, which can be degraded by external conditions such as light intensity and blurring of the license plate; these conditions in turn reduce the accuracy of license plate recognition. Currently, most researchers apply non-character segmentation methods. RPnet, proposed by Xu et al. [27], swiftly and accurately predicts license plate bounding boxes, simultaneously determining the corresponding license plate by extracting features of Regions of Interest (ROIs) from different convolutional layers. This model surpasses existing object detection and recognition models in both accuracy and speed, achieving a 98.5% accuracy rate. ### _Character Recognition methods_ In order to recognize characters within images, many research groups [29] have relied on image features for identification. The CRNN study [28] initially combines CNN and RNN to extract sequential visual features from a specific text image. These features are then fed directly into the CTC decoding mechanism to predict the best character type for each time step. CTC [30] in this context is a loss function used to train deep learning models. Most methods recognize characters in a unidirectional manner. For example, Lee et al. [32] encoded input text images horizontally into 2D sequential image features, which are subsequently decoded region by region with semantic-information assistance from the previous time step. To mitigate mischaracterizations due to scene distortion and skewed distribution, Yang et al. [31] introduced an improved module prior to character recognition. This module employed a spatial transformation network with multiple control point pairs. In our research framework, the CnOCR module is utilized, applying unidirectional character recognition to accurately locate character features and enhance the recognition performance of the model. 
Subsequently, this custom dataset was used to train the YOLOv8 model in order to construct two custom detection pipelines. The detection model is capable of identifying seven different object types (six vehicle types and Vietnamese license plates) within the input images. In this phase, vehicle types and license plate occurrences are detected by the detection model. If a license plate is detected, the license plate image is cropped and passed on to phase 2. In summary, the detection model is first called to identify vehicle types and license plates. In cases where the input image contains a large number of license plates, the iterative process of recognizing each license plate may take longer compared to scenarios with only one license plate present in the frame. ### _Image Fusion algorithm_ In evaluating object detection methods, the Intersection over Union (IoU) metric [33] is commonly utilized. The IoU is a crucial measure for assessing the accuracy of object detection results. The underlying principle of the IoU is depicted in Figure 3. The IoU is calculated by dividing the area of intersection between the predicted bounding box and the ground-truth bounding box by the area of their union. This provides a quantitative measure of how well the predicted bounding box aligns with the actual object location. To enhance the IoU metric, the Generalized IoU (GIoU) was introduced to address situations where the IoU loss becomes zero when the predicted bounding box and the ground-truth bounding box do not directly overlap. Moreover, DIoU [33] introduces the Euclidean distance between the center points of the predicted and ground-truth bounding boxes based on the GIoU metric. This addition further refines the evaluation by considering the spatial distance between the bounding boxes, thereby aiding in speeding up the convergence of object detection model training. While the IoU metric has undergone multiple refinements across its various iterations, it still may not be inherently suitable for the construction of automated license plate recognition (LPR) models. This stems from the fact that within LPR models, if a license plate is detected with an excess of supplementary information (the detection area surpasses the actual license plate area), subsequent license plate recognition models may encounter fewer hindrances in accurately extracting characters from the detected license plate. However, when a license plate is detected with information deficits (the detection area is smaller than the genuine license plate area), this can pose challenges for subsequent recognition modules, thereby resulting in recognition inaccuracies. To expound upon this matter, we offer illustrative examples as depicted in Figure 4. It is evident that the predicted bounding box, as depicted in Figure 4, manifests a larger area than the actual license plate region. However, if the area of the predicted bounding box happens to be smaller than the actual license plate area, it can result in certain characters not being encompassed within the predicted region. Such information loss can lead to errors in the final license plate recognition outcome. Consequently, models based on loss functions utilizing the existing IoU metric are unable to effectively address these issues, as they assign similar priorities to regions with both information loss and surplus. 
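To make the asymmetry discussed above concrete, the following minimal Python sketch computes the standard IoU for two axis-aligned boxes in `(x1, y1, x2, y2)` format; it is an illustration of the metric defined above, not code from the paper, and the helper name `iou` is our own.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# An over-covering prediction (keeps every character) and an under-covering
# one (crops characters) can receive comparable IoU values, which is the
# imbalance the text above argues plain IoU losses cannot distinguish.
print(iou((0, 0, 110, 40), (5, 5, 100, 35)))   # prediction larger than plate
print(iou((10, 10, 95, 30), (5, 5, 100, 35)))  # prediction smaller than plate
```

The two printed scores come out close to each other even though only the second box loses plate characters, which motivates the fusion-based treatment introduced next.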
A novel loss function based on the IoU metric is imperative to achieve a more balanced treatment between information loss and surplus, thereby providing a more effective means of handling these challenges. After the IoU calculation, two regions are identified: (1) Non-overlapping region; (2) Overlapping region. For the overlapping region, we employ an image fusion method to generate an image with the best quality. Given the source images denoted as \(\mathbf{I_{1}}\) and \(\mathbf{I_{2}}\), a DenseNet model is trained to produce the fused image. The output of the feature extractor comprises feature maps \(\phi C_{1}(\mathbf{I_{1}}),...,\phi C_{5}(\mathbf{I_{1}})\) and \(\phi C_{1}(\mathbf{I_{2}}),...,\phi C_{5}(\mathbf{I_{2}})\) where \(C_{i}\) represents a specific layer within the feature extractor and \(\phi\) is the feature extractor. Subsequently, an information measure is performed on these feature maps, resulting in two measures denoted as \(g\mathbf{I_{1}}\) and \(g\mathbf{I_{2}}\). In the subsequent processing, the degree of information preservation is denoted as \(\omega_{1}\) and \(\omega_{2}\). \(\mathbf{I_{1}}\), \(\mathbf{I_{2}}\), \(\mathbf{I_{f}}\), \(\omega_{1}\), and \(\omega_{2}\) are utilized in the loss function without requiring ground truth labels. During the training phase, \(\omega_{1}\) and \(\omega_{2}\) are computed to determine the loss function. Afterwards, a DenseNet module is optimized to minimize the loss value. In the testing phase, \(\omega_{1}\) and \(\omega_{2}\) need not be computed as the DenseNet model has been optimized. Therefore, \(\omega_{1}\) and \(\omega_{2}\) are defined as: \[[\omega_{1},\omega_{2}]=softmax([\frac{g\mathbf{I_{1}}}{c},\frac{g\mathbf{I_{2}}}{c}]) \tag{1}\] In this context, we employ the softmax function to map \(\frac{g\mathbf{I_{1}}}{c}\) and \(\frac{g\mathbf{I_{2}}}{c}\) to real numbers within the range of 0 to 1, ensuring that the sum of \(\omega_{1}\) and \(\omega_{2}\) equals 1. Subsequently, \(\omega_{1}\) and \(\omega_{2}\) are utilized in the loss function to control the degree of information preservation of specific source images. The loss function is primarily designed to preserve essential information and to train a single model that can be applied to various tasks. It is defined as follows: \[\mathcal{L}=\mathbb{E}(\omega_{1}\cdot MSE_{\mathbf{I_{f}},\mathbf{I_{1}}}+\omega_{2}\cdot MSE_{\mathbf{I_{f}},\mathbf{I_{2}}}) \tag{2}\] This loss function is then utilized to train the feature aggregation model, which combines features from multiple different frames into an optimized feature representation for the license plate character recognition task in images. ### _OCR model_ Figure 5 illustrates the chosen architecture of the CnOCR network for the character recognition part. Initially, a convolutional layer with 40 kernels of size 3 x 3 is applied to the input image, which is a matrix block, to extract basic features. A subsequent pooling layer aims to reduce the resolution by selecting the most prominent features within a 1 x 2 region. Two additional convolutional sets (60 and 80 kernels) and max pooling layers are added. However, the final pooling layer employs a filter size of 2 x 2. Traditional architectures usually perform 2 x 2 pooling, halving both dimensions. In contrast, we apply two pooling layers to halve only the height, not the width. The reason is that the maximum number of predicted labels corresponds to the size of the time axis of the last layer, which is the width in our case. 
Due to the dense and overlapping nature of some license plate characters, we incorporate only a 2 x 2 pooling layer. After the final pooling layer, a fourth convolutional layer is added with 80 kernels, followed by a bidirectional LSTM layer with 100 hidden nodes. The LSTM layer is instrumental in capturing contextual information and dependencies between characters within the text, while convolutional layers are used as feature extractors to analyze the visual characteristics of characters and text regions. Lastly, a dense layer with softmax activation transforms the 100 output nodes at each position into probabilities for the 53 (52 + blank) target characters at each horizontal position for character recognition. The key advantage of CnOCR lies in its fast prediction speed, achieving high accuracy with a prediction time of 0.03 seconds per license plate. ### _The license plate dataset and configurations_ The PTITPlates dataset consists of 500 license plate images labeled using the LabelMe tool. We collected these images through cameras placed in various industrial and road areas. The images capture different angles and have been filtered to include only those with visible license plates, which aids in training and testing the proposed model. The training parameters for our proposed method are presented in Table I. The total trainable parameters of the YOLOv8 model are around 11 million, while for the feature fusion and OCR models, the parameter count is smaller. The optimization algorithm we employ is Stochastic Gradient Descent, and the loss function used is cross-entropy. We utilize loss value transformation graphs over time to evaluate and compare the performance of different loss functions. These graphs provide a visual representation that allows us to observe and analyze the variation of the loss function during training. We present three graph images in Figure 6, corresponding to three different loss functions: (i) Localization Loss, depicting the training process of license plate detection; (ii) Classification Loss, illustrating the training of license plate classification and image fusion model; and (iii) Connectionist Temporal Classification Loss, indicating the character recognition process within the license plate. Specifically, for the YOLOv8 loss function, the Varifocal Loss (VFL) is employed as the classification loss, while the combination of Distribution Focal Loss (DFL) and Complete Intersection over Union Loss (CIoU) serves as the bounding-box regression loss. Similarly to conventional classification tasks, in the license plate classification step, we employ the Categorical Cross-Entropy loss with the following formula (equation 3): \[\mathcal{L}_{CE}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C}y_{true}^{(i,c)}\log\left(y_{pred}^{(i,c)}\right) \tag{3}\] Here \(y_{true}\) represents the ground truth vector (one-hot encoding) of each sample with a size of \(C\), which is the number of different classes, \(y_{pred}\) is the predicted vector of the sample and \(N\) is the number of data samples. Finally, in the recognition model, we utilize the loss function with the general formula of CTC as follows: \[\mathcal{L}_{CTC}=-log(P(Y|X)) \tag{4}\] Where \(P(Y|X)\) represents the probability of the actual output sequence \(Y\) given the input sequence \(X\). The results of the training process are depicted in Figure 6. 
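Equations (1) and (2) translate directly into a few lines of training code. Below is a minimal PyTorch sketch of the adaptive weighting and fusion loss; the information measure \(g\) is approximated here by the mean squared feature activation, which is our assumption since the exact measure is not spelled out above, and `fusion_loss` is a hypothetical helper name rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fusion_loss(fused, src1, src2, feats1, feats2, c=1.0):
    """Unsupervised fusion loss sketch following Eq. (1)-(2).

    fused:    fused image I_f produced by the DenseNet, shape (B, C, H, W)
    src1/2:   source images I_1, I_2, shape (B, C, H, W)
    feats1/2: lists of feature maps phi_C1..phi_C5 for each source image
    c:        scaling constant from Eq. (1)
    """
    # Information measure g(I): mean squared activation of the feature
    # maps -- an assumed stand-in for the paper's measure.
    def g(feats):
        return torch.stack([f.pow(2).mean() for f in feats]).mean()

    g1, g2 = g(feats1), g(feats2)
    # Eq. (1): softmax over the scaled information measures.
    w = F.softmax(torch.stack([g1 / c, g2 / c]), dim=0)
    # Eq. (2): information-weighted MSE against both source images.
    return w[0] * F.mse_loss(fused, src1) + w[1] * F.mse_loss(fused, src2)
```

Because the weights are recomputed per batch during training only, the fused output is steered toward whichever view currently carries more feature information, with no ground-truth fused image required.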
The loss values and accuracy during the training process exhibit pronounced fluctuations in the initial epochs and gradually stabilize in the subsequent epochs, which indicates that the model is learning and improving over time. If the loss values and accuracy were to exhibit instability during training, the proposed model might not be suitable for the dataset. Our proposed model tends to converge to an optimal value after around 20 epochs. The training process was concluded after 25 epochs. The validation loss curves are generally close to the training loss curves, implying that the model is not overfitting to the training data. Additionally, the gradual, sustained decline observed in the validation loss curves signifies a continuous enhancement in the model's capability to generalize and excel when applied to unobserved data. ## IV Experimental Results ### _Accuracy Evaluation_ We evaluated the models using the F1 score, considering equal importance in the accurate classification of each class. From the results presented in Table II, it can be observed that our proposed method achieves higher F1 scores compared to the baseline methods, such as the YOLOv5 network which is currently considered one of the best methods, as well as other methods like YOLOv8 with Tesseract and YOLOv8 with CnOCR. This performance difference can be attributed to the variations in the angles of license plates and the impact of weather conditions. Our model effectively identifies the partitioning mechanism using blocks and image fusion, resulting in improved performance. The F1 score for the YOLOv5 network is only 75.2% with the PTITPlates dataset and 78.6% with the Stanford Cars dataset. In contrast, our proposed method achieves F1 scores of 91.3% with the PTITPlates dataset and 90.8% with the Stanford Cars dataset. This phenomenon arises due to the presence of substantial noise within the Stanford Cars dataset and its limited variety of viewing angles, as compared to the PTITPlates dataset. This observation underscores a vulnerability commonly associated with datasets that lack diversity in viewing angles. The detailed confusion matrix of the proposed license plate detection model is provided in Figure 7. As displayed in the confusion matrix, the proposed method accurately categorized the majority of specific license plates, as evidenced by elevated counts of True Positives (TP). However, it encounters challenges in certain instances, particularly when confronted with blurred plates, leading to the presence of False Positives (FP) and False Negatives (FN). ### _Real-world testing_ In this experiment, we also integrated the model into a practical application. Some images of our model's predictions are shown in Figure 8. We deployed the system to handle 30 cameras, grouped into 10 columns, to monitor vehicle entries and exits in an industrial zone. Due to the substantial volume of incoming images, we designed a distributed system with 10 integrated API endpoints, each utilizing our model. On average, each model processed data from 3 cameras, achieving a license plate detection latency of 0.1 seconds. This result is quite impressive for the deployment of a license plate detection and recognition system. ## V Conclusion In this paper, we proposed a feature fusion model from multiple perspectives based on the YOLOv8 and CnOCR architectures for license plate recognition. 
Through evaluations on various datasets, experimental results demonstrated that the proposed model achieved impressive results, with an F1 score of 91.3% on the self-collected PTITPlates dataset. Notably, this dataset is noisy and affected by weather conditions, resulting in poor image quality of license plates. The proposed model showcased robust performance across diverse contexts with high accuracy. In the future, Generative Adversarial Networks (GANs) could be employed for data augmentation, simulating license plates from different regions to address class imbalance issues among Vietnamese character groups within the dataset. This could potentially enhance accuracy. Additionally, self-supervised learning models, such as zero-shot learning, could be utilized to fine-tune the network based on the localization accuracy of specific character features within license plates, potentially leading to even better results.
2303.17851
Large deviation principle for white noise SPDEs with oblique reflection
In this paper, we consider the Freidlin-Wentzell type large deviation principle (LDP) for multidimensional reflected stochastic partial differential equations in a convex domain, allowing for an oblique direction of reflection. To prove the LDP, a sufficient condition for the weak convergence method and the penalized method play an important role.
Hong Shaopeng, Liu Xiangdong
2023-03-31T07:29:00Z
http://arxiv.org/abs/2303.17851v1
# Large deviation principle for white noise SPDEs with oblique reflection ###### Abstract. In this paper, we consider the Freidlin-Wentzell type large deviation principle (LDP) for multidimensional reflected stochastic partial differential equations in a convex domain, allowing for an oblique direction of reflection. To prove the LDP, a sufficient condition for the weak convergence method and the penalized method play an important role. Key words and phrases: Stochastic partial differential equations, Oblique reflection, Large deviation principle, Weak convergence 2020 Mathematics Subject Classification: Primary 60H15, Secondary 60F10 ###### Contents * 1 Introduction * 2 Framework * 2.1 Small noise white noise SPDEs with oblique reflection * 2.2 Weak convergence approach: the abstract setting * 3 Well-posedness of the Skeleton equation * 4 Large deviation principle ## 1. Introduction In recent years, stochastic partial differential equations (SPDEs) with reflection have become a highly researched topic in the field of stochastic analysis. Many scholars in this field have devoted significant research efforts to it. As an effective mathematical model, the reflected SPDE has been widely applied in various fields, such as the description of limit order books and the characterization of multiphase systems, see e.g. [8, 12] and their references. As pioneers in this research area, [4] and [14] respectively studied the existence of solutions for additive noise SPDEs and quasi-linear additive noise SPDEs using penalization methods. Meanwhile, [18] considered the well-posedness of parabolic SPDEs with reflection and proved the existence and uniqueness of solutions by utilizing comparison theorems. Compared to penalization methods, their approach provides a more concise proof and uses weak convergence methods [1] to prove the large deviation principle of the SPDEs. [23] used penalization methods to prove the existence and uniqueness of solutions for SPDEs with two reflecting walls. For reflected SPDEs in convex domains, since there are no comparison theorems for these types of equations, [20] obtained the uniqueness of solutions using finite-dimensional approximation methods and obtained the existence of solutions using penalization methods. Recently, [5] studied the well-posedness of oblique reflection SPDEs in convex domains and obtained the existence and uniqueness of solutions using penalization methods and a series of refined a priori estimates. The large deviation principle is an important tool for studying rare events and has always been a hot topic in the field of probability. [7] first considered the large deviation principle for Markovian stochastic differential equations. In recent years, the weak convergence method proposed by [2] has been well developed. This method reduces the proof of the LDP to verifying a sufficient condition on controlled versions of the equation (see Theorem 2.5 below), and it is the approach adopted in this paper. ## 2. Framework ### Small noise white noise SPDEs with oblique reflection We consider the following small noise SPDE with oblique reflection, driven by an \(m\)-dimensional Brownian motion \(B=(B_{1},\ldots,B_{m})\): \[\left\{\begin{aligned} du_{i}^{\varepsilon}(t,x)=&\frac{\partial^{2}u_{i}^{\varepsilon}(t,x)}{\partial x^{2}}dt+b_{i}(u^{\varepsilon}(t,x))dt+\sqrt{\varepsilon}\sum_{j=1}^{m}\sigma_{ij}(u^{\varepsilon}(t,x))dB_{j}(t)\\ &-\gamma_{i}(u^{\varepsilon}(t,x))dk_{i}(t,x),\quad x\in[0,1],\quad i=1,\ldots,d\\ u^{\varepsilon}(0,\cdot)=&\left(u_{1}^{\varepsilon}(0,\cdot),u_{2}^{\varepsilon}(0,\cdot),\cdots,u_{d}^{\varepsilon}(0,\cdot)\right)^{T}\in\overline{\mathcal{O}}\\ u^{\varepsilon}(t,0)=& u^{\varepsilon}(t,1)=0\end{aligned}\right.\quad. \tag{2.2}\] We need to make some assumptions on the domain \(\mathcal{O}\), the direction of reflection and the coefficients to ensure the well-posedness of (2.2). 1. Convex domain. \(\mathcal{O}\) is a smooth, bounded convex domain in \(\mathbb{R}^{d}\). 2. Uniform interior cone. Let \(x,z\in\mathbb{R}^{d}\) with \(|z|=1\). For \(b>0\) and \(0\leq a<1\), define the set \[C(x,z,b,a):=\left\{y\in\mathbb{R}^{d}:|y-x|<b,\langle y-x,z\rangle_{d}\geq a|y-x|\right\}\] Suppose that \(\gamma\in C_{b}^{2}\left(\mathbb{R}^{d}\right)\) satisfies \(|\gamma(x)|=1\) for \(x\in\partial\mathcal{O}\). We assume: 
(A) There exists \(b>0\) and \(0\leq a<1\) such that for each \(x\in\partial\mathcal{O}\), \[C(x,\gamma(x),b,a)\subset\mathcal{O}\] Let \(n(x)\) be the unique outward unit normal vector to \(\partial\mathcal{O}\) at the point \(x\in\partial\mathcal{O}\). If \(\partial\mathcal{O}\in C^{1}\) and \(\gamma\) is continuous, then condition (A) is equivalent to the following condition (B). (B) There exists \(\rho>0\) such that for every \(x\in\partial\mathcal{O}\), \[\langle\gamma(x),n(x)\rangle_{d}\geq\rho\] Condition (B) says that in the case of a smooth \(\partial\mathcal{O}\), the uniform interior cone condition is equivalent to nontangentiality of \(\gamma\) to \(\partial\mathcal{O}\). 3. Lipschitz condition. There exists a constant \(C>0\) such that \[|b(u)-b(v)|+|\sigma(u)-\sigma(v)|\leq C|u-v|\] **Remark 2.1**.: Since the domain \(\mathcal{O}\) is convex, it satisfies the uniform exterior sphere condition. That means there exists \(r_{0}>0\) such that \[B_{r_{0}}\left(x+r_{0}n(x)\right)\subseteq\mathcal{O}^{c},\forall x\in\partial\mathcal{O}\] where \(n(x)\) is the unit outward normal and \(B_{\beta}(x):=\left\{y\in\mathbb{R}^{d}:|y-x|<\beta\right\},\beta>0\). Since \(\mathcal{O}\) is convex, equivalently there exists a constant \(C_{0}\geq 1/(2r_{0})\) such that \[\langle x-y,n(x)\rangle_{d}+C_{0}|x-y|^{2}\geq 0,\forall x\in\partial\mathcal{O},\forall y\in\overline{\mathcal{O}}\] From [9] we have a lemma on the properties of the oblique vector field \(\gamma\). **Lemma 2.2**.: _Let \(\gamma\in C_{b}^{2}\left(\mathbb{R}^{d}\right)\) satisfy condition (B), then there exists a \(d\times d\) symmetric matrix-valued function \((a_{ij}(x))\) satisfying_ \[(a_{ij}(x))\geq\theta I_{d}\quad\text{ for some }\quad\theta>0,\quad a_{ij}\in C_{b}\left(\mathbb{R}^{d}\right)\] \[\sum_{j=1}^{d}a_{ij}(x)\gamma_{j}(x)=n_{i}(x)\quad\text{ for }\quad 1\leq i\leq d,\quad\forall x\in\partial\mathcal{O}.\] _In particular, there exists \(C_{0}\geq 0\) such that_ \[C_{0}|x-y|^{2}+\sum_{i,j=1}^{d}a_{ij}(x)\left(x_{i}-y_{i}\right)\gamma_{j}(x)\geq 0,\quad\text{ for all }x\in\partial\mathcal{O},y\in\overline{\mathcal{O}}.\] _In addition, if \(\gamma\in C_{b}^{1}\) (resp. \(W^{1,\infty}\)), then \((a_{ij})\in C_{b}^{1}\left(\text{ resp. }W^{1,\infty}\right)\). And if \(\gamma\in C_{b}^{2}\left(\text{ resp. }W^{2,\infty}\right)\), then \((a_{ij})\in C_{b}^{2}\left(\text{resp. }W^{2,\infty}\right)\)._ We give the definition of the solution of (2.2). **Definition 2.3**.: A pair \((u^{\varepsilon},\eta^{\varepsilon})\) is said to be a solution of (2.2) if 1. \(u^{\varepsilon}\) is a continuous random field on \(\mathbb{R}_{+}\times[0,1]\); \(u^{\varepsilon}(t,x)\) is \(\mathcal{F}_{t}\) measurable and \(u^{\varepsilon}(t,x)\in\overline{\mathcal{O}}\) a.s. 2. \(\eta^{\varepsilon}\) is an \(\mathbb{R}^{d}\)-valued random measure on \(\mathbb{R}_{+}\times[0,1]\) such that (a) \(E\left[Var_{Q_{T}}(\eta^{\varepsilon})\right]<\infty,\forall T\geq 0\), where \(Var_{Q_{T}}(\eta^{\varepsilon})\) denotes the total variation of \(\eta^{\varepsilon}\) on \(Q_{T}=[0,T]\times[0,1]\); (b) \(\eta^{\varepsilon}\) is adapted in the sense that for any bounded measurable mapping \(\psi\): \[\int_{0}^{t}\int_{0}^{1}\psi(s,x)\eta^{\varepsilon}(ds,dx)\text{ is }\mathcal{F}_{t}\text{ measurable.}\] 3. 
\((u^{\varepsilon},\eta^{\varepsilon})\) solves the parabolic SPDE with oblique reflection in the following sense: for any \(t\in\mathbb{R}_{+}\), \(\varphi\in C_{0}^{2}\left([0,1];\mathbb{R}^{d}\right)\) with \(\varphi(0)=\varphi(1)=0\), \[\langle u^{\varepsilon}(t),\varphi\rangle-\int_{0}^{t}\left\langle u^{\varepsilon}(s),\varphi^{\prime\prime}\right\rangle ds-\int_{0}^{t}\langle b(u^{\varepsilon}(s)),\varphi\rangle ds\] \[=\langle u(0),\varphi\rangle+\sqrt{\varepsilon}\sum_{k=1}^{m}\int_{0}^{t}\left\langle\sigma_{k}(u^{\varepsilon}(s)),\varphi\right\rangle dB_{k}(s)-\int_{0}^{t}\int_{0}^{1}\varphi(x)\eta^{\varepsilon}(ds,dx)\quad a.s.,\] 4. for any \(\phi\in C([0,T]\times[0,1];\bar{\mathcal{O}})\) \[\int_{0}^{T}\int_{0}^{1}\left\langle u^{\varepsilon}(t,x)-\phi(t,x),\left(a_{ij}(u)\right)\eta^{\varepsilon}(dt,dx)\right\rangle_{d}\geq 0\] ### Weak convergence approach: the abstract setting **Definition 2.4** (Large deviation).: A family \(\left\{X^{\varepsilon}\right\}_{\varepsilon>0}\) of \(\mathcal{E}\)-valued random variables is said to satisfy the large deviation principle on \(\mathcal{E}\), with the good rate function \(I\) and with the speed function \(\lambda(\varepsilon)\), a family of positive numbers tending to \(+\infty\) as \(\varepsilon\to 0\), if the following conditions hold: 1. for each \(M<\infty\), the level set \(\left\{x\in\mathcal{E}:I(x)\leq M\right\}\) is a compact subset of \(\mathcal{E}\); 2. for each closed subset \(F\) of \(\mathcal{E},\limsup_{\varepsilon\to 0}\frac{1}{\lambda(\varepsilon)}\log\mathbb{P}\left(X^{\varepsilon}\in F\right)\leq-\inf_{x\in F}I(x)\); 3. for each open subset \(G\) of \(\mathcal{E},\liminf_{\varepsilon\to 0}\frac{1}{\lambda(\varepsilon)}\log\mathbb{P}\left(X^{\varepsilon}\in G\right)\geq-\inf_{x\in G}I(x)\). We recall here several results from [2] and [11] which give an abstract framework of LDP and a sufficient condition. Let \(\mathcal{H}\) be the Cameron-Martin space \[\mathcal{H}:=\left\{h:[0,T]\rightarrow\mathbb{R}^{m};\text{$h$ is absolutely continuous and }\int_{0}^{T}|\dot{h}(s)|^{2}ds<+\infty\right\}.\] The space \(\mathcal{H}\) is a Hilbert space with inner product \(\left\langle h_{1},h_{2}\right\rangle_{\mathcal{H}}:=\int_{0}^{T}\left\langle\dot{h}_{1}(s),\dot{h}_{2}(s)\right\rangle ds\). The Hilbert space \(\mathcal{H}\) is endowed with the weak topology, i.e., for any \(h_{n},h\in\mathcal{H},n\geq 1\), we say that \(h_{n}\) converges to \(h\) in the weak topology, if for any \(g\in\mathcal{H}\), \[\left\langle h_{n}-h,g\right\rangle_{\mathcal{H}}=\int_{0}^{T}\left\langle\dot{h}_{n}(s)-\dot{h}(s),\dot{g}(s)\right\rangle ds\to 0,\quad\text{ as }n\rightarrow\infty.\] Let \(\mathcal{A}\) denote the class of \(\left\{\mathcal{F}_{t}\right\}\)-predictable processes \(\phi\) belonging to \(\mathcal{H}\) a.s. Let \[S_{N}:=\left\{h\in\mathcal{H};\int_{0}^{T}|\dot{h}(s)|^{2}ds\leq N\right\}.\] \(S_{N}\) is endowed with the weak topology induced from \(\mathcal{H}\). Define \[\mathcal{A}_{N}:=\left\{\phi\in\mathcal{A};\phi(\omega)\in S_{N},\mathbb{P}\text{-a.s. }\right\}.\] **Theorem 2.5**.: _For any \(\varepsilon>0\), let \(\mathcal{G}^{\varepsilon}:C([0,T];H)\rightarrow\mathcal{E}\) be a measurable mapping and \(X^{\varepsilon}:=\mathcal{G}^{\varepsilon}\left(B(\cdot)\right)\). Assume there exists a measurable mapping \(\mathcal{G}^{0}:C([0,T];H)\rightarrow\mathcal{E}\) such that_ 1. 
_For any_ \(N<\infty\)_,_ \(\left\{h^{\varepsilon};\varepsilon>0\right\}\subset\mathcal{A}_{N}\) _and_ \(\delta>0\)_, we have_ \[\lim_{\varepsilon\to 0}\mathbb{P}\left(\rho\left(Y^{\varepsilon},Z^{\varepsilon}\right)>\delta\right)=0\] _where_ \(Y^{\varepsilon}=\mathcal{G}^{\varepsilon}\left(B(\cdot)+\frac{1}{\sqrt{\varepsilon}}\int_{0}^{\cdot}\dot{h}^{\varepsilon}(s)ds\right)\)_,_ \(Z^{\varepsilon}=\mathcal{G}^{0}\left(\int_{0}^{\cdot}\dot{h}^{\varepsilon}(s)ds\right)\)_, and_ \(\rho(\cdot,\cdot)\) _is the metric of_ \(\mathcal{E}\)_._ 2. _For any_ \(N<\infty\) _and_ \(\left\{h^{\varepsilon};\varepsilon>0\right\}\subset S_{N}\) _with_ \(h^{\varepsilon}\to h\) _as_ \(\varepsilon\to 0\)_, we have that_ \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}\dot{h}^{\varepsilon}(s)ds\right)\) _converges to_ \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}\dot{h}(s)ds\right)\)_._ _Then \(\left\{X^{\varepsilon}\right\}_{\varepsilon>0}\) satisfies the LDP in \(\mathcal{E}\) with rate function \(I\) given by_ \[I(g)=\inf_{\left\{h\in\mathcal{H},g=\mathcal{G}^{0}(\int_{0}^{\cdot}\dot{h}(s)ds)\right\}}\left\{\frac{1}{2}\left\|h\right\|_{\mathcal{H}}^{2}\right\}. \tag{2.3}\] _We denote \(\inf\varnothing=\infty\)._ ## 3. Well-posedness of the Skeleton equation For any \(h\in\mathcal{H}\), consider the PDE with oblique reflection (3.1). \[\left\{\begin{aligned} du_{i}^{h}(t,x)=&\frac{\partial^{2}u_{i}^{h}(t,x)}{\partial x^{2}}dt+b_{i}(u^{h}(t,x))dt+\sum_{j=1}^{m}\sigma_{ij}(u^{h}(t,x))\dot{h}_{j}(t)dt\\ &-\gamma_{i}(u^{h}(t,x))dk_{i}(t,x),\quad x\in[0,1],\quad i=1,\ldots,d\\ u^{h}(0,\cdot)=&\left(u_{1}^{h}(0,\cdot),u_{2}^{h}(0,\cdot),\cdots,u_{d}^{h}(0,\cdot)\right)^{T}\in\overline{\mathcal{O}}\\ u^{h}(t,0)=& u^{h}(t,1)=0\end{aligned}\right.\quad. \tag{3.1}\] Let \[\eta^{h}(t,x)=\int_{0}^{t}\gamma\left(u^{h}(s,x)\right)dk(s,x).\] The following is the definition of a solution to (3.1). **Definition 3.1**.: A pair \((u^{h},\eta^{h})\) is said to be a solution of (3.1) if 1. \(u^{h}\) is a continuous field on \(\mathbb{R}_{+}\times[0,1]\) and \(u^{h}(t,x)\in\bar{\mathcal{O}}\). 2. \(\eta^{h}\) is an \(\mathbb{R}^{d}\)-valued measure on \(\mathbb{R}_{+}\times[0,1]\) such that (a) \(Var_{Q_{T}}(\eta^{h})<\infty\), where \(Var_{Q_{T}}(\eta^{h})\) denotes the total variation of \(\eta^{h}\) on \(Q_{T}=[0,T]\times[0,1]\); (b) for any bounded measurable mapping \(\psi\): \[\int_{0}^{t}\int_{0}^{1}\psi(s,x)\eta^{h}(ds,dx)\] is measurable. 3. \((u^{h},\eta^{h})\) solves the parabolic PDE with oblique reflection in the following sense: for any \(t\in\mathbb{R}_{+}\), \(\varphi\in C_{0}^{2}\left([0,1];\mathbb{R}^{d}\right)\) with \(\varphi(0)=\varphi(1)=0\), \[\begin{split}&\langle u^{h}(t),\varphi\rangle-\int_{0}^{t}\left\langle u^{h}(s),\varphi^{\prime\prime}\right\rangle ds-\int_{0}^{t}\langle b(u^{h}(s)),\varphi\rangle ds\\ &=\langle u(0),\varphi\rangle+\sum_{k=1}^{m}\int_{0}^{t}\left\langle\sigma_{k}(u^{h}(s)),\varphi\right\rangle\dot{h}_{k}(s)ds-\int_{0}^{t}\int_{0}^{1}\varphi(x)\eta^{h}(ds,dx),\end{split} \tag{3.2}\] 4. for any \(\phi\in C([0,T]\times[0,1];\bar{\mathcal{O}})\) \[\int_{0}^{T}\int_{0}^{1}\left\langle u^{h}(t,x)-\phi(t,x),\left(a_{ij}(u)\right)\eta^{h}(dt,dx)\right\rangle_{d}\geq 0.\] **Theorem 3.2**.: _The skeleton equation (3.1) admits a unique solution \((u^{h},\eta^{h})\) with \(u^{h}\in C([0,T];H)\cap L^{2}([0,T];V)\)._ For \(y\in\mathbb{R}^{d}\), denote by \(\pi(y)\) the projection of \(y\) onto the domain \(\bar{\mathcal{O}}\). Since \(\mathcal{O}\) is convex, \(\pi\) is a contraction mapping, that is, \(\left|\pi(x)-\pi(y)\right|\leq\left|x-y\right|\), \(\forall x,y\in\mathbb{R}^{d}\). Moreover, we may need the assumption: \[\exists\delta>0\text{ such that }\left\langle\pi(x),\gamma(x)\right\rangle_{d}\geq\delta,\quad x\in\mathbb{R}^{d}. \tag{3.3}\] The penalized system associated with (3.1) is: 
\[\begin{split} u_{i}^{n,h}(t,x)=& u_{i}(0,x)+\int_{0}^{t}\frac{\partial^{2}u_{i}^{n,h}(s,x)}{\partial x^{2}}ds+\int_{0}^{t}b_{i}(u^{n,h}(s,x))ds+\sum_{j=1}^{m}\int_{0}^{t}\sigma_{ij}(u^{n,h}(s,x))\dot{h}_{j}(s)ds\\ &-n\int_{0}^{t}\gamma_{i}(u^{n,h}(s,x))\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|ds.\end{split} \tag{3.4}\] We prepare a number of a priori estimates for (3.4) in order to prove Theorem 3.2. **Lemma 3.3**.: _The following estimates hold._ \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)\right\|_{H}^{4}<\infty \tag{3.5}\] \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\left(n\int_{0}^{T}dt\left\|u^{n,h}(t)\right\|_{H}^{2}\int_{0}^{1}\left\langle u^{n,h}(t,x),\gamma(u^{n,h}(t,x))\right\rangle_{d}\left|u^{n,h}(t,x)-\pi(u^{n,h}(t,x))\right|dx\right)\leq L \tag{3.6}\] Proof.: We have \[\begin{split}\left\|u^{n,h}(t)\right\|_{H}^{4}&=\left\|u^{n,h}(0)\right\|_{H}^{4}+4\int_{0}^{t}ds\left\|u^{n,h}(s)\right\|_{H}^{2}\int_{0}^{1}\left\langle u^{n,h}(s,x),\frac{\partial^{2}u^{n,h}(s,x)}{\partial x^{2}}\right\rangle_{d}dx\\ &+4\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\langle u^{n,h}(s),b(u^{n,h}(s))\right\rangle_{d}ds\\ &+4\sum_{k=1}^{m}\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle\dot{h}_{k}(s)ds\\ &-4n\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\langle u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\right\rangle ds\end{split} \tag{3.7}\] Note that \[\begin{split}& 4\int_{0}^{t}ds\left\|u^{n,h}(s)\right\|_{H}^{2}\int_{0}^{1}\left\langle u^{n,h}(s,x),\frac{\partial^{2}u^{n,h}(s,x)}{\partial x^{2}}\right\rangle_{d}dx=-4\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\|\frac{\partial u^{n,h}(s)}{\partial x}\right\|_{H}^{2}ds\leq 0\\ &\quad\left\langle u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle\\ =&\int_{0}^{1}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\left\langle u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right),\gamma\left(u^{n,h}(s,x)\right)\right\rangle_{d}dx\\ &+\int_{0}^{1}\left\langle\pi\left(u^{n,h}(s,x)\right),\gamma\left(u^{n,h}(s,x)\right)\right\rangle_{d}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|dx\\ \geq&\rho\int_{0}^{1}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|^{2}dx+\delta\int_{0}^{1}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|dx\geq 0\end{split}\] By the Cauchy-Schwarz inequality and Young's inequality, we have \[\begin{split}&\sum_{k=1}^{m}\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle\dot{h}_{k}(s)ds\\ &\qquad\leq\max_{0\leq s\leq T}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\{\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h}(s,x),\sigma_{k}(u^{n,h}(s,x))\right\rangle\dot{h}_{k}(s)ds\right\}\\ &\qquad\leq\max_{0\leq s\leq T}\left\|u^{n,h}(s)\right\|_{H}^{2}\int_{0}^{t}\int_{0}^{1}\left|u^{n,h}(s,x)\right|\left(\sum_{j=1}^{m}\left|\sigma_{j}\left(u^{n,h}(s)\right)\right|^{2}\right)^{\frac{1}{2}}\left(\sum_{j=1}^{m}\left|\dot{h}_{j}(s)\right|^{2}\right)^{\frac{1}{2}}dxds\\ &\qquad\leq\max_{0\leq s\leq T}\left\|u^{n,h}(s)\right\|_{H}^{2}C\left\{\int_{0}^{t}\left[\int_{0}^{1}\left|u^{n,h}(s,x)\right|\left(1+\left|u^{n,h}(s,x)\right|^{2}\right)^{\frac{1}{2}}dx\right]^{2}ds\right\}^{\frac{1}{2}}\left(\int_{0}^{t}|\dot{h}(s)|^{2}ds\right)^{\frac{1}{2}}\\ &\qquad\leq CN\left(\sup_{0\leq s\leq t}\left\|u^{n,h}(s)\right\|_{H}^{2}\right)\left(\int_{0}^{t}\left(1+\left\|u^{n,h}(s)\right\|_{H}^{2}\right)ds\right)^{\frac{1}{2}}\\ 
&\qquad\leq\frac{CN}{2}\left(\sup_{0\leq s\leq t}\left\|u^{n,h}(s)\right\|_{H}^{4}\right)+\frac{1}{2}\left(\int_{0}^{t}\left(1+\left\|u^{n,h}(s)\right\|_{H}^{2}\right)ds\right)\end{split} \tag{3.8}\] We deduce that \[\begin{split}&\sup_{0\leq s\leq t}\left\|u^{n,h}(s)\right\|_{H}^{4}+4n\int_{0}^{t}\left\|u^{n,h}(s)\right\|^{2}_{H}\left\langle u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\right\rangle ds\\ &\leq C\left\|u^{n,h}(0)\right\|^{4}_{H}+C\int_{0}^{t}\left(1+\left\|u^{n,h}(s)\right\|^{4}_{H}\right)ds\end{split} \tag{3.9}\] which implies (3.5) and (3.6) by Gronwall's inequality. **Lemma 3.4**.: _There exists a constant \(M_{T}\) such that_ \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\left(n\int_{0}^{T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{1}([0,1])}dt\right)^{2}\leq M_{T},\quad T>0\] Proof.: We have \[\begin{split}\left\|u^{n,h}(t)\right\|^{2}_{H}=&\left\|u^{n,h}(0)\right\|^{2}_{H}+2\int_{0}^{t}\left\langle u^{n,h}(s),\frac{\partial^{2}u^{n,h}(s)}{\partial x^{2}}\right\rangle ds+2\int_{0}^{t}\left\langle u^{n,h}(s),b(u^{n,h}(s))\right\rangle ds\\ &+2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle\dot{h}_{k}(s)ds\\ &-2n\int_{0}^{t}\left\langle u^{n,h}(s),\gamma(u^{n,h}(s))\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\end{split} \tag{3.10}\] Note that \(\left\langle u^{n,h}(s),\frac{\partial^{2}u^{n,h}(s)}{\partial x^{2}}\right\rangle =-\left\|u^{n,h}(s)\right\|^{2}_{V}\) and \[\left\|\sigma_{k}\left(u^{n,h}(s)\right)\right\|_{H}+\left\|b\left(u^{n,h}(s)\right)\right\|_{H}\leq C\left(1+\left\|u^{n,h}(s)\right\|_{H}\right).\] In view of (3.3), we have \[\begin{split}& 2n\int_{0}^{t}\left\langle u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\\ =& 2n\int_{0}^{t}ds\int_{0}^{1}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\left\langle u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right),\gamma\left(u^{n,h}(s,x)\right)\right\rangle_{d}dx\\ +& 2n\int_{0}^{t}ds\int_{0}^{1}\left\langle\pi\left(u^{n,h}(s,x)\right),\gamma\left(u^{n,h}(s,x)\right)\right\rangle_{d}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|dx\\ \geq& 2\delta n\int_{0}^{t}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{L^{1}([0,1])}ds\end{split} \tag{3.11}\] Combining (3.10), (3.11) and Lemma 3.3, we have \[\begin{split}&\sup_{n}\sup_{h\in\mathcal{S}_{N}}\left(n\int_{0}^{T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{1}([0,1])}dt\right)^{2}\\ &\leq C+C\sup_{n}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)\right\|^{4}_{H}\leq M_{T}\end{split} \tag{3.12}\] Mimicking the proof of [6, Lemma 4.5], we can prove the following result. **Lemma 3.5**.: _It holds that_ \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\left(n^{2}\int_{0}^{T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|^{2}_{H}dt\right)\leq C_{T} \tag{3.13}\] _for some positive constant \(C_{T}\)._ **Lemma 3.6**.: _Assume that \(u(0)\in V\). 
_Then the following estimates hold:_ \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}\right\|^{2}_{V}<\infty \tag{3.14}\] \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\int_{0}^{T}\left\|u^{n,h}(t)\right\|_{H^{2}}^{2}dt<\infty \tag{3.15}\] Proof.: We have \[\begin{split}\left\|u^{n,h}(t)\right\|_{V}^{2}&=\left\|u^{n,h}(0)\right\|_{V}^{2}+2\int_{0}^{t}\left\langle u^{n,h}(s),\frac{\partial^{2}u^{n,h}(s)}{\partial x^{2}}\right\rangle_{V}ds+2\int_{0}^{t}\left\langle u^{n,h}(s),b(u^{n,h}(s))\right\rangle_{V}ds\\ &\quad+2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h}(s),\sigma_{k}\left(u^{n,h}(s)\right)\right\rangle_{V}\dot{h}_{k}(s)ds\\ &\quad-2n\int_{0}^{t}\left\langle u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle_{V}ds\end{split} \tag{3.16}\] Using the integration by parts formula, we deduce that \[\int_{0}^{t}\left\langle u^{n,h}(s),\frac{\partial^{2}u^{n,h}(s)}{\partial x^{2}}\right\rangle_{V}ds=-\int_{0}^{t}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2}ds\leq 0\] and \[\begin{split}&\left\langle u^{n,h}(s),\gamma(u^{n,h}(s))\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle_{V}\\ &=-\int_{0}^{1}\left\langle\frac{\partial^{2}u^{n,h}(s,x)}{\partial x^{2}},\gamma\left(u^{n,h}(s,x)\right)\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\right\rangle_{d}dx\end{split} \tag{3.17}\] From the boundedness of \(\gamma(x)\), we have \[\begin{split}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)\right\|_{V}^{2}+2\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2}ds&\leq C\left\|u^{n,h}(0)\right\|_{V}^{2}+C\int_{0}^{T}\left\langle u^{n,h}(s),b(u^{n,h}(s))\right\rangle_{V}ds\\ &+C\sum_{k=1}^{m}\int_{0}^{T}\left\langle u^{n,h}(s),\sigma_{k}\left(u^{n,h}(s)\right)\right\rangle_{V}\dot{h}_{k}(s)ds\\ &+Cn\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|_{H}ds\end{split} \tag{3.18}\] \[\begin{split}\sum_{k=1}^{m}\int_{0}^{T}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle_{V}\dot{h}_{k}(s)ds&\leq C\left(\sum_{k=1}^{m}\int_{0}^{T}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle_{V}^{2}ds\right)^{\frac{1}{2}}\left(\int_{0}^{T}|\dot{h}(s)|^{2}ds\right)^{\frac{1}{2}}\\ &\leq C_{1}\left(\sum_{k=1}^{m}\int_{0}^{T}\left\langle u^{n,h}(s),\sigma_{k}(u^{n,h}(s))\right\rangle_{V}^{2}ds\right)^{\frac{1}{2}}\\ &\leq\frac{1}{4}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)\right\|_{V}^{2}+C_{2}\sum_{k=1}^{m}\int_{0}^{T}\left\|\sigma_{k}\left(u^{n,h}(s)\right)\right\|_{V}^{2}ds\end{split} \tag{3.19}\] Using the following elementary inequality \[0\leq ab\leq na^{2}+\frac{1}{n}b^{2}\] we obtain that \[\begin{split}& Cn\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|_{H}ds\\ &\leq Cn\int_{0}^{T}\left(\frac{1}{Cn}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2}+Cn\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}\right)ds\\ &=\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2}ds+Cn^{2}\int_{0}^{T}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\end{split} \tag{3.20}\] By Lemma 3.5, the second term is bounded from above by \(C_{T}\). Thus in view of the linear growth conditions of \(b\) and \(\sigma\), we conclude from the above displays that \[\sup_{0\leq t\leq T}\left\|u^{n,h}(t)\right\|_{V}^{2}+2\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2} \leq C\left\|u^{n,h}(0)\right\|_{V}^{2}+C_{T} \tag{3.21}\] \[+\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H^{2}}^{2}ds+C\int_{0}^{T}\left(1+\left\|u^{n,h}(s)\right\|_{V}^{2}\right)ds\] which implies (3.14) and (3.15). 
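Although the argument above is purely analytical, the structure of the penalized system (3.4) can also be seen in a discretized form. The following is a minimal finite-difference sketch (explicit Euler in time, centered second differences in space) under the illustrative assumptions that \(\mathcal{O}\) is the unit ball of \(\mathbb{R}^{d}\), the reflection is normal (\(\gamma=n\)) and \(m=1\); `penalized_step` is a hypothetical helper name, and the sketch only illustrates how the penalty term \(-n\gamma(u)\left|u-\pi(u)\right|\) pushes the solution back toward \(\overline{\mathcal{O}}\). It is not part of the proofs.

```python
import numpy as np

def penalized_step(u, n_pen, dt, dx, b, sigma, hdot):
    """One explicit Euler step of the penalized system (3.4).

    u:      (d, J) array, the current profile on a grid of J points in [0, 1]
    n_pen:  penalization parameter n
    b, sigma: drift/diffusion callables returning (d, J) arrays; hdot is the
              scalar control value (m = 1 for simplicity)
    Domain: unit ball of R^d, so pi(u) is the radial projection and the
    outward normal direction is u / |u| (illustrative assumptions).
    """
    # Centered second difference in x at interior grid points.
    lap = np.zeros_like(u)
    lap[:, 1:-1] = (u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]) / dx**2
    norms = np.linalg.norm(u, axis=0)
    dist = np.maximum(norms - 1.0, 0.0)            # |u - pi(u)| for the ball
    gamma = u / np.where(norms > 0.0, norms, 1.0)  # outward normal direction
    u_new = u + dt * (lap + b(u) + sigma(u) * hdot - n_pen * gamma * dist)
    u_new[:, 0] = u_new[:, -1] = 0.0               # Dirichlet boundary values
    return u_new
```

As `n_pen` grows, any excursion outside the unit ball is damped proportionally to its distance from the ball, mirroring the role of the penalty term in Lemmas 3.3-3.8.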
By Lemma 3.6, we have the following corollary. **Corollary 3.7**.: \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{V}^{2}<\infty\] Proof.: Obviously, since \(\left|\pi(x)-\pi(y)\right|\leq\left|x-y\right|\) and by the properties of the Sobolev space \(V\), we deduce that \(\left\langle\pi(u),\pi(u)\right\rangle_{V}\leq\left\langle u,u\right\rangle_{V}\). In addition, since \[\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{V}\leq\left\|u^{n,h}(t)\right\|_{V}+\left\|\pi\left(u^{n,h}(t)\right)\right\|_{V}\leq 2\left\|u^{n,h}(t)\right\|_{V}\] then by Lemma 3.6 we can get this corollary. **Lemma 3.8**.: \[\lim_{n\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2}=0\] Proof.: Let \(q(z)=dist(z,\bar{\mathcal{O}})\) and define \(\psi(u)=\left(\int_{0}^{1}q\left(u(x)\right)^{2}dx\right)^{2}\), i.e. \[\psi(u)=\left\|u-\pi(u)\right\|_{H}^{4}\quad\text{for }u\in H\] Then the first Frechet derivative \(\psi^{\prime}\) at \(u\) is given as follows: for \(h\in H\), \[\psi^{\prime}(u)(h)=4\left(\int_{0}^{1}q\left(u(x)\right)^{2}dx\right)\left\langle u-\pi(u),h\right\rangle\] We have \[\psi(u^{n,h}(t)) =2\int_{0}^{t}ds\left(\int_{0}^{1}q\left(u^{n,h}(s,x)\right)^{2}dx\right)\int_{0}^{1}\sum_{i=1}^{d}\frac{\partial q}{\partial z_{i}}\left(u^{n,h}(s,x)\right)\frac{\partial^{2}u_{i}^{n,h}(s,x)}{\partial x^{2}}dx \tag{3.22}\] \[+4\sum_{k=1}^{m}\int_{0}^{t}\left(\int_{0}^{1}q\left(u^{n,h}(s,x)\right)^{2}dx\right)\left\langle u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\sigma_{k}\left(u^{n,h}(s)\right)\right\rangle\dot{h}_{k}(s)ds\] \[+4\int_{0}^{t}\left(\int_{0}^{1}q\left(u^{n,h}(s,x)\right)^{2}dx\right)\left\langle u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),b\left(u^{n,h}(s)\right)\right\rangle ds\] \[-4n\int_{0}^{t}\left(\int_{0}^{1}q\left(u^{n,h}(s,x)\right)^{2}dx\right)\left\langle u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\] \[+4\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\sigma_{k}\left(u^{n,h}(s)\right)\right\rangle^{2}ds+\sum_{k=1}^{m}\int_{0}^{t}ds\left(\int_{0}^{1}q\left(u^{n,h}(s,x)\right)^{2}dx\right)\] \[=I_{1}+I_{2}+I_{3}+I_{4}+I_{5}\] In view of \(\left\langle\gamma(u),n(u)\right\rangle_{d}\geq\rho>0\), it yields \[\left\langle u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle \tag{3.23}\] \[=\int_{0}^{1}\left|u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right)\right|\left\langle u^{n,h}(s,x)-\pi\left(u^{n,h}(s,x)\right),\gamma\left(u^{n,h}(s,x)\right)\right\rangle_{d}dx\] \[\geq\rho\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}\geq 0,\] which implies \(I_{4}\leq 0\). Since \(\mathcal{O}\) is convex and the function \(q(z)\) is also convex on \(\mathbb{R}^{d}\backslash\mathcal{O}\), the matrix \(\left(\frac{\partial^{2}q}{\partial z_{i}\partial z_{j}}\right)_{1\leq i,j\leq d}\) is positive semi-definite on this domain. 
Then by the integration by parts formula, we have \[I_{1}^{n}(t) =-2\int_{0}^{t}ds\left(\psi\left(u^{n,h}(s)\right)\right)^{\frac{1}{2}}\int_{0}^{1}\sum_{i,j=1}^{d}\frac{\partial^{2}q}{\partial z_{i}\partial z_{j}}\left(u^{n,h}(s,x)\right)\frac{\partial u_{i}^{n,h}(s,x)}{\partial x}\frac{\partial u_{j}^{n,h}(s,x)}{\partial x}dx\] \[\leq 0\] \[\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}I_{2} \leq\sum_{k=1}^{m}\int_{0}^{T}\psi\left(u^{n,h}(s)\right)\left<u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\sigma_{k}\left(u^{n,h}(s)\right)\right>^{2}ds\left(\int_{0}^{T}\dot{h}(s)^{2}ds\right)^{\frac{1}{2}} \tag{3.24}\] \[\leq C\sum_{k=1}^{m}\int_{0}^{T}\psi\left(u^{n,h}(s)\right)\left<u^{n,h}(s)-\pi\left(u^{n,h}(s)\right),\sigma_{k}\left(u^{n,h}(s)\right)\right>^{2}ds\] \[\leq\frac{1}{4}\sup_{0\leq t\leq T}\psi\left(u^{n,h}(t)\right)+C_{1}\left(\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\right)\] \[\leq\frac{1}{4}\sup_{0\leq t\leq T}\psi\left(u^{n,h}(t)\right)+C_{2}\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\] And \[n\int_{0}^{T}dt\left\|u^{n,h}(t)\right\|_{H}^{2}\int_{0}^{1}\left<u^{n,h}(t,x),\gamma\left(u^{n,h}(t,x)\right)\right>_{d}\left|u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right|dx \tag{3.26}\] \[=n\int_{0}^{T}dt\left\|u^{n,h}(t)\right\|_{H}^{2}\int_{0}^{1}\left<u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right),\gamma\left(u^{n,h}(t,x)\right)\right>_{d}\left|u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right|dx\] \[+n\int_{0}^{T}dt\left\|u^{n,h}(t)\right\|_{H}^{2}\int_{0}^{1}\left<\pi\left(u^{n,h}(t,x)\right),\gamma\left(u^{n,h}(t,x)\right)\right>_{d}\left|u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right|dx\] \[\geq n\rho\int_{0}^{T}\left\|u^{n,h}(t)\right\|_{H}^{2}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2}dt\] Finally, we can get \[\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2}=\sup_{0\leq t\leq T}\psi\left(u^{n,h}(t)\right)^{\frac{1}{2}}\leq\left(\sup_{0\leq t\leq T}\psi\left(u^{n,h}(t)\right)\right)^{\frac{1}{2}} \tag{3.27}\] \[\leq C\left(\int_{0}^{T}\left\|u^{n,h}(s)\right\|_{H}^{2}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\leq C\left(\frac{L}{n\rho}\right)^{\frac{1}{2}}\to 0\quad\text{as }n\to\infty\] **Corollary 3.9**.: \[\lim_{n\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}=0\] Proof.: By the Sobolev embedding, for every \(\varepsilon>0\), there exists a constant \(C_{\varepsilon}\) such that \[\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{\infty}([0,1])}^{2} \leq\varepsilon\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{V}^{2}\] \[+C_{\varepsilon}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2} \tag{3.28}\] Letting \(n\to\infty\), it follows from Lemma 3.8 and Corollary 3.7 that \[\lim_{n\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}\leq C\varepsilon\] Send \(\varepsilon\) to \(0\) to prove the corollary. 
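For completeness, the embedding estimate used in (3.28) can be spelled out. Assuming the convention \(V=H_{0}^{1}([0,1];\mathbb{R}^{d})\) and \(H=L^{2}([0,1];\mathbb{R}^{d})\) suggested by the computations above, the one-dimensional Agmon (Gagliardo-Nirenberg) inequality gives, for \(f\in V\), \[\|f\|_{L^{\infty}([0,1])}^{2}\leq C\|f\|_{H}\|f\|_{V}\leq\varepsilon\|f\|_{V}^{2}+\frac{C^{2}}{4\varepsilon}\|f\|_{H}^{2},\] where the second step is Young's inequality \(ab\leq\varepsilon a^{2}+b^{2}/(4\varepsilon)\); taking \(f=u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\) and \(C_{\varepsilon}=C^{2}/(4\varepsilon)\) yields exactly the form of (3.28).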
**Lemma 3.10**.: _For any \(T>0\), \(N>0\) and \(n\geq m\),_ \[\lim_{n,m\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}+2\sup_{h\in\mathcal{S}_{N}}\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{V}^{2}dt=0 \tag{3.29}\] Proof.: For \(n\geq m\), we have \[\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2} =2\int_{0}^{t}\left\langle u^{n,h}(s)-u^{m,h}(s),\frac{\partial^{2}\left(u^{n,h}(s)-u^{m,h}(s)\right)}{\partial x^{2}}\right\rangle ds\] \[+2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h}(s)-u^{m,h}(s),\sigma_{k}\left(u^{n,h}(s)\right)-\sigma_{k}\left(u^{m,h}(s)\right)\right\rangle\dot{h}_{k}(s)ds\] \[+2\int_{0}^{t}\left\langle u^{n,h}(s)-u^{m,h}(s),b\left(u^{n,h}(s)\right)-b\left(u^{m,h}(s)\right)\right\rangle ds\] \[-2n\int_{0}^{t}\left\langle u^{n,h}(s)-u^{m,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\] \[+2m\int_{0}^{t}\left\langle u^{n,h}(s)-u^{m,h}(s),\gamma\left(u^{m,h}(s)\right)\left|u^{m,h}(s)-\pi\left(u^{m,h}(s)\right)\right|\right\rangle ds\] \[=I_{1}^{m,n}+I_{2}^{m,n}+I_{3}^{m,n}+I_{4}^{m,n}+I_{5}^{m,n} \tag{3.30}\] Using the integration by parts formula, we deduce that \[\left\langle u^{n,h}(s)-u^{m,h}(s),\frac{\partial^{2}\left(u^{n,h}(s)-u^{m,h}(s)\right)}{\partial x^{2}}\right\rangle=-\left\|u^{n,h}(s)-u^{m,h}(s)\right\|_{V}^{2}\leq 0 \tag{3.31}\] By Hölder's inequality, we have \[2n\int_{0}^{t}\left\langle u^{n,h}(s)-\pi\left(u^{m,h}(s)\right),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\] \[\geq-Cn\int_{0}^{t}\left\|u^{n,h}(s)-u^{m,h}(s)\right\|_{H}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}ds\] \[-Cn\int_{0}^{t}\left\|u^{m,h}(s)-\pi\left(u^{m,h}(s)\right)\right\|_{H}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}ds\] \[\geq-C\left(\int_{0}^{t}\left\|u^{n,h}(s)-u^{m,h}(s)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\left(n^{2}\int_{0}^{t}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\] \[-C\sup_{0\leq s\leq t}\left\|u^{m,h}(s)-\pi\left(u^{m,h}(s)\right)\right\|_{H}\left(n\int_{0}^{t}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}ds\right) \tag{3.32}\] Consequently, \[I_{4}^{n,m}= -2n\int_{0}^{t}\left\langle u^{n,h}(s)-\pi\left(u^{m,h}(s)\right),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\] \[+2n\int_{0}^{t}\left\langle u^{m,h}(s)-\pi\left(u^{m,h}(s)\right),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds\] \[\leq C\left(\int_{0}^{t}\left\|u^{n,h}(s)-u^{m,h}(s)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\left(n^{2}\int_{0}^{t}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\] \[+C\sup_{0\leq s\leq t}\left\|u^{m,h}(s)-\pi\left(u^{m,h}(s)\right)\right\|_{H}\times\left(n\int_{0}^{t}\left\|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right\|_{H}ds\right) \tag{3.33}\] By a similar argument as above, the same estimate holds for \(I_{5}^{m,n}\). 
Combining the estimates for \(I_{4}^{m,n}\) and \(I_{5}^{m,n}\) and applying Lemma 3.4, we have \[\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}+2\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{V}^{2}dt\] \[\leq\frac{1}{4}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}+C\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}dt\] \[+\left\{C\left(C_{T}\right)^{\frac{1}{2}}\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}dt+\sup_{0\leq t\leq T}\left\|u^{m,h}(t)-\pi\left(u^{m,h}(t)\right)\right\|_{H}^{2}\right.\] \[+\left.\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2}\right\}\] \[+C\left(\sup_{0\leq t\leq T}\left\|u^{m,h}(t)-\pi\left(u^{m,h}(t)\right)\right\|_{L^{\infty}([0,1])}\right)\times 2n\int_{0}^{T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{1}([0,1])}dt \tag{3.34}\] By Gronwall's inequality and Lemmas 3.4 and 3.8, we have \[\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}+2\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{V}^{2}dt \tag{3.35}\] \[\leq C\left(M_{T}\right)^{\frac{1}{2}}\sup_{0\leq t\leq T}\left\|u^{m,h}(t)-\pi\left(u^{m,h}(t)\right)\right\|_{L^{\infty}([0,1])}\] \[+C\left(M_{T}\right)^{\frac{1}{2}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{\infty}([0,1])}\] By Corollary 3.9, we have \[\lim_{n,m\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{H}^{2}+2\sup_{h\in\mathcal{S}_{N}}\int_{0}^{T}\left\|u^{n,h}(t)-u^{m,h}(t)\right\|_{V}^{2}dt=0\] Proof of Theorem 3.2.: From Lemma 3.10, there exists \(u^{h}\) such that for any \(T>0\), \(u^{h}\in C\left([0,T];H\right)\cap L^{2}\left([0,T];V\right)\). We will show \(u^{h}\) is the solution of (3.1). From Lemma 3.8, it follows that \[\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{h}(t)-\pi(u^{h}(t))\right\|_{H}^{2}\leq\lim_{n\to\infty}\sup_{h\in\mathcal{S}_{N}}\sup_{0\leq t\leq T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{H}^{2}=0\] This means that for any \(t>0\), we have \(dist(u^{h}(t,x),\bar{\mathcal{O}})=0\) for almost all \(x\in[0,1]\). Let \[\eta^{n,h}(dt,dx)=n\gamma\left(u^{n,h}(t,x)\right)\left|u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right|dtdx\] For every \(T>0\), by Lemma 3.4, we have \[\sup_{n}\sup_{h\in\mathcal{S}_{N}}Var\left(\eta^{n,h}\right)\left([0,T]\times[0,1]\right)^{2}\leq\sup_{n}\sup_{h\in\mathcal{S}_{N}}\left(n\int_{0}^{T}\left\|u^{n,h}(t)-\pi\left(u^{n,h}(t)\right)\right\|_{L^{1}\left([0,1]\right)}dt\right)^{2}<\infty \tag{3.36}\] where \(Var\left(\eta^{n,h}\right)\left([0,T]\times[0,1]\right)\) denotes the total variation of \(\eta^{n,h}\) on \(Q_{T}=[0,T]\times[0,1]\). Let \(\mathcal{M}\left(Q_{T}\right)\) be the Banach space of measures on \(Q_{T}\) with the norm of total variation. It follows from (3.36) that \(\left\{\eta^{h,n}(dt,dx)\right\}\) is bounded in \(\mathcal{M}\left(Q_{T}\right)\). Since \(\mathcal{M}\left(Q_{T}\right)\) can be identified with the dual of \(C\left(Q_{T}\right)\), \(\eta^{h,n}\) converges (along a subsequence) to an element \(\eta^{h}\in\mathcal{M}\left(Q_{T}\right)\) with respect to the weak-* topology. From (3.36), we also have \(Var\left(\eta^{h}\right)\left([0,T]\times[0,1]\right)<\infty\). 
Taking any \(\varphi\in C_{0}^{2}\left((0,\infty)\times[0,1];\mathbb{R}^{d}\right)\), by the chain rule we have \[\begin{split}&\left\langle u^{n,h}(t),\varphi(t)\right\rangle-\int_{0}^{t}\left\langle u^{n,h}(s),\frac{\partial\varphi(s)}{\partial s}\right\rangle ds-\int_{0}^{t}\left\langle u^{n,h}(s),\varphi^{\prime\prime}(s)\right\rangle ds\\ &=\left\langle u(0),\varphi(0)\right\rangle+\sum_{k=1}^{m}\int_{0}^{t}\left\langle\sigma_{k}(u^{n,h}(s)),\varphi(s)\right\rangle\dot{h}(s)ds\\ &+\int_{0}^{t}\left\langle b\left(u^{n,h}(s)\right),\varphi(s)\right\rangle ds-\int_{0}^{t}\int_{0}^{1}\varphi(s,x)\eta^{n,h}(ds,dx)\end{split} \tag{3.37}\] All the terms on the right hand side of the above identity converge. As \(n\to\infty\), we have \[\begin{split}&\left\langle u^{h}(t),\varphi(t)\right\rangle-\int_{0}^{t}\left\langle u^{h}(s),\frac{\partial\varphi(s)}{\partial s}\right\rangle ds-\int_{0}^{t}\left\langle u^{h}(s),\varphi^{\prime\prime}(s)\right\rangle ds\\ &=\left\langle u(0),\varphi(0)\right\rangle+\sum_{k=1}^{m}\int_{0}^{t}\left\langle\sigma_{k}(u^{h}(s)),\varphi(s)\right\rangle\dot{h}(s)ds\\ &+\int_{0}^{t}\left\langle b\left(u^{h}(s)\right),\varphi(s)\right\rangle ds-\int_{0}^{t}\int_{0}^{1}\varphi(s,x)\eta^{h}(ds,dx)\end{split} \tag{3.38}\] For any \(\phi\in C([0,T]\times[0,1];\overline{\mathcal{O}})\), we obtain that \[\left\langle u^{n,h}(t,x)-\phi(t,x),u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right\rangle_{d}\geq 0.\] Since \(\sum_{j=1}^{d}a_{ij}(x)\gamma_{j}(x)=n_{i}(x)\) for any \(x\in\partial\mathcal{O}\), we have \[\left(a_{ij}\left(u^{n,h}\right)\right)\eta^{n,h}(dt,dx)=\left(a_{ij}\left(u^{n,h}\right)\right)n\gamma\left(u^{n,h}(t,x)\right)\left|u^{n,h}(t,x)-\pi\left(u^{n,h}(t,x)\right)\right|dtdx\] and then \[\left\langle u^{n,h}(t,x)-\phi(t,x),\left(a_{ij}\left(u^{n,h}\right)\right)\eta^{n,h}(dt,dx)\right\rangle_{d}\geq 0.\] Letting \(n\to\infty\), it yields \[\int_{0}^{T}\int_{0}^{1}\left\langle u^{h}(t,x)-\phi(t,x),\left(a_{ij}(u^{h})\right)\eta^{h}(dt,dx)\right\rangle_{d}\geq 0,\] by the strong convergence of \(u^{n,h}\) in \(L^{2}\left(\Omega,C\left(Q_{T}\right)\right)\) combined with the Sobolev embedding. Therefore, we conclude from the above displays that \((u^{h},\eta^{h})\) is a solution to (3.1). Let \(\left(v^{h},\eta_{2}^{h}(dt,dx)\right)\) be another solution to the skeleton equation (3.1) such that \(\sup_{0\leq t\leq T}\left\|v^{h}(t)\right\|_{H}^{2}<\infty\) for any \(T>0\). There exists a function \(\Phi\in C_{b}^{2}\left(\mathbb{R}^{d}\right)\) such that \[\exists\alpha>0,\quad\forall u\in\partial\mathcal{O},\quad\left\langle\nabla\Phi(u),\gamma(u)\right\rangle_{d}\leq-\alpha C_{0}\leq 0.
\tag{3.39}\] Define \[\phi(u^{h}(t)):=\int_{0}^{1}\Phi(u^{h}(t,x))dx\] By the chain rule, we have \[\begin{split}\phi\left(u^{h}(t)\right)=&\phi\left(u^{h}(0)\right)+\int_{0}^{t}\left\langle\nabla\Phi\left(u^{h}(s)\right),\frac{\partial^{2}u^{h}(s)}{\partial x^{2}}\right\rangle ds+\int_{0}^{t}\left\langle\nabla\Phi\left(u^{h}(s)\right),b\left(u^{h}(s)\right)\right\rangle ds\\ &+\sum_{k=1}^{m}\int_{0}^{t}\left\langle\nabla\Phi\left(u^{h}(s)\right),\sigma_{k}\left(u^{h}(s)\right)\right\rangle\dot{h}(s)ds-\int_{0}^{t}\left\langle\nabla\Phi\left(u^{h}(s)\right),d\eta_{1}^{h}(s)\right\rangle\end{split} \tag{3.40}\] and \[\begin{split}d\,e^{-\lambda\left(\phi\left(u^{h}(t)\right)+\phi\left(v^{h}(t)\right)\right)}&=-\lambda e^{-\lambda\left(\phi\left(u^{h}(t)\right)+\phi\left(v^{h}(t)\right)\right)}d\left(\phi\left(u^{h}(t)\right)+\phi\left(v^{h}(t)\right)\right)\\ &=-\lambda e^{-\lambda\left(\phi\left(u^{h}(t)\right)+\phi\left(v^{h}(t)\right)\right)}\Big\{\Big[\left\langle\nabla\Phi\left(u^{h}(t)\right),\frac{\partial^{2}u^{h}(t)}{\partial x^{2}}\right\rangle\\ &+\left\langle\nabla\Phi\left(v^{h}(t)\right),\frac{\partial^{2}v^{h}(t)}{\partial x^{2}}\right\rangle\Big]dt+\sum_{k=1}^{m}\Big[\left\langle\nabla\Phi\left(u^{h}(t)\right),\sigma_{k}\left(u^{h}(t)\right)\right\rangle\dot{h}(t)\\ &+\left\langle\nabla\Phi\left(v^{h}(t)\right),\sigma_{k}\left(v^{h}(t)\right)\right\rangle\dot{h}(t)\Big]dt+\Big[\left\langle\nabla\Phi\left(u^{h}(t)\right),b\left(u^{h}(t)\right)\right\rangle\\ &+\left\langle\nabla\Phi\left(v^{h}(t)\right),b\left(v^{h}(t)\right)\right\rangle\Big]dt-\left\langle\nabla\Phi\left(u^{h}(t)\right),d\eta_{1}^{h}(t)\right\rangle\\ &-\left\langle\nabla\Phi\left(v^{h}(t)\right),d\eta_{2}^{h}(t)\right\rangle\Big\}\end{split} \tag{3.41}\] Define \[\varphi(t):=\int_{0}^{1}\left[a_{ij}(u^{h}(t,x))+a_{ij}(v^{h}(t,x))\right]\left(u_{i}^{h}(t,x)-v_{i}^{h}(t,x)\right)\left(u_{j}^{h}(t,x)-v_{j}^{h}(t,x)\right)dx\] By the chain rule we have \[\begin{split}d\varphi(t)&=2\int_{0}^{1}\left[a_{ij}(u^{h}(t,x))+a_{ij}(v^{h}(t,x))\right]\left(u_{i}^{h}-v_{i}^{h}\right)(t,x)\times\Big\{\frac{\partial^{2}\left(u_{j}^{h}-v_{j}^{h}\right)}{\partial x^{2}}(t,x)dt\\ &+\left(b_{j}\left(u^{h}\right)-b_{j}\left(v^{h}\right)\right)(t,x)dt+\sum_{k=1}^{m}\left(\sigma_{jk}\left(u^{h}\right)-\sigma_{jk}(v^{h})\right)(t,x)\dot{h}(t)dt\\ &-\gamma_{j}\left(u^{h}(t,x)\right)dK_{1}(t,x)+\gamma_{j}\left(v^{h}(t,x)\right)dK_{2}(t,x)\Big\}dx\\ &+\int_{0}^{1}\left[a_{ij}^{\prime}(u^{h}(t,x))du_{i}^{h}(t,x)+a_{ij}^{\prime}(v^{h}(t,x))dv_{i}^{h}(t,x)\right]\left(u_{i}^{h}-v_{i}^{h}\right)\left(u_{j}^{h}-v_{j}^{h}\right)(t,x)dx\end{split} \tag{3.42}\] By the choice of \(\Phi\) in (3.39), we have \[\frac{1}{\alpha}\langle\nabla\Phi(u),\gamma(u)\rangle_{d}|u-v|^{2}-\sum_{i,j=1}^{d}a_{ij}(u)\left(u_{i}-v_{i}\right)\gamma_{j}(u)\leq 0\] \[\frac{1}{\alpha}\langle\nabla\Phi(v),\gamma(v)\rangle_{d}|u-v|^{2}-\sum_{i,j=1}^{d}a_{ij}(v)\left(v_{i}-u_{i}\right)\gamma_{j}(v)\leq 0.\] We also have \[\begin{split}&\int_{0}^{1}\left[a_{ij}(u^{h}(t,x))+a_{ij}(v^{h}(t,x))\right]\left(u_{i}^{h}-v_{i}^{h}\right)(t,x)\frac{\partial^{2}\left(u_{j}^{h}(t,x)-v_{j}^{h}(t,x)\right)}{\partial x^{2}}dx\\ &\leq C\int_{0}^{1}\left(u_{i}^{h}-v_{i}^{h}\right)(t,x)\frac{\partial^{2}\left(u_{i}^{h}(t,x)-v_{i}^{h}(t,x)\right)}{\partial x^{2}}dx\\ &=-C\int_{0}^{1}\left(\frac{\partial\left(u_{i}^{h}(t,x)-v_{i}^{h}(t,x)\right)}{\partial x}\right)^{2}dx\leq 0.\end{split} \tag{3.43}\]
\[e^{-\lambda\left(\phi\left(u_{t}^{h}\right)+\phi\left(v_{t}^{h}\right)\right)}\varphi(t)\] \[\leq C_{1}\int_{0}^{t}\left\|u^{h}(s)-v^{h}(s)\right\|_{H}^{2}e^{-\lambda\left(\phi\left(u_{s}^{h}\right)+\phi\left(v_{s}^{h}\right)\right)}ds\] \[+C_{2}\int_{0}^{t}e^{-\lambda\left(\phi\left(u_{s}^{h}\right)+\phi\left(v_{s}^{h}\right)\right)}\int_{0}^{1}|u^{h}(s,x)-v^{h}(s,x)|^{2}\left(dK_{1}(s,x)+dK_{2}(s,x)\right)\] \[-2\int_{0}^{t}e^{-\lambda\left(\phi\left(u_{s}^{h}\right)+\phi\left(v_{s}^{h}\right)\right)}\int_{0}^{1}\left[a_{ij}(u^{h}(s,x))+a_{ij}(v^{h}(s,x))\right]\times\left(u_{i}^{h}(s,x)-v_{i}^{h}(s,x)\right)\left(\gamma_{j}(u^{h})dK_{1}(s,x)-\gamma_{j}(v^{h})dK_{2}(s,x)\right)\] \[-\alpha C_{0}\lambda\theta\int_{0}^{t}e^{-\lambda\left(\phi\left(u_{s}^{h}\right)+\phi\left(v_{s}^{h}\right)\right)}\int_{0}^{1}|u^{h}(s,x)-v^{h}(s,x)|^{2}\left(dK_{1}(s,x)+dK_{2}(s,x)\right). \tag{3.44}\] Taking \(\lambda=\frac{C_{2}+2C_{0}}{\alpha C_{0}\theta}\) we deduce that \[e^{-\lambda\left(\phi\left(u_{t}^{h}\right)+\phi\left(v_{t}^{h}\right)\right)}\varphi(t)\leq C^{\prime}\left[\int_{0}^{t}\left\|u^{h}(s)-v^{h}(s)\right\|_{H}^{2}e^{-\lambda\left(\phi\left(u_{s}^{h}\right)+\phi\left(v_{s}^{h}\right)\right)}ds\right]\] which implies that \[\sup_{t\leq T}\left\|u^{h}(t)-v^{h}(t)\right\|_{H}^{2}\leq C\int_{0}^{T}\sup_{r\leq s}\left\|u^{h}(r)-v^{h}(r)\right\|_{H}^{2}ds\] Therefore, by Gronwall's inequality, we can show that \(u^{h}(t)=v^{h}(t)\). In order to prove Proposition 3.12, we need the following lemma. **Lemma 3.11**.: _For \(N>1\), if \(h_{r}\) converges weakly to \(h\) as \(r\to 0\), then for any \(T>0\) and \(n\geq 1\),_ \[\lim_{r\to 0}\left(\sup_{0\leq t\leq T}\left\|u^{n,h_{r}}(t)-u^{n,h}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{n,h_{r}}(t)-u^{n,h}(t)\right\|_{V}^{2}dt\right)=0. \tag{3.45}\] Proof.: \[\left\|u^{n,h_{r}}(t)-u^{n,h}(t)\right\|_{H}^{2} =2\int_{0}^{t}\left\langle u^{n,h_{r}}(s)-u^{n,h}(s),\frac{\partial^{2}\left(u^{n,h_{r}}(s)-u^{n,h}(s)\right)}{\partial x^{2}}\right\rangle ds\] \[+2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h_{r}}(s)-u^{n,h}(s),\sigma_{k}\left(u^{n,h_{r}}(s)\right)-\sigma_{k}\left(u^{n,h}(s)\right)\right\rangle\dot{h}(s)ds\] \[+2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h_{r}}(s),\sigma_{k}\left(u^{n,h_{r}}(s)\right)\right\rangle\left(\dot{h}_{r}(s)-\dot{h}(s)\right)ds\] \[+2\int_{0}^{t}\left\langle u^{n,h_{r}}(s)-u^{n,h}(s),b\left(u^{n,h_{r}}(s)\right)-b\left(u^{n,h}(s)\right)\right\rangle ds\] \[-2n\int_{0}^{t}\left\langle u^{n,h_{r}}(s)-u^{n,h}(s),\gamma\left(u^{n,h_{r}}(s)\right)\left|u^{n,h_{r}}(s)-\pi\left(u^{n,h_{r}}(s)\right)\right|\right\rangle ds\] \[+2n\int_{0}^{t}\left\langle u^{n,h_{r}}(s)-u^{n,h}(s),\gamma\left(u^{n,h}(s)\right)\left|u^{n,h}(s)-\pi\left(u^{n,h}(s)\right)\right|\right\rangle ds \tag{3.46}\] Since \(h_{r}\) converges weakly to \(h\) as \(r\to 0\), we have \[\lim_{r\to 0}2\sum_{k=1}^{m}\int_{0}^{t}\left\langle u^{n,h_{r}}(s),\sigma_{k}\left(u^{n,h_{r}}(s)\right)\right\rangle\left(\dot{h}_{r}(s)-\dot{h}(s)\right)ds=0 \tag{3.47}\] By a similar argument as in Lemma 3.10, we can prove Lemma 3.11.
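For the reader's convenience, we record the classical integral form of Gronwall's inequality that is invoked repeatedly in the convergence and uniqueness arguments above (and again in Section 4); this is a standard fact, stated here only as a reference: if \(g\) is nonnegative and integrable on \([0,T]\) and \[g(t)\leq a+C\int_{0}^{t}g(s)\,ds,\quad 0\leq t\leq T,\] for constants \(a\geq 0\) and \(C>0\), then \(g(t)\leq a\,e^{Ct}\) for all \(t\in[0,T]\). In particular, \(a=0\) forces \(g\equiv 0\), which is how the limits and the identity \(u^{h}=v^{h}\) are obtained.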
**Proposition 3.12**.: _For \(N>1\), if \(h_{r}\) converges weakly to \(h\) as \(r\to 0\), then for any \(T>0\), \(\left(u^{h_{r}},\eta^{h_{r}}\right)\) converges to \(\left(u^{h},\eta^{h}\right)\)._ Proof.: We only need to show \[\lim_{r\to 0}\left(\sup_{0\leq t\leq T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{V}^{2}dt\right)=0 \tag{3.48}\] We have \[\begin{split}&\sup_{0\leq t\leq T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{V}^{2}dt\\ &\leq 3\sup_{0\leq t\leq T}\left\|u^{h_{r}}-u^{n,h_{r}}\right\|_{H}^{2}+3\sup_{0\leq t\leq T}\left\|u^{n,h_{r}}-u^{n,h}\right\|_{H}^{2}+3\sup_{0\leq t\leq T}\left\|u^{n,h}-u^{h}\right\|_{H}^{2}\\ &+6\int_{0}^{T}\left\|u^{h_{r}}-u^{n,h_{r}}\right\|_{V}^{2}dt+6\int_{0}^{T}\left\|u^{n,h_{r}}-u^{n,h}\right\|_{V}^{2}dt+6\int_{0}^{T}\left\|u^{n,h}-u^{h}\right\|_{V}^{2}dt\end{split} \tag{3.49}\] From Lemma 3.10, for any \(\alpha>0\), there exists \(N_{0}>0\) such that for any \(n\geq N_{0}\), \[\sup_{0\leq t\leq T}\left\|u^{n,h_{r}}-u^{h_{r}}\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{n,h_{r}}-u^{h_{r}}\right\|_{V}^{2}dt\leq\frac{1}{18}\alpha \tag{3.50}\] \[\sup_{0\leq t\leq T}\left\|u^{n,h}-u^{h}\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{n,h}-u^{h}\right\|_{V}^{2}dt\leq\frac{1}{18}\alpha \tag{3.51}\] From Lemma 3.11, there exists \(r_{0}>0\) such that for any \(r\in(0,r_{0})\), we have \[\sup_{0\leq t\leq T}\left\|u^{n,h_{r}}-u^{n,h}\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{n,h_{r}}-u^{n,h}\right\|_{V}^{2}dt\leq\frac{1}{18}\alpha \tag{3.52}\] Combining (3.50), (3.51) and (3.52), we can show \[\lim_{r\to 0}\left(\sup_{0\leq t\leq T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|u^{h_{r}}(t)-u^{h}(t)\right\|_{V}^{2}dt\right)=0\] ## 4. Large deviation principle For any \(\varepsilon>0\), we can define a measurable mapping \(\mathcal{G}^{\varepsilon}:C([0,T];H)\to C([0,T];H)\cap L^{2}([0,T];V)\) \[\mathcal{G}^{\varepsilon}(B(\cdot)):=u^{\varepsilon} \tag{4.1}\] where \(u^{\varepsilon}\) is the solution of (2.2). Let \(\left\{h^{\varepsilon}\right\}_{\varepsilon>0}\subset\mathcal{A}_{N}\); by the Girsanov theorem, \(Y^{\varepsilon}:=\mathcal{G}^{\varepsilon}\left(B(\cdot)+\frac{1}{\sqrt{\varepsilon}}\int_{0}^{\cdot}\dot{h}^{\varepsilon}(s)ds\right)\) is the solution of (4.2) \[\begin{cases}dY_{i}^{\varepsilon}(t,x)=\frac{\partial^{2}Y_{i}^{\varepsilon}(t,x)}{\partial x^{2}}dt+b_{i}\left(Y^{\varepsilon}(t,x)\right)dt+\sqrt{\varepsilon}\sum_{k=1}^{m}\sigma_{k}\left(Y^{\varepsilon}(t,x)\right)dB_{k}\\ \qquad+\sum_{k=1}^{m}\sigma_{k}\left(Y^{\varepsilon}(t,x)\right)\dot{h}_{k}^{\varepsilon}(t)dt-\gamma_{i}\left(Y^{\varepsilon}(t,x)\right)dk_{i}(t,x)\quad x\in[0,1]\\ Y^{\varepsilon}(0,\cdot)=(Y_{1}^{\varepsilon}(0,\cdot),Y_{2}^{\varepsilon}(0,\cdot),\cdots,Y_{n}^{\varepsilon}(0,\cdot))^{T}\in\tilde{\mathcal{O}}\\ Y^{\varepsilon}(t,0)=Y^{\varepsilon}(t,1)=0\end{cases} \tag{4.2}\] Let \(Z^{\varepsilon}=\mathcal{G}^{0}\left(\int_{0}^{\cdot}\dot{h}^{\varepsilon}(s)ds\right)\) be the solution of (4.3).
\[\begin{cases}dZ_{i}^{\varepsilon}(t,x)=\frac{\partial^{2}Z_{i}^{\varepsilon}(t,x)}{\partial x^{2}}dt+b_{i}\left(Z^{\varepsilon}(t,x)\right)dt+\sum_{j=1}^{m}\sigma_{j}\left(Z^{\varepsilon}(t,x)\right)\dot{h}_{j}^{\varepsilon}(t)dt\\ \qquad-\gamma_{i}\left(Z^{\varepsilon}(t,x)\right)dk_{i}^{\varepsilon,Z}(t,x),\quad x\in[0,1]\\ Z^{\varepsilon}(0,\cdot)=\left(Z_{1}^{\varepsilon}(0,\cdot),Z_{2}^{\varepsilon}(0,\cdot),\cdots,Z_{n}^{\varepsilon}(0,\cdot)\right)^{T}\in\tilde{\mathcal{O}}\\ Z^{\varepsilon}(t,0)=Z^{\varepsilon}(t,1)=0\end{cases} \tag{4.3}\] We use the penalization method to obtain some a priori estimates. Consider the penalized systems for \(Y^{\varepsilon}\) and \(Z^{\varepsilon}\): \[\begin{split}Y^{n,\varepsilon}(t,x)&=u(0,x)+\int_{0}^{t}\frac{\partial^{2}Y^{n,\varepsilon}(s,x)}{\partial x^{2}}ds+\int_{0}^{t}b\left(Y^{n,\varepsilon}(s,x)\right)ds+\int_{0}^{t}\sigma\left(Y^{n,\varepsilon}(s,x)\right)\dot{h}^{\varepsilon}(s)ds\\ &+\sqrt{\varepsilon}\int_{0}^{t}\sigma\left(Y^{n,\varepsilon}(s,x)\right)dB_{s}-n\int_{0}^{t}\gamma\left(Y^{n,\varepsilon}(s,x)\right)\left|Y^{n,\varepsilon}(s,x)-\pi\left(Y^{n,\varepsilon}(s,x)\right)\right|ds\end{split} \tag{4.4}\] \[\begin{split}Z^{n,\varepsilon}(t,x)&=u(0,x)+\int_{0}^{t}\frac{\partial^{2}Z^{n,\varepsilon}(s,x)}{\partial x^{2}}ds+\int_{0}^{t}b\left(Z^{n,\varepsilon}(s,x)\right)ds+\int_{0}^{t}\sigma\left(Z^{n,\varepsilon}(s,x)\right)\dot{h}^{\varepsilon}(s)ds\\ &-n\int_{0}^{t}\gamma\left(Z^{n,\varepsilon}(s,x)\right)\left|Z^{n,\varepsilon}(s,x)-\pi\left(Z^{n,\varepsilon}(s,x)\right)\right|ds\end{split} \tag{4.5}\] As \(n\to\infty\), we have \(Y^{n,\varepsilon}\to Y^{\varepsilon}\) and \(Z^{n,\varepsilon}\to Z^{\varepsilon}\). We need some a priori estimates for \(Y^{n,\varepsilon}\) and \(Z^{n,\varepsilon}\). The proofs are similar to those in [5], so we only sketch them here. **Lemma 4.1**.: _Let \(\nu^{n}\) denote either \(Y^{n,\varepsilon}\) or \(Z^{n,\varepsilon}\)._ 1. _There exists a constant_ \(M_{T}\) _such that_ \[\sup_{n}\mathbb{E}\left[\left(n\int_{0}^{T}\left\|\nu^{n}(t)-\pi\left(\nu^{n}(t)\right)\right\|_{L^{1}([0,1])}dt\right)^{2}\right]\leq M_{T}\] (4.6) 2. \[\lim_{n\to\infty}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|\nu^{n}(t)-\pi\left(\nu^{n}(t)\right)\right\|_{H}^{2}\right]=0\] (4.7) 3. \[\lim_{n\to\infty}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|\nu^{n}(t)-\pi\left(\nu^{n}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}\right]=0\] (4.8) 4. _Assume that_ \(\nu^{n}(0)\in V\)_. Then we have_ \[\sup_{n}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|\nu^{n}(t)\right\|_{V}^{2}\right]<\infty,\qquad\sup_{n}\mathbb{E}\left[\int_{0}^{T}\left\|\nu^{n}(t)\right\|_{H^{2}}^{2}dt\right]<\infty\] (4.9) Based on the penalized systems (4.4) and (4.5), and mimicking the proof of [5, Theorem 3.9], we have the following lemma.
**Lemma 4.2**.: _For any \(\{h^{\varepsilon}\}\subset\mathcal{A}_{N}\),_ \[\begin{split}\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|Y^{\varepsilon}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|Y^{\varepsilon}(t)\right\|_{V}^{2}dt\right]<\infty,\\ \mathbb{E}\left[\sup_{0\leq t\leq T}\left\|Z^{\varepsilon}(t)\right\|_{H}^{2}+\int_{0}^{T}\left\|Z^{\varepsilon}(t)\right\|_{V}^{2}dt\right]<\infty.\end{split} \tag{4.10}\] **Lemma 4.3**.: _There exists \(\lambda>0\) such that_ \[\begin{split}\lim_{\varepsilon\to 0}\{\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\exp\left\{-\lambda\int_{0}^{t}\left(\left\|Y^{\varepsilon}(s)\right\|_{V}^{2}+\left\|Z^{\varepsilon}(s)\right\|_{V}^{2}\right)ds\right\}\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{H}^{2}\right)\right]\\ +\mathbb{E}\left[\int_{0}^{T}\exp\left\{-\lambda\int_{0}^{t}\left(\left\|Y^{\varepsilon}(s)\right\|_{V}^{2}+\left\|Z^{\varepsilon}(s)\right\|_{V}^{2}\right)ds\right\}\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{V}^{2}dt\right]\right\}=0.\end{split} \tag{4.11}\] Proof.: We only need to show \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\{\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\exp\left\{-\lambda\int_{0}^{t}\left(\|Y^{n,\varepsilon}(s)\|_{V}^{2}+\|Z^{n,\varepsilon}(s)\|_{V}^{2}\right)ds\right\}\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\|_{H}^{2}\right)\right]\] \[+\mathbb{E}\left[\int_{0}^{T}\exp\left\{-\lambda\int_{0}^{t}\left(\|Y^{n,\varepsilon}(s)\|_{V}^{2}+\|Z^{n,\varepsilon}(s)\|_{V}^{2}\right)ds\right\}\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\|_{V}^{2}\,dt\right]\right\}=0. \tag{4.12}\] Define \[\psi^{n}(t)=\exp\left\{-\lambda\int_{0}^{t}\left(\|Y^{n,\varepsilon}(s)\|_{V}^{2}+\|Z^{n,\varepsilon}(s)\|_{V}^{2}\right)ds\right\}\] Applying Itô's formula, we have \[\begin{split}\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\|_{H}^{2}\,\psi^{n}(t)&=-\lambda\int_{0}^{t}\psi^{n}(s)\left(\|Y^{n,\varepsilon}(s)\|_{V}^{2}+\|Z^{n,\varepsilon}(s)\|_{V}^{2}\right)\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\|_{H}^{2}\,ds\\ &-2\int_{0}^{t}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{V}^{2}ds\\ &+2\int_{0}^{t}\psi^{n}(s)\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),b\left(Y^{n,\varepsilon}(s)\right)-b\left(Z^{n,\varepsilon}(s)\right)\right\rangle ds\\ &+2\sqrt{\varepsilon}\sum_{k=1}^{m}\int_{0}^{t}\psi^{n}(s)\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),\sigma_{k}\left(Y^{n,\varepsilon}(s)\right)\right\rangle dB_{k}(s)\\ &+\varepsilon\sum_{k=1}^{m}\int_{0}^{t}\psi^{n}(s)\left\|\sigma_{k}\left(Y^{n,\varepsilon}(s)\right)\right\|_{H}^{2}ds\\ &+2\int_{0}^{t}\psi^{n}(s)\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),\left[\sigma\left(Y^{n,\varepsilon}(s)\right)-\sigma\left(Z^{n,\varepsilon}(s)\right)\right]\dot{h}^{\varepsilon}(s)\right\rangle ds\\ &-2n\int_{0}^{t}\psi^{n}(s)\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),\gamma\left(Y^{n,\varepsilon}(s,x)\right)|Y^{n,\varepsilon}(s,x)-\pi\left(Y^{n,\varepsilon}(s,x)\right)|\right\rangle ds\\ &+2n\int_{0}^{t}\psi^{n}(s)\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),\gamma\left(Z^{n,\varepsilon}(s,x)\right)|Z^{n,\varepsilon}(s,x)-\pi\left(Z^{n,\varepsilon}(s,x)\right)|\right\rangle ds\\ &=I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}+I_{7}+I_{8}\end{split} \tag{4.13}\] \[I_{3}\leq\frac{1}{2}\sup_{0\leq s\leq T}\left(\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}\right)+C\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}ds \tag{4.14}\] Applying the Burkholder-Davis-Gundy inequality, we have \[\mathbb{E}\left[\sup_{0\leq t\leq T}I_{4}\right]
\leq 2\sqrt{\varepsilon}C\mathbb{E}\left\{\left[\sum_{k=1}^{m}\int_{0}^{T}\psi^{n}(s)^{2}\left\langle Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s),\sigma_{k}\left(Y^{n,\varepsilon}(s)\right)\right\rangle^{2}ds\right]^{\frac{1}{2}}\right\}\] \[\leq 2\sqrt{\varepsilon}C_{1}\mathbb{E}\left\{\sup_{0\leq s\leq T}\psi^{n}(s)^{\frac{1}{2}}\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}\left(\sum_{k=1}^{m}\int_{0}^{T}\psi^{n}(s)\left\|\sigma_{k}\left(Y^{n,\varepsilon}(s)\right)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\right\}\] \[\leq\frac{1}{4}\mathbb{E}\left[\sup_{0\leq s\leq T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}\right]+16\varepsilon C_{1}^{2}\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left(1+\|Y^{n,\varepsilon}(s)\|_{H}^{2}\right)ds\right] \tag{4.15}\] \[I_{5}\leq\varepsilon C\int_{0}^{T}\psi^{n}(s)\left(1+\|Y^{n,\varepsilon}(s)\|_{H}^{2}\right)ds \tag{4.16}\] \[I_{6}\leq\frac{1}{2}\sup_{0\leq s\leq T}\left(\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}\right)+C\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}ds \tag{4.17}\] \[\begin{split}I_{7}&=-2n\int_{0}^{t}\left\langle\psi^{n}(s)\left(Y^{n,\varepsilon}(s)-\pi\left(Z^{n,\varepsilon}(s)\right)\right),\gamma\left(Y^{n,\varepsilon}(s)\right)\left|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right|\right\rangle ds\\ &+2n\int_{0}^{t}\left\langle\psi^{n}(s)\left(Z^{n,\varepsilon}(s)-\pi\left(Z^{n,\varepsilon}(s)\right)\right),\gamma\left(Y^{n,\varepsilon}(s)\right)\left|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right|\right\rangle ds\\ &\leq C\left(\int_{0}^{t}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\left(n^{2}\int_{0}^{t}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right\|_{H}^{2}ds\right)^{\frac{1}{2}}\\ &+C\sup_{0\leq s\leq t}\psi^{n}(s)\left\|Z^{n,\varepsilon}(s)-\pi\left(Z^{n,\varepsilon}(s)\right)\right\|_{H}\times\left(n\int_{0}^{t}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right\|_{H}ds\right)\\ &+\sup_{0\leq s\leq t}\psi^{n}(s)\left\|Z^{n,\varepsilon}(s)-\pi\left(Z^{n,\varepsilon}(s)\right)\right\|_{L^{\infty}([0,1])}\times\left(2n\int_{0}^{t}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right\|_{L^{1}([0,1])}ds\right)\end{split} \tag{4.18}\] By a similar argument as above, the same estimate holds for \(I_{8}\).
Combining the estimates for \(I_{3}\), \(I_{4}\), \(I_{5}\), \(I_{6}\), \(I_{7}\) and \(I_{8}\) and taking expectations, we can deduce that \[\begin{split}&\mathbb{E}\left[\sup_{0\leq t\leq T}\psi^{n}(t)\left\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\right\|_{H}^{2}\right]+\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{V}^{2}ds\right]\\ &\leq\frac{5}{4}\mathbb{E}\left[\sup_{0\leq t\leq T}\psi^{n}(t)\left\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\right\|_{H}^{2}\right]+16\varepsilon C_{1}^{2}\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left(1+\left\|Y^{n,\varepsilon}(s)\right\|_{H}^{2}\right)ds\right]\\ &+\varepsilon C\,\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left(1+\left\|Y^{n,\varepsilon}(s)\right\|_{H}^{2}\right)ds\right]+\frac{1}{2}\mathbb{E}\left[\sup_{0\leq s\leq T}\left(\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}\right)\right]\\ &+C\,\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}ds\right]+C_{2}\left\{\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{H}^{2}ds\right]\right.\\ &\left.+\mathbb{E}\left[\sup_{0\leq t\leq T}\psi^{n}(t)\left\|Z^{n,\varepsilon}(t)-\pi\left(Z^{n,\varepsilon}(t)\right)\right\|_{H}^{2}\right]+\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|Y^{n,\varepsilon}(t)-\pi\left(Y^{n,\varepsilon}(t)\right)\right\|_{H}^{2}\right]\right\}^{\frac{1}{2}}\\ &+C_{3}\left\{\mathbb{E}\left[\sup_{0\leq t\leq T}\psi^{n}(t)\left\|Z^{n,\varepsilon}(t)-\pi\left(Z^{n,\varepsilon}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}\right]\right\}^{\frac{1}{2}}\times\left\{\mathbb{E}\left[\left(2n\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-\pi\left(Y^{n,\varepsilon}(s)\right)\right\|_{L^{1}([0,1])}ds\right)^{2}\right]\right\}^{\frac{1}{2}}\end{split} \tag{4.19}\] By Gronwall's inequality, noting that \(\psi^{n}(s)\leq 1\) for \(\lambda\) large enough, and applying Lemma 4.1, we have \[\begin{split}&\mathbb{E}\left[\sup_{0\leq t\leq T}\psi^{n}(t)\left\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\right\|_{H}^{2}\right]+\mathbb{E}\left[\int_{0}^{T}\psi^{n}(s)\left\|Y^{n,\varepsilon}(s)-Z^{n,\varepsilon}(s)\right\|_{V}^{2}ds\right]\\ &\leq C_{4}\left\{\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|Z^{n,\varepsilon}(t)-\pi\left(Z^{n,\varepsilon}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}\right]\right\}^{\frac{1}{2}}+C_{5}\left\{\mathbb{E}\left[\sup_{0\leq t\leq T}\left\|Y^{n,\varepsilon}(t)-\pi\left(Y^{n,\varepsilon}(t)\right)\right\|_{L^{\infty}([0,1])}^{2}\right]\right\}^{\frac{1}{2}}\end{split} \tag{4.20}\] Then we have \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\{\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\exp\left\{-\lambda\int_{0}^{t}\left(\left\|Y^{n,\varepsilon}(s)\right\|_{V}^{2}+\left\|Z^{n,\varepsilon}(s)\right\|_{V}^{2}\right)ds\right\}\left\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\right\|_{H}^{2}\right)\right] \tag{4.21}\] \[\qquad\qquad\qquad+\mathbb{E}\left[\int_{0}^{T}\exp\left\{-\lambda\int_{0}^{t}\left(\left\|Y^{n,\varepsilon}(s)\right\|_{V}^{2}+\left\|Z^{n,\varepsilon}(s)\right\|_{V}^{2}\right)ds\right\}\left\|Y^{n,\varepsilon}(t)-Z^{n,\varepsilon}(t)\right\|_{V}^{2}dt\right]\right\}=0.\] **Theorem 4.4**.: _Let \(u^{\varepsilon}\) be the unique solution to (2.2)._
_Then the family \(\left\{u^{\varepsilon}\right\}_{\varepsilon>0}\) satisfies an LDP on the space \(\mathcal{E}\) with rate function_ \[I(g)=\inf_{\left\{h\in\mathcal{H},\,g=\mathcal{G}^{0}\left(\int_{0}^{\cdot}\dot{h}(s)ds\right)\right\}}\left\{\frac{1}{2}\left\|h\right\|_{\mathcal{H}}^{2}\right\}\] Proof.: According to Theorem 2.5, to complete the proof of this theorem, it is sufficient to verify **(LDP1)** and **(LDP2)**. Proposition 3.12 establishes **(LDP2)**, so it remains to prove **(LDP1)**. Let \(\psi=\lim_{n\to\infty}\psi^{n}\). \[\mathbb{P}\left(\sup_{0\leq t\leq T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}+\int_{0}^{T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt>\delta\right)\] \[= \mathbb{P}\left(\sup_{0\leq t\leq T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}+\int_{0}^{T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt>\delta,\int_{0}^{T}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds>M\right)\] \[+\mathbb{P}\left(\sup_{0\leq t\leq T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}+\int_{0}^{T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt>\delta,\int_{0}^{T}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds\leq M\right)\] \[\leq \mathbb{P}\left(\int_{0}^{T}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds>M\right)\] \[+\mathbb{P}\left(\sup_{0\leq t\leq T}\exp\left\{-\lambda\int_{0}^{t}\left(\left\|Y^{\varepsilon}(s)\right\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds\right\}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}\right.\] \[\left.+\int_{0}^{T}\exp\left\{-\lambda\int_{0}^{t}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds\right\}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt\geq e^{-\lambda M}\delta\right)\] \[\leq \frac{1}{M}\mathbb{E}\left[\int_{0}^{T}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds\right]\] \[+\frac{e^{\lambda M}}{\delta}\mathbb{E}\left[\sup_{0\leq t\leq T}\psi(t)\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{H}^{2}+\int_{0}^{T}\psi(t)\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{V}^{2}\,dt\right]. \tag{4.22}\] From Lemma 4.2, for any \(\alpha>0\), there exists \(M>0\) such that \[\frac{1}{M}\mathbb{E}\left[\int_{0}^{T}\left(\|Y^{\varepsilon}(s)\|_{V}^{2}+\|Z^{\varepsilon}(s)\|_{V}^{2}\right)ds\right]\leq\frac{\alpha}{2} \tag{4.23}\] From Lemma 4.3, there exists \(\varepsilon_{0}>0\) such that for any \(\varepsilon\in(0,\varepsilon_{0})\) we have \[\frac{e^{\lambda M}}{\delta}\mathbb{E}\left[\sup_{0\leq t\leq T}\psi(t)\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{H}^{2}+\int_{0}^{T}\psi(t)\left\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\right\|_{V}^{2}\,dt\right]\leq\frac{\alpha}{2}. \tag{4.24}\] Combining (4.22), (4.23) and (4.24), we have for any \(\varepsilon\in(0,\varepsilon_{0})\) \[\mathbb{P}\left(\sup_{0\leq t\leq T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}+\int_{0}^{T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt>\delta\right)\leq\alpha \tag{4.25}\] Therefore we can easily show that \[\lim_{\varepsilon\to 0}\mathbb{P}\left(\sup_{0\leq t\leq T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{H}^{2}+\int_{0}^{T}\|Y^{\varepsilon}(t)-Z^{\varepsilon}(t)\|_{V}^{2}\,dt>\delta\right)=0. \tag{4.26}\]
2309.10803
Phase space analysis of two-wavelength interferometry
Multiple wavelength phase shifting interferometry is widely used to extend the unambiguous range (UR) beyond that of a single wavelength. Towards this end, many algorithms have been developed to calculate the optical path difference (OPD) from the phase measurements of multiple wavelengths. These algorithms fail when phase error exceeds a specific threshold. In this paper, we examine this failure condition. We introduce a "phase-space" view of multi-wavelength algorithms and demonstrate how this view may be used to understand an algorithm's robustness to phase measurement error. In particular, we show that the robustness of the synthetic wavelength algorithm deteriorates near the edges of its UR. We show that the robustness of de Groot's extended range algorithm [Appl. Opt. 33, 5948 (1994)] depends on both wavelength and OPD in a non-trivial manner. Further, we demonstrate that the algorithm developed by Houairi & Cassaing (HC) [J. Opt. Soc. Am. 26, 2503 (2009)] results in uniform robustness across the entire UR. Finally, we explore the effect that wavelength error has on the robustness of the HC algorithm.
Robert H. Leonard, Spencer E. Olson
2023-09-19T17:47:27Z
http://arxiv.org/abs/2309.10803v1
# Phase space analysis of two-wavelength interferometry

###### Abstract

Multiple wavelength phase shifting interferometry is widely used to extend the unambiguous range (UR) beyond that of a single wavelength. Towards this end, many algorithms have been developed to calculate the optical path difference (OPD) from the phase measurements of multiple wavelengths. These algorithms fail when phase error exceeds a specific threshold. In this paper, we examine this failure condition. We introduce a "phase-space" view of multi-wavelength algorithms and demonstrate how this view may be used to understand an algorithm's robustness to phase measurement error. In particular, we show that the robustness of the synthetic wavelength algorithm deteriorates near the edges of its UR. We show that the robustness of de Groot's extended range algorithm [1] depends on both wavelength and OPD in a non-trivial manner. Further, we demonstrate that the algorithm developed by Houairi & Cassaing (HC) [2] results in uniform robustness across the entire UR. Finally, we explore the effect that wavelength error has on the robustness of the HC algorithm.

## 1 Introduction

Determination of the absolute phase is important in applications such as interferometric synthetic aperture radar [3, 4, 5], strain/stress analysis [6], and atom interferometry [7]. Phase measurements are also critical in order to characterize a surface height via optical profilometry [8, 9], which can be used with a single wavelength or a more complex multiple-wavelength configuration. Single-wavelength phase shifting interferometry measures the phase difference modulo \(2\pi\), resulting in an \(n\lambda\) ambiguity in the optical path difference (OPD), where \(n\) is an integer and \(\lambda\) is the wavelength used. Consequently, the OPD may be unambiguously resolved when restricted to the range \(\pm\lambda/2\); we refer to such a range as the unambiguous range (UR), and denote the length of the range as \(|\)UR\(|\). Comparison of phase measurements from multiple wavelengths allows the UR to be extended beyond that of a single wavelength.
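The \(2\pi\) ambiguity is easy to reproduce numerically. The short script below is a sketch of the idea only (the helper name `wrapped_phase` is ours, not from the literature); it shows that OPDs separated by whole wavelengths produce identical single-wavelength phase measurements:

```python
import numpy as np

def wrapped_phase(d, wavelength):
    """Ideal single-wavelength phase for an OPD d, wrapped to [-pi, pi)."""
    phi = 2 * np.pi * d / wavelength
    return (phi + np.pi) % (2 * np.pi) - np.pi

lam = 0.700  # wavelength in micrometers
for d in (0.100, 0.100 + lam, 0.100 + 2 * lam):
    # All three OPDs differ by whole wavelengths and give the same phase.
    print(f"d = {d:.3f} um -> phi = {wrapped_phase(d, lam):+.4f} rad")
```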
Towards this end, many algorithms have been developed which incorporate phase measurements from multiple wavelengths to determine the absolute phase with a larger UR [1, 10, 11, 12, 13, 14]. In Sec. 6, we show that the robustness of de Groot's algorithm will generally vary with OPD. The condition under which de Groot's algorithm achieves maximum robustness is described. In Sec. 7, we demonstrate that the Houairi & Cassaing algorithm evenly partitions the phase space, resulting in uniform robustness over nearly the entire UR.
In Sec. 8 we examine the effect of wavelength error on the Houairi & Cassaing (HC) algorithm.

## 2 Maximum Unambiguous Range

We define the maximum UR as the largest range of OPD such that any two OPDs within this range are distinguishable when measurement error is absent. For an algorithm which uses a pair of phase measurements from two different wavelengths as input, two OPDs are indistinguishable when they result in the same measured phase pair \((\phi_{a},\phi_{b})\), where \(a\) and \(b\) denote data from the different wavelengths. Therefore, the minimum distance between two OPDs which result in the same phase pair will equal the maximum UR. We will denote the OPD as \(d\). To simplify the analysis, we will assume \(\lambda_{a}>\lambda_{b}\) throughout. Consider the measured phase pair \((0,0)\). A phase measurement will equal zero when \(d\) equals an integer multiple of \(\lambda\). Therefore, the phases pass through \((0,0)\) when \(d=n_{a}\lambda_{a}=n_{b}\lambda_{b}\), where \(n_{a},n_{b}\in\mathbb{Z}\). This Diophantine equation has a trivial solution at \(d=0\). Following the notation used by Houairi and Cassaing [2], we denote the next smallest integer pair which satisfies this equation as \(p,q\). Therefore, the length of the maximum UR may be written as \[|\text{UR}_{\max}|=p\lambda_{b}=q\lambda_{a} \tag{1}\] where \(p,q\) are the co-prime natural numbers which satisfy the equation on the right side of Eq. 1. This result has been previously noted [11, 13]. Note that Eq. 1 is only satisfied when \(\lambda_{a}\) and \(\lambda_{b}\) are commensurate. When \(\lambda_{a}\) and \(\lambda_{b}\) are incommensurate, the measured phase pairs never repeat, resulting in an infinite UR. In practice, the maximum desirable UR will be constrained by measurement error [2].

## 3 Phase-Space Representation

For two-wavelength interferometry with wavelengths \(\lambda_{a}\) and \(\lambda_{b}\), the measured phases in the absence of error are given by \[\phi_{i}=\frac{2\pi\,d}{\lambda_{i}}\mod 2\pi \tag{2}\] where \(i\in\{a,b\}\), and \(\mod\) is defined by the formula \[a\mod b\coloneqq a-b\left\lfloor\frac{a}{b}\right\rceil \tag{3}\] where \(\lfloor x\rceil\) denotes the nearest integer value to \(x\). Consider a graph in which \(\phi_{a}\) and \(\phi_{b}\) are plotted on the \(x\) and \(y\) axes respectively. In this representation, all measured phases fall within the space \((\phi_{a},\phi_{b})\in[-\pi,\pi)\times[-\pi,\pi)\); we call this _phase space_. In Fig. 1, we show ideal phase measurements plotted in phase space over the entire UR. This representation is easily constructed by parametrically plotting the phases using Eq. 2. Following the notation introduced in Ref. [2], we write the OPD (\(d\)) as follows \[d=\lambda_{a}(\bar{m}_{a}+\dot{m}_{a})=\lambda_{b}(\bar{m}_{b}+\dot{m}_{b}) \tag{4}\] where \(\bar{m}\in\mathbb{Z}\) and \(\dot{m}\in[-0.5,0.5]\). The value of \(\dot{m}\) is related to the measured phase by \[\dot{m}_{i}=\frac{\phi_{i}}{2\pi}. \tag{5}\] As shown in Fig. 1, as \(d\) increases from zero, which is found at \((\phi_{a},\phi_{b})=(0,0)\), the phases follow a line with slope \(r_{\lambda}=\lambda_{a}/\lambda_{b}\) in the phase space. When \(\phi_{a}\) or \(\phi_{b}\) reach \(\pm\pi\), the phase "wraps back" to \(\mp\pi\), creating a new line. This occurs when \(\dot{m}_{i}=\pm 0.5\). The new line corresponds to the incremented value of \(\bar{m}_{i}\). In this way, ideal phase measurements form a set of parallel lines where each line is uniquely associated with an integer pair \((\bar{m}_{a},\bar{m}_{b})\). When plotted over the entire UR, each line is displaced from the adjacent line by an equal amount. The displacement between adjacent lines is \[\Delta\phi_{i}=\frac{2\pi\lambda_{i}}{|\text{UR}_{\max}|}. \tag{6}\]

Figure 1: Ideal phase measurements for \(\lambda_{a}=700\) nm and \(\lambda_{b}=500\) nm over: the entire \(3.5\)\(\mu\)m UR (solid); the UR of the synthetic wavelength algorithm (square dots). The OPD (\(d\)) is represented by the color. At \(d=0\)\(\mu\)m the phase is \((0,0)\). As \(d\) increases the phases move upwards along the \((0,0)\) line. When \(\phi_{b}=\pi\), the phases jump to the bottom of the \((0,1)\) line. Starting from \(-\text{UR}_{\max}/2\), and increasing in \(d\), the ideal phases jump between the lines in the following order: \((-2,-3)\), \((-2,-2)\), \((-1,-2)\), \((-1,-1)\), \((0,-1)\), \((0,0)\), \((0,1)\), \((1,1)\), \((1,2)\), \((2,2)\), \((2,3)\).
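For commensurate wavelengths, Eq. 1 can be evaluated mechanically. The sketch below is our own illustration (not code from the paper); `fractions.Fraction` simply puts \(\lambda_{a}/\lambda_{b}\) in lowest terms, recovering the co-prime pair \(p,q\) and the \(3.5\ \mu\)m maximum UR quoted for Fig. 1:

```python
from fractions import Fraction

def max_unambiguous_range(lam_a, lam_b):
    """Return (p, q, |UR_max|) per Eq. 1, assuming commensurate wavelengths."""
    # Put lam_a / lam_b in lowest terms; limit_denominator guards against
    # floating-point wavelengths that are only approximately rational.
    ratio = (Fraction(lam_a).limit_denominator(10**6)
             / Fraction(lam_b).limit_denominator(10**6))
    p, q = ratio.numerator, ratio.denominator  # p / q = lam_a / lam_b, co-prime
    return p, q, p * lam_b                     # |UR_max| = p*lam_b = q*lam_a

print(max_unambiguous_range(700, 500))  # (7, 5, 3500): the 3.5 um UR of Fig. 1
```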
## 4 Phase-Space View of Phase Errors

Building on our knowledge of the phase space, we can imagine a two-wavelength interferometry algorithm which achieves the maximal UR. When error is present, measured phases deviate from the lines shown in Fig. 1. Because each line is associated with a unique integer pair \((\bar{m}_{a},\bar{m}_{b})\), we can determine the values of \(\bar{m}_{i}\) by identifying the line closest to the measured phase pair \((\phi_{a},\phi_{b})\). Once the values of \(\bar{m}_{i}\) are known, the OPD (\(d\)) is determined using Eqs. 4 and 5. Brug and Klaver [13] developed an algorithm similar to this using a lookup table for all phase values in the phase space. Phase measurement error affects interferometry algorithms in two ways: (1) phase error may result in an error in the calculated value of \(\dot{m}_{i}\) (a continuous variable); (2) phase error may result in the incorrect determination of \(\bar{m}_{i}\) (an integer variable). Because an incorrect determination of \(\bar{m}_{i}\) will result in a large error in the calculated OPD, we classify **(2)** as an algorithm failure and **(1)** as typical measurement error. It is helpful to think of the phase errors as a displacement vector in the phase space, \(\vec{\delta\phi}=(\delta\phi_{a},\delta\phi_{b})\), where \(\delta\phi_{i}\) is the error in \(\phi_{i}\). When possible, it is helpful to decompose the phase error into components corresponding to these different effects. To this end, another convenient coordinate system for error decomposition is parallel and perpendicular to the ideal phase lines (e.g. as shown in Fig. 1), denoted by \(\vec{\delta\phi}=(\delta\phi_{\perp},\delta\phi_{\parallel})\). An algorithm will incorrectly determine \(\bar{m}_{i}\) when phase error displaces the phase from its ideal value so that the measured phase is closer to a line corresponding to incorrect values of \(\bar{m}_{i}\). Consequently, \(\delta\phi_{\perp}\) is responsible for algorithm failure, whereas \(\delta\phi_{\parallel}\) is identified as simple measurement error. Any two-wavelength algorithm which uses Eq. 4 will result in two OPD calculations. In general, when both calculated values for OPD are combined into a weighted average, the resulting error in the OPD will depend on the phase error in a non-trivial way. For the special case in which both phase measurements have equal uncertainty, the weighted average of the two OPD results will correspond to a point exactly on the ideal phase line which is closest to the measured phase.
In this case, \(\delta\phi_{\parallel}\) is solely responsible for error in the calculated OPD. We will assume that both OPD results are combined into a weighted average for the remainder of this paper. When the phase uncertainties are the same for both wavelengths, \(\delta\phi_{\parallel}\) is related to the error in \(d\) by \[\delta d=\frac{\delta\phi_{\parallel}}{2\pi}\frac{\lambda_{a}\lambda_{b}}{\sqrt{\lambda_{a}^{2}+\lambda_{b}^{2}}}. \tag{7}\] \((\phi_{a},\phi_{b})\) is related to \((\phi_{\parallel},\phi_{\perp})\) by the rotational transformation \[(\phi_{\parallel},\phi_{\perp})^{\text{T}}=\mathbf{\mathcal{R}}_{-\theta}(\phi_{a},\phi_{b})^{\text{T}} \tag{8}\] where \(\theta\) is given by \[\theta=\tan^{-1}\left(\frac{\lambda_{a}}{\lambda_{b}}\right) \tag{9}\] and where \(\mathbf{\mathcal{R}}_{-\theta}\) is the typical rotation matrix about an angle \(-\theta\). Recall that an algorithm is deemed to have failed when the values of \(\bar{m}_{i}\) are determined incorrectly. We define robustness, \(R\), as the probability that the algorithm will succeed. Robustness may be calculated by integrating the probability distribution function for the measured phase values over the range of phase values for which the algorithm returns the correct values of \(\bar{m}_{i}\). For the remainder of this paper, we assume that measured phases deviate from ideal values according to a normal distribution with standard deviation equal to the measurement uncertainty, \(\sigma_{i}\). As we see in the next section, the robustness may depend on the OPD. Note that any algorithm which achieves the maximum UR will pass through the \((\pm\pi,\pm\pi)\) corners of the phase space when \(d=\pm\text{UR}_{\max}/2\). Because measured phases are constrained to fall within \([-\pi,\pi)\), discontinuities in the measured phases arise near the \((\pm\pi,\pm\pi)\) corners (Fig. 4a can be used to visualize this). For instance: when \(d=-\text{UR}_{\max}/2\), the phases should be \((-\pi,-\pi)\). A measurement error which would normally result in a small negative phase error now results in a phase error of nearly \(+2\pi\) (as measurement error can cause \(\phi_{a}\) to wrap to the right of Fig. 4a or \(\phi_{b}\) to wrap to the top of Fig. 4a). As a result, the robustness of an algorithm which achieves the maximum UR will deteriorate when the OPD is within measurement uncertainty of \(\pm|\text{UR}_{\max}|/2\). Specifically, the robustness will approach \(1/4\) as \(d\rightarrow\pm\left|\text{UR}_{\max}\right|/2\). An algorithm will avoid this failure mode when both phase errors satisfy the constraint \[\left|d_{0}+\lambda_{i}\frac{\delta\phi_{i}}{2\pi}\right|<\frac{\left|\text{UR}_{\max}\right|}{2}. \tag{10}\]

## 5 Synthetic Wavelength Algorithm

The synthetic wavelength algorithm was first introduced by J. C. Wyant [14]. The algorithm uses phase measurements from two wavelengths, \(\lambda_{a}\) and \(\lambda_{b}\), to calculate the phase that would be produced by a larger synthetic wavelength, \(\Lambda\), which is defined as \[\Lambda=\frac{\lambda_{a}\,\lambda_{b}}{\left|\lambda_{a}-\lambda_{b}\right|}. \tag{11}\] As a result, the synthetic wavelength algorithm has a UR of \(\pm\Lambda/2\). A major benefit of phase measurements from an effectively longer wavelength is to simplify the phase-unwrapping by reducing the many \(2\pi\) phase ambiguities. Additionally, a synthetic wavelength measurement produced from shorter wavelengths takes advantage of much easier optical elements, detectors, and cameras as compared to available resources for the longer wavelengths.
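A minimal numerical sketch of the basic synthetic wavelength algorithm is given below (our own illustration; it returns only the coarse \(\Lambda\)-scale estimate and does not refine it against the single-wavelength phases). With \(\lambda_{a}>\lambda_{b}\), the effective phase \(\phi_{b}-\phi_{a}\) advances by \(2\pi\) per synthetic wavelength of OPD:

```python
import numpy as np

def wrap(phi):
    """Wrap a phase to [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def opd_synthetic(phi_a, phi_b, lam_a, lam_b):
    """Coarse OPD estimate from the synthetic wavelength (Eq. 11), valid in +/- Lambda/2."""
    Lam = lam_a * lam_b / abs(lam_a - lam_b)     # Eq. 11
    phi_eff = wrap(phi_b - phi_a)                # effective phase; sign convention for lam_a > lam_b
    return Lam * phi_eff / (2 * np.pi)

lam_a, lam_b, d = 700.0, 500.0, 400.0            # nm; Lambda = 1750 nm
phi_a, phi_b = wrap(2*np.pi*d/lam_a), wrap(2*np.pi*d/lam_b)
print(opd_synthetic(phi_a, phi_b, lam_a, lam_b))  # ~400.0 nm, recovering d
```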
For most choices of wavelengths, the UR of the synthetic wavelength algorithm is less than the maximum UR that could be achieved for the same choice of wavelengths (as described in Sec. 2). When ideal phase measurements are plotted over the UR of the synthetic wavelength algorithm, the resulting lines terminate before they can form a set of evenly spaced lines. An example of this is shown in Fig. 1, where the truncated UR of the synthetic wavelength algorithm (square dots) is compared to the ideal case (solid lines). The irregular spacing of these ideal phase lines hints that the robustness of the synthetic wavelength algorithm may vary with OPD. Note that any two-wavelength algorithm may be thought of as a function which takes two phases as an input and returns an OPD; _i.e._ OPD \(\coloneqq d(\phi_{a},\phi_{b})\). With this in mind, and to further understand the behavior of the synthetic wavelength algorithm, the algorithm can be applied to a raster scan of phase values, \(\phi_{a}\) and \(\phi_{b}\), across the entire phase space, \([-\pi,\pi)\times[-\pi,\pi)\). Fig. 2 shows the OPD as calculated using the synthetic wavelength algorithm for \(\lambda_{a}=700\ \mathrm{nm}\) and \(\lambda_{b}=500\ \mathrm{nm}\) by doing such a raster scan of phase space. As we will show below, the qualitative features seen in Fig. 2 are common to any choice of wavelength, even when the UR of the synthetic wavelength algorithm equals the maximum UR. The phase-mapping of OPD shown in Fig. 2 is characterized by continuous bands of OPD (\(d\)), where all points within each continuous region correspond to a unique pair \((\bar{m}_{a},\bar{m}_{b})\). The continuous bands of OPD are bounded by discontinuities in \(d\) that represent sudden changes in the values of \(\bar{m}_{i}\). When the OPD is within the algorithm's UR, and measurement error is absent, the measured phases \((\phi_{a},\phi_{b})\) lie exactly on the ideal phase (solid) lines shown in Fig. 2, and the values of \(\bar{m}_{i}\) are easily resolved. When error is present, the measured phase is displaced from the ideal phase; as long as the displacement is small enough to remain within the same continuous band, the values of \(\bar{m}_{i}\) are resolved correctly. On the other hand, when the measured and ideal phases are separated by a discontinuity boundary, the values of \(\bar{m}_{i}\) are incorrectly resolved. Near \(d=0\), the discontinuities in the calculated OPD are parallel to the ideal phase lines, and the ideal phase lines are evenly spaced. This appears to be a common feature of all two-wavelength interferometry algorithms. For the synthetic wavelength algorithm, these discontinuities are located equidistant from adjacent ideal phase lines. In this region, the spacing between adjacent ideal phase lines, \(\Delta\phi_{i}\), is found by replacing \(\left|\text{UR}_{\max}\right|\) with \(\Lambda\) in Eq. 6. Thus, the values of \(\bar{m}_{i}\) are determined correctly when \[\left|\delta\phi_{\perp}\right|<\pi\left(\frac{\lambda_{a}-\lambda_{b}}{\sqrt{\lambda_{a}^{2}+\lambda_{b}^{2}}}\right). \tag{12}\]

Figure 2: Calculated OPD (\(d\)) for \(\lambda_{a}=700\ \mathrm{nm}\) and \(\lambda_{b}=500\ \mathrm{nm}\) using the synthetic wavelength algorithm. The solid line shows the expected phase values when measurement error is absent; note that the solid line does not evenly fill the phase space, as occurs in Fig. 1. Similar to Fig. 1, the color scale represents OPD (\(d\)) and \(d=0\ \mu\)m occurs at \((\phi_{a},\phi_{b})=(0,0)\).
This corresponds to the well-known condition \(|\delta\bar{m}_{i}|<1/2\) [2, 15, 16], where \(\delta\bar{m}\) is the error in \(\bar{m}\) before rounding to the nearest integer. For an OPD in the region dominated by this condition, the synthetic wavelength algorithm has robustness given by \[R=\left(\frac{1}{2\pi\sigma_{a}\sigma_{b}}\right)\int_{-\infty}^{+\infty}\int_{r_{\lambda}x-\frac{\Delta\phi_{a}}{2}}^{r_{\lambda}x+\frac{\Delta\phi_{a}}{2}}\mathrm{e}^{-\frac{1}{2}\left(\frac{x^{2}}{\sigma_{a}^{2}}+\frac{y^{2}}{\sigma_{b}^{2}}\right)}\mathrm{d}y\,\mathrm{d}x. \tag{13}\] When \(\sigma_{a}=\sigma_{b}=\sigma\), Eq. 13 reduces to \[R=\mathrm{erf}\left(\frac{\Delta\phi_{a}\Delta\phi_{b}}{2\sigma\sqrt{2(\Delta\phi_{a}^{2}+\Delta\phi_{b}^{2})}}\right). \tag{14}\] Near the edges of the OPD range, we see that the space between the ideal phase lines and the discontinuities begins to shrink. The change in the behavior of the algorithm is related to a second condition \(|\phi_{e,0}+\delta\phi_{e}|=|\phi_{e,0}-\delta\phi_{a}+\delta\phi_{b}|<\pi\), where \(\phi_{e,0}\) is the effective phase in the absence of error. Writing this constraint in terms of \(\delta\phi_{\perp}\) gives \[\left|2\pi\frac{\tilde{d}}{\Lambda}-(\sin\theta+\cos\theta)\delta\phi_{\perp}\right|<\pi \tag{15}\] where \(\theta\) is defined in Eq. 9, and we've used Eq. 7 to rewrite the equation in terms of the measured OPD \(\tilde{d}\), where \(\tilde{d}=d_{0}+\delta d\) and \(d_{0}\) is the OPD in the absence of any parallel phase error \(\delta\phi_{\parallel}\). In Fig. 3, these constraints are plotted across the entire UR for the synthetic wavelength algorithm when \(\lambda_{a}=700\ \mathrm{nm}\) and \(\lambda_{b}=500\ \mathrm{nm}\). As shown in Fig. 2, the interaction of these constraints can effectively partition the phase space in a non-uniform and uneven manner, making the correct determination of the values of \(\bar{m}_{i}\) much more challenging. In Sec. 7, we discuss the robustness of the algorithms presented in this paper in more detail (see Fig. 5 in Sec. 7, where the robustness of the synthetic wavelength algorithm over the UR of the algorithm is shown in comparison to the HC algorithm).

## 6 De Groot Algorithm

De Groot proposed an algorithm which extends the unambiguous range beyond that of the synthetic wavelength algorithm. To achieve this, de Groot's algorithm resolves the \(\Lambda\) ambiguity which will arise when applying the synthetic wavelength algorithm to a range greater than \(\pm\Lambda/2\) [1]. With this in mind, we note that the OPD may be written as \[d=\Lambda(\bar{M}+\dot{M}). \tag{16}\] We introduce a quantity which will prove important to the analysis of this algorithm \[f_{i}=\left|\frac{1}{\Lambda/\lambda_{i}-\lfloor\Lambda/\lambda_{i}\rceil}\right|. \tag{17}\] It is straightforward to show that \(f_{a}=f_{b}\) and \(f\geq 2\) for all wavelengths. The UR of de Groot's algorithm may be written as \(\mathrm{UR}=\pm\Lambda\left\lfloor f\right\rfloor/2\).
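As a concrete check of Eq. 17 (a sketch under our own naming; `np.round` stands in for the nearest-integer bracket), the example wavelengths used throughout give \(f=2\), so de Groot's UR coincides with the full \(3.5\ \mu\)m maximum, while the wavelength pair of Fig. 4(b) gives \(f\approx 2.49\):

```python
import numpy as np

def de_groot_f(lam_a, lam_b):
    """Evaluate f from Eq. 17 (assumes Lambda/lambda is not exactly an integer)."""
    Lam = lam_a * lam_b / abs(lam_a - lam_b)     # synthetic wavelength, Eq. 11
    frac = Lam / lam_a - np.round(Lam / lam_a)   # lam_b yields the same value of f
    return abs(1.0 / frac)

print(de_groot_f(700.0, 500.0))    # 2.0  -> UR = +/- Lambda*2/2, the full 3.5 um
print(de_groot_f(708.194, 500.0))  # ~2.49, the Fig. 4(b) case
```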
Following the procedure described in Sec. 5, we apply the de Groot algorithm to a raster scan of phase values in the phase space. When performing this analysis, we find that the algorithm exhibits qualitatively different behavior depending on the difference between \(f\) and the nearest integer value, given by \(f-\lfloor f\rceil\). When \(f-\lfloor f\rceil=0\), \(f\in\mathbb{Z}\) and \(\mathrm{UR}=\pm\Lambda f/2\). An example of the output of de Groot's algorithm when \(f-\lfloor f\rceil=0\) is shown in Fig. 4(a). In this scenario, de Groot's algorithm produces an even partitioning of the phase space. As with the synthetic wavelength algorithm, discontinuities in the algorithm's output are related to constraints on the phase error. For de Groot's algorithm, this constraint is \[\left|\delta\dot{M}\right|<\frac{1}{2} \tag{18}\] which may be rewritten in terms of \(\delta\phi_{\perp}\) as \[\left|\delta\phi_{\perp}\right|<\frac{\pi}{f}\left(\frac{\lambda_{a}-\lambda_{b}}{\sqrt{\lambda_{a}^{2}+\lambda_{b}^{2}}}\right). \tag{19}\] The maximum allowed \(\delta\phi_{\perp}\) for de Groot's algorithm is smaller than the maximum \(\delta\phi_{\perp}\) allowed by the synthetic wavelength algorithm by a factor of \(f\). However, de Groot's algorithm extends the UR of the synthetic wavelength algorithm by a factor of \(f\). The inverse relationship between the maximum allowed \(\delta\phi_{\perp}\) and the UR is a common feature of all algorithms, since extending the UR simply means that the bands of OPD continuity get more dense. In terms of \(\mathrm{UR}/\lambda\), both the basic synthetic wavelength algorithm and de Groot's algorithm are equally robust to phase error. However, unlike the synthetic wavelength algorithm, the robustness of de Groot's algorithm does not deteriorate near the edges of the UR. The output of de Groot's algorithm becomes more complicated when \(f-\lfloor f\rceil\neq 0\). Eqs. (10-13) from de Groot's paper [1] feature rounding functions that are used to extend the UR. Each rounding function is a source of discontinuity in the algorithm's output. In Fig. 4 we relate the discontinuities in the algorithm's output to Eqs. (10-13) from Ref. [1] for different values of \(f\). While it is difficult to make general statements regarding the robustness of de Groot's algorithm, a few observations can be made. First, the robustness of de Groot's algorithm depends on OPD when \(f-\lfloor f\rceil\neq 0\). Second, when averaged over the entire UR, robustness decreases as \(f-\lfloor f\rceil\) moves further from zero.

Figure 3: Phase error constraints for the synthetic wavelength algorithm. Note that these constraints agree with the discontinuities in Fig. 2. This can be visualized by "stitching" together the partitions of Fig. 2 so that the ideal phase lines form a continuous straight line with increasing OPD.

## 7 Houairi & Cassaing Algorithm

An algorithm which achieves the maximum UR for a given choice of wavelengths was developed by Houairi and Cassaing [2]; we refer to this as the HC algorithm. The HC algorithm utilizes the arithmetic properties of the fundamental Diophantine equation relating the phase measured at one wavelength to the phase measured at the other wavelength, as described in Sec. 2, to disambiguate the phases to a greater extent than the typical synthetic wavelength algorithm. Unlike the other algorithms examined in this paper, the spacing between the discontinuities and the ideal phase lines for this algorithm remains constant over the entire UR.
The HC algorithm will correctly resolve the values of \(\bar{m}_{i}\) as long as the phase errors satisfy the constraint [2] \[\left|-p\,\delta\phi_{a}+q\,\delta\phi_{b}\right|<\pi, \tag{20}\] where \(p\) and \(q\) are as defined in Eq. 1, and also the constraint for the end of the UR as shown in Eq. 10. The constraint in Eq. 20 may also be rewritten in terms of \(\delta\phi_{\perp}\) as \[|\delta\phi_{\perp}|<\frac{\pi}{\sqrt{p^{2}+q^{2}}}. \tag{21}\] Note that the HC algorithm is equivalent to de Groot's algorithm when \(f-\lfloor f\rceil=0\). The robustness of the HC algorithm, calculated following the description at the end of Sec. 4, results in Eq. 13 or Eq. 14, where \(\Delta\phi_{i}\) are given by Eq. 6. In Fig. 5 the robustness of the synthetic wavelength and HC algorithms are calculated and compared across the entire UR. Wavelengths \(\lambda_{a}=700\)\(\mathrm{nm}\) and \(\lambda_{b}=500\)\(\mathrm{nm}\) are used for the HC algorithm. For the synthetic wavelength algorithm, wavelengths \(\lambda_{a}=619.824\)\(\mathrm{nm}\) and \(\lambda_{b}=526.572\)\(\mathrm{nm}\) are used to achieve a similar UR and robustness near \(d=0\) as for the HC algorithm. Phase errors of \(\sigma_{a}=\sigma_{b}=0.0939\) radians are used to calculate the robustness of both algorithms.

Figure 4: Output of de Groot's algorithm for (a) \(f=2.00\), (b) \(f=2.49\), (c) \(f=2.51\), and (d) \(f=3.49\). Note that these values of \(f\) are chosen to maximize the complex behavior of de Groot's algorithm when \(f-\lfloor f\rceil\) is far from zero. The wavelengths chosen are \(500\)\(\mathrm{nm}\) and \(700\)\(\mathrm{nm}\), \(500\)\(\mathrm{nm}\) and \(708.194\)\(\mathrm{nm}\), \(500\)\(\mathrm{nm}\) and \(708.472\)\(\mathrm{nm}\), and \(500\)\(\mathrm{nm}\) and \(718.672\)\(\mathrm{nm}\), respectively. Each discontinuity is associated with an equation in de Groot's paper [1]. Note that some discontinuities are smaller than a wavelength and, therefore, difficult to see. The rounding function in Ref. [1] Eq. (12) does not create discontinuities in the algorithm's output. The HC algorithm produces the same output as de Groot's algorithm for \(\lambda_{a}=700\)\(\mathrm{nm}\) and \(\lambda_{b}=500\)\(\mathrm{nm}\). Unlike de Groot's algorithm, the HC algorithm always produces an even partitioning of the phase space. As a result, the HC algorithm will always produce an output similar to the figure in the top left.
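The uniform robustness of the HC algorithm away from the UR edges can be evaluated directly from Eq. 14 with the line spacings of Eq. 6. The snippet below is our own sketch (equal phase noise on both wavelengths assumed); it reproduces a near-unity robustness for the Fig. 5 parameters:

```python
from math import erf, pi, sqrt

def hc_robustness(lam_a, lam_b, ur, sigma):
    """Eq. 14 with the line spacings of Eq. 6 (sigma_a = sigma_b = sigma assumed)."""
    dpa = 2 * pi * lam_a / ur                    # Eq. 6
    dpb = 2 * pi * lam_b / ur
    return erf(dpa * dpb / (2 * sigma * sqrt(2 * (dpa**2 + dpb**2))))

# Fig. 5 parameters: 700/500 nm, |UR_max| = 3500 nm, sigma = 0.0939 rad
print(hc_robustness(700.0, 500.0, 3500.0, 0.0939))  # ~0.9999
```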
We may conceptually understand the effects of wavelength error by plotting the ideal phase values across phase space, given by \(\phi_{0,i}=2\pi\,d/\lambda_{0,i}\mod 2\pi\), against the OPD (\(d\)) from the HC algorithm using \(\tilde{\lambda}_{i}\). This phase-space representation, presented in Fig. 6, shows that the slope of the ideal phase lines no longer matches the slope of the discontinuities in the algorithm's output. As a result, ideal phase values move closer to one discontinuity and further from the other. This can be thought of as a systematic contribution to \(\delta\phi_{\perp}\). The relationship between \(\delta\lambda_{i}\) and the contribution to \(\delta\phi_{\perp}\) caused by \(\delta\lambda_{i}\), which we denote as \(\delta\phi_{\lambda,\perp}\), is given by \[\delta\phi_{\lambda,\perp}=2\pi\left(\frac{\delta r_{\lambda}}{r_{\lambda}}\right)\frac{d}{\sqrt{\lambda_{a}^{2}+\lambda_{b}^{2}}} \tag{22}\] where \(r_{\lambda}=\lambda_{a}/\lambda_{b}\), so that \(\delta r_{\lambda}\) is given by \[\delta r_{\lambda}=\frac{\lambda_{b}\delta\lambda_{a}-\lambda_{a}\delta\lambda_{b}}{\lambda_{b}^{2}}. \tag{23}\] When the phase uncertainty is the same for both phases and considering both phase and wavelength errors, the robustness of the HC algorithm is then given by \[\begin{split}R\approx\frac{1}{2}\left(\mathrm{erf}\left(\frac{\pi\,\lambda_{a}\,\lambda_{b}}{\sigma\,|\mathrm{UR}|\sqrt{2(\lambda_{a}^{2}+\lambda_{b}^{2})}}-\frac{\delta\phi_{\lambda,\perp}}{\sqrt{2}\sigma}\right)\right.\\ \left.-\mathrm{erf}\left(-\frac{\pi\,\lambda_{a}\,\lambda_{b}}{\sigma\,|\mathrm{UR}|\sqrt{2(\lambda_{a}^{2}+\lambda_{b}^{2})}}-\frac{\delta\phi_{\lambda,\perp}}{\sqrt{2}\sigma}\right)\right)\end{split} \tag{24}\] Fig. 7 demonstrates the robustness \(R\) from Eq. 24 resulting from either a small or a large error in one of the two wavelengths used for the HC algorithm. ## 9 Conclusions In this paper we introduced the idea of the phase space as a tool for analyzing the behavior of multi-wavelength interferometry algorithms. We show that the component of the phase error perpendicular to the ideal phase lines is solely responsible for algorithm success. When the phase uncertainty is the same for both wavelengths, we find that the component of the phase error parallel to the ideal phase lines is responsible for error in the calculated OPD. We note that the robustness of an algorithm which does not achieve the maximum UR will likely depend on the OPD. In particular, we show that the robustness of the synthetic wavelength algorithm decreases when the OPD is within \(\lambda/2\) of \(\pm\Lambda/2\). We show that the drop in robustness is associated with an overlooked constraint of the synthetic wavelength algorithm. We show that the robustness of de Groot's algorithm depends on wavelength choice and OPD in a non-trivial way. Finally, we show that the HC algorithm results in an optimal partitioning of the phase space. This even partitioning results in uniform robustness across the entire UR. We examined the effect of wavelength error on the robustness of the HC algorithm and found that wavelength errors result in a robustness that decreases as \(|d|\) increases. **Funding** This work was funded by the Air Force Office of Scientific Research under lab task 22RV-COR017. **Disclaimer** The views expressed are those of the authors and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. Government.
2309.04876
News-driven Expectations and Volatility Clustering
Financial volatility obeys two fascinating empirical regularities that apply to various assets, on various markets, and on various time scales: it is fat-tailed (more precisely power-law distributed) and it tends to be clustered in time. Many interesting models have been proposed to account for these regularities, notably agent-based models, which mimic the two empirical laws through a complex mix of nonlinear mechanisms such as traders' switching between trading strategies in highly nonlinear way. This paper explains the two regularities simply in terms of traders' attitudes towards news, an explanation that follows almost by definition of the traditional dichotomy of financial market participants, investors versus speculators, whose behaviors are reduced to their simplest forms. Long-run investors' valuations of an asset are assumed to follow a news-driven random walk, thus capturing the investors' persistent, long memory of fundamental news. Short-term speculators' anticipated returns, on the other hand, are assumed to follow a news-driven autoregressive process, capturing their shorter memory of fundamental news, and, by the same token, the feedback intrinsic to the short-sighted, trend-following (or herding) mindset of speculators. These simple, linear, models of traders' expectations, it is shown, explain the two financial regularities in a generic and robust way. Rational expectations, the dominant model of traders' expectations, is not assumed here, owing to the famous no-speculation, no-trade results
Sabiou Inoua
2023-09-09T21:05:07Z
http://arxiv.org/abs/2309.04876v1
# News-driven Expectations and Volatility Clustering ###### Abstract Financial volatility obeys two fascinating empirical regularities that apply to various assets, on various markets, and on various time scales: it is fat-tailed (more precisely power-law distributed) and it tends to be clustered in time. Many interesting models have been proposed to account for these regularities, notably agent-based models, which mimic the two empirical laws through a complex mix of nonlinear mechanisms such as traders' switching between trading strategies in a highly nonlinear way. This paper explains the two regularities simply in terms of traders' attitudes towards news, an explanation that follows almost by definition of the traditional dichotomy of financial market participants, investors versus speculators, whose behaviors are reduced to their simplest forms. Long-run investors' valuations of an asset are assumed to follow a news-driven random walk, thus capturing the investors' persistent, long memory of fundamental news. Short-term speculators' anticipated returns, on the other hand, are assumed to follow a news-driven autoregressive process, capturing their shorter memory of fundamental news, and, by the same token, the feedback intrinsic to the short-sighted, trend-following (or herding) mindset of speculators. These simple, linear models of traders' expectations, it is shown, explain the two financial regularities in a generic and robust way. Rational expectations, the dominant model of traders' expectations, is not assumed here, owing to the famous no-speculation, no-trade results. volatility clustering; power law; trend following; efficient market hypothesis; liquidity ## 1 Introduction A meticulous and extensive study of high-frequency financial data by various researchers reveals important empirical regularities. Financial volatility, in particular, obeys two well-established empirical laws that attracted special attention in the literature: it is fat-tailed (in fact power-law distributed with an exponent often close to 3) and it tends to be clustered in time, unfolding through intense bursts of high instability interrupting calmer periods (Mandelbrot, 1963; Fama, 1963; Ding, Granger, & Engle, 1993; Gopikrishnan, Meyer, Amaral, & Stanley, 1998; Lux, 1998; Plerou, Gabaix, Stanley, & Gopikrishnan, 2006; Cont, 2007; Bouchaud, 2011). The first regularity implies that extreme price changes are much more likely than the standard assumption of a normal distribution suggests. The second property, volatility clustering, reveals a nontrivial predictability in the return process, whose sign is uncorrelated but whose amplitude is long-range correlated. These are fascinating regularities that apply to various financial products (commodities, stocks, indices, exchange rates, CDS1) on various markets and on various time scales. Footnote 1: See, e.g., Bouchaud and Challet (2016). The universality and robustness of these laws (illustrated graphically in section 2) suggest that there must be some _basic, permanent, and general_ mechanisms causing them (an intuition that shall be the heuristic and guiding principle throughout this paper). Identifying these causes requires going back to the basics of financial theory and contrasting the two major paradigms on financial fluctuations.
The dominant view today, the efficient market hypothesis, treats an asset's price as following a random walk _exogenously driven by fundamental news_ (Bachelier, 1900; Osborne, 1959; Fama, 1963; Cootner, 1964; Fama, 1965a, 1965b; Samuelson, 1965; Fama, 1970). On the other hand is the growing resurgence of an old view of financial markets that insists on _endogenous amplifying feedback mechanisms_ caused by mimetic or trend-following speculative expectations, fueled by credit, and responsible for bubbles and crashes (Fisher, 1933; Keynes, 1936; Shiller, 1980; Smith, Suchanek, & Williams, 1988; Cutler, Poterba, & Summers, 1989; Orlean, 1989; Cutler, Poterba, & Summers, 1990; Minsky, 1992; Caginalp, Porter, & Smith, 2000; Barberis & Thaler, 2003; Porter & Smith, 2003; Akerlof & Shiller, 2010; Shaikh, 2010; Bouchaud, 2011; Dickhaut, Lin, Porter, & Smith, 2012; Keen, 2013; Palan, 2013; Soros, 2013; Gjerstad & Smith, 2014; Soros, 2015).2 Footnote 2: The nuance in this diverse literature on endogenous financial instability, already clearly articulated by the classical economists (Inoua & Smith, 2020), lies perhaps in the nature of the ultimate destabilizing force that is specifically emphasized in each tradition, notably human psychology (Keynes and behavioral finance) or the easy bank-issued liquidity that backs or fuels the speculative euphoria and without which the latter would be of no significant macroeconomic harm (Fisher, Minsky, Kindleberger, etc., and the classical economists who preceded them). The endogenous cause of financial volatility, probably predominant in empirical data (Bouchaud, 2011), is particularly taken seriously in agent-based models, which, unlike neoclassical finance, deal explicitly with the traditional dichotomy of financial participants, investors versus speculators (often named differently), extending it to include other types of players; besides traders' heterogeneity, these models also insist on traders' learning, adaptation, interaction, etc. They generate realistic fat-tailed and clustered volatility, but typically through a _complex mix of nonlinear mechanisms_, notably traders' switching between trading strategies. These interesting models of financial volatility have already been carefully reviewed elsewhere (Cont, 2007; Samanidou, Zschischang, Stauffer, & Lux, 2007; He, Li, & Wang, 2016; Lux & Alfarano, 2016). The realism of these models comes at a price, however; for it is not easy to isolate basic causes of the financial regularities amid a mathematically intractable complex of highly nonlinear processes at work simultaneously3. So, while this literature contributed significantly to a faithful picture of financial markets, it is not completely satisfactory for a basic reason: the sophisticated trading behaviors commonly assumed in this literature, handled through modern computers, are hardly a natural explanation of the financial regularities, whose discovery (let alone validity) goes back to the 1960s, an early and more rudimentary stage of finance. There must be, in other words, something of a most fundamental nature, some permanent cause intrinsic to the very act of financial trading, that is causing these regularities.
GARCH models are perhaps more popular and more parsimonious models of the two regularities than agent-based models (Engle, 1982; Bollerslev, 1986; Bollerslev, Chou, & Kroner, 1992); but these statistical models are hardly a theoretical explanation of the empirical laws from explicit economic mechanisms; when fitted to empirical data, moreover, they imply an infinite-variance return process, the integrated GARCH (or IGARCH) model (Engle & Bollerslev, 1986), which corresponds to a more extreme randomness than the empirical one (Mikosch & Starica, 2000, 2003). Footnote 3: Other types of models are also suggested for the power law more specifically; one of them, e.g., relates the power law of return to the trades of very large institutional investors (Plerou et al., 2006). This paper explains the two regularities simply in terms of traders' attitudes towards news, an explanation that follows almost by definition of the traditional dichotomy of financial market participants, investors versus speculators, whose behaviors are reduced to their simplest forms. Long-run investors' valuations of an asset are assumed to follow a news-driven random walk, thus capturing the investors' persistent, long memory of fundamental news. Short-term speculators' anticipated returns, on the other hand, are assumed to follow a news-driven autoregressive process, capturing their shorter memory of news, and, by the same token, the feedback intrinsic to their short-memory, trend-following (or herding) mindset. These simple, linear, models of traders' expectations, it is shown below, explain the two financial regularities in a generic and robust way. Rational expectations, the dominant model of traders' expectations, is not assumed here, owing to the famous no-speculation, no-trade results (Milgrom & Stokey, 1982; Tirole, 1982). In fact there seems to be an intrinsic difficulty in building a realistic theory of high-frequency volatility of financial markets, caused by incessant trading at almost all time scales and often driven by short-term speculative gains, from rational expectations, since they typically lead to a no-speculation, no-trade equilibrium. The model this paper suggests can be viewed as a _simple theory_ of the interplay between the exogenous and endogenous causes of financial volatility, and, by the same token, identifies the two components to be responsible for the two regularities, reducing them to basic, _linear mechanisms_; it is a synthesis of the two paradigms above-mentioned, avoiding the caveats on both sides: the no-trade problem inherent to the neoclassical formulation of the news-driven random walk model, and the nonlinear complexity characteristic of agent-based models. The power-law tail of volatility can be shown to derive intrinsically from the self-reinforcing amplifications inherent to herding or trend-following speculative trading. Trend-following speculation, for example, which is a popular financial practice, leads directly to a random-coefficient autoregressive (RCAR) return process in a competitive financial market, assuming a simple _linear_ competitive price adjustment, as recent empirical evidence suggests (Cont, Kukanov, & Stoikov, 2014). The RCAR model derives naturally, provided that trend-following is modeled, not in terms of moving averages of past prices (as often assumed in the agent-based literature) but in terms of _moving averages of past returns_, which is more natural and more convenient (Beekhuizen & Hallerbach, 2017). 
The power-law tail of such processes is rigorously proven in the mathematical literature4 (Kesten, 1973; Kluppelberg & Pergamenchtchikov, 2004; Buraczewski, Damek, & Mikosch, 2016). But it can be proven that the RCAR model, briefly derived below5 (in section 3), cannot explain volatility clustering (Mikosch & Starica, 2000; Basrak, Davis, & Mikosch, 2002; Mikosch & Starica, 2003; Buraczewski et al., 2016).6 The basic cause of clustered volatility, this paper suggests, is none other than the impact of exogenous news on expectations. A more general model is therefore suggested that includes, as usual, a second class of agents besides speculators: fundamental-value investors, who attach a real value to an asset and buy it when they think the asset is underpriced, or sell it, otherwise, updating additively their valuations with the advent of fundamental, exogenous news; the amount of information a news item reveals to the traders about the asset's worth can be precisely quantified by the log-probability of the news, as is known from information theory (Shannon, 1948). Speculators' expectations are more subtle, since they are at least partly endogenous. The simplest compromise consists of modeling the speculators' anticipated return as a first-order autoregressive process with a coefficient that is lower than 1, to capture speculative self-reinforcing feedback, but close enough to 1, so that exogenous news has a persistent enough impact on speculators' expectations as well. This extended model is superior, not only by generating both the fat-tailed and clustered volatility, but also by ensuring stationarity of the return process. ## 2 The empirical regularities Let \(P_{t}\) be the price of a financial asset at the closing of period \(t\), and let the return (or relative price change) be \(r_{t}=(P_{t}-P_{t-1})/P_{t-1}\). The two empirical regularities read formally: (1) \(\mathbb{P}(|r_{t}|>x)\sim Cx^{-\alpha}\), for big values \(x\), where often \(\alpha\approx 3\) (\(C\) being merely a normalizing constant); (2) \(\mathrm{cor}(|r_{t}|,|r_{t+h}|)>0\) over a long range of lags \(h>0\), while \(\mathrm{cor}(r_{t},r_{t+h})\approx 0\) for all \(h>0\). Figures 1 and 2 illustrate these two regularities for General Electric's daily stock price and the NYSE daily index7. Footnote 7: The linear fit is based on a maximum-likelihood algorithm developed by Clauset, Shalizi, and Newman (2009), which is an important reference for the statistical test of empirical power laws; the program codes are available at [http://tuvalu.santafe.edu/~aaronc/powerlaws/](http://tuvalu.santafe.edu/~aaronc/powerlaws/). For an introduction to power laws more generally, see, for example, Newman (2005) and Gabaix (2008, 2016). Figure 1: General Electric stock: (**a**) Price; (**b**) Return (in percent); (**c**) cumulative distribution of volatility in log-log scale, and a linear fit of the tail, with a slope close to 3; (**d**) Autocorrelation function of return, which is almost zero at all lags, while that of volatility is nonzero over a long range of lags (a phenomenon known as volatility clustering). ## 3 The model Following a traditional dichotomy, consider a financial market populated by two types of traders: (short-term) speculators, who buy an asset for anticipated capital gains; and (long-run fundamental-value) investors, who buy an asset based on its fundamental value.
Let the (excess) demands of an investor and a speculator be respectively1: Footnote 1: Because nonlinearity adds no further insight to this theory, we assume these standard linear supply and demand functions, which can be viewed as first-order linear approximations of more general functions; also, since financial supply and demand can be treated symmetrically (by treating supply formally as a negative demand), one can think directly in terms of a trader's excess demand, which is a demand or a supply, depending on the sign. \[Z_{it}=\mu\frac{V_{it}^{e}-P_{t}}{P_{t}}, \tag{1}\] \[Z_{st}=\gamma\frac{P_{st}^{e}-P_{t}}{P_{t}}, \tag{2}\] where \(P_{st}^{e}\) is a speculator's estimation of the asset's future price, \(V_{it}^{e}\) is an investor's estimation of the asset's present value, and the parameters \(\mu,\gamma>0\). Let \(M_{t}\) and \(N_{t}\) be respectively the numbers of investors and speculators active in period \(t\). The overall (market) excess demand is \[Z_{t}=\mu M_{t}\frac{V_{t}^{e}-P_{t}}{P_{t}}+\gamma N_{t}\frac{P_{t}^{e}-P_{t}}{P_{t}}, \tag{3}\] where \(P_{t}^{e}=N_{t}^{-1}\sum_{s}P_{st}^{e}\) and \(V_{t}^{e}=M_{t}^{-1}\sum_{i}V_{it}^{e}\), namely, the average investor valuation (hereafter referred to simply as "the value" of the asset) and the average speculator anticipated future price. Assume the following standard price adjustment, in accordance with the market microstructure literature2: \[r_{t}=\beta\frac{Z_{t}}{L_{t}}\,, \tag{4}\] where \(L_{t}\) is the overall market liquidity (or market depth) and \(\beta>0\). Figure 2: NYSE composite daily index. Let the overall price impact of speculative and investment orders be denoted respectively as \[m_{t}=\beta\mu M_{t}/L_{t}\,, \tag{5}\] \[n_{t}=\beta\gamma N_{t}/L_{t}\,. \tag{6}\] The two equations (3) and (4) combined yield: \[r_{t}=m_{t}\frac{V_{t}^{e}-P_{t}}{P_{t}}+n_{t}\frac{P_{t}^{e}-P_{t}}{P_{t}}\,, \tag{7}\] _A purely news-driven investment market model_ Let the arrival of exogenous news relevant to investors and speculators be modeled as random events \(\mathrm{I}_{t}\) and \(\mathrm{J}_{t}\) occurring with probability \(\mathbb{P}(\mathrm{I}_{t})\) and \(\mathbb{P}(\mathrm{J}_{t})\) and leading the traders to revise additively their prior estimations of the asset by the amounts \(\varepsilon_{t}\) and \(\nu_{t}\) respectively (which can be assumed normally distributed by aggregation). (There is no harm in assuming \(\mathrm{I}_{t}=\mathrm{J}_{t}\), namely a common access to the same news by all the traders.) Thus the traders' valuations of the asset follow a random walk: \(V_{t}^{e}=V_{t-1}^{e}+\varepsilon_{t}\mathbf{1}(\mathrm{I}_{t})\) and \(P_{t}^{e}=P_{t-1}^{e}+\nu_{t}\mathbf{1}(\mathrm{J}_{t})\), where \(\mathbf{1}(\mathrm{I}_{t})\) and \(\mathbf{1}(\mathrm{J}_{t})\) are the indicator functions associated with the advent of the news. The amount of information the news reveals to the traders about the asset's worth can be precisely quantified: \(-\log(\mathbb{P}(\mathrm{I}_{t}))\) and \(-\log(\mathbb{P}(\mathrm{J}_{t}))\), respectively. This _news_-driven random walk of traders' expectations should be distinguished from a standard assumption in the agent-based literature, introduced perhaps by Lux and Marchesi (1999), whereby an asset's fundamental value is modeled as a _noise_-driven random walk, where the noise is a Gaussian white noise.
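A minimal simulation sketch of this news-driven random walk (ours, for illustration; the news-arrival probability is an assumed value, since only the distributions of the revisions are specified here) makes the distinction concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10_000        # periods, as in the paper's simulations
p_news = 0.1      # P(I_t): news-arrival probability (assumed; not specified here)
sigma_eps = 1.0   # std of the valuation revisions epsilon_t

news = rng.random(T) < p_news              # indicator 1(I_t)
eps = rng.normal(0.0, sigma_eps, size=T)   # revision amounts
V = 100.0 + np.cumsum(eps * news)          # V_t^e: news-driven random walk

# Information revealed by each news event, in nats: -log P(I_t)
print(V[-1], -np.log(p_news))
```

Setting `p_news = 1` in this sketch turns every revision into an uninformative shock carrying \(-\log 1=0\) information, which is precisely the noise-driven case contrasted next.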
The difference between a news and a noise is simple: a noise can be formally defined as a news that carries zero information \((\mathbb{P}(\mathrm{I}_{t})=1,\ \mathbb{P}(\mathrm{J}_{t})=1)\).10 Footnote 10: We are grateful to a reviewer whose comment makes us aware of the need to emphasize explicitly the basic difference between a news and a noise. The reviewer suggests also that the noise-driven random walk of fundamental value, along with traders' heterogeneity, may be responsible for clustered volatility in some agent-based models such as that by He and Li (2012), although these models put forward a complex mix of nonlinear mechanisms. Yet most agent-based models that assume the same Gaussian _white noise_ in the fundamental-value dynamics, starting from Lux and Marchesi (1999), make the opposite claim, showing that the noise-driven random walk in their model has nothing to do with the stylized facts, and in fact they assume the Gaussian white noise precisely so that none of the emergent financial stylized facts in their models be attributed to this white noise: "In order to ensure that none of the typical characteristics of financial prices can be traced back to exogenous factors, we assume that the relative changes of [fundamental value] are Gaussian random variables." (Lux and Marchesi, 1999, p. 499). In fact, Lux and Marchesi (2000) showed that the fat-tail and volatility clustering in their model hold even when the fundamental value is constant. This is the case in this paper's model as well, as emphasized below (Figure 7). All in all, the asset's price dynamics reads: \[P_{t}=(1+r_{t})P_{t-1}, \tag{8}\] \[r_{t}=m_{t}\,\frac{V_{t}^{e}-P_{t}}{P_{t}}+n_{t}\,\frac{P_{t}^{e}-P_{t}}{P_{t}}\,, \tag{9}\] \[V_{t}^{e}=V_{t-1}^{e}+\varepsilon_{t}\mathbf{1}(\mathrm{I}_{t}), \tag{10}\] \[P_{t}^{e}=P_{t-1}^{e}+\nu_{t}\mathbf{1}(\mathrm{J}_{t}). \tag{11}\] Because both types of traders are behaviorally equivalent, the investor-speculator dichotomy is of no substance in this specific model: \(P^{e}\) is equivalent to \(V^{e}\), and both are entirely driven by exogenous news. In other words, the market thus modeled is in fact a non-speculative purely news-driven market of investors. Figure 3 shows a simulation of this model, and Figure 4 shows 5 superposed sample paths of the model. (All the parameter specifications are reported in Table 1 in the discussions section.) The clustering of volatility is generic in this model; but, as is clear from Figure 3, the model suffers from an obvious non-stationarity of the return process due to the double random walk of the traders' expectations; thus the graphical impression of a robust fat tail is an artefact: it does not make sense to say that the distribution is fat-tailed (since the returns are in fact drawn from different distributions: for example, the standard deviation of the return varies greatly from sample to sample, as should be expected). Figure 3: A purely news-driven market model. _A purely speculative trend-following market model_ Let the speculators' anticipated return be denoted \[r_{t}^{e}\equiv\frac{P_{t}^{e}-P_{t}}{P_{t}}. \tag{12}\] Trend-following implies that speculators' overall anticipated return is of the form \(r_{t}^{e}=\sum_{h=1}^{\ell}\omega_{ht}r_{t-h}+\nu_{t}\mathbf{1}(\mathrm{J}_{t})\), where we have added the impact of exogenous news on speculators' expectations.
The weighting scheme \(\{\omega_{ht}\}\) can be computed explicitly from standard moving-average trend-following techniques used by financial practitioners (Beekhuizen & Hallerbach, 2017). In a purely speculative market (\(m_{t}=0\)), the asset's return is then \(r_{t}=n_{t}\sum_{h=1}^{\ell}\omega_{ht}r_{t-h}+n_{t}\nu_{t}\mathbf{1}(\mathrm{J}_{t})\). This RCAR model generates, under quite general and mild technical conditions, a strictly stationary power-law tail \(\mathbb{P}(|r_{t}|>x)\sim Cx^{-\alpha}\), where the exponent \(\alpha\) depends solely on the joint distribution of \((n_{t},\omega_{ht})\), and not on the exogenous news (Kluppelberg & Pergamenchtchikov, 2004; Buraczewski et al., 2016). But this strict stationarity comes at a price, as noted in the introduction: for any such autoregressive model, and for any arbitrary function \(f\), \(\mathrm{cov}[f(r_{t}),f(r_{t+h})]\), when it is well-defined, decays rapidly, at an exponential rate, with the lag \(h\) (Mikosch & Starica, 2000; Basrak et al., 2002). So volatility, whether measured as \(|r_{t}|\), \(r_{t}^{2}\), or more generally by any function \(f\), cannot be long-range correlated in this purely speculative trend-following model. Figure 4: The purely news-driven market model: 5 simulations superposed. _A more general model_ The two polar models emphasize a tension between the endogenous and the exogenous causes of volatility: the purely exogenous, news-driven expectations produce a clustering of volatility but induce a trivial non-stationarity; whereas the purely endogenous, feedback-inducing trend-following expectations generate a stationary power-law tail but cannot account for volatility clustering. The simplest compromise between these two notions consists of maintaining purely exogenous, news-driven investors' valuations, but assuming that the speculators' anticipated return is partly endogenous (self-referential, or reflexive) and writing \(r_{t}^{e}=ar_{t-1}^{e}+\nu_{t}\mathbf{1}(\mathrm{J}_{t})\), where \(0<a<1\), to capture the (exponentially decaying) short memory of speculators concerning fundamental news, but \(a\approx 1\), so that incoming news have a lasting enough impact upon the speculators' expectations.11 The purely news-driven random walk of investors' valuations, on the other hand, implies that the asset's value incorporates all the fundamental news (news relevant to investors) in the sense that \(V_{t}^{e}=V_{0}^{e}+\sum_{k=1}^{t}\varepsilon_{k}\mathbf{1}(\mathrm{I}_{k})\), making \(V_{t}^{e}\) the natural estimate of the asset's fundamental value in this model. Footnote 11: An arbitrarily general AR model is recently suggested by Shi, Luo, and Li (2019), which replicates and studies in detail the robustness of an earlier working version of this paper's model (titled 'The random walk behind volatility clustering', 2016). But the choice \(a\approx 1\) is crucial for volatility clustering. All in all, the asset's price dynamics in the general model reads: \[P_{t}=(1+r_{t})P_{t-1}, \tag{13}\] \[r_{t}=n_{t}r_{t}^{e}+m_{t}\,\frac{V_{t}^{e}-P_{t}}{P_{t}}\,, \tag{14}\] \[V_{t}^{e}=V_{t-1}^{e}+\varepsilon_{t}\mathbf{1}(\mathrm{I}_{t}), \tag{15}\] \[r_{t}^{e}=ar_{t-1}^{e}+\nu_{t}\mathbf{1}(\mathrm{J}_{t}). \tag{16}\] Figures 3-6 are simulations of these models using the parameter specifications in Table 1. Figure 5: The general model: illustration 1. Figure 6: The general model: 5 simulations superposed.
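A minimal end-to-end sketch of the general model of Eqs. (13)-(16) is given below. The exponential means of \(m_{t}\), \(n_{t}\) and the standard deviations of \(\varepsilon_{t}\), \(\nu_{t}\) follow Table 1's Model 2 column; the news probability and the value \(a=0.99\) are assumptions, since the text only requires \(0<a<1\) with \(a\approx 1\).

```python
import numpy as np

rng = np.random.default_rng(0)

T, a, p_news = 10_000, 0.99, 0.1   # a ~ 1 and P(news) are assumed values

P = np.empty(T); r = np.zeros(T)
P[0], V, r_e = 100.0, 100.0, 0.0

for t in range(1, T):
    m = rng.exponential(0.2)       # investors' price impact m_t (Table 1, Model 2)
    n = rng.exponential(0.1)       # speculators' price impact n_t
    # independent news streams for I_t and J_t (the paper allows I_t = J_t)
    V += rng.normal(0, 1.0) * (rng.random() < p_news)               # Eq. (15)
    r_e = a * r_e + rng.normal(0, 0.04) * (rng.random() < p_news)   # Eq. (16)
    r[t] = n * r_e + m * (V - P[t - 1]) / P[t - 1]                  # Eq. (14)
    P[t] = (1 + r[t]) * P[t - 1]                                    # Eq. (13)

# The two regularities: near-zero ACF of r_t, slowly decaying ACF of |r_t|
acf = lambda x, h: np.corrcoef(x[h:], x[:-h])[0, 1]
print([round(acf(r, h), 2) for h in (1, 10, 50)])
print([round(acf(np.abs(r), h), 2) for h in (1, 10, 50)])
```

On typical runs this should reproduce the qualitative picture of Figures 5 and 6: a return whose sign is essentially uncorrelated but whose amplitude remains positively correlated over long lags.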
## 4 Discussion Both the fat tail and the volatility clustering are generic and robust in the model: they hold for a broad class of distributions and parameters; the specifications in Figures 3-7 are chosen merely for illustration, except to reflect realistic orders of magnitude compared to empirical data (notably the standard deviation of return, which is typically around 1). Also, in all the simulations: \(T=10000\) periods; \(P_{0}=V_{0}^{e}=P_{0}^{e}=100\), \(r_{1}=r_{1}^{e}=0\); \(\{n_{t}\}\) and \(\{m_{t}\}\) are exponentially distributed iid processes; \(\{\varepsilon_{t}\}\) and \(\{\nu_{t}\}\) are zero-mean Gaussian iid processes. The differing parameter choices are reported in Table 1. \begin{table} \begin{tabular}{l c c c c c c c} \hline & _Figure 1_ & _Figure 2_ & _Figure 3_ & _Figure 4_ & _Figure 5_ & _Figure 6_ & _Figure 7_ \\ & GE & NYSE & Model 1 & Model 1 & Model 2 & Model 2 & Model 2 \\ \hline _Parameters_ & & & & & & & \\ \hline mean(\(m\)) & & & 0.1 & 0.1 & 0.2 & 0.2 & 0.2 \\ mean(\(n\)) & & & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ std(\(\varepsilon\)) & & & 1 & 1 & 1 & 1 & 1 \\ std(\(\nu\)) & & & 1 & 1 & 0.04 & 0.04 & 0.04 \\ \hline \end{tabular} \end{table} Table 1: Parameter specifications. Figure 7: The general model: variable versus constant fundamental value. ## 5 Conclusion This paper suggests a simple explanation for excess and clustered volatility in financial markets through a synthesis of the two major paradigms in financial theory. Excess volatility means that price fluctuations are too high given the underlying fundamentals, which is an intrinsic feature of the model, owing to the amplifying feedback intrinsic to speculative trading, as illustrated more strikingly in Figure 7, in which the fundamental value is kept constant. Clustered volatility simply reflects, in this theory, the traders' persistent memory of exogenous news concerning the asset's present value or future price. The two empirical facts are thus reduced to simple explanations, through basic _linear processes_ that capture investors' long memory versus speculators' shorter memory of real news.
2309.04862
Distributional Data Augmentation Methods for Low Resource Language
Text augmentation is a technique for constructing synthetic data from an under-resourced corpus to improve predictive performance. Synthetic data generation is common in numerous domains. However, recently text augmentation has emerged in natural language processing (NLP) to improve downstream tasks. One of the current state-of-the-art text augmentation techniques is easy data augmentation (EDA), which augments the training data by injecting and replacing synonyms and randomly permuting sentences. One major obstacle with EDA is the need for versatile and complete synonym dictionaries, which cannot be easily found in low-resource languages. To improve the utility of EDA, we propose two extensions, easy distributional data augmentation (EDDA) and type specific similar word replacement (TSSR), which uses semantic word context information and part-of-speech tags for word replacement and augmentation. In an extensive empirical evaluation, we show the utility of the proposed methods, measured by F1 score, on two representative datasets in Swedish as an example of a low-resource language. With the proposed methods, we show that augmented data improve classification performances in low-resource settings.
Mosleh Mahamud, Zed Lee, Isak Samsten
2023-09-09T19:01:59Z
http://arxiv.org/abs/2309.04862v1
# Distributional Data Augmentation Methods for Low Resource Language ###### Abstract Text augmentation is a technique for constructing synthetic data from an under-resourced corpus to improve predictive performance. Synthetic data generation is common in numerous domains. However, recently text augmentation has emerged in natural language processing (NLP) to improve downstream tasks. One of the current state-of-the-art text augmentation techniques is easy data augmentation (EDA), which augments the training data by injecting and replacing synonyms and randomly permuting sentences. One major obstacle with EDA is the need for versatile and complete synonym dictionaries, which cannot be easily found in low-resource languages. To improve the utility of EDA, we propose two extensions, easy distributional data augmentation (EDDA) and type specific similar word replacement (TSSR), which uses semantic word context information and part-of-speech tags for word replacement and augmentation. In an extensive empirical evaluation, we show the utility of the proposed methods, measured by F1 score, on two representative datasets in Swedish as an example of a low-resource language. With the proposed methods, we show that augmented data improve classification performances in low-resource settings. Department of Computer and Systems Sciences Borgarfjordsgatan 12, Kista, Sweden {mosleh.mahamud,zed.lee,samsten}@dsv.su.se ## Introduction Augmentation is a technique to construct synthetic training data from available datasets. Various augmentation techniques have been used mainly in the computer vision field to improve machine learning models Shorten et al. (2021), especially with huge deep learning models in the area. However, text augmentation has been growing recently, also being aligned with the massive models that have come out nowadays Bayer et al. (2021). The two core reasons to use text augmentation are as follows: 1) some languages are in low-resource domains, thus it is hard to get enough data to train a model; 2) augmentation can help strengthen decision boundaries, leading to more robust classifiers or better uncertainty estimates, so the model can be more familiar with the local space around examples Bayer et al. (2021). Unlike images, languages cannot be generalized or merged, meaning each language only has its own resources, while images can easily be merged regardless of topics and types. In this sense, text augmentation techniques can benefit low-resource languages such as Swedish, Kazakh, Tamil, Welsh, Upper Sorbian, and many more Sahin (2022). There have been a few text augmentation techniques, from the most straightforward ones Ebrahimi et al. (2017); Kolomyets et al. (2011), to complex ones using separate deep learning models Wu et al. (2019); Croce et al. (2020); Malandrakis et al. (2019). One of the easiest ways to apply text augmentation is with a technique called easy data augmentation (EDA). EDA has four main techniques to augment a sentence Wei and Zou (2019), as follows: synonym replacement (SR), random insertion (RI), random swap (RS), and random deletion (RD). While EDA is often regarded as a universal text augmentation technique that can be applied to any language, this is not entirely true in practice: it cannot readily be applied to every language, since it still depends on language-dependent modules such as WordNet.
Adapting EDA to low-resource languages may be even more challenging since some language dependencies cannot be easily solved. Therefore, this paper aims to provide a framework for modified EDA augmentation that can also easily be applied to low-resource languages. We show our framework for Swedish as an example of a low-resource language. While the Swedish language is classified into the low-resource group, there have been a few text augmentation trials for the language. One of the earliest text augmentation works has been done on clinical text data in Swedish by merging various sources of text for named entity recognition (NER) tasks using different deep models Berg and Dalianis (2019). However, this work has a limitation in that it only tests on one Swedish clinical dataset, and the augmentation techniques used in the paper are domain-specific, thus they cannot be applied to every Swedish text. Moreover, a group of researchers has tried controlled text perturbation using three main perturbation methods, n-gram shift, clause shift, and random shift, on Swedish text Taktasheva et al. (2021). However, that paper focuses only on evaluating deep models such as BERT and BART (Devlin et al. 2019; Lewis et al. 2020) and investigates attention layers for each token to observe their behavior, without discussing the effects of augmentation on the models' performances. They also do not disclose how the augmentation techniques are implemented, hindering the possibility of reproducing the technique. To the best of our knowledge, no previous work has been found where EDA with neural adaptation is applied to Swedish text. Regarding the inner workings of EDA, it is heavily dependent on WordNet synonym replacement. As aforementioned, there may not always be a comprehensive dictionary in every language, especially in low-resource languages. Therefore, we replace WordNet with the word2vec (Mikolov et al. 2013; Borin, Forsberg, and Lonngren 2013) model to integrate within this augmentation framework, making it a data-driven approach to augmenting data, which we call **E**asy **D**istributional **D**ata **A**ugmentation (**EDDA**). We expect that this approach can greatly help low-resource languages without good-quality dictionary data, such as WordNet, use EDA techniques with a trainable component. Moreover, we also introduce how syntactic information of words can be used to augment data, which we call **T**ype **S**pecific **S**imilar word **R**eplacement (**TSSR**). This is because the randomness in EDDA may affect sentence sentiment (Qiu et al. 2020; Bayer, Kaufhold, and Reuter 2021; Anaby-Tavor et al. 2020) by producing sentimentally dissimilar synthetic sentences; TSSR is therefore a directed approach that complements EDDA. **Contributions.** The main contributions of this paper can be summarized as follows: * We adapt EDA-style augmentation techniques for low-resource languages by using distributional synonym replacement that does not require strong language-specific dependencies. We exemplify its usefulness on Swedish text. * We introduce and evaluate a novel augmentation method using POS information, which we name TSSR, as a complementary module to our EDDA framework and show that this method can significantly improve predictive performance. * We show that by using the proposed augmentation techniques, we increase the F1 score using only 40%-50% of the training data compared to the baseline performances without augmentation. * We provide our code in the GitHub repository for reproducibility purposes.
## Related work Among the multitude of text perturbation techniques, text augmentation comes down to two main categories: symbolic and neural augmentation techniques (Shorten, Khoshgoftaar, and Furht 2021). The first consists of a wide range of techniques, such as rule-based augmentation, feature-space augmentation, and graph-structured augmentation, whereas the latter is based on different techniques of deep neural networks, such as back-translation, style augmentation, and generative data augmentation. Symbolic augmentation is more interesting because it can be more controllable and interpretable than its counterpart. However, very little research has been done where symbolic and neural augmentation techniques are aligned to augment sentences, which this paper explores. As text augmentation is a relatively new area, there have not been many experiments in the Swedish domain. To the best of our knowledge, the earliest attempts with augmentation in the Swedish language are with Swedish clinical text mining, where they merge various sources of text for NER (Berg and Dalianis 2019). Apart from that, no popular augmentation techniques like EDA have been applied to the SuperLim suite of benchmarking datasets (Adesam, Berdievskis, and Morger 2020), which we showcase in this paper. Swedish has been known to be a low-resource language (Sahin 2022). Hence, there is a need for available resources such as EDA or other augmentation tools that could improve various NLP downstream tasks. One paper discusses pre-training for an ASR model in low-resource domains where various augmentation techniques were tried (Stoian, Bansal, and Goldwater 2019). However, it focuses on augmentation for speech data and is not universal or applicable to purely text models. One of the first attempts at EDA in Swedish can be found in an unreliable news detection problem (Munoz Sanchez et al. 2022). That paper deals with a classification problem where three main augmentation techniques have been applied to boost the model's performance: (1) sub-sampling of data, (2) EDA, and (3) back translation. Both back translation and EDA are also combined to achieve good classification performance. They train with a bag-of-words model, Bi-LSTM, and BERT in their experiments. The paper denotes that EDA functions best with simple machine learning models. However, this is where that paper and our work diverge, as (1) we use a neural-adapted EDA that can easily adapt to any language, and (2) we focus on testing our methods on two benchmarking datasets. Our paper takes inspiration from EDA and infuses it with a word2vec model, making it a data-driven approach to text perturbation. Similar augmentation attempts have been made on the DALAJ dataset (Volodina, Mohammed, and Klezl 2021) with controlled perturbations using three main perturbation methods: n-gram shift, clause shift, and random shift. N-gram shifts are about utilizing compound nouns and prepositions to perturb the data. Whereas clause shift is rotating syntactic trees to perturb data, the random shift is identical to a random swap in EDA (Taktasheva, Mikhailov, and Artemova 2021). However, that paper focuses on evaluating the BERT and BART (Lewis et al. 2020) attention layers for each token to observe their behavior, but does not discuss their performance effects individually, nor do they disclose how the augmentation techniques are implemented.
Few research papers discuss controlled perturbations (Bayer, Kaufhold, and Reuter 2021) where pronoun tokens such as "he" and "she" are used to de-bias an NLP model (Zhao et al. 2018). This is a type of context-preserving augmentation technique. This has been done many times in the English language but has not been attempted within the Swedish language to the best of our knowledge. Moreover, our paper uses a data-driven approach to augment sentences in a controlled manner where any POS tag tokens can be specified. ## Proposed Method ### Problem Statement Consider a low-resource setting, e.g., Swedish, where we only have limited data. The available dataset has low amounts of labeled data, and hence various augmentation tools are used to expand the existing training data to obtain better classification. However, one extra constraint is that no synonym dictionary is available. Our problem to solve is (1) to find a way to adapt EDA-style augmentation for low-resource domains, (2) to measure how well EDDA performs on the Swedish datasets, and (3) to examine how type specific similar word replacement (TSSR) affects classification. ### Easy Distributional Data Augmentation (EDDA) We introduce EDDA, a novel technique to support text augmentation in low-resource languages (Figure 1). EDDA takes inspiration from EDA, which is a combination of many different augmentation methods [20]. In this paper, we adapt the following strategies from EDA for low-resource data augmentation: 1. Random synonym replacement (RSR): We randomly select a small, user-defined fraction of words from the sentence, excluding stop words. A randomly chosen synonym replaces each replacement candidate. 2. Random insertion (RI): We randomly choose a small, user-defined fraction of positions within the sentence and insert a random synonym of a random word in the sentence. 3. Random swap (RS): We randomly choose a small, user-defined fraction of words and swap their positions. 4. Random deletion (RD): We randomly delete a small, user-defined fraction of words. With regard to embeddings, the name distributional in EDDA comes from distributional semantics, which represents linguistic expressions as vectors that capture co-occurrence patterns in large corpora [21, 13]. This framework leverages this theory to augment various sentences using a language model like word2vec. Instead of using a lookup table, the synonym replacement is done with a word2vec model, using its latent space to find the most similar replacement words. Given that no functioning public synonym dictionary is available for Swedish, a Swedish word2vec model [15, 1] is used to generate word candidates with a similar word distribution in an embedding space. Since it is not a dictionary with a pure list of synonyms for a given word, word2vec may not always find synonyms, but rather similar words that could occur in the same context. Thus, EDDA is a hybrid between a rule-based system such as EDA and a neural-based system. While there are many distributional embeddings that could potentially be used, we use word2vec instead of, e.g., BERT, since we query a particular token to find similar words in an embedding space, which BERT's masked language modeling would not allow. Despite the fact that this might result in more randomness and potentially break the semantic meaning of a sentence, it allows us to support low-resource domains.
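As a concrete illustration, here is a minimal sketch of the distributional replacement at the core of EDDA, assuming gensim and some pre-trained word2vec vectors; the model file name and the toy stop-word list are placeholders, not the exact resources used in our experiments.

```python
import random
from gensim.models import KeyedVectors

# Placeholder path: any pre-trained vectors in the standard word2vec format
kv = KeyedVectors.load_word2vec_format("swedish_word2vec.bin", binary=True)
STOPWORDS = {"och", "att", "det", "som", "en"}  # toy list; use a real one

def similar_word(word, topn=5):
    """Draw a distributional 'synonym' from the word2vec neighborhood."""
    if word not in kv.key_to_index:
        return word
    return random.choice([w for w, _ in kv.most_similar(word, topn=topn)])

def rsr(tokens, frac=0.2):
    """The RSR component of EDDA: replace a fraction of non-stop-word
    tokens with distributional neighbors (RI, RS, and RD follow EDA)."""
    out = list(tokens)
    idxs = [i for i, tok in enumerate(out) if tok.lower() not in STOPWORDS]
    random.shuffle(idxs)
    for i in idxs[: max(1, int(frac * len(out)))]:
        out[i] = similar_word(out[i])
    return out

print(" ".join(rsr("filmen var riktigt bra".split())))
```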
Moreover, another advantage of using word2vec is that it still maintains the morphological coherence of the suggested words compared to just using a synonym dictionary, such as SALDO [1], as dictionary entries are only in their base form. Additionally, the word2vec model is generally smaller than the BERT model, which may help with inference speed. The remainder of the EDA framework is used verbatim. Another benefit of using distributional semantic models to generate word replacement candidates is that non-strict synonyms (e.g., names or places) can be generated. word2vec is a well-known algorithm with pre-trained models available for many languages, and even where pre-trained models are lacking, it is straightforward to train one, as it does not require any labeled data. Moreover, large-scale language models like BERT may require heavy computational resources [20] to train, whereas word2vec may not, making it resource-efficient. We claim that this adjustment can be of great benefit to language settings which lack good synonym dictionaries. Figure 1: An overview of the EDDA framework. ### Type Specific Similar word Replacement (TSSR) Figure 2: An example of TSSR replacing a noun word. When working with context-sensitive data, especially with sentiments, random synonym replacement might disrupt the semantic meaning of a sentence, since the current EDA technique does not restrict replacing a word with synonyms of a different type: it only looks at the list of synonyms in the dictionary. Therefore, we suggest constraining EDDA's synonym replacement by only replacing words with synonyms that have the same POS tag, e.g., replacing verbs only with verb synonyms. Figure 2 shows an example where a noun token is chosen for replacement: among the two noun words, 'Larsson' is chosen and replaced with 'Eriksson'. To the best of our knowledge, no previous work has experimented with this method, where word replacement using a language model (e.g., word2vec) with POS-tag-specific perturbation is done, especially within low-resource domains. This allows for domain-specific augmentation that is more controllable and label-preserving in combination with EDDA. ```
Input : t: original text, s: token type, n: number of sentences to be created
Result: newSentences: list of new sentences
1 newSentences = []
2 for i <- 1 ... n do
3     chosenToken = FindRandomToken(t, s)
4     candidateToken = FindCandidate(chosenToken)
5     newText = Replace(t, chosenToken, candidateToken)
6     newSentences.append(newText)
7 return newSentences
``` **Algorithm 1** TSSR pseudocode The whole procedure of TSSR is depicted in Figure 3 and Algorithm 1. First, we iterate the process \(n\) times to generate \(n\) new sentences for each sentence, where \(n\) is a parameter (Algorithm 1, lines 1-2). A random token is chosen from the input text \(t\) using the preferred token type \(s\) as an input; a random POS token is selected if no token type is entered (line 3). After the token is chosen, a new candidate token is generated using word embeddings (line 4) and used to replace it in the original text (line 5). In the end, the new sentences are returned after the altered sentences have been appended to the list (lines 6-7). We acknowledge that POS taggers may not always be available in every low-resource language. The specified POS tags depend on the person or domain where the technique is being used to perturb the data.
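A minimal Python sketch of Algorithm 1 is shown below; the word vectors are again gensim-style KeyedVectors, and the POS tags are assumed to come from whatever tagger is available (Universal POS tags are used here purely for illustration).

```python
import random

def tssr(tokens, pos_tags, kv, token_type="NOUN", n=3, topn=5):
    """Type Specific Similar word Replacement, following Algorithm 1.

    tokens    : list of tokens of the original text t
    pos_tags  : one POS tag per token, from any available tagger
    kv        : gensim KeyedVectors used to find replacement candidates
    token_type: the POS type s to perturb (a random type could be drawn)
    n         : number of new sentences to create
    """
    new_sentences = []
    positions = [i for i, p in enumerate(pos_tags)
                 if p == token_type and tokens[i] in kv.key_to_index]
    for _ in range(n):
        if not positions:
            break
        i = random.choice(positions)                      # FindRandomToken(t, s)
        cand = random.choice(
            [w for w, _ in kv.most_similar(tokens[i], topn=topn)])  # FindCandidate
        out = list(tokens)
        out[i] = cand                                     # Replace(t, ...)
        new_sentences.append(" ".join(out))
    return new_sentences
```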
One thing to note about this technique is that it is not a pure synonym replacement but a similar word replacement based on the word embedding space, so it still does not depend on a synonym dictionary, which low-resource languages might not have. ## Experimental Setup In this section, we describe our experiments on two downstream classification datasets to show how well the proposed text augmentation techniques (EDDA and TSSR) work in the Swedish language. There are five main parts of the experiment, as also depicted in Figure 4, as follows: 1. Divide the dataset into multiple subpartitions. 2. With each dataset partition: 1. Train a baseline model with the dataset partition. 2. Augment with EDDA and train another model. 3. Augment with TSSR and train another model. 4. Augment with RSR and train another model. Figure 3: An overview of the TSSR framework. Figure 4: The augmentation pipeline for the experiments. ### Evaluation Methodology Dataset Description The experiments are conducted with two publicly available datasets from a Swedish benchmarking dataset repository called SuperLim. Since the datasets are already cleaned for research purposes, no special data cleaning or preprocessing is necessary [1]. We use two datasets, which represent two of the most common problems in NLP, namely _syntax analysis_ and _sentiment analysis_: 1. **DALAJ**: A Swedish linguistic acceptability dataset [13]. This dataset contains a set of sentences where each sentence is denoted as linguistically correct or incorrect. The dataset has predefined train, validation, and test splits with 7,682 training samples, where 3,841 samples are classified into the correct group. On the other hand, the test set has 888 samples, where half are correct grammatical sentences and the other half are incorrect. The validation dataset is ignored in our experiment since our experiment does not have any parameter tuning. One thing to note about this dataset is that, when applying augmentation, only linguistically incorrect training samples are augmented, as augmenting the good samples has a higher chance of breaking the syntactic form of a sentence. 2. **ABSA**: An annotated Swedish corpus for aspect-based sentiment analysis (ABSA). This dataset includes various statements that are labeled from 1 (very negative) to 5 (very positive). The dataset also has predefined train, validation, and test splits with 38,640 training samples and 4,830 test samples. Again, the validation dataset is ignored. Baseline The baseline is trained with the training data in various subparts as described later. No augmentation technique is applied to the training data for the baseline; only classification using the linear support vector machine (SVM) model [1] with BERT embeddings [1] is used. For all training attempts, the SVM parameters used are the default parameters provided by Scikit-learn [1]. Experimental Settings In this experiment, we use SVM, but any kind of linear classifier can be applied. The maximum limit is set to 512 tokens so that we can get all the information from the BERT model to classify with our linear model. When extracting the BERT embeddings, only the [CLS] token is used after passing each text through the model. The BERT model is a 12-layer transformer with 768 hidden dimensions and 125M parameters [13]. Although Swedish is a low-resource language, a pre-trained Swedish BERT is available and was used to generate the embeddings.
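A minimal sketch of this feature-extraction pipeline follows; the checkpoint name refers to one publicly available Swedish BERT and may differ from the exact model used in our experiments.

```python
import torch
from sklearn.svm import LinearSVC
from transformers import AutoModel, AutoTokenizer

# One publicly available Swedish BERT checkpoint (assumed, for illustration)
name = "KB/bert-base-swedish-cased"
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name).eval()

def cls_embedding(text, layer=-1):
    """[CLS] vector from a chosen hidden layer: the last layer for ABSA,
    a middle layer (index 6) for the syntax-sensitive DALAJ task."""
    enc = tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = bert(**enc, output_hidden_states=True).hidden_states
    return hidden[layer][0, 0].numpy()   # batch 0, token 0 = [CLS]

# Toy usage: embed texts, then fit the default-parameter linear SVM
X_train = [cls_embedding(t) for t in ["en bra film", "en riktigt usel film"]]
clf = LinearSVC().fit(X_train, [1, 0])
```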
We acknowledge that such a large model may not exist in every low-resource language, but other feature extractors could be used instead. Implementation Details First, the training set is further split into 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 percent, where each partition is augmented and then added to that partition to train a linear model using BERT embeddings [13]. The reason behind making small stratified partitions is to re-create a scenario where insufficient data is available and to determine whether augmentation helps. Each dataset split is augmented using the individual augmentation techniques, as shown in Figure 4, where the augmentation is applied sequentially under the same partition to observe performance differences under controlled conditions. The perturbation rate is 20% for every sentence that is augmented (i.e., 20% of the tokens). Each sentence is augmented once using the described perturbation methods. For the DALAJ dataset, only the incorrect samples are augmented using all the augmentation methods, as they are already incorrect; therefore, any perturbation is less likely to affect the class label. Moreover, each sentence is classified from the middle layer, layer six, as it tends to have a high degree of syntax information [11] within the embedding, before we pass it to the linear layer. On the other hand, for ABSA, all the augmentation techniques are applied regardless of class labels, as it is not as syntax-sensitive in comparison to DALAJ. Moreover, the embeddings are extracted from the last layer, because we want to get the semantic embeddings from BERT for further experiments. Semantic Deviation After the augmentation has been applied, to observe the inner workings and the impact on individual sentences, a check is done using similarity measures between the non-augmented and augmented sentences. This is important to check, as an altered sentence that deviates largely from its original form could destroy the semantics and change the actual label. We use our \(\mathit{Deviation}\) function to check the similarity between an original sentence \(t\) and any augmented sentence \(\hat{t}\) derived from \(t\). \[\mathit{Deviation}(t,\hat{t})=\begin{cases}\text{``similar''},&\text{if }\cos(t,\hat{t})\geq\delta\\ \text{``dissimilar''},&\text{otherwise}\end{cases}\] The deviation threshold \(\delta\) is chosen at a level of 0.9 cosine similarity. Any sample below that is considered a semantically different sentence. The reason why 0.9 was used is that the augmented sentences should have high proximity to their original form, which is important to preserve the sentiment label. The embedding is extracted using the same BERT model used for all the other experiments. ## Results This section is composed of two parts, where we showcase the F1 scores of the various augmentation techniques on DALAJ and ABSA. Lastly, we further investigate how much the sentiments deviate from one another for the ABSA dataset. The techniques shown in the results are (1) baseline; (2) EDDA; (3) TSSR: controlled perturbation of selected part-of-speech tags (in this case, nouns); and (4) RSR: only using random synonym replacement. For a fair comparison, we use the exact same data for each partition from the training set to augment and train the models. Table 1 shows the overall F1 scores on the DALAJ dataset under the different settings.
\begin{table} \begin{tabular}{l|c c c c} \hline Partition & Baseline & EDDA & TSSR & RSR \\ \hline 10\% & 0.56 & **0.58** & 0.56 & 0.53 \\ 20\% & 0.55 & 0.60 & 0.59 & **0.61** \\ 30\% & 0.58 & **0.60** & 0.59 & 0.58 \\ 40\% & 0.56 & 0.61 & 0.57 & **0.65** \\ 50\% & 0.56 & 0.61 & 0.61 & **0.63** \\ 60\% & 0.60 & 0.61 & 0.61 & **0.62** \\ 70\% & **0.64** & 0.63 & 0.60 & 0.63 \\ 80\% & **0.62** & 0.61 & 0.61 & 0.61 \\ 90\% & 0.55 & **0.63** & 0.56 & 0.62 \\ 100\% & **0.64** & 0.61 & 0.60 & **0.64** \\ \hline \end{tabular} \end{table} Table 1: F1 scores on DALAJ under four different settings and ten different proportions of partitions. Up until the 60% partition of the data, all the augmentation techniques improve classification on the DALAJ dataset. However, using more than 60% of the data with augmentation tends to reduce the effectiveness of the said augmentation techniques. When comparing the baseline to EDDA from 10% to 60% of the dataset, EDDA improves by 2.5% on average. On the other hand, when comparing the baseline to TSSR, we get a 2% average increase. The best results appear under RSR, with a 3.5% average increase in classification performance over the baseline. One of the reasons for using text augmentation is that when we have low amounts of data, we can use such techniques to improve models for downstream tasks. Table 1 supports that claim: by only using 40% of the data, RSR improves by 9% over the baseline, whereas EDDA improves F1 by 5%. However, this is a case where TSSR only improves by 1%. The original paper, which introduced DALAJ [12], also reported an F1 score of 62% on the same test set, whereas our approach needed only 40% of the data to reach 65% F1 using RSR alone, another indication of why augmentation can be effective with limited labeled data. EDDA & RSR EDDA improves the performance in seven out of ten partitions of the dataset, whereas RSR improves in six out of ten partitions. The augmentation works satisfactorily for this task, especially in low-data scenarios. Surprisingly, RSR performs exceptionally well: with only 40% of the data, it overshadows the baseline trained on 100% of the training data by 1% in F1 score. TSSR This augmentation improves in six out of ten partitions. TSSR on this downstream task does not perform as well as EDDA and RSR. However, it consistently improves over the baseline, but it is not the optimal augmentation technique to use on this classification dataset, though still a justifiable augmentation technique that could work in a given dataset. TSSR On eight out of ten partitions of this multi-labeled dataset, TSSR consistently improves classification performance. This is partly due to changing only certain noun token types, which is more controlled. This gives a higher chance of not changing adverbs and adjectives, preserving the class label. Moreover, using a word2vec model to find replacements allows us to get a variety of words that may not be found in a standard dictionary, such as name replacements, e.g., Mattias Eriksson to Mattias Larsson. Semantic Deviation The semantic deviation is only assessed for the ABSA dataset, to see how much augmentation affects the sentiment data. Only the sentiment dataset is used because it has the highest chance of breaking when the various augmentation techniques are applied. Table 3 shows how many augmented sentences hold enough similarity to the original sentence.
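The percentages reported in Table 3 amount to applying the cosine-similarity filter over all original/augmented pairs; a minimal sketch on precomputed [CLS] embeddings (e.g., from the extraction sketch above) is:

```python
import numpy as np

def dissimilar_fraction(orig_embs, aug_embs, delta=0.9):
    """Fraction of augmented sentences whose [CLS] embedding has cosine
    similarity below delta with the original (the Deviation filter)."""
    orig = np.asarray(orig_embs, dtype=float)
    aug = np.asarray(aug_embs, dtype=float)
    cos = np.sum(orig * aug, axis=1) / (
        np.linalg.norm(orig, axis=1) * np.linalg.norm(aug, axis=1))
    return float(np.mean(cos < delta))
```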
For EDDA, 40.3% of the augmented sentences do not meet our minimum criterion (i.e., a cosine similarity of 0.9) to be considered similar to the original sentence, while TSSR produces only 14.8% of synthetic sentences with a similarity below 0.9, showing that TSSR preserves semantic proximity to the original sentence and hence preserves the label. It is therefore safe to say that TSSR can play a role as a competitive module together with EDDA on sentiment datasets.

## Conclusion

We introduced EDDA, a modification of EDA without a heavy dependency on a specific language, and TSSR, a method complementary to EDDA that replaces synonyms given type-specific information. We measured how these two techniques worked on two representative Swedish datasets and showed that they could improve DALAJ by 1% over the baseline while using only 40% of the training data. We also showed how well the presented augmentations work with small amounts of labeled data, demonstrating that augmentation is most effective when less data is available. Moreover, augmentation may not always improve classification results, but it can still be very useful in most instances. We would like to emphasize that the techniques introduced in this paper are easily adaptable to other low-resource languages. Our future work involves (1) testing the augmentation techniques in other low-resource languages, (2) testing on multiple downstream tasks other than classification, and (3) extending the framework to other types of augmentation that are language-agnostic or at least easily adaptable to any language.
2308.16829
Machine learning of microscopic structure-dynamics relationships in complex molecular systems
In many complex molecular systems, the macroscopic ensemble's properties are controlled by microscopic dynamic events (or fluctuations) that are often difficult to detect via pattern-recognition approaches. Discovering the relationships between local structural environments and the dynamical events originating from them would allow unveiling microscopic level structure-dynamics relationships fundamental to understand the macroscopic behavior of complex systems. Here we show that, by coupling advanced structural (e.g., Smooth Overlap of Atomic Positions, SOAP) with local dynamical descriptors (e.g., Local Environment and Neighbor Shuffling, LENS) in a unique dataset, it is possible to improve both individual SOAP- and LENS-based analyses, obtaining a more complete characterization of the system under study. As representative examples, we use various molecular systems with diverse internal structural dynamics. On the one hand, we demonstrate how the combination of structural and dynamical descriptors facilitates decoupling relevant dynamical fluctuations from noise, overcoming the intrinsic limits of the individual analyses. Furthermore, machine learning approaches also allow extracting from such combined structural/dynamical dataset useful microscopic-level relationships, relating key local dynamical events (e.g., LENS fluctuations) occurring in the systems to the local structural (SOAP) environments they originate from. Given its abstract nature, we believe that such an approach will be useful in revealing hidden microscopic structure-dynamics relationships fundamental to rationalize the behavior of a variety of complex systems, not necessarily limited to the atomistic and molecular scales.
Martina Crippa, Annalisa Cardellini, Matteo Cioni, Gábor Csányi, Giovanni M. Pavan
2023-08-31T16:02:55Z
http://arxiv.org/abs/2308.16829v1
# Machine learning of microscopic structure-dynamics relationships in complex molecular systems

###### Abstract

In many complex molecular systems, the macroscopic ensemble's properties are controlled by microscopic dynamic events (or fluctuations) that are often difficult to detect via pattern-recognition approaches. Discovering the relationships between local structural environments and the dynamical events originating from them would allow unveiling microscopic-level structure-dynamics relationships fundamental to understand the macroscopic behavior of complex systems. Here we show that, by coupling advanced structural (_e.g._, Smooth Overlap of Atomic Positions, SOAP) with local dynamical descriptors (_e.g._, Local Environment and Neighbor Shuffling, LENS) in a unique dataset, it is possible to improve both individual SOAP- and LENS-based analyses, obtaining a more complete characterization of the system under study. As representative examples, we use various molecular systems with diverse internal structural dynamics. On the one hand, we demonstrate how the combination of structural and dynamical descriptors facilitates decoupling relevant dynamical fluctuations from noise, overcoming the intrinsic limits of the individual analyses. Furthermore, machine learning approaches also allow extracting from such combined structural/dynamical dataset useful microscopic-level relationships, relating key local dynamical events (_e.g._, LENS fluctuations) occurring in the systems to the local structural (SOAP) environments they originate from. Given its abstract nature, we believe that such an approach will be useful in revealing hidden microscopic structure-dynamics relationships fundamental to rationalize the behavior of a variety of complex systems, not necessarily limited to the atomistic and molecular scales.

## Introduction

The macroscopic behavior of complex systems is often influenced by fluctuations that, while being fundamental for comprehending the systems' dynamics, are challenging to detect and control. This also holds true at the molecular scale, where phenomena such as nucleation, defect propagation, and phase transitions are intricately linked to these fluctuations. The integration of advanced molecular descriptors with Machine Learning (ML) has been playing a key role in analyzing molecular trajectories, contributing to a better understanding of diverse nanoscale systems, ranging from the atomistic to the supramolecular level.[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] Standard human-based descriptors, tailored for building detailed analyses and investigating specific systems such as ice-water interfaces [12] or metal clusters [13, 14], have increasingly given way to abstract descriptors, [15, 16, 17, 18, 19, 20, 21] often combined with supervised and unsupervised ML methods.[1, 2, 3, 4, 5, 6, 7, 8, 9, 10] These ML-based techniques offer valuable insights into the structural and dynamical properties of the systems. [5, 6, 7, 8, 9, 10] While human-based approaches provide an accurate comprehension of intricate physical-chemical mechanisms, they heavily rely on in-depth prior knowledge of the system, limiting their transferability. On the contrary, the use of abstract descriptors allows more general representations and outlines a broader picture of the system behavior, at the cost of managing large amounts of high-dimensional data that are often difficult to rationalize.
Widely recognized approaches based on dimensionality reduction principles (_e.g._, linear Principal Component Analysis (PCA), kernel-PCA [22], t-SNE [23]) are frequently employed to extract information from such descriptor-based datasets; the reduced datasets are then classified with diverse clustering methods (_e.g._, KMeans [24], Gaussian Mixture Models (GMM) [25], DBSCAN [26], HDBSCAN [27]) to facilitate their interpretation. However, when relying on structure-based descriptors, these approaches have limitations: while they effectively detect dominant structural environments in the system, they may fail to capture local time-dependent events that are sparsely observed within the trajectory. These transitions, although statistically insignificant, have been shown to play a crucial role in the overall behavior of the system.[28, 29] The absence of an adaptive resolution that allows catching non-dominant events presents two challenges: firstly, it leads to a loss of information by failing to detect fluctuations within the system; secondly, these fluctuations may be inaccurately classified within the dominant clusters, thereby contaminating them. In recent studies, we have developed dynamical descriptors that are very efficient in capturing the local dynamic environments of atoms in complex molecular systems from structural or identity-based information. [28, 29] By monitoring these descriptors over time along the trajectory, we can effectively capture dynamic behaviors, including local and sparse events within the system. In particular, we introduced a dynamical descriptor, LENS (Local Environments and Neighbors Shuffling) [28], which considers the interacting particles as distinct individuals (IDs), monitoring how much the list of neighbor particle IDs of each particle \(i\) changes over time, for example at each sampled \(\Delta\)t. As such, LENS provides information on the reshuffling of the local neighbor environment surrounding each unit \(i\) in the system along the trajectory. At the same time, however, a descriptor like LENS retains very limited structural information: if, _e.g._, the neighbor units rearrange locally while remaining within the neighborhood over \(\Delta\)t, LENS does not detect any signal (such events are, vice versa, well captured by structural descriptors such as tSOAP [29]). Thus, while LENS can detect the local dynamics of the system, it does not allow determining, _e.g._, the specific structural environment from which dynamic events originate. Here we demonstrate how, by combining structural (SOAP [15]) and dynamical (LENS [28]) descriptors, it is possible to obtain an improved characterization of the system. We compose a dataset where the SOAP spectra (\(n\) components each) are augmented with the LENS descriptor (an additional dimension), leading to significant technical and scientific advantages. Firstly, (i) it enables the separation of sparsely observed, but relevant, dynamic events/environments (_e.g._, fluctuations) from the noise in the SOAP dataset. As a result, (ii) the combined interpretation of SOAP and LENS not only provides a more accurate and complete characterization, but the two descriptors improve each other: the addition of LENS yields an enhanced SOAP structural classification. Furthermore, (iii) this allows identifying unique microscopic structure-dynamics relationships, showing, _e.g._, which local SOAP structural environments generate a certain type of dynamical event along the sampled Molecular Dynamics (MD) trajectory.
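Since LENS is the less conventional of the two descriptors, a minimal sketch of a LENS-like computation from per-frame neighbor-ID lists is given below. The normalization of the symmetric difference by the total neighbor count is our assumption for this sketch (it reproduces the zero-to-one range stated in the Methods); the reference implementation is the authors' SOAPify package.

```python
import numpy as np

def lens_value(neigh_t: set, neigh_t1: set) -> float:
    """LENS-like reshuffling score for one particle between frames t and t+dt.

    0.0 -> identical neighbor lists (no reshuffling)
    1.0 -> completely disjoint neighbor lists (full reshuffling)
    The exact normalization is an assumption of this sketch.
    """
    if not neigh_t and not neigh_t1:
        return 0.0
    changed = len(neigh_t ^ neigh_t1)            # IDs that entered or left
    return changed / (len(neigh_t) + len(neigh_t1))

def lens_series(neighbor_lists):
    """neighbor_lists: per-frame dicts {particle_id: set(neighbor_ids)}.

    Returns delta[frame, particle]: LENS values for consecutive frame pairs.
    """
    ids = sorted(neighbor_lists[0])
    delta = np.zeros((len(neighbor_lists) - 1, len(ids)))
    for f in range(len(neighbor_lists) - 1):
        for j, i in enumerate(ids):
            delta[f, j] = lens_value(neighbor_lists[f][i],
                                     neighbor_lists[f + 1][i])
    return delta
```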
In this work we demonstrate the efficiency and abstraction of this approach on diverse molecular systems, employed herein as case studies.

## Results

As a first case study, we focus on a copper **Cu(211)** FCC metal surface recently demonstrated to possess non-trivial internal atomic dynamics. Metals are known to display interesting dynamic behavior even well below the melting temperature.[30, 31] For example, when simulated at T=600 K, the **Cu(211)** FCC slab of Figure 1 exhibits a surface with structurally diverse environments, as made evident by a simple coordination analysis, and a non-trivial internal atomic dynamics (Figure 1a, right: dynamical atomic rearrangements). Unveiling the underlying mechanism behind such dynamics is essential to understanding the properties of such metal systems [32, 33, 34]. Moreover, the comprehension of structure-dependent features plays a fundamental role in practical applications such as heterogeneous catalysis, mechanical properties, _etc._[35, 36, 37, 38] SOAP-based and LENS-based ML analyses have been recently employed to analyze MD simulation trajectories of metals below the melting temperature (including, _e.g._, copper surfaces such as that of Figure 1) [9, 28, 29]. Although a structural-descriptor-based analysis, such as one using SOAP combined with dimensionality reduction and density-based clustering, captures the most prevalent and dominant conformational domains within the system, a pure LENS analysis, based on the reshuffling of the neighborhood over time, captures the dynamical features of the system (see Figure 1b). Here, we investigate a **Cu(211)** FCC copper slab using a preexisting MD trajectory composed of N=2400 atoms simulated _via_ a DeepMD-based potential [39] for 150 ns (see Cioni _et al._ [9] for details).

Figure 1: Flow of the analysis. (a) FCC 211 copper slab snapshots colored by atom coordination (excluding bulk) at t=0 ns and after 75 ns of an MD trajectory at T=600 K. (b) LENS and SOAP: given the local neighborhood (cyan sphere) of each atom (red atom) in the system, LENS tracks the _identity_ of the neighbor atoms within it (no information on their geometrical organization is retained), while SOAP captures their structural arrangement (without tracking their identity: it is a permutationally-invariant description). (c) SOAP-based analysis of the **Cu(211)** system. Left: HC-based dendrogram (from an HDBSCAN* classification, see Figure S1a) and dendrogram cutting, defining the merged macro-clusters. Middle: PCA of the SOAP dataset (first two principal components), colored based on the detected macro-clusters. Right: chord diagram (fluxes) and transition probability matrix for the dynamical transitions between the macro-clusters (SOAP environments). Bottom: surface MD snapshots where atoms are colored based on the classification: bulk atoms in green, sub-surface in orange and red, surface "valleys" in yellow, faces in cyan, and edges in blue. (d) LENS analysis of **Cu(211)** at 600 K. Left: LENS time-series and classification.[28] Right: chord diagram (fluxes) and transition probability matrix. Bottom: MD snapshots with atoms colored based on the LENS clusters: more/less dynamic atoms in brighter/lighter colors. (e) Scheme of the SOAP&LENS combined dataset: the SOAP power spectrum of each particle at every time step (\(\mathbf{p}_{i}^{t}\)) is combined with the LENS scalar value calculated at the subsequent time interval (\(\delta_{i}^{t+\Delta t}\)), obtaining a new dataset \(\mathbf{\chi}_{i}^{t}\).
To examine both the structural and dynamical properties of the **Cu(211)** system, we first adopted a _bottom-up_ protocol similar to that described in the study by Cioni _et al._ [9]. This strategy includes, as a first step, a representation of the system via the SOAP descriptor. One SOAP spectrum is extracted for each of the 900 atoms in the three top-most layers (although the SOAP spectra also consider the presence of the 1500 bottom-side atoms as neighbors, those atoms are not included in the analysis because we are interested in the dynamics of the surface [9]) in 482 snapshots taken every \(\Delta\)t=0.3 ns along 145 ns of MD simulation, for a total of 482\(\times\)900 spectra. A Principal Component Analysis (PCA) is then used to reduce the dimensionality of the SOAP spectra dataset, considering the first n PCA components in order to retain at least 99.5% of the variance (see Table S1 in the Supporting Information for details). An unsupervised clustering algorithm (HDBSCAN* [27] or Gaussian Mixture Models [25]) is finally adopted to rationalize the data and to identify the dominant Atomic Environments (AEs) on the surface (colored clusters in the PCA of Figure S1). From the atoms' transitions between the clusters over time, we compute a transition probability matrix. This reports the probability that an atom in a certain cluster at time \(t\) remains in the same environment at time t+\(\Delta\)t (_i.e._, after \(\Delta\)t, the temporal resolution of our analysis) or transitions into a different cluster (see Figure S1 for the micro-cluster transition matrix). From the transition probability matrix, we construct a Hierarchical Clustering (HC) based dendrogram, merging the clusters with high dynamic interconnection (Figure S1). The dendrogram is cut in order to retrieve only meaningful clusters, colored accordingly in the PCA plot of Figure 1c, where only the first two PCA components are reported. The results demonstrate how SOAP can successfully distinguish diverse structural environments within this system, including the bulk (green), subsurface (orange and red), surface valleys (yellow), and faces and edges (cyan and blue), identified in different colors in Figure 1c. The dynamic interconnections between the various clusters (AEs) on the surface are also represented by the chord diagrams in Figure 1c on the right: in these chord diagrams, the widths of the arcs stand for the populations of the various clusters, while the widths of the chords connecting them give visual information on how pronounced the atomic flow is within \(\Delta\)t, and thus on their dynamic interconnection. Moreover, we also obtained the transition probability matrix (probability of undergoing a transition within \(\Delta\)t=0.3 ns) between the HC-merged clusters (Figure 1c, right). Separately, we also performed a LENS analysis on the same 482 snapshots extracted from the MD trajectory. A LENS analysis of the system reveals intriguing surface events that are not captured (or highlighted) by the static SOAP-based analysis of structure described above. Specifically, a few Cu atoms are seen to detach from the crystalline structure of the **Cu(211)** surface and to diffuse on it very fast. Since these diffusing atoms are characterized by a high rate of reshuffling of their neighbors, they are clearly identified by LENS as a separate environment in the dataset (Figure 1d).
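A minimal sketch of this bottom-up step (dimensionality reduction followed by density-based clustering, with soft assignment of points labeled as noise) is given below. The input file name is illustrative, and we use scikit-learn's PCA and the hdbscan package here in place of the specific implementations cited in the Methods.

```python
import numpy as np
from sklearn.decomposition import PCA
import hdbscan  # pip install hdbscan

# SOAP dataset: one row per (atom, frame), e.g. 482 x 900 rows for Cu(211).
soap = np.load("soap_cu211.npy")  # illustrative file name

# Retain the number of components needed for >= 99.5% of the variance.
reduced = PCA(n_components=0.995, svd_solver="full").fit_transform(soap)

# Density-based clustering; prediction_data enables soft memberships.
clusterer = hdbscan.HDBSCAN(min_cluster_size=80, prediction_data=True)
labels = clusterer.fit_predict(reduced)

# Soft clustering: reassign noise points (-1) to their closest cluster.
membership = hdbscan.all_points_membership_vectors(clusterer)
labels[labels == -1] = membership[labels == -1].argmax(axis=1)
```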
At the same time, a comparison of Figure 1c and Figure 1d shows how, since these points are sparse and have negligible statistical weight in the dataset, they are overlooked by a pattern-recognition approach such as that of Figure 1c. In particular, in the SOAP analysis of Figure 1c, it is possible to note that the diffusing atoms (magenta in Figure 1d) are merged into the SOAP cluster identifying the edges of the **Cu(211)** surface. To address this limitation, here we developed a combined approach based on the basic assumption that a structural environment at a certain time might influence the dynamical events within the subsequent time interval. As shown schematically in Figure 1e, starting for example at time \(t_{1}\), a SOAP spectrum \(\mathbf{p}_{i}^{t_{1}}\) is computed for each particle \(i\) in the system. We also calculate its LENS value for the immediately subsequent time interval, \(\delta_{i}^{t_{2}-t_{1}}\). By including the LENS term as an extra component into each SOAP power spectrum, we thus obtain a new vector \(\boldsymbol{\chi}_{i}^{t_{1}}=(\mathbf{p}_{i}^{t_{1}},\delta_{i}^{t_{2}-t_{1}})\) containing information on the structural properties of the neighbor environment surrounding atom \(i\) at time \(t_{1}\) and on its evolution in the subsequent time interval \(t_{2}-t_{1}\). The SOAP spectrum and the LENS scalar component are suitably normalized to have the same weight in the dataset (see Methods for details). Iterating this procedure for the whole trajectory, we thus obtain a new dataset (the SOAP&LENS dataset) comprising \(N=N_{particle}\times N_{frames}\) vectors, each one of dimension \(n+1\), where \(n\) is the SOAP spectrum dimension (structural information) and \(1\) the LENS (dynamical) component. Such an updated dataset effectively contains information on the instantaneous environments surrounding each particle \(i\) and on how they are prone to change over time at the resolution \(\Delta\)t (0.3 ns) of our analysis. This method allows us to delineate a new concept for classification, as reported in Figure 2. On the left side, Figure 2a shows the PCA of the SOAP dataset projected onto the first two PCA components. On the right side, Figure 2a shows the same projection for the new SOAP&LENS combined dataset (see the Methods section for additional details). Notably, while the majority of the data has an almost identical distribution on the two PCAs, a distinct cloud of points appears evidently separated from the rest in the combined dataset (top-right: highlighted by the red circle). As shown in Figure 2b, unsupervised HDBSCAN* clustering combined with HC-based merging (in general, any other suitable clustering algorithm, _e.g._, GMM, DBSCAN, KMeans, would also work) reveals that such a separated domain on the SOAP&LENS PCA identifies a distinct, specific local environment. Note that the clustering parameters used for the analyses of Figures 1c and 2b are exactly the same (see Methods for details). This comparison shows how the classification of Figure 1c (SOAP only) is enriched via the detection of a new LENS environment, identified by the pink color (highlighted by the arrows in the transition matrix and chord diagram of Figure 2b).
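The construction of the combined vectors lends itself to a compact implementation. The sketch below, with illustrative array names, assumes the SOAP spectra and LENS values have already been computed; each SOAP spectrum is normalized to unit norm, as described in the Methods, and paired with the LENS value of the subsequent time interval.

```python
import numpy as np

def combine_soap_lens(soap: np.ndarray, lens: np.ndarray) -> np.ndarray:
    """Build chi_i^t = (p_i^t / ||p_i^t||, delta_i^{t+dt}).

    soap: (n_frames, n_particles, n_soap) SOAP power spectra
    lens: (n_frames - 1, n_particles) LENS values; lens[t] refers to the
          interval between frame t and frame t + 1
    Returns a ((n_frames - 1) * n_particles, n_soap + 1) dataset.
    """
    p = soap[:-1]                                      # spectra at time t
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)  # unit-norm SOAP
    delta = lens[..., None]                            # LENS already in [0, 1]
    chi = np.concatenate([p, delta], axis=-1)          # append dynamical component
    return chi.reshape(-1, chi.shape[-1])
```

The resulting array can then be fed to the same PCA and clustering pipeline used for the pure SOAP dataset.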
Figure 2: Combined SOAP&LENS analysis of a Cu(211) surface at 600 K. (a) Left: first two PCA components of the SOAP power spectra of the **Cu(211)** system. Right: first two PCA components of the SOAP&LENS combined descriptor: the new cloud of points emerging in the PCA projection of the \(\chi\) vector is highlighted by the red circle. (b) SOAP&LENS based analysis of the **Cu(211)** system. Left: HC-based dendrogram (from an HDBSCAN* classification, see Figure S2a) and dendrogram cutting, defining the merged macro-clusters (colored according to the clusters in Figure 1c, except for a new pink cluster). Middle: PCA of the SOAP&LENS dataset (first two principal components), colored based on the detected macro-clusters, and chord diagram (fluxes). Right: transition probability matrix for the dynamical transitions between the macro-clusters, highlighting the new cluster in pink. (c) Trajectory of an atom detaching from an edge, running on the surface, and re-entering into the edge. The trajectory is shown both on the PCA plot and on the snapshots, colored from blue to yellow according to the time evolution. (d) Three surface MD snapshots colored based on the classification: bulk atoms in green, sub-surface in orange and red, surface "valleys" in yellow, faces in cyan, edges in blue, and pink atoms detaching from the edges and running on the surface (an example of this process is reported in the zoom below).

As done for both the SOAP and LENS independent analyses, we can reconstruct the evolution of the detected environments by following the AE to which every atom belongs at each time step (see the chord diagram and transition probability matrix in Figure 2b, right). This analysis, based on combining SOAP and LENS in a unique dataset, offers distinct advantages over the purely SOAP-based approach. The decoupling of this additional pink LENS environment not only provides a more complete description of what happens on the **Cu(211)** surface at 600 K, but also improves the statistical precision in the classification of the SOAP environments. In fact, by differentiating the structural from the dynamical environments, the detection of the SOAP AEs in the SOAP&LENS dataset benefits from a reduced error. Notably, the PCA area identified by the red circle in Figure 2b, which corresponds in this analysis to a well-defined LENS AE, merges into the SOAP AEs in the PCA of Figure 1c, creating errors and increased uncertainty. In this sense, when combined, two distinct descriptors such as, _e.g._, SOAP and LENS complement and improve each other. Furthermore, such an approach also allows tracking the origin of local dynamical (LENS) fluctuations occurring on the surface, outlining microscopic structure-dynamics relationships. The off-diagonal entry in the matrix of Figure 2b representing the transition of atoms from the edge AE (in blue) to the pink (LENS) environment (\(\sim\)0.1% probability) reveals that those atoms diffusing at high speed on the metal surface come from the surface edges (see Movie S1). After their creation and diffusion, such diffusing pink atoms are then again reabsorbed into the surface edges (\(\sim\)6.4% probability). The large imbalance between the probabilities for the creation and annihilation of these LENS diffusing atoms (Figure 2b right, \(\sim\)0.1% vs. \(\sim\)6.4%) indicates that the emergence of such fast atoms is a rare event. Yet, it is clear that detecting such diffusing atoms is key to understanding the behavior of the system. Figure 2c provides an example of the structural variation of an atom undergoing such a transition, following its trajectory both on the PCA plot and along the MD.
The atom's trajectory is color-coded based on the MD simulation time, from dark blue to yellow, showing an atom that, after residing within the surface edges (dark blue lines, example snapshot 1), detaches and diffuses on the surface, becoming part of this pink LENS environment (green lines, example snapshot 2), and is then reabsorbed into the edges (yellow lines, example snapshot 3). Figure 2d shows a complete representation of the **Cu(211)** surface colored based on the corresponding SOAP&LENS environments. In contrast to the snapshots of Figure 1c,d, this comprehensive approach captures all the key SOAP as well as LENS environments, providing a more complete characterization of this system. By combining these two descriptors, it becomes evident that the motion of atoms diffusing on the surface (pink AE) originates from fluctuations within the SOAP environment that defines the edges of the surface (blue AE). We further tested our approach on different systems. We carried out a second test on a 309-atom icosahedral gold nanoparticle (**Au-NP**) model, simulated for 2 \(\mu\)s at T=200 K using the Gupta potential [10, 40] (see the Methods section for details). In these conditions, this **Au-NP** was demonstrated to have non-trivial dynamics [10, 28].

Figure 3: Combined SOAP&LENS analysis of a cold Au-NP surface at 200 K. (a) SOAP-based analysis of the **Au-NP** system. Left: HC-based dendrogram (from an HDBSCAN* classification, see Figure S1b) and dendrogram cutting, defining the merged macro-clusters. Middle: PCA of the SOAP dataset (first two principal components), colored based on the detected macro-clusters, and chord diagram (fluxes). Right: transition probability matrix for the dynamical transitions between the macro-clusters (SOAP environments). Bottom: two nanoparticle MD snapshots where atoms are colored based on the classification: vertices in blue, surface atoms in lime, sub-surface in light-green, yellow and orange, and bulk atoms in red. (b) LENS analysis of the **Au-NP**. Left: LENS time-series and classification. Middle: chord diagram (fluxes) and transition probability matrix. Right: MD snapshots with atoms colored based on the LENS clusters: more/less dynamic atoms in brighter/lighter colors. (c) SOAP&LENS based analysis of the **Au-NP**. Left: HC-based dendrogram (from an HDBSCAN* classification, see Figure S2b) and dendrogram cutting, defining the merged macro-clusters, colored according to the cluster classification in Figure 3a, except for a new pink cluster. Middle: PCA of the SOAP dataset (first two principal components), colored based on the detected macro-clusters, and chord diagram (fluxes). Right: transition probability matrix for the dynamical transitions between the macro-clusters, highlighting the new cluster in pink. Bottom: three nanoparticle MD snapshots colored according to the classification: vertices in blue, surface atoms in lime, sub-surface in light-green, yellow and orange, bulk atoms in red and the "liquid-like" region in pink. (d) Trajectory of an atom detaching from a vertex, entering the surface of the nanoparticle, and giving rise to the rosette environment. The trajectory is shown both on the PCA plot and on the snapshots, colored from blue to yellow according to the time evolution.

In Figure 3a, a SOAP-based analysis of the MD trajectory reveals the dominant structural environments within the NP: vertices in blue, surface in lime, sub-surface AEs in orange, bulk atoms in red, and also surface defects in yellow and rosettes in light-green.
The dynamics of these SOAP AEs is quantified by the exchange chord diagram and by the transition probability matrix of Figure 3a (right). At the same time, an analysis of the LENS time series unveils a crucial insight, overlooked by a pure SOAP analysis (Figure 3b). After \(\sim\)180 ns of MD simulation, the nanoparticle undergoes a sharp local structural transition involving one vertex, which penetrates the surface, generating a distinctive structure known as a rosette (Figure 3a,d: in light-green). Notably, the creation of a rosette (six symmetrical neighbors around an intruded center) from a vertex (five symmetrical neighbors) is an event that is known to happen in such icosahedral NPs and that can be observed experimentally [10, 41]. The LENS analysis shows the emergence of strong signals when the vertex intrudes and triggers the formation of the rosette (Figure 3b, left). In particular, the magenta colors in Figure 3b reveal, after such a local transition, the presence of a highly dynamic "liquid-like" region surrounding the rosette, coexisting with a "crystalline-like" domain in the remaining portion of the **Au-NP**. It is worth noting how a SOAP analysis alone overlooks such a dynamic surface non-uniformity: for the SOAP descriptor, rich in structural information, this local dynamical change does not constitute a relevant effect. In the SOAP-based analysis, such a "liquid-like" region is classified together with the crystalline region, as a global surface cluster (lime color), even if the dynamic behavior of the two regions is different. Therefore, the SOAP description fails to capture part of the system physics: it incorporates two distinct regions with entirely different dynamical behaviors into one single cluster characterized by an averaged structural representation. In Figure 3c, we show the results of the SOAP&LENS based analysis, where we combined the SOAP spectrum of each atom at every timestep with the LENS signal for the same atom at the subsequent \(\Delta\)t. In this case, the combined analysis reveals that a significant portion of the PCA-reduced data (in particular, the central region referring to the surface of the NP, in lime in Figure 3a) corresponds to a highly dynamic LENS environment (Figure 3c: pink). This allows us to disentangle the "liquid-like" region from the well-defined crystal-like structural domains on the NP surface. Furthermore, the results of Figure 3c demonstrate again how, also in this case, the addition of LENS improves the accuracy in the detection of the SOAP environments. Comparing Figure 3a _vs._ 3c, it is clear how the analysis now robustly distinguishes the edges (dark green), faces (lime) and vertices (blue), as well as the rosettes (light-green) and defects (yellow) on the icosahedral NP surface. Similar to the case of **Cu(211)**, a strong correlation arises between the "liquid-like" dynamical domain and specific structural environments: the LENS (pink) cluster in the transition matrix is found connected to the faces (lime, \(\sim\)3.9%), edges (green, \(\sim\)3.4%), vertices (blue, \(\sim\)2.7%) and especially the rosettes (light-green, \(\sim\)4.2%) of the NP. This is interesting, considering that the pink dynamical region (a local "melting" of the NP surface) originates from the creation of a first rosette (a defect in the icosahedron). In Figure 3d, we show an example of a structural variation event that gives rise to the formation of a rosette structure.
This transition is depicted both on the PCA plot and on the snapshots, where the trajectory of the vertex atom (blue, at 50 ns) is color-coded according to the time evolution, ranging from dark blue to yellow (2 \(\mu\)s of MD). This demonstrates how the vertex atom entering the surface leads to the emergence of a "liquid-like" region surrounding the rosette (pink, at 2 \(\mu\)s of MD). As a last case study, we present the effectiveness of our SOAP&LENS analysis in capturing the distinct phases within a system where ice and liquid water coexist at the solid/liquid transition temperature. We analyzed 50 ns of an atomistic simulation of water modeled with the **TIP4P/Ice** force field, containing 2048 molecules at equilibrium between the two phases (\(\sim\)50% ice and \(\sim\)50% liquid water) at the transition temperature.[28, 29] A pure SOAP-based (structural) analysis, reported in Figure 4a, can distinguish the two main phases (ice in white and liquid water in orange and purple). The two clusters in orange and purple in Figure 4a correspond to tiny variations of the same environment (liquid water). This is clearly shown in the probability matrix and, in particular, in the HC-based dendrogram, where the purple and orange AEs are very close to each other, and both are in comparison very far from the white one (see Figure 4a, right). However, we have recently demonstrated that a pure LENS (dynamical) analysis can easily detect both the ice and water environments, as well as the interface between them.[28] Figure 4b shows the LENS time series, which clearly highlights two distinct statistically relevant environments, with different dynamics, separated by an interface environment where the ice/liquid water molecular transitions occur. The flux chord diagram and the transition probability matrix of Figure 4b (right) reveal how the ice/liquid phase transition of the molecules takes place through the interface. Figure 4c displays the analysis of the combined SOAP&LENS dataset, which provides a PCA that is significantly distorted compared to the SOAP one of Figure 4a. Two main density peaks are evident (in white and red), corresponding respectively to ice and liquid water. GMM clustering now clearly detects a distinct area on the PCA corresponding to the ice-water interface (in cyan). In Figure 4d, we highlight one explanatory trajectory (on the PCA plot and on the snapshot) of a water molecule undergoing a phase transition from liquid water to ice, crossing the interface. The flow chart in Figure 4e provides a qualitative visualization of the transitions between the various environments at specific time intervals along the trajectory (_e.g._, at 5 ns, 10 ns, 20 ns and 30 ns). Also in this case, the addition of a LENS component to the SOAP vectors offers a clear advantage over a purely structural (SOAP-only) analysis. In this specific case, it is interesting to note how LENS retains a large part of the information contained in the system trajectory compared to SOAP. This is evident, for example, if we compare the cumulative variance contained in the dataset as a function of the number of principal components of the PCA. In Figure S4, we show that 8 components are needed to reach 99% of the cumulative variance of the purely structural SOAP dataset, while, when LENS is embedded into the dataset, only three components largely exceed 99% of the variance. This demonstrates how, in this system, the LENS descriptor might retain more comprehensive information regarding the key features that characterize the system, compared to the SOAP descriptor.
Figure 4: Combined SOAP&LENS analysis of the ice-liquid water equilibrium at the transition temperature. (a) SOAP-based analysis of the **TIP4P/Ice** system. Left: plot of the first two PCA components, colored according to the GMM clustering (see the Methods section for details). Right: chord diagram (fluxes) and transition probability matrix of the clusters, with the HC-based dendrogram showing the relations among them. Bottom: a snapshot along the trajectory colored based on the cluster classification, ice molecules in white, liquid water in orange and purple. (b) LENS-based analysis of **TIP4P/Ice**. Left: LENS time-series and classification. Middle: chord diagram (fluxes) and transition probability matrix. Bottom: an MD snapshot with atoms colored based on the LENS clusters, more/less dynamic atoms in brighter/lighter colors. (c) SOAP&LENS based analysis of **TIP4P/Ice**. Left: plot of the first two PCA components, colored according to the GMM clustering (see the Methods section for details). Right: chord diagram (fluxes) and transition probability matrix of the clusters, with the HC-based dendrogram showing the relations among them. Bottom: an MD snapshot colored according to the cluster classification: ice molecules in white, liquid water in red and the interface in cyan. Right: zoom on the interface region. (d) Trajectory of a molecule that undergoes a phase transition, from liquid water to ice, crossing the interface. The trajectory is reported both on the PCA plot and on the zoomed snapshots, colored from blue to yellow according to the time evolution. (e) Flow chart of the transitions between the three phases, colored accordingly, at 5 ns, 10 ns, 20 ns and 30 ns.

In conclusion, this study points out the intrinsic limitations of relying solely on structural descriptors to comprehend the physics of dynamically evolving systems. By integrating a microscopic dynamical descriptor, like LENS, with a structural counterpart (_e.g._, SOAP), we obtain numerous advantages. First, this integration improves the accuracy of both the structural and dynamical classifications, "cleaning up" the noise and reducing the degeneracy issues intrinsic to both individual analyses. Second, it paves the way for understanding how given structural microscopic environments within the system can generate specific dynamical behaviors (fluctuations). This opens new routes to learn microscopic-scale structure-dynamics relationships (_e.g._, those of Figures 2 and 3) that are key to understanding the behaviors and properties of these, and in general of a variety of, complex systems. These results are also reminiscent of general concepts in physics. For instance, when studying the behavior of a system, the positional information of the objects alone is insufficient to predict the dynamic behavior of the system at non-zero temperature (_e.g._, information on velocities is also needed). Similarly, these results demonstrate how coupling a purely structural parameter like SOAP, which provides information only on the relative structural arrangements, with a descriptor that is rich in local dynamic information offers fascinating insights. We expect that such an approach, given its abstract nature, will be highly valuable in characterizing the behavior of complex systems across various domains, potentially also beyond the atomistic/molecular scale.
## Methods

### MD simulations and pre-processing

The atomistic model of the **Cu(211)** surface (see Figures 1, 2) is composed of N\({}_{211}\)=2400 atoms. The MD simulation is conducted at T=600 K _via_ the LAMMPS software [42], using a Neural Network potential built with the DeepMD platform [39], as described in detail in reference [9]. The sampled trajectories are 150 ns long. A total of 502 frames are extracted every \(\Delta\)t=0.3 ns along the MD trajectory and used for the analysis. The atomistic model for the icosahedral **Au-NP** is composed of N\({}_{Au-NP}\)=309 gold atoms (Figure 3). The **Au-NP** model is parametrized according to the Gupta potential [40], and is simulated for 2 \(\mu\)s of MD at T=200 K using the LAMMPS software [42], as described in detail in reference [10]. 2000 frames are extracted every \(\Delta\)t=1 ns of the MD trajectory and then used for the analysis. The atomistic ice/water interface model of Figure 4 is composed of N\({}_{TIP4P}\)=2048 water molecules. The MD simulation is conducted at T=268 K. The **TIP4P/Ice** water model [43] is used to represent both the solid phase of ice and the phase of liquid water,[39] as described in detail in reference [28]. The sampled trajectory is t=50 ns long, sampled and analyzed every 0.1 ns. All MD trajectories are first pre-processed in order to obtain an HDF5 database containing the data needed to extract the SOAP spectra and LENS values by using the software SOAPify, accessible at: [https://github.com/GMPavanLab/SOAPify](https://github.com/GMPavanLab/SOAPify). For the **Cu(211)** surface, we computed the SOAP spectra on both the surface and the bulk (N\({}_{211}\)=2400 atoms in total), removing most of the deep bulk atoms and thus obtaining the 900-atom system analyzed herein. We analyzed all the atoms of the **Au-NP** system. In the **TIP4P/Ice** water system of Figure 4, we computed the SOAP spectra for all the O atoms, considering also the H atoms in the environment, while we performed the LENS analysis by considering only the O atoms. In all cases, the analysis is then conducted by building both the local SOAP environments and the LENS values of each unit within a sphere of radius r\({}_{cut}\) (see Figure 1b), equal to 6 Å for **Cu(211)**, 4.48 Å for the **Au-NP**, and 6 Å for the **TIP4P/Ice** system.

### SOAP analysis

To describe the structural environment surrounding each particle within the simulations, we use the SOAP descriptor. We compute the SOAP spectrum \(\mathbf{p}_{i}^{t}\) representing the local structural environment of each particle \(i\) at every timestep \(t\) within a cut-off radius r\({}_{cut}\) (6 Å for **Cu(211)**, 4.48 Å for the **Au-NP**, and 6 Å for the **TIP4P/Ice** system) through the software SOAPify, accessible at: [https://github.com/GMPavanLab/SOAPify](https://github.com/GMPavanLab/SOAPify). The SOAP vectors are generated using _dscribe_ [44], with both the spherical-harmonics parameter l\({}_{max}\) and the number of radial basis functions n\({}_{max}\) set to 8. This results in a 576-component vector representing the environment of one particle at a certain timestep for the single-species systems (**Cu(211)** and **Au-NP**), and in a 1728-component vector for the ice/water interface. Then, we applied the Principal Component Analysis algorithm to each dataset (as implemented in the SciPy python package [45]), reducing the dimensionality of the representation to the first n components, in order to reach a certain cumulative variance within each system, as reported in Table S1.
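As an illustration of this step, a minimal sketch of the SOAP computation with dscribe is shown below. The slab construction is only indicative, and the keyword spelling differs between dscribe releases (r_cut/n_max/l_max in recent versions, rcut/nmax/lmax in older ones); the authors' own pipeline goes through their SOAPify package.

```python
from ase.build import fcc211
from dscribe.descriptors import SOAP

# Illustrative Cu(211) slab; the model in the paper contains 2400 atoms.
slab = fcc211("Cu", size=(6, 10, 8), vacuum=10.0)

# SOAP setup following the Methods: r_cut = 6 Angstrom, n_max = l_max = 8.
soap = SOAP(species=["Cu"], r_cut=6.0, n_max=8, l_max=8, periodic=True)

# One power spectrum per atom; for a trajectory, repeat for every frame.
p = soap.create(slab)
print(p.shape)  # (n_atoms, n_soap_components)
```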
To analyze the reduced data of the **Cu(211)** and **Au-NP** systems, we applied the HDBSCAN* [27] clustering algorithm, set up with min_cluster_size=80 for the former and min_cluster_size=150 for the latter, obtaining 7 and 9 environments, respectively. We used soft clustering to assign the points classified as noise to their closest cluster. From the cluster transition probability matrix (see Figure S1a,b), we found the relations among the environments _via_ the Hierarchical Clustering algorithm. Then, merging the clusters closer than 1 in terms of the chosen metric (correlation) and linkage (average), we obtained 6 and 7 macro-clusters for the **Cu(211)** and **Au-NP** systems, respectively. Regarding the **TIP4P/Ice** system, we followed a slightly different procedure: indeed, as is clear from the PCA of the SOAP spectra reported in Figure 4a, there are no clear density-based patterns, and HDBSCAN* failed to assign meaningful clusters, as shown in Figure S3. Thus, instead of the HDBSCAN* clustering algorithm, we employed a Gaussian Mixture Model (GMM) [25], setting the number of clusters to three, without merging clusters _a posteriori_ but still applying HC, being interested in the cluster relations. Then, for all the systems, we computed the cluster fluxes, _i.e._, the number of particles going from one cluster to another, following the cluster assignment along the trajectory. The fluxes are visualized as chord diagrams in Figures 1c, 3a, and 4a. The width of the arcs represents the total number of transitions experienced by each cluster during the simulation, including both self-transitions and those to other clusters. The chords linking the clusters depict their interconnections, with the extension of the chord's base indicating the number of particles exchanged between connected clusters. The color of the chords indicates the dominant direction of particle transfer between clusters. Then, normalizing the flux matrices on each row, we obtained the transition probabilities reported in Figures 1c, 3a and 4a.

### LENS analysis

We compute the \(\delta_{i}(t)\) signals for all the systems following a procedure similar to that reported in Crippa _et al._ [28], reducing the noise by using a Savitzky-Golay [46] filter (as implemented in the SciPy python package [45]). Each \(\delta_{i}\)(t) signal is smoothed using a common polynomial order parameter of \(p=2\) and a time window of 20 frames for the crystalline **Cu(211)** surface, and of 100 frames for both the water/ice interface and the **Au-NP** system. After the noise reduction, the clustering of the \(\delta_{i}\) data is performed: in the case of **Cu(211)**, the clustering thresholds are set as previously [28], while for both the **Au-NP** and **TIP4P/Ice** systems they are set by means of the KMeans algorithm [24] implemented in the SciPy python package [45]. The KMeans algorithm requires the number of clusters as an input: for the gold nanoparticle and the ice/water interface, we set four and three clusters respectively, according to the number of macro-clusters previously found [28]. Knowing the cluster assignment, we computed the cluster fluxes, _i.e._, the number of particles going from one cluster to another, for each system. The fluxes are reported in the chord diagrams of Figures 1d, 3b and 4b, representing the data as described above. Then, normalizing the flux matrices on each row, we obtained the transition probabilities reported in Figures 1d, 3b, and 4b.
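A minimal sketch of this LENS post-processing is given below: Savitzky-Golay smoothing of each signal, KMeans clustering of the smoothed values, and a row-normalized transition probability matrix built from the resulting labels. Array and file names are illustrative; we use scikit-learn's KMeans here, and an odd smoothing window of 21 frames, since SciPy's filter expects an odd window length (the paper states a 20-frame window for Cu(211)).

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cluster import KMeans

delta = np.load("lens_cu211.npy")  # (n_frames, n_particles), illustrative

# Savitzky-Golay smoothing along the time axis, polynomial order p = 2.
smooth = savgol_filter(delta, window_length=21, polyorder=2, axis=0)

# Cluster the pooled LENS values (e.g., 4 clusters for the Au-NP case).
labels = KMeans(n_clusters=4, n_init=10).fit_predict(smooth.reshape(-1, 1))
labels = labels.reshape(smooth.shape)  # back to (n_frames, n_particles)

# Transition probability matrix: normalized counts of label changes
# between consecutive frames.
n = labels.max() + 1
counts = np.zeros((n, n))
for a, b in zip(labels[:-1].ravel(), labels[1:].ravel()):
    counts[a, b] += 1
rows = counts.sum(axis=1, keepdims=True)
transition_prob = counts / np.where(rows == 0, 1.0, rows)
```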
### SOAP&LENS combined analysis

The combined SOAP&LENS descriptor is obtained by following the procedure illustrated in Figure 1e and explained in detail in the Results section. The SOAP power spectrum of each particle \(i\) at every time step \(t\) (\(\mathbf{p}_{i}^{t}\)) is combined with the subsequent LENS scalar value (\(\delta_{i}^{t+\Delta t}\)), obtaining a new vector \(\boldsymbol{\chi}_{i}^{t}=(\mathbf{p}_{i}^{t},\delta_{i}^{t+\Delta t})\). Each SOAP power spectrum is normalized by its norm, while the LENS scalar is intrinsically normalized between zero (no neighborhood changes) and one (the whole neighborhood changes). In this way, while retaining different information and having two distinct mathematical forms (a high-dimensional vector and a scalar), the two components have the same "weight" in the dataset. This procedure, when iterated throughout the entire trajectory, results in a new dataset including \(N=N_{particle}\times N_{frames}\) vectors. Each vector contains \(n+1\) components: \(n\) components representing the SOAP power spectrum and 1 component representing the LENS value. Starting from this \(\mathbf{\chi}_{i}^{t}\) representation of the particle local environments, we follow the same _bottom-up_ procedure, described above, applied to the pure SOAP dataset. To highlight the real effect of the LENS component and avoid biased results, we performed the _bottom-up_ analysis using the same parameters. Indeed, upon applying PCA to the SOAP&LENS dataset of each system, we considered the first n PCA components in order to match the PCA variance retained in the SOAP analysis, as reported in Table S1. We applied the clustering algorithms (both HDBSCAN* and GMM) to this new reduced dataset using the same parameters (min_cluster_size=80 for **Cu(211)**, min_cluster_size=150 for the **Au-NP**, and n_components=3 for **TIP4P/Ice**), and then the HC dendrogram cutting under the same conditions, _i.e._, merging clusters closer than 1 in terms of the chosen metric (correlation) and linkage (average).

## Data availability

Details on the molecular models and on the MD simulations, and additional simulation data, are provided in the Supporting Information. Complete data for the simulations and analyses performed in this work are available at: [https://github.com/GMPavanLab/StrDynRel](https://github.com/GMPavanLab/StrDynRel) (this temporary folder will be replaced with a definitive Zenodo archive upon acceptance of the final version of this paper).

## Acknowledgements

G.M.P. acknowledges the support received by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement no. 818776 - DYNAPOL) and by the Swiss National Science Foundation (SNSF Grant IZLIZ2_183336).

## Competing interests statement

The authors declare no competing interests.

## References

* [1] Andrews, J., Gkountouna, O. & Blaisten-Barojas, E. Forecasting molecular dynamics energetics of polymers in solution from supervised machine learning. _Chem. Sci._**13**, 7021 (2022). * [2] Gasparotto, P. & Ceriotti, M. Recognizing molecular patterns by machine learning: An agnostic structural definition of the hydrogen bond. _J. Chem. Phys._**141**, 174110 (2014). * [3] Davies, M. B., Fitzner, M. & Michaelides, A. Accurate prediction of ice nucleation from room temperature water. _Proc. Natl. Acad. Sci. U.S.A._**119**, e2205347119 (2022). * [4] Noe, F., Olsson, S., Kohler, J. & Wu, H. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning.
_Science_**365**, eaaw1147 (2019). * [5] Gardin, A., Perego, C., Doni, G. & Pavan, G. M. Classifying soft self-assembled materials via unsupervised machine learning of defects. _Commun. Chem._**5**, 82 (2022). * [6] Cardellini, A. _et al._ Unsupervised data-driven reconstruction of molecular motifs in simple to complex dynamic micelles. _J. Phys. Chem. B_**127**, 2595-2608 (2023). * [7] Capelli, R., Gardin, A., Empereur-Mot, C., Doni, G. & Pavan, G. M. A data-driven dimensionality reduction approach to compare and classify lipid force fields. _J. Phys. Chem. B_**125**, 7785-7796 (2021). * [8] Lionello, C., Perego, C., Gardin, A., Klajn, R. & Pavan, G. M. Supramolecular semiconductivity through emerging ionic gates in ion-nanoparticle superlattices. _ACS Nano_**17**, 275-287 (2023). * [9] Cioni, M. _et al._ Innate dynamics and identity crisis of a metal surface unveiled by machine learning of atomic environments. _J. Chem. Phys._**158**, 124701 (2023). * [10] Rapetti, D. _et al._ Machine learning of atomic dynamics and statistical surface identities in gold nanoparticles. _Commun. Chem._**6**, 143 (2023). * [11] Cheng, B. _et al._ Mapping Materials and Molecules. _Acc. Chem. Res._**53**, 1981-1991 (2020). * [12] Errington, J. R. & Debenedetti, P. G. Relationship between structural order and the anomalies of liquid water. _Nature_**409**, 318-321 (2001). * [13] Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. _Phys. Rev. Lett._**98**, 146401 (2007). * [14] Rossi, K., Pavan, L., Soon, Y. & Baletto, F. The effect of size and composition on structural transitions in monometallic nanoparticles. _Eur. Phys. J. B_**91**, 33 (2018). * [15] Bartok, A. P., Kondor, R. & Csanyi, G. On representing chemical environments. _Phys. Rev. B_**87**, 184115 (2013). * [16] Pietrucci, F. & Martonak, R. Systematic comparison of crystalline and amorphous phases: Charting the landscape of water structures and transformations. _J. Chem. Phys._**142**, 104704 (2015). * [17] Behler, J. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. _J. Chem. Phys._**134**, 074106 (2011). * [18] Drautz, R. Atomic cluster expansion for accurate and transferable interatomic potentials. _Phys. Rev. B_**99**, 014104 (2019). * [19] Faber, F., Lindmaa, A., von Lilienfeld, O. A. & Armiento, R. Crystal structure representations for machine learning models of formation energies. _Int. J. Quantum Chem._**115**, 1094-1101 (2015). * [20] Gasparotto, P., Bochicchio, D., Ceriotti, M. & Pavan, G. M. Identifying and tracking defects in dynamic supramolecular polymers. _J. Phys. Chem. B_**124**, 589-599 (2020). * [21] Musil, F. _et al._ Physics-Inspired Structural Representations for Molecules and Materials. _Chem. Rev._**121**, 9759-9815 (2021). * [22] Scholkopf, B., Smola, A. & Muller, K.-R. Nonlinear Component Analysis as a Kernel Eigenvalue Problem. _Neural Comput._**10**, 1299-1319 (1998). * [23] van der Maaten, L. & Hinton, G. Visualizing Data using t-SNE. _J. Mach. Learn. Res._**9**, 2579-2605 (2008). * [24] Lloyd, S. Least squares quantization in PCM. _IEEE Trans. Inf. Theory_**28**, 129-137 (1982). * [25] Reynolds, D. _Gaussian Mixture Models_, 659-663 (Springer US, Boston, MA, 2009). * [26] Schubert, E., Sander, J., Ester, M., Kriegel, H. P. & Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. _ACM Trans. Database Syst._**42**(3), 1-21 (2017). * [27] McInnes, L., Healy, J. & Astels, S.
hdbscan: Hierarchical density based clustering. _J. Open Source Softw._**2**, 205 (2017). * [28] Crippa, M., Cardellini, A., Caruso, C. & Pavan, G. M. Detecting dynamic domains and local fluctuations in complex molecular systems via timelapse neighbors shuffling. _Proc. Natl. Acad. Sci. U.S.A._**120**, e2300565120 (2023). * [29] Caruso, C., Cardellini, A., Crippa, M., Rapetti, D. & Pavan, G. M. Time-SOAP: Tracking high-dimensional fluctuations in complex molecular systems via time variations of SOAP spectra. _J. Chem. Phys._**158**, 214302 (2023). * [30] Spencer, M. S. Stable and metastable metal surfaces in heterogeneous catalysis. _Nature_**323**, 685-687 (1986). * [31] Jayanthi, C. S., Tosatti, E. & Pietronero, L. Surface melting of copper. _Phys. Rev. B_**31**, 3456-3459 (1985). * [32] Yamakov, V., Wolf, D., Phillpot, S., Mukherjee, A. & Gleiter, H. Deformation-mechanism map for nanocrystalline metals by molecular-dynamics simulation. _Nat. Mater._**3**, 43-47 (2004). * [33] Zepeda-Ruiz, L. A., Stukowski, A., Oppelstrup, T. & Bulatov, V. V. Probing the limits of metal plasticity with molecular dynamics simulations. _Nature_**550**, 492-495 (2017). * [34] Wang, X. _et al._ Atomistic processes of surface-diffusion-induced abnormal softening in nanoscale metallic crystals. _Nat. Commun._**12**, 5237 (2021). * [35] Koch, R., Borbonus, M., Haase, O. & Rieder, K. H. Reconstruction behaviour of fcc(110) transition metal surfaces and their vicinals. _Appl. Phys. A_**55**, 417-429 (1992). * [36] Wang, X.-Q. Phases of the Au(100) surface reconstruction. _Phys. Rev. Lett._**67**, 3547-3550 (1991). * [37] Antczak, G. & Ehrlich, G. _Surface Diffusion: Metals, Metal Atoms, and Clusters_ (Cambridge University Press, 2010). * [38] Gazzarrini, E., Rossi, K. & Baletto, F. Born to be different: the formation process of Cu nanoparticles tunes the size trend of the activity for CO2 to CH4 conversion. _Nanoscale_**13**, 5857-5867 (2021). * [39] Wang, H., Zhang, L., Han, J. & E, W. DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics. _Comput. Phys. Commun._**228**, 178-184 (2018). * [40] Gupta, R. P. Lattice relaxation at a metal surface. _Phys. Rev. B_**23**, 6265-6270 (1981). * [41] Apra, E., Baletto, F., Ferrando, R. & Fortunelli, A. Amorphization mechanism of icosahedral metal nanoclusters. _Phys. Rev. Lett._**93**, 065502 (2004). * [42] Thompson, A. P. _et al._ LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales. _Comput. Phys. Commun._**271**, 108171 (2022). * [43] Abascal, J. L. F., Sanz, E., Garcia Fernandez, R. & Vega, C. A potential model for the study of ices and amorphous water: Tip4p/ice. _J. Chem. Phys._**122**, 234511 (2005). * [44] Himanen, L. _et al._ DScribe: Library of descriptors for machine learning in materials science. _Comput. Phys. Commun._**247**, 106949 (2020). * [45] Virtanen, P. _et al._ SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. _Nat. Methods_**17**, 261-272 (2020). * [46] Savitzky, A. & Golay, M. J. E. Smoothing and differentiation of data by simplified least squares procedures. _Anal. Chem._**36**, 1627-1639 (1964).
Supporting Information for: Machine learning of microscopic structure-dynamics relationships in complex molecular systems

Martina Crippa (Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy), Annalisa Cardellini (Department of Innovative Technologies, University of Applied Sciences and Arts of Southern Switzerland, Polo Universitario Lugano, Campus Est, Via la Santa 1, 6962 Lugano-Viganello, Switzerland), Matteo Cioni (Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy), Gabor Csanyi (Engineering Laboratory, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, United Kingdom), Giovanni M. Pavan
Figure S1: SOAP analysis. Left: first two PCs of the PCA on the SOAP dataset and a representative snapshot of **Cu(211)** (a) and **Au-NP** (b), colored according to the HDBSCAN* clustering (min_cluster_size=80 and min_cluster_size=150, respectively). Right: transition probability matrix obtained from the particle transitions within the trajectory: each off-diagonal cell gives the probability that a particle in a given cluster (row) transitions to another cluster (column). HC-based dendrogram of the transition matrix rows: the rows are clustered based on their distance (correlation metric) and grouped by the average of the new clusters (average linkage). The dendrogram is cut at a correlation of 1.
2309.06913
Joint Distributions in Probabilistic Semantics
Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose.
Dexter Kozen, Alexandra Silva, Erik Voogd
2023-09-13T12:24:09Z
http://arxiv.org/abs/2309.06913v2
# Joint Distributions in Probabilistic Semantics

###### Abstract

Various categories have been proposed as targets for the denotational semantics of higher-order probabilistic programming languages. One such proposal involves joint probability distributions (couplings) used in Bayesian statistical models with conditioning. In previous treatments, composition of joint measures was performed by disintegrating to obtain Markov kernels, composing the kernels, then reintegrating to obtain a joint measure. Disintegrations exist only under certain restrictions on the underlying spaces. In this paper we propose a category whose morphisms are joint finite measures in which composition is defined without reference to disintegration, allowing its application to a broader class of spaces. The category is symmetric monoidal with a pleasing symmetry in which the dagger structure is a simple transpose.

Keywords: probabilistic programming, disintegration, Bayesian inference, conditioning

Dexter Kozen, Alexandra Silva, Erik Voogd

Erik Voogd: Department of Informatics, University of Oslo, Gaustadalleen 23B, 0373 Oslo, Norway

[MISSING_PAGE_POST]

can be taken as first-class citizens to formalize semantics of probabilistic languages. In particular, we will show that composition in \(\mathbf{JDist}\) can be defined without having to use disintegration, thereby making this category more generally applicable than previous approaches. One recent proposal by Dahlqvist et al. [6] was the category \(\mathbf{Krn}\), whose objects are probability spaces \((X,\mathcal{A},\mu)\) with \(\mathcal{A}\) a standard Borel space. The morphisms \((X,\mathcal{A},\mu)\to(Y,\mathcal{B},\nu)\) are equivalence classes modulo \(\mu\)-nullsets of Markov kernels \(P\colon X\to Y\) such that \[\nu(B)=\int_{s\in X}P(s,B)\,d\mu(s).\] Dahlqvist et al. [6] use disintegration to show that \(\mathbf{Krn}\) is a dagger category with involutive functor \(\dagger:\mathbf{Krn}\to\mathbf{Krn}^{\mathrm{op}}\). Thus \(P^{\dagger}:Y\to X\) is a kernel in the opposite direction that models Bayesian inference of an input distribution conditioned on output samples. This works even for continuous distributions in which outcomes can occur with zero probability. We will show that the category \(\mathbf{Krn}\) can be embedded faithfully in the category \(\mathbf{JDist}\), and that this is a symmetric monoidal category with joint distributions as tensors and transpose as symmetry: \(\theta^{\dagger}(A\times B)=\theta(B\times A)\). Another category was proposed by Abramsky et al. [2], who studied a subcategory of \(\mathbf{JDist}\) on Polish spaces called \(\mathbf{PRel}\), using disintegration to define composition: to compose two joint distributions, one disintegrates to obtain Markov kernels, composes the kernels as in \(\mathbf{Krn}\), then reintegrates to obtain a joint measure. This construction can be performed only under various restrictions admitting disintegration [2, 7, 12, 3, 4]. The most common assumption is that the spaces are standard Borel. The most general result of this type seems to be that of Culbertson and Sturtz [5], based on work of Faden [8], who assume countably generated spaces but that the measures are perfect.
In this paper, we show that composition in \(\mathbf{JDist}\) can be defined independently, without reference to disintegration and without any restriction on the underlying spaces. This allows probabilistic programs to be interpreted in a more general class of spaces, with a pleasing symmetry in which the notion of equivalence up to a nullset is built in. We illustrate our approach to defining the composition in \(\mathbf{JDist}\) through an example in Section 2. We then define the composition in \(\mathbf{JDist}\) in three steps. First, we present the definition in the discrete setting (Section 3), as this is a simpler case which does not require extra measure infrastructure. Second, we define and prove some results akin to Radon-Nikodym approximants (Section 5). Third, these results let us define composition in \(\mathbf{JDist}\) and a functor \(\mathbf{J}\colon\mathbf{Krn}\to\mathbf{JDist}\) that provides a faithful embedding of categories (Section 6).

## 2 Illustrative Example

To explain how composition in \(\mathbf{JDist}\) circumvents disintegration, we consider a modified version of the example from Dahlqvist et al. [6], given by the following pseudocode:

    Line 1:  x := sample(normal(0, 1))
    Line 2:  y := sample(normal(x, 1))
    Line 3:  z := sample(normal(y, 1))
    Line 4:  observe(z > 0.5)
    Line 5:  return Pr(x > 1)

The goal is to measure the probability that \(x>1\) for \(x\sim\mathcal{N}(0,1)\) given an observation \(z>0.5\) for \(z\sim\mathcal{N}(y,1)\), where \(y\sim\mathcal{N}(x,1)\). That is, \(x\) is a standard Gaussian sample (Line 1), and \(y\) and \(z\) are normally distributed with means \(x\) and \(y\) respectively, both with variance one (Lines 2 and 3). Intuitively, Line 4 conditions on the observation that \(z\) is greater than \(0.5\), and Line 5 returns the probability of \(x\) being greater than one given this observation. To illustrate the nature of the category \(\mathbf{JDist}\), we somewhat informally explain the semantics of this program: first in terms of Markov kernels (\(\mathbf{Krn}\)), and after that in terms of joint distributions (\(\mathbf{JDist}\)).

* In \(\mathbf{Krn}\), the state of the program after Line 1 can be thought of as a measure space \((X,\mathcal{A},\mu)\) with \((X,\mathcal{A})=(\mathbb{R},\mathcal{B}_{\mathbb{R}})\) the Borel \(\sigma\)-algebra on \(\mathbb{R}\), and \(\mu\) the standard Gaussian measure.
* Line 2 constructs a Markov kernel \(P:(X,\mathcal{A})\to(Y,\mathcal{B})\), where \((Y,\mathcal{B})=(\mathbb{R},\mathcal{B}_{\mathbb{R}})\), that, for every \(x\), yields the Gaussian measure with variance one centered around \(x\). The state of \(y\) in the program after Line 2 is then the measure space \((Y,\mathcal{B},\nu)\) where \(\nu(B):=\int_{x\in X}P(x,B)\ d\mu\).
* Analogously, Line 3 constructs a kernel \(Q:(Y,\mathcal{B})\to(Z,\mathcal{C})\) of Gaussian measures \(Q(y,-)\) on \((Z,\mathcal{C})=(\mathbb{R},\mathcal{B}_{\mathbb{R}})\) centered around \(y\), and the state of \(z\) is the measure space \((Z,\mathcal{C},\rho)\) with \(\rho(C):=\int_{y\in Y}Q(y,C)\;d\nu\). Equivalently, \(\rho\) is obtained by integration of the kernel composition \(P\;;Q\) w.r.t. \(\mu\).
* Lines (4-5) compute an inverse kernel of \(P\;;Q\) as follows: the composition is integrated w.r.t. \(\mu\) to obtain a joint measure \(\zeta\) on \(X\times Z\). This joint measure is transposed to \(\zeta^{\dagger}\) on \(Z\times X\) and then disintegrated to the inverse kernel \(R\). Then \(\int_{z\in(0.5,\infty)}R(z,-)\;d\rho\) evaluated on the interval \((1,\infty)\) yields the desired result.
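Operationally, the output of this program can be approximated by forward sampling. Here is a minimal Monte Carlo sketch in Python (an illustrative rejection-sampling approximation of ours, not part of the semantics itself; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

x = rng.normal(0.0, 1.0, N)   # Line 1: x ~ N(0, 1)
y = rng.normal(x, 1.0)        # Line 2: y ~ N(x, 1)
z = rng.normal(y, 1.0)        # Line 3: z ~ N(y, 1)

keep = z > 0.5                # Line 4: observe(z > 0.5), by rejection
print((x[keep] > 1).mean())   # Line 5: estimate of Pr(x > 1 | z > 0.5)
```

Rejection works here only because the observation \(z>0.5\) has positive probability; conditioning on a zero-measure event such as \(z=0.5\) is exactly where disintegration (or its avoidance) becomes the issue.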
Lines (1-3) can be pictured as a chain of three concrete samples \(x\to y\to z\). Interpreting the program in terms of joint distributions can be done as follows:

* Line 1 constructs the measure space \((X,\mathcal{A},\mu)\) with \(\mu\) standard Gaussian as above.
* Line 2 constructs a joint measure as follows: let \((Y,\mathcal{B})\) and \(P\) be as above. Define the joint measure \(\theta\) on \(X\times Y\) on its rectangles as \[\theta(A\times B):=\int_{x\in A}P(x,B)\;d\mu\] Then the left marginal \(\theta(-\times Y)\) of \(\theta\) is \(\mu\) and the right marginal \(\theta(X\times-)\) is \(\nu\). We consider the state of the program after Line 2 to be the joint distribution \(\theta\). Intuitively, this is a way to _remember the input distribution_, and this is, computationally speaking, crucial for Bayesian inference.
* Similarly, Line 3 can be thought of as constructing a joint \(\eta\) on \(Y\times Z\) using \((Z,\mathcal{C})=(\mathbb{R},\mathcal{B}_{\mathbb{R}})\) and \(Q\) as above. This joint will have marginals \(\nu\) and \(\rho\).

The joint measures obtained from Lines (1-3) can thus be visualized as two couplings, \(\theta\) on \(X\times Y\) and \(\eta\) on \(Y\times Z\), overlapping in the shared marginal \(\nu\). In order to say anything about the state after Line 3, we have to be able to compose the joint measures \(\theta\) and \(\eta\) to \(\zeta=\theta\;;\,\eta\). How to define this composition without assumptions on the underlying space is the central contribution of this paper, explained in the following sections.

* Lines (4,5) transpose the joint measure \(\zeta\) on \(X\times Z\) to \(\zeta^{\dagger}\) on \(Z\times X\), and the output of the program is the \(\zeta^{\dagger}\)-measure of the rectangle \((0.5,\infty)\times(1,\infty)\). (Footnote: computing the result with observe statements on _zero measure events_, e.g. \(z=0.5\), still requires disintegration: the joint \(\zeta^{\dagger}\) is disintegrated at \(z=0.5\) and then evaluated on \((1,\infty)\).)

Our goal now is to define how to compose joint measures \(\theta\) and \(\eta\) (as above) to a joint measure on \(X\) and \(Z\) such that

1. the marginals of \(\theta\;;\,\eta\) are \(\mu\) and \(\rho\), and
2. integration of the kernel composition \(P\;;Q\) with respect to \(\mu\) yields \(\theta\;;\,\eta\).

Before our general treatment of this problem with continuous measures, we describe the problem and its solution in a discrete setting for instructive purposes.

## 3 Discrete Approach

We consider a finite state space; with the appropriate convergence assumptions, this generalizes straightforwardly to countable state spaces. Throughout the development below, we give forward pointers to the analogous definitions of the general case presented in Section 4, marked as "c.f. \((n)\) below".

### Preliminaries

Formally, a _probability measure_ on a finite set \(\{1,\ldots,n\}\) (from now on overloadingly written \(n\)) is a finitely additive set function \(2^{n}\to[0,1]\) defined uniquely on its points by \(i\mapsto x_{i}\) (sometimes written \(x(i)\)) such that \(\sum_{i=1}^{n}x_{i}=1\). Equivalently, it is a vector \(x\in[0,1]^{n}\) such that \(x^{T}\cdot\mathbf{1}=1\), where \(\mathbf{1}\) denotes the \(n\)-dimensional vector of ones. Integrating a function \(f:n\to\mathbb{R}\) (always measurable) w.r.t. \(x\) is done by weighting each value \(f(i)\) with its corresponding probability \(x_{i}\). Thus, integration in the discrete world is just the matrix multiplication \(x^{T}f\).
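To keep the linear-algebra reading concrete, here is a minimal NumPy sketch of these preliminaries (our own illustration; the particular numbers are arbitrary):

```python
import numpy as np

# A probability measure on the finite set n = {1, ..., 4}, encoded as a vector.
x = np.array([0.1, 0.2, 0.3, 0.4])
assert np.isclose(x @ np.ones(4), 1.0)   # x^T . 1 = 1

# A function f : n -> R is a vector of values; integrating f w.r.t. x
# weights each value f(i) by its probability x_i, i.e. the product x^T f.
f = np.array([1.0, -2.0, 0.5, 3.0])
print(x @ f)   # 0.1*1.0 + 0.2*(-2.0) + 0.3*0.5 + 0.4*3.0 = 1.05
```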
A _Markov kernel_ is a map \(A:n\to[0,1]^{n}\) (always measurable) such that \(A(i)\) is a probability measure for every \(i\in n\). Equivalently, it is a matrix \(A\in[0,1]^{n\times n}\) such that \(A\mathbf{1}=\mathbf{1}\). Integrating a Markov kernel \(A\) w.r.t. a probability measure \(x\) yields a new probability measure \(y=x^{T}A\). This procedure is given componentwise by \(y(j)=\sum_{i=1}^{n}A(i,j)\cdot x(i)\) (c.f. (2) below).

A (joint) probability measure on the product space \(n\times n\) is a matrix \(\alpha\in[0,1]^{n\times n}\) whose entries all sum to one. We have \(1=\sum_{i,j}\alpha(i,j)=\mathbf{1}^{T}\alpha\mathbf{1}=\mathbf{1}^{T}\alpha^{T}\mathbf{1}\), so that the row-sums \(\alpha\mathbf{1}\) and the column-sums \(\alpha^{T}\mathbf{1}\) are probability measures; they are the _left_ and _right marginal_, respectively. A joint measure \(\alpha\) on \(n\times n\) defines unique marginals by definition, but the converse is not true in general: there can be many joint measures for two given marginals. (In this setting it amounts to solving \(2n\) linear equations in \(n^{2}\) unknowns.)

### Discrete Markov Kernels to Joint Distributions

We now describe what the functor \(\mathbf{J}\colon\mathbf{Krn}\to\mathbf{JDist}\) does to _discrete_ Markov kernels. For a Markov kernel \(A\), first define the map \(A^{\prime}:n\to[0,1]^{n\times n}\) by (c.f. (4) below): \[A^{\prime}(i,j,k)=\begin{cases}A(i,k)&\text{if }i=j\\ 0&\text{otherwise}\end{cases}\] Equivalently, \(A^{\prime}\) is an \(n\times n\times n\) array where each page (ranged over by \(k\)) is a diagonal \(n\times n\) matrix, and the diagonal of the \(k\)-th page is given by the \(k\)-th column of \(A\). Then, we have that \((x^{T}\cdot A^{\prime})(i,j)=A(i,j)\cdot x_{i}\) for every \(i,j\in n\) and measure \(x\in[0,1]^{n}\) (c.f. (5) below). So, the \(i\)-th row-sum of \(x^{T}\cdot A^{\prime}\) is just \(x_{i}\) (because the row-sums of \(A\) are one), and the column-sums are just entry-wise computations of \(A^{T}x\). This means that \(x^{T}\cdot A^{\prime}\) is a measure on \(n\times n\) with left marginal \(x\) and right marginal \(y=A^{T}x\). The functor \(\mathbf{J}\) maps the kernel \(A\) to the joint measure \(x^{T}A^{\prime}\). It implicitly depends on \(x\).

Disintegrating the joint measure \(\mathbf{J}A\) back to a kernel is always possible for discrete kernels, but does this give us back \(A\)? In the finite setting, to disintegrate a joint measure \(\alpha\in[0,1]^{n\times n}\) on the product space \(n_{1}\times n_{2}\) (where \(n_{1}=n_{2}=n\)), let \(\pi:n_{1}\times n_{2}\to n_{1}\) be the first projection, and \(x=\alpha\circ\pi^{-1}\) be the pushforward measure of \(\alpha\) through this projection. The choice of name for \(x\) is not a coincidence; it is the left marginal of \(\alpha\): \[x(i)=\sum_{j=1}^{n}\alpha(i,j).\] A _disintegration_ of \(\alpha\) along \(\pi\) is defined as a finite set \(\big\{a^{(i)}\big\}_{i\in n_{1}}\) of measures on \(n_{2}\) such that \[\forall(i,j)\in n_{1}\times n_{2}:\quad\alpha(i,j)=a^{(i)}(j)\cdot x(i)\] Equivalently, this is a kernel \(n_{1}\to[0,1]^{n_{2}}\), or a matrix in \([0,1]^{n_{1}\times n_{2}}\). The condition for disintegration in a non-discrete setting contains an integral amounting to a finite sum here, but the measures are uniquely defined by their point masses, so this condition suffices. In addition, the usual definition involves measurability conditions, but these are trivially satisfied here.
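The discrete constructions above, and the composition formula introduced in the next paragraphs, are easy to exercise numerically. The following NumPy sketch is our own illustration with randomly generated data: it builds \(\mathbf{J}A\) as \(\operatorname{diag}(x)\,A\), checks its marginals, and verifies that composing joints by \(\gamma(i,k)=\sum_{j:y_{j}>0}\alpha(i,j)\beta(j,k)/y_{j}\) (defined below) agrees with \(\mathbf{J}(A\cdot B)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def random_kernel(n):
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)   # rows sum to one

x = rng.random(n)
x /= x.sum()                      # measure on the input space
A, B = random_kernel(n), random_kernel(n)
y = A.T @ x                       # pushforward measure x^T A

def J(x, A):
    # Discrete functor J: (J A)(i, j) = A(i, j) * x_i, i.e. diag(x) @ A.
    return np.diag(x) @ A

alpha, beta = J(x, A), J(y, B)
assert np.allclose(alpha.sum(axis=1), x)   # left marginal is x
assert np.allclose(alpha.sum(axis=0), y)   # right marginal is A^T x

def compose(alpha, beta, y):
    # gamma(i, k) = sum over j with y_j > 0 of alpha(i, j) * beta(j, k) / y_j
    pos = y > 0
    return (alpha[:, pos] / y[pos]) @ beta[pos, :]

gamma = compose(alpha, beta, y)
assert np.allclose(gamma, J(x, A @ B))          # property (ii)
assert np.allclose(gamma.sum(axis=1), x)        # property (i), left marginal
assert np.allclose(gamma.sum(axis=0), B.T @ y)  # property (i), right marginal
print("discrete J and composition checks passed")
```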
The kernel \(A\) such that \(x^{T}A=y^{T}\) is a disintegration of the joint measure \(\mathbf{J}A=x^{T}A^{\prime}\) defined in the previous paragraph. The question is now whether \(A\) is the only disintegration of \(\mathbf{J}A\). The answer is yes, _but only up to negligible events_. More precisely, given \(\alpha\), the disintegration \(a^{(i)}\) at \(i\) (the \(i\)-th row of the kernel \(A\)) can be anything if \(x(i)=0\). It is then natural to consider an equivalence relation \(\equiv_{x}\) on kernels \(A,B\in[0,1]^{n\times n}\) defined by \[A\equiv_{x}B\ :\Longleftrightarrow\ \forall i\in n\ \big(x_{i}>0\implies A(i)=B(i)\big)\] (c.f. (6)). Recall that, for all \(i,j\), \(\mathbf{J}A(i,j)=A(i,j)\cdot x_{i}\) and \(\mathbf{J}B(i,j)=B(i,j)\cdot x_{i}\). So, we will have that \[\mathbf{J}A=\mathbf{J}B\text{ if and only if }A\equiv_{x}B \tag{1}\] This is exactly the analogue of Lemma 4.1 below in the discrete case.

The goal of our work is to compose joint measures \(\alpha,\beta\in[0,1]^{n\times n}\), with marginals \(x,y\in[0,1]^{n}\) for \(\alpha\) and marginals \(y,z\in[0,1]^{n}\) for \(\beta\), to a joint measure \(\gamma\in[0,1]^{n\times n}\), without disintegrating them to kernels first. We want this composition to satisfy some sensible properties: if \(A\) and \(B\) are kernels such that \(\alpha=\mathbf{J}A=x^{T}A^{\prime}\) and \(\beta=\mathbf{J}B=y^{T}B^{\prime}\), then

1. the marginals of \(\gamma\) are \(x\) and \(z\);
2. \(\gamma=\mathbf{J}(A\cdot B)=x^{T}(AB)^{\prime}\) (c.f. Theorem 6.3, \(\mathbf{J}\) is a functor); and
3. if \(C\) is such that \(\gamma=\mathbf{J}C\) then \(AB\equiv_{x}C\) (faithfulness of \(\mathbf{J}\), c.f. Lemma 4.1).

In the discrete case, the composition \(\gamma\) can be defined entry-wise by \[\gamma(i,k)=\sum_{\begin{subarray}{c}j=1\\ y_{j}>0\end{subarray}}^{n}\frac{\alpha(i,j)\cdot\beta(j,k)}{y_{j}}\] (c.f. (16)). We can then verify the above properties. First, the left marginal of \(\gamma\) is \(x\): \[(\gamma\mathbf{1})(i)=\sum_{k=1}^{n}\sum_{\begin{subarray}{c}j=1\\ y_{j}>0\end{subarray}}^{n}\frac{\alpha(i,j)\cdot\beta(j,k)}{y_{j}}=\sum_{\begin{subarray}{c}j=1\\ y_{j}>0\end{subarray}}^{n}\Big(\sum_{k=1}^{n}\beta(j,k)\Big)/y_{j}\cdot\alpha(i,j)=\sum_{j=1}^{n}\alpha(i,j)=(\alpha\mathbf{1})(i)=x_{i}\] Here, we have used that \(\sum_{k=1}^{n}\beta(j,k)=y_{j}\), and that if \(y_{j}=0\) for some \(j\), then \(\sum_{i=1}^{n}\alpha(i,j)=0\), so \(\alpha(i,j)=0\) (which is why we can leave out the subscript \(y_{j}>0\) from the sum). A similar calculation shows that its right marginal is \(z\), so property (i) is verified. For property (ii), we have \[\mathbf{J}(A\cdot B)(i,k)=(A\cdot B)(i,k)\cdot x_{i}=\sum_{j=1}^{n}A(i,j)\cdot B(j,k)\cdot x_{i}=\sum_{\begin{subarray}{c}j=1\\ y_{j}>0\end{subarray}}^{n}\alpha(i,j)\cdot B(j,k)\cdot\frac{y_{j}}{y_{j}}=\sum_{\begin{subarray}{c}j=1\\ y_{j}>0\end{subarray}}^{n}\frac{\alpha(i,j)\cdot\beta(j,k)}{y_{j}}\] identifying \(\mathbf{J}(A\cdot B)\) with \(\gamma\). Property (iii) is immediate from (1) and property (ii).

## 4 Krn and JDist

We will define the basics to introduce the categories \(\mathbf{Krn}\) and \(\mathbf{JDist}\), though the composition of the latter will be defined in Section 6, as we will need a few more results to provide it without resorting to disintegration. However, we can already give the mapping \(\mathbf{J}\colon\mathbf{Krn}\to\mathbf{JDist}\) and prove that it is well-defined (Lemma 4.1).

### Preliminaries

Let \((X,\mathcal{A})\) and \((Y,\mathcal{B})\) be measurable spaces.
We abbreviate \((X,\mathcal{A})\) by \(X\) when \(\mathcal{A}\) is understood. The letters \(A,B,C,D\) denote measurable sets. The space \((X,\mathcal{A})\) is a _standard Borel space_ if \(\mathcal{A}\) is the set of Borel sets of a Polish space (separable and completely metrizable). The space \((X,\mathcal{A})\) is _countably generated_ if there exists a countable set \(\mathcal{A}_{0}\subseteq\mathcal{A}\) such that \(\mathcal{A}\) is the smallest \(\sigma\)-algebra containing \(\mathcal{A}_{0}\). All standard Borel spaces are countably generated. For an in-depth treatment of measure theory, see, e.g., [6, 9, 12].

A _Markov kernel_ \(P:X\to Y\) is a map \(P:X\times\mathcal{B}\to[0,1]\) such that

* for fixed \(s\in X\), \(P(s,-)\) is a probability measure on \(Y\),
* for fixed \(B\in\mathcal{B}\), \(P(-,B)\) is a measurable function on \(X\).

These properties allow kernels to be sequentially composed by Lebesgue integration. The class of measurable spaces and Markov kernels forms a category \(\mathbf{SRel}\) [11, 6], which is isomorphic to the Kleisli category of the Giry monad. We write \(P:X\to Y\) for the kernel \(P\) regarded as a morphism in this category. For \(P:X\to Y\) a Markov kernel and \(\mu\) a finite measure on \(X\), write \(\mu\;;\,P\) for the measure on \(Y\) such that \[(\mu\;;\,P)(B)=\int_{X}P(s,B)\,d\mu(s). \tag{2}\] This gives a bounded linear map \((-\;;\,P):\mathbf{Meas}\,X\to\mathbf{Meas}\,Y\) that is monotone and continuous in both the metric and Scott topologies [13]. For \(A\in\mathcal{A}\), let \(A\) also denote the subidentity kernel \[A(s,B)=\mathbbm{1}_{X}(s,A\cap B)=\begin{cases}1,&s\in A\cap B,\\ 0,&s\not\in A\cap B.\end{cases}\] Then for all \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\), \[(\mu\;;\,A\;;\,P)(B)=\int_{A}P(s,B)\,d\mu(s). \tag{3}\]

The category \(\mathbf{Krn}\) has as objects probability spaces \((X,\mathcal{A},\mu)\). The morphisms \((X,\mathcal{A},\mu)\to(Y,\mathcal{B},\nu)\) are equivalence classes modulo \(\mu\)-nullsets of Markov kernels \(P\colon X\to Y\) such that \[\nu(B)=\int_{s\in X}P(s,B)\,d\mu(s).\]

### From \(\mathbf{Krn}\) to \(\mathbf{JDist}\)

The category \(\mathbf{JDist}\) is the category whose objects are probability spaces \((X,\mathcal{A},\mu)\) and whose morphisms \((X,\mathcal{A},\mu)\rightarrow(Y,\mathcal{B},\nu)\) are joint distributions or _couplings_ on \(X\times Y\) with marginals \(\mu\) and \(\nu\). We will define composition in \(\mathbf{JDist}\) formally in Section 6, but we can already define the embedding functor \(\mathbf{J}:\mathbf{Krn}\rightarrow\mathbf{JDist}\). It is the identity on objects, and for morphisms \(P:X\to Y\), let \(P^{\prime}:X\to X\times Y\) be the kernel that behaves like \(P\), but also remembers its input state: on measurable rectangles \(A\times B\), \[P^{\prime}(s,A\times B)=\begin{cases}P(s,B),&\text{if }s\in A,\\ 0,&\text{if }s\not\in A.\end{cases} \tag{4}\] The kernel \(P^{\prime}\) together with the probability measure \(\mu\) on \(X\) induce a joint measure \(\mathbf{J}P\) on \(X\times Y\) whose value on measurable rectangles \(A\times B\) is \[\mathbf{J}P(A\times B)=\int_{X}P^{\prime}(s,A\times B)\,d\mu(s)=\int_{A}P(s,B)\,d\mu(s). \tag{5}\] Since morphisms in \(\mathbf{Krn}\) are equivalence classes of kernels, there is at this point an obligation for proof of well-definedness. For \(P,Q:X\to Y\) Markov kernels and \(\mu\) a measure on \(X\), the equivalence is defined as \[P\equiv_{\mu}Q\ \Leftrightarrow\ \forall B\in\mathcal{B}\ \ \mu(\{s\mid P(s,B)\neq Q(s,B)\})=0. \tag{6}\]
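As a quick numerical sanity check on (4)-(5), the following SciPy sketch (our own; the quadrature grid and the truncation of \(\mathbb{R}\) to \([-50,50]\) are arbitrary numerical choices) approximates \(\mathbf{J}P(A\times B)=\int_{A}P(s,B)\,d\mu(s)\) for the standard Gaussian \(\mu\) and the unit-variance Gaussian kernel \(P\) from Section 2, and confirms both marginals:

```python
import numpy as np
from scipy.stats import norm

def JP(a, b, c, d, m=200_000):
    """Approximate JP((a,b) x (c,d)) = integral over (a,b) of P(s, (c,d)) d mu(s)."""
    ds = (b - a) / m
    s = a + (np.arange(m) + 0.5) * ds            # midpoint grid on A = (a, b)
    P_s_B = norm.cdf(d - s) - norm.cdf(c - s)    # P(s, (c, d)) with P(s, -) = N(s, 1)
    return np.sum(norm.pdf(s) * P_s_B) * ds      # integrate against mu = N(0, 1)

# Left marginal: JP(A x Y) recovers mu(A).
print(JP(-1, 1, -50, 50), norm.cdf(1) - norm.cdf(-1))   # both ~ 0.6827

# Right marginal: JP(X x B) is nu(B), where nu = mu ; P is Gaussian with variance 2.
print(JP(-50, 50, 0, 50), 0.5)                          # both ~ 0.5
```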
The following lemma now shows that \(\mathbf{J}\) is well-defined:

**Lemma 4.1**: _Let \((X,\mathcal{A})\) and \((Y,\mathcal{B})\) be measurable spaces, \(\mathcal{B}\) countably generated. Let \(P,Q:X\to Y\) be Markov kernels and \(\mu\) a probability measure on \(X\). The following are equivalent:_

1. \(P\equiv_{\mu}Q\)
2. \(P\equiv_{\xi}Q\) _for all_ \(\xi\) _absolutely continuous with respect to_ \(\mu\) _(notation:_ \(\xi\ll\mu\)_)_
3. _for all_ \(A\in\mathcal{A}\)_,_ \(\mu\;;A\;;P=\mu\;;A\;;Q\)
4. \(\mathbf{J}P=\mathbf{J}Q\)_, considering_ \(P\) _and_ \(Q\) _as_ \(\mathbf{Krn}\)_-morphisms_ \((X,\mathcal{A},\mu)\rightarrow(Y,\mathcal{B},\nu)\)_, where_ \(\nu=\mu\;;P=\mu\;;Q\)_._

**Proof.** The equivalence of (i) and (ii) is clear from the definition (6). For (i) \(\Rightarrow\) (iii), suppose that \(P\equiv_{\mu}Q\). Let \(E_{B}=\{s\mid P(s,B)=Q(s,B)\}\). By definition, \(\mu(X\backslash E_{B})=0\). For all \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\), \[(\mu\;;A\;;P)(B)=\int_{A}P(s,B)\,d\mu(s)=\int_{A\cap E_{B}}P(s,B)\,d\mu(s)+\int_{A\backslash E_{B}}P(s,B)\,d\mu(s) \tag{7}\] \[=\int_{A\cap E_{B}}Q(s,B)\,d\mu(s)+\int_{A\backslash E_{B}}Q(s,B)\,d\mu(s) \tag{8}\] \[=\int_{A}Q(s,B)\,d\mu(s)=(\mu\;;A\;;Q)(B).\] The left-hand terms of (7) and (8) agree because the integral is restricted to \(E_{B}\), and the right-hand terms are \(0\) because \(A\setminus E_{B}\) is a \(\mu\)-nullset. As \(B\) was arbitrary, \(\mu\;;A\;;P=\mu\;;A\;;Q\). Conversely, for (iii) \(\Rightarrow\) (i), if \(P\not\equiv_{\mu}Q\), then \(\mu(\{s\mid|P(s,B)-Q(s,B)|\geq 1/n\})>0\) for some \(B\in\mathcal{B}\) and \(n>0\). Letting \(A\) be this set (shrunk, if necessary, to a subset of positive measure on which \(P(s,B)-Q(s,B)\) has constant sign), we have \[\int_{A}\!\!|P(s,B)-Q(s,B)|\,d\mu(s)\geq\tfrac{1}{n}\mu(A)>0,\] so \[(\mu\;;A\;;P)(B)=\int_{A}P(s,B)\,d\mu(s)\neq\int_{A}Q(s,B)\,d\mu(s)=(\mu\;;A\;;Q)(B).\] The equivalence of (iii) and (iv) follows from (3) and (5). \(\Box\)

Let \((X,\mathcal{A})\) be a measurable space. The countable measurable partitions \(\mathcal{D}\) of \(X\) form an upper semi-lattice ordered by refinement, denoted \(\sqsubseteq\), with least common refinement as join, denoted \(\sqcup\). We will often consider the limiting behavior of functions defined on increasingly finer such partitions. For any such map \(\phi\) taking values in a topological space, if \(\phi(\mathcal{D}_{n})\) converges to the same value for all chains \(\mathcal{D}_{0}\sqsubseteq\mathcal{D}_{1}\sqsubseteq\cdots\) that eventually become sufficiently fine, we write \(\lim_{\mathcal{D}}\phi(\mathcal{D})\) for this value.

## 5 Radon-Nikodym Approximants

Integration and Radon-Nikodym derivatives are typically defined in terms of approximants. We will use a similar technique to define the composition of joint measures in \(\mathbf{JDist}\). In this section we develop the necessary infrastructure. Let \(\nu\) and \(\mu\) be finite measures on \((X,\mathcal{A})\). For any \(B\in\mathcal{A}\), consider the set \[\{\tfrac{\nu(C)}{\mu(C)}\mid C\subseteq B,\ \mu(C)>0\}\subseteq\mathbb{R}. \tag{9}\] This set is nonempty iff \(\mu(B)>0\). In that case, the set has a finite infimum, since \(\nu(B)/\mu(B)\) is a member, but it may be unbounded above.

**Lemma 5.1**: _Let \(\nu\) and \(\mu\) be finite measures on \((X,\mathcal{A})\)._
_For any \(\varepsilon>0\), there exists a countable measurable partition \(\mathcal{D}\) of \(X\) such that for all \(B\in\mathcal{D}\) with \(\mu(B)>0\), the set (9) is bounded above and_

\[\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\tfrac{\nu(C)}{\mu(C)}-\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\tfrac{\nu(C)}{\mu(C)}\leq\varepsilon\qquad\Big(\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\tfrac{\nu(C)}{\mu(C)}-\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\tfrac{\nu(C)}{\mu(C)}\Big)\Big(\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\tfrac{\nu(C)}{\mu(C)}\Big)\leq\varepsilon^{2}. \tag{10}\]

_Moreover, these properties are preserved under refinement; that is, if \(\mathcal{D}\sqsubseteq\mathcal{D}^{\prime}\) and \(\mathcal{D}\) satisfies the inequalities (10) for all \(B\in\mathcal{D}\) with \(\mu(B)>0\), then the same is true of \(\mathcal{D}^{\prime}\)._

**Proof.** For \(k\geq 1\), consider the signed measure \(\nu-(\varepsilon\ln k)\mu\). By the Hahn decomposition theorem, there exists a family of measurable partitions \(\{A_{k}^{+},A_{k}^{-}\}\) of \(X\), one for each natural number \(k\geq 1\), such that \(\nu-(\varepsilon\ln k)\mu\) is purely nonnegative on \(A_{k}^{+}\) and purely nonpositive on \(A_{k}^{-}\). That is, for all measurable \(C\subseteq A_{k}^{+}\), \(\nu(C)\geq(\varepsilon\ln k)\mu(C)\), and for all measurable \(C\subseteq A_{k}^{-}\), \(\nu(C)\leq(\varepsilon\ln k)\mu(C)\). Without loss of generality we can assume \(A_{1}^{+}=X\) and \(A_{1}^{-}=\emptyset\). Let the partition \(\mathcal{D}\) consist of the pairwise disjoint sets \(\bigcap_{i=1}^{k}A_{i}^{+}\cap A_{k+1}^{-}\), \(k\geq 1\), along with \(\bigcap_{i=1}^{\infty}A_{i}^{+}\). The last is a \(\mu\)-nullset, since if \(\mu(C)>0\), then \(C\not\subseteq A_{k}^{+}\) for any \(k>\exp(\tfrac{\nu(C)}{\varepsilon\mu(C)})\). For any measurable \(C\subseteq\bigcap_{i=1}^{k}A_{i}^{+}\cap A_{k+1}^{-}\), we have \((\varepsilon\ln k)\mu(C)\leq\nu(C)\leq(\varepsilon\ln(k+1))\mu(C)\), so if \(\mu(C)>0\), then \(\nu(C)/\mu(C)\) exists and lies in the interval \([\varepsilon\ln k,\varepsilon\ln(k+1)]\). Since \(\ln(1+x)\leq x\) for \(x\geq 0\),

\[\varepsilon\ln(k+1)-\varepsilon\ln k=\varepsilon\ln(\tfrac{k+1}{k})=\varepsilon\ln(1+\tfrac{1}{k})\leq\tfrac{\varepsilon}{k}\leq\varepsilon\]
\[(\varepsilon\ln(k+1)-\varepsilon\ln k)\,\varepsilon\ln(k+1)=\varepsilon\ln(1+\tfrac{1}{k})\,\varepsilon\ln(k+1)\leq\tfrac{\varepsilon}{k}\cdot\varepsilon k=\varepsilon^{2},\]

from which the bounds (10) follow. \(\Box\)

Let \(\mathcal{D}\) be a countable measurable partition of \(\mathcal{A}\). For \(s\in X\), define

\[F_{\mathcal{D}}=\bigcup\left\{B\in\mathcal{D}\mid\mu(B)>0\right\} \tag{11}\]
\[\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(s)=\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot 1_{X}(s,B)\qquad\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(s)=\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot 1_{X}(s,B). \tag{12}\]

The set \(F_{\mathcal{D}}\) is measurable with \(\mu(X\setminus F_{\mathcal{D}})=0\), and the functions \((d\nu/d\mu)^{+}_{\mathcal{D}}\) and \((d\nu/d\mu)^{-}_{\mathcal{D}}\) are measurable step functions that vanish outside \(F_{\mathcal{D}}\).
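To illustrate the approximants (11)-(12) concretely, consider \(X=[0,1]\) with \(\mu\) Lebesgue measure and \(\nu\) the measure with continuous density \(f(t)=2t\). Since \(\nu(C)/\mu(C)\) is then an average of \(f\) over \(C\), the sup and inf over \(C\subseteq B\) are the sup and inf of \(f\) on \(B\). The following sketch (our own illustration; the equal-interval partitions and subgrid resolution are arbitrary choices) shows the bracketing step functions tightening under refinement:

```python
import numpy as np

f = lambda t: 2.0 * t   # density d(nu)/d(mu) on X = [0, 1]; nu(X) = 1

def approximants(k, sub=1000):
    # Partition D_k of [0, 1] into k equal cells; on each cell B, approximate
    # sup / inf of nu(C)/mu(C) over C in B by the max / min of f on a subgrid.
    lo, hi = np.empty(k), np.empty(k)
    for i in range(k):
        t = np.linspace(i / k, (i + 1) / k, sub)
        lo[i], hi[i] = f(t).min(), f(t).max()
    return lo, hi

for k in (2, 8, 32, 128):
    lo, hi = approximants(k)
    # The step functions (12) bracket f, the gap on each cell is <= 2/k,
    # and their integrals against mu squeeze down to nu(X) = 1.
    print(f"k={k:4d}  max gap={(hi - lo).max():.4f}  "
          f"integral bracket=[{lo.mean():.4f}, {hi.mean():.4f}]")
```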
From Lemma 5.1, we have that for sufficiently fine \(\mathcal{D}\),

\[\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}-\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}\leq\varepsilon\qquad\left(\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}-\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}\right)\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}\leq\varepsilon^{2}.\]

These definitions lead to an enhanced version of the Radon-Nikodym theorem.

**Theorem 5.2** (Lebesgue-Radon-Nikodym): _Let \(\nu\) and \(\mu\) be finite measures on \((X,\mathcal{A})\). There exist measures \(\nu_{0},\nu_{1}\), a measurable set \(F\in\mathcal{A}\), a measurable real-valued function \(f\) defined on \(X\), and a countable \(\sqsubseteq\)-chain \(\mathcal{D}_{0}\sqsubseteq\mathcal{D}_{1}\sqsubseteq\cdots\) such that_

1. _(Lebesgue decomposition)_ \(\nu_{0}\) _and_ \(\nu_{1}\) _form a Lebesgue decomposition of_ \(\nu\) _on_ \(F\) _with respect to_ \(\mu\)_; that is,_ \[\nu=\nu_{0}+\nu_{1}\qquad\nu_{0}\ll\mu\qquad\nu_{1}(F)=0\qquad\mu(X\setminus F)=0;\]
2. _(Radon-Nikodym theorem)_ \(f(s)=0\) _for all_ \(s\not\in F\) _and_ \[\int_{A}f(s)\,d\mu(s)=\nu_{0}(A),\ A\in\mathcal{A}; \tag{13}\]
3. _(Uniform approximation)_ _The sequence_ \((d\nu/d\mu)^{-}_{n}=(d\nu/d\mu)^{-}_{\mathcal{D}_{n}}\) _is monotone nondecreasing on_ \(F\)_, the sequence_ \((d\nu/d\mu)^{+}_{n}=(d\nu/d\mu)^{+}_{\mathcal{D}_{n}}\) _is monotone nonincreasing everywhere, and both sequences converge pointwise to_ \(f\) _and converge uniformly on_ \(F\)_._

If \(\nu\ll\mu\), we can take \(\nu_{0}=\nu\) and \(\nu_{1}=0\) in (i), in which case (ii) gives \(\int_{A}f(s)\,d\mu(s)=\nu(A)\). In this case, \(f\) is the standard Radon-Nikodym derivative \(d\nu/d\mu\). The version [12, Theorem 3, p. 258] asserts (i) and (ii) only, without reference to the approximants \((d\nu/d\mu)^{+}_{n}\) and \((d\nu/d\mu)^{-}_{n}\), but (iii) is essentially how it is proved. In fact, [12] gives three proofs. We give a fourth here for completeness.

**Proof.** Let \((d\nu/d\mu)^{+}_{\mathcal{D}}\), \((d\nu/d\mu)^{-}_{\mathcal{D}}\), and \(F_{\mathcal{D}}\) be defined as in (11) and (12). By definition, \[\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(s)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\tfrac{\nu(B)}{\mu(B)}\cdot 1_{X}(s,B)\leq\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(s).\] If \(\mathcal{D}\sqsubseteq\mathcal{D}^{\prime}\), then \(F_{\mathcal{D}^{\prime}}\subseteq F_{\mathcal{D}}\), \((d\nu/d\mu)^{-}_{\mathcal{D}}\leq(d\nu/d\mu)^{-}_{\mathcal{D}^{\prime}}\) pointwise on \(F_{\mathcal{D}^{\prime}}\), and \((d\nu/d\mu)^{-}_{\mathcal{D}^{\prime}}\leq(d\nu/d\mu)^{+}_{\mathcal{D}^{\prime}}\leq(d\nu/d\mu)^{+}_{\mathcal{D}}\) pointwise everywhere. Moreover, by Lemma 5.1, for all \(\varepsilon>0\) there exists a sufficiently fine \(\mathcal{D}\) such that \((d\nu/d\mu)^{+}_{\mathcal{D}}-(d\nu/d\mu)^{-}_{\mathcal{D}}<\varepsilon\) pointwise. It follows that for any countable chain \(\mathcal{D}_{0}\sqsubseteq\mathcal{D}_{1}\sqsubseteq\cdots\) of sufficiently fine countable measurable partitions, \((d\nu/d\mu)^{+}_{n}\) and \((d\nu/d\mu)^{-}_{n}\) converge pointwise to a measurable function \(f=\inf_{n}(d\nu/d\mu)^{+}_{n}\) and converge uniformly on \(F=\bigcap_{n}F_{\mathcal{D}_{n}}\). In addition, the region of uniform convergence \(F\) is of full \(\mu\)-measure, as \(\mu(F)=\inf_{n}\mu(F_{\mathcal{D}_{n}})=\mu(X)\), and \(f\) vanishes outside \(F\). This establishes (iii).
For (i), if \(\mu(C)=0\), then \(\nu(C\cap A^{-}_{k})\leq(\varepsilon\ln k)\mu(C\cap A^{-}_{k})=0\) for all \(k\), where the \(A^{-}_{k}\) are the Hahn decomposition sets constructed in the proof of Lemma 5.1. Assuming \(\mathcal{D}_{n}\) refines one of the partitions defined in that lemma, we have \(F\subseteq F_{\mathcal{D}_{n}}\subseteq\bigcup_{k}A^{-}_{k}\), so \(\nu(C\cap F)=0\). Taking \(\nu_{0}(C)=\nu(C\cap F)\) and \(\nu_{1}(C)=\nu(C\setminus F)\) gives a Lebesgue decomposition of \(\nu\) on \(F\) with respect to \(\mu\); in particular, \[\mu(C)=0\Rightarrow\nu(C)=\nu(C\setminus F)=\nu_{1}(C). \tag{14}\]

For (ii), for sufficiently fine \(\mathcal{D}\), \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(s)\,d\mu(s)=\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot\mu(A\cap B)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\Big(\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}+\varepsilon\Big)\cdot\mu(A\cap B)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\Big(\frac{\nu(B)}{\mu(B)}+\varepsilon\Big)\cdot\mu(B)\leq\nu(X)+\varepsilon\mu(X),\] thus the integral exists and is finite. Moreover, \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{n}(s)\,d\mu(s)=\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(B)>0\end{subarray}}\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot\mu(A\cap B)=\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B)>0\end{subarray}}\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot\mu(A\cap B). \tag{15}\] The right-hand equality in (15) follows from the observation that all summands corresponding to \(B\in\mathcal{D}_{n}\) with \(\mu(A\cap B)=0\) vanish, whereas for those with \(\mu(A\cap B)>0\), the test \(\mu(B)>0\) is redundant. Specializing the supremum in (15) at \(C=A\cap B\), \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{n}(s)\,d\mu(s)\geq\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B)>0\end{subarray}}\nu(A\cap B)=\nu(A)-\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B)=0\end{subarray}}\nu(A\cap B)=\nu(A)-\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B)=0\end{subarray}}\nu_{1}(A\cap B)\] by (14), and hence \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{n}(s)\,d\mu(s)\geq\nu(A)-\sum_{B\in\mathcal{D}_{n}}\nu_{1}(A\cap B)=\nu(A)-\nu_{1}(A)=\nu_{0}(A).\] Similarly, \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)^{-}_{n}(s)\,d\mu(s)=\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B)>0\end{subarray}}\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot\mu(A\cap B)=\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B\cap F)>0\end{subarray}}\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\cdot\mu(A\cap B\cap F).\] The former equality follows from an argument similar to (15), the latter from the fact that \(\mu\) vanishes outside \(F\). Specializing the infimum at \(C=A\cap B\cap F\), \[\int_{A}\left(\tfrac{d\nu}{d\mu}\right)_{n}^{-}(s)\,d\mu(s)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}_{n}\\ \mu(A\cap B\cap F)>0\end{subarray}}\nu(A\cap B\cap F)\leq\sum_{B\in\mathcal{D}_{n}}\nu_{0}(A\cap B)=\nu_{0}(A).\] Thus \(\nu_{0}(A)\) is the limit (13).
\(\Box\)

The value of the integral (13) is independent of the choice of the \(\sqsubseteq\)-chain \(\mathcal{D}_{n}\), but \(F\) and \(f\) are not, and there is no one choice that works uniformly for all \(\sqsubseteq\)-chains. That is why \(d\nu/d\mu\) is only defined up to a \(\mu\)-nullset. However, by taking least common refinements, one can find \(F\) and \(f\) that work for several \(\sqsubseteq\)-chains at once. That is one reason for making (iii) explicit.

## 6 Composition in JDist

To show that \(\mathbf{JDist}\) is a category and that \(\mathbf{J}:\mathbf{Krn}\to\mathbf{JDist}\) is a functor, we must define composition and the identity morphisms in \(\mathbf{JDist}\) and show that they are preserved by \(\mathbf{J}\). Composition in \(\mathbf{JDist}\) is defined as follows. For \(\theta:(X,\mathcal{A},\mu)\to(Y,\mathcal{B},\nu)\) and \(\eta:(Y,\mathcal{B},\nu)\to(Z,\mathcal{C},\xi)\), define \[(\theta\;;\eta)(A\times C)\ =\ \lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \nu(B)>0\end{subarray}}\frac{\theta(A\times B)\cdot\eta(B\times C)}{\nu(B)}, \tag{16}\] where the limit is taken over countable measurable partitions \(\mathcal{D}\) of the mediating space \(Y\). We argue below (Theorem 6.1) that the limit exists. The identity morphisms are the joint distributions \(\mathbf{J}\mathbf{1}_{X}\) obtained from the identity kernels \(\mathbf{1}_{X}:X\to X\). Checking associativity of the composition is a mechanical exercise, where one employs commutativity of the countable limit sums over the partitions of the two mediating spaces: for \(\theta\) and \(\eta\) as above, and \(\zeta:(Z,\mathcal{C},\xi)\to(W,\mathcal{F},\rho)\), one has \[(\theta\;;\eta\;;\zeta)(A\times F)=\lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \nu(B)>0\end{subarray}}\lim_{\mathcal{E}}\sum_{\begin{subarray}{c}C\in\mathcal{E}\\ \xi(C)>0\end{subarray}}\frac{\theta(A\times B)\cdot\eta(B\times C)\cdot\zeta(C\times F)}{\nu(B)\cdot\xi(C)}\] where \(\mathcal{D}\) and \(\mathcal{E}\) range over countable measurable partitions of \(Y\) and \(Z\), respectively. Note that the definition of composition is completely symmetric in the input and output space. The category \(\mathbf{JDist}\) is thus a dagger category whose involution \({}^{\dagger}\) is given by transpose: \(\theta^{\dagger}(C\times A)=\theta(A\times C)\).

**Theorem 6.1**: _Let \(\mu\), \(\nu\), \(\xi\) be finite measures on \(Y\). Let \((d\nu/d\mu)_{\mathcal{D}}^{+}\) and \((d\xi/d\mu)_{\mathcal{D}}^{+}\) be the approximants defined in (12)._
_If there exists a countable measurable partition \(\mathcal{D}\) such that_ \[\int_{Y}\left(\tfrac{d\nu}{d\mu}\right)_{\mathcal{D}}^{+}(t)\,\left(\tfrac{d\xi}{d\mu}\right)_{\mathcal{D}}^{+}(t)\,d\mu(t)<\infty,\] _then the limit_ \[\lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\frac{\nu(B)\cdot\xi(B)}{\mu(B)} \tag{17}\] _exists and is equal to_ \[\int_{Y}f(t)\,g(t)\,d\mu(t)=\inf_{n}\int_{Y}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}_{n}}(t)\,\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}_{n}}(t)\,d\mu(t) \tag{18}\] _for any sufficiently fine countable \(\sqsubseteq\)-chain \(\mathcal{D}_{0}\sqsubseteq\mathcal{D}_{1}\sqsubseteq\cdots\), where \(f=\inf_{n}(d\nu/d\mu)^{+}_{\mathcal{D}_{n}}\) and \(g=\inf_{n}(d\xi/d\mu)^{+}_{\mathcal{D}_{n}}\)._

**Proof.** By definition of \((d\nu/d\mu)^{+}_{\mathcal{D}}\) and \((d\xi/d\mu)^{+}_{\mathcal{D}}\),

\[\int_{Y}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)\,\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t)\,d\mu(t)=\int_{Y}\left(\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\mathbbm{1}_{Y}(t,B)\right)\left(\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\sup_{\begin{subarray}{c}D\subseteq B\\ \mu(D)>0\end{subarray}}\frac{\xi(D)}{\mu(D)}\mathbbm{1}_{Y}(t,B)\right)\,d\mu(t)=\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\left(\sup_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}\right)\left(\sup_{\begin{subarray}{c}D\subseteq B\\ \mu(D)>0\end{subarray}}\frac{\xi(D)}{\mu(D)}\right)\,\mu(B).\]

To show (17) and (18), for arbitrarily small positive \(\varepsilon\), we have by Lemma 5.1

\[\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\frac{\nu(B)\xi(B)}{\mu(B)}=\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\frac{\nu(B)\xi(B)}{\mu(B)\mu(B)}\,\mu(B)\leq\int_{Y}\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)\,\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t)\,d\mu(t)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\left(\inf_{\begin{subarray}{c}C\subseteq B\\ \mu(C)>0\end{subarray}}\frac{\nu(C)}{\mu(C)}+\varepsilon\right)\left(\inf_{\begin{subarray}{c}D\subseteq B\\ \mu(D)>0\end{subarray}}\frac{\xi(D)}{\mu(D)}+\varepsilon\right)\,\mu(B)\]
\[\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\left(\frac{\nu(B)}{\mu(B)}+\varepsilon\right)\left(\frac{\xi(B)}{\mu(B)}+\varepsilon\right)\,\mu(B)\leq\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\left(\frac{\nu(B)\xi(B)}{\mu(B)}+\varepsilon\xi(B)+\varepsilon\nu(B)+\varepsilon^{2}\mu(B)\right)\leq\left(\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \mu(B)>0\end{subarray}}\frac{\nu(B)\xi(B)}{\mu(B)}\right)+\varepsilon\xi(Y)+\varepsilon\nu(Y)+\varepsilon^{2}\mu(Y).\]

For the remaining statement (18), we use the stronger claim (10) of Lemma 5.1.
Since \[\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(t)\leq f(t)\leq\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t),\qquad\left(\tfrac{d\xi}{d\mu}\right)^{-}_{\mathcal{D}}(t)\leq g(t)\leq\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t),\] we have \[\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{-}_{\mathcal{D}}(t)\leq f(t)g(t)\leq\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t).\] We must choose \(\mathcal{D}_{n}\) so that \[\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}_{n}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}_{n}}(t)-\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}_{n}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{-}_{\mathcal{D}_{n}}(t)\] becomes arbitrarily small. By (10) of Lemma 5.1, \(\mathcal{D}\) can be chosen so that \[\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t)-\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(t)\left(\tfrac{d\xi}{d\mu}\right)^{-}_{\mathcal{D}}(t)\leq\left(\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)-\left(\tfrac{d\nu}{d\mu}\right)^{-}_{\mathcal{D}}(t)\right)\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t)+\left(\tfrac{d\nu}{d\mu}\right)^{+}_{\mathcal{D}}(t)\left(\left(\tfrac{d\xi}{d\mu}\right)^{+}_{\mathcal{D}}(t)-\left(\tfrac{d\xi}{d\mu}\right)^{-}_{\mathcal{D}}(t)\right)\leq 2\varepsilon^{2}.\] \(\Box\)

**Corollary 6.2**: _The map \(\theta\;;\,\eta\) as defined in (16) on measurable rectangles extends to a joint probability measure on \(X\times Z\)._

**Proof.** This can be done using the Caratheodory-Hahn-Kolmogorov extension theorem (this result is sometimes referred to simply as Caratheodory's extension theorem or the Hahn-Kolmogorov theorem, among other names). It suffices to verify the premises of that theorem, namely:

(i) \(\theta\;;\,\eta\) is finitely additive on measurable rectangles: if \(\{A_{n}\times C_{n}\}_{n}\) is a finite set of pairwise disjoint measurable rectangles whose union is a measurable rectangle, then \[(\theta\;;\eta)(\bigcup_{n}(A_{n}\times C_{n}))=\sum_{n}(\theta\;;\eta)(A_{n}\times C_{n}).\]

(ii)
If for each \(i\geq 0\), \(\{A_{n}^{i}\times C_{n}^{i}\}_{n}\) is a finite collection of pairwise disjoint measurable rectangles with \(\bigcup_{n}(A_{n}^{i+1}\times C_{n}^{i+1})\subseteq\bigcup_{n}(A_{n}^{i}\times C_{n}^{i})\), and if \(\bigcap_{i}\bigcup_{n}(A_{n}^{i}\times C_{n}^{i})=\emptyset\), then \[\inf_{i}(\theta\;;\eta)(\bigcup_{n}(A_{n}^{i}\times C_{n}^{i}))=0.\]

For (i), we can assume without loss of generality that the \(A_{n}\) are pairwise disjoint and the \(C_{m}\) are pairwise disjoint, and we are to show \[(\theta\;;\eta)(\bigcup_{n}A_{n}\times\bigcup_{m}C_{m})=\sum_{n}\sum_{m}(\theta\;;\eta)(A_{n}\times C_{m}).\] Since limits commute with finite sums, \[(\theta\;;\eta)(\bigcup_{n}A_{n}\times\bigcup_{m}C_{m})=\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\frac{\theta(\bigcup_{n}A_{n}\times B)\cdot\eta(B\times\bigcup_{m}C_{m})}{\nu(B)}=\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\sum_{n}\sum_{m}\frac{\theta(A_{n}\times B)\cdot\eta(B\times C_{m})}{\nu(B)}=\sum_{n}\sum_{m}\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\frac{\theta(A_{n}\times B)\cdot\eta(B\times C_{m})}{\nu(B)}=\sum_{n}\sum_{m}(\theta\;;\eta)(A_{n}\times C_{m}).\]

For (ii), \[\inf_{i}(\theta\;;\eta)(\bigcup_{n}(A_{n}^{i}\times C_{n}^{i}))=\inf_{i}\sum_{n}(\theta\;;\eta)(A_{n}^{i}\times C_{n}^{i})=\inf_{i}\sum_{n}\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\frac{\theta(A_{n}^{i}\times B)\cdot\eta(B\times C_{n}^{i})}{\nu(B)}=\inf_{i}\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\sum_{n}\frac{\theta(A_{n}^{i}\times B)\cdot\eta(B\times C_{n}^{i})}{\nu(B)}.\]

We argue that if \(\bigcap_{i}\bigcup_{n}(A_{n}^{i}\times C_{n}^{i})=\emptyset\), then either \(\bigcap_{i}\bigcup_{n}A_{n}^{i}=\emptyset\) or \(\bigcap_{i}\bigcup_{n}C_{n}^{i}=\emptyset\). Suppose not. Let \(s\in\bigcap_{i}\bigcup_{n}A_{n}^{i}\) and \(t\in\bigcap_{i}\bigcup_{n}C_{n}^{i}\). Then for all \(i\) there exists \(n\) such that \(s\in A_{n}^{i}\) and there exists \(m\) such that \(t\in C_{m}^{i}\). By renumbering if necessary, we can assume that for all \(i\), \(s\in A_{1}^{i}\) and \(t\in C_{1}^{i}\). Then \((s,t)\in\bigcap_{i}(A_{1}^{i}\times C_{1}^{i})\subseteq\bigcap_{i}\bigcup_{n}(A_{n}^{i}\times C_{n}^{i})\), contradicting the assumption that this intersection is empty. By symmetry, assume without loss of generality that \(\bigcap_{i}\bigcup_{n}A_{n}^{i}=\emptyset\). Since \[\eta(B\times C_{n}^{i})/\nu(B)=\eta(B\times C_{n}^{i})/\eta(B\times Z)\leq 1,\] we have \[\inf_{i}\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\sum_{n}\frac{\theta(A_{n}^{i}\times B)\cdot\eta(B\times C_{n}^{i})}{\nu(B)}\leq\inf_{i}\lim_{\mathcal{D}}\sum_{B\in\mathcal{D}}\sum_{n}\theta(A_{n}^{i}\times B)=\inf_{i}\theta(\bigcup_{n}A_{n}^{i}\times Y)=0.\] \(\Box\)

**Theorem 6.3** (Faithfulness): _The map \(\mathbf{J}\) constitutes a faithful embedding \(\mathbf{J}:\mathbf{Krn}\rightarrow\mathbf{JDist}\)._

**Proof.** Composition in \(\mathbf{Krn}\) is defined by \([P]_{\mu}\ ;\,[Q]_{\nu}=[P\ ;\,Q]_{\mu}\). This is well defined by Lemma 4.1. To confirm functoriality, we show that \(\mathbf{J}(P\ ;\,Q)=\mathbf{J}P\ ;\,\mathbf{J}Q\) and that the \(\mathbf{J}1_{X}\) are identities for composition in \(\mathbf{JDist}\).
For composition, using the fact that for any \(P\), \(\mathbf{J}P(A\times B)=(\mu\ ;\,A\ ;\,P)(B)\), the left-hand side gives \[\mathbf{J}(P\ ;\,Q)(A\times C)=(\mu\ ;\,A\ ;\,P\ ;\,Q)(C).\] For the right-hand side, we observe that \[\inf_{t\in B}Q(t,C)\int_{B}\nu(dt)\leq\int_{B}Q(t,C)\,\nu(dt)\leq\sup_{t\in B}Q(t,C)\int_{B}\nu(dt),\] or in other words, \[\inf_{t\in B}Q(t,C)\,\nu(B)\leq(\nu\ ;\,B\ ;\,Q)(C)\leq\sup_{t\in B}Q(t,C)\,\nu(B),\] so for \(\nu(B)>0\), \[\inf_{t\in B}Q(t,C)\leq\frac{(\nu\ ;\,B\ ;\,Q)(C)}{\nu(B)}\leq\sup_{t\in B}Q(t,C).\] Thus \[(\mathbf{J}P\ ;\,\mathbf{J}Q)(A\times C)=\lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \nu(B)>0\end{subarray}}\frac{\mathbf{J}P(A\times B)\cdot\mathbf{J}Q(B\times C)}{\nu(B)}=\lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \nu(B)>0\end{subarray}}\frac{(\mu;A;P)(B)\cdot(\nu;B;Q)(C)}{\nu(B)}=\lim_{\mathcal{D}}\sum_{\begin{subarray}{c}B\in\mathcal{D}\\ \nu(B)>0\end{subarray}}(\mu\ ;\,A\ ;\,P)(B)\cdot\sup_{t\in B}Q(t,C)=\int_{Y}(\mu\ ;\,A\ ;\,P)(dt)\cdot Q(t,C)=(\mu\ ;\,A\ ;\,P\ ;\,Q)(C),\] where the third equality uses the bounds above: replacing the ratio by \(\sup_{t\in B}Q(t,C)\) (or by \(\inf_{t\in B}Q(t,C)\)) does not change the limit, which is exactly the integral.

For the identities, \(\mathbf{J}1_{X}\ ;\,\mathbf{J}P=\mathbf{J}(1_{X}\ ;\,P)=\mathbf{J}P\) and \(\mathbf{J}P\ ;\,\mathbf{J}1_{Y}=\mathbf{J}(P\ ;\,1_{Y})=\mathbf{J}P\). The equivalence of Lemma 4.1(i) and (iv) establishes the faithfulness of the embedding. \(\Box\)

The embedding is also full when \(\mathbf{Krn}\) is restricted to standard Borel spaces; hence, the category considered by Dahlqvist et al. [6] can be fully and faithfully embedded in \(\mathbf{JDist}\).

## 7 Conclusion

We have presented a category of joint distributions **JDist** in which composition can be defined without reference to disintegration. To define this composition we explored approximants and an enhanced version of the Radon-Nikodym theorem. Our motivation for defining this new category is to provide a setting in which the semantics of probabilistic programs can be given under weaker assumptions than in previous work (where, e.g., standard Borel spaces were assumed). We showed that the category **Krn** of Markov kernels on countably generated spaces can be faithfully embedded into **JDist** and that, when **Krn** is restricted to standard Borel spaces, the embedding is also full. Although the assumption of spaces being standard Borel may not seem to be a very restrictive one, we believe our new way of composing joint measures will be profitable nevertheless. The category **JDist**, with its definition of composition, is a natural domain in which to devise approximation schemes for joint distributions in probabilistic semantics and prove them correct. This is something we already hinted at with the discrete approach in Section 3. A natural direction for future work is to explore the formulation of a point-free treatment, based on the observation that the objects \((X,\mathcal{A},\mu)\) of **Krn** and **JDist** do not really depend on the measure \(\mu\), but only on its \(\sigma\)-ideal of nullsets.

Thanks to Fredrik Dahlqvist, Vincent Danos, Nate Foster, Justin Hsu, Bart Jacobs, Bobby Kleinberg, Radu Mardare, Prakash Panangaden, Daniel Roy, and Steffen Smolka. Special thanks to Sam Staton for catching a serious error in an earlier draft. Thanks to the Bellairs Research Institute of McGill University for providing a wonderful research environment. The support of the National Science Foundation under grant CCF-2008083 is gratefully acknowledged.
2310.20109
Multi-Objective Intrinsic Reward Learning for Conversational Recommender Systems
Conversational Recommender Systems (CRS) actively elicit user preferences to generate adaptive recommendations. Mainstream reinforcement learning-based CRS solutions heavily rely on handcrafted reward functions, which may not be aligned with user intent in CRS tasks. Therefore, the design of task-specific rewards is critical to facilitate CRS policy learning, which remains largely under-explored in the literature. In this work, we propose a novel approach to address this challenge by learning intrinsic rewards from interactions with users. Specifically, we formulate intrinsic reward learning as a multi-objective bi-level optimization problem. The inner level optimizes the CRS policy augmented by the learned intrinsic rewards, while the outer level drives the intrinsic rewards to optimize two CRS-specific objectives: maximizing the success rate and minimizing the number of turns to reach a successful recommendation in conversations. To evaluate the effectiveness of our approach, we conduct extensive experiments on three public CRS benchmarks. The results show that our algorithm significantly improves CRS performance by exploiting informative learned intrinsic rewards.
Zhendong Chu, Nan Wang, Hongning Wang
2023-10-31T01:07:30Z
http://arxiv.org/abs/2310.20109v1
# Multi-Objective Intrinsic Reward Learning for Conversational Recommender Systems

###### Abstract

Conversational Recommender Systems (CRS) actively elicit user preferences to generate adaptive recommendations. Mainstream reinforcement learning-based CRS solutions heavily rely on handcrafted reward functions, which may not be aligned with user intent in CRS tasks. Therefore, the design of task-specific rewards is critical to facilitate CRS policy learning, which remains largely under-explored in the literature. In this work, we propose a novel approach to address this challenge by learning intrinsic rewards from interactions with users. Specifically, we formulate intrinsic reward learning as a _multi-objective bi-level optimization problem_. The inner level optimizes the CRS policy augmented by the learned intrinsic rewards, while the outer level drives the intrinsic rewards to optimize two CRS-specific objectives: _maximizing the success rate_ and _minimizing the number of turns to reach a successful recommendation_ in conversations. To evaluate the effectiveness of our approach, we conduct extensive experiments on three public CRS benchmarks. The results show that our algorithm significantly improves CRS performance by exploiting informative learned intrinsic rewards.

## 1 Introduction

Conversational recommender systems (CRS) leverage interactive conversations to delineate a user's preferences (Zhang et al., 2018; Lei et al., 2020; Chu et al., 2023). The conversations revolve around questions aimed at discerning users' preferences on specific item attributes (e.g., music genres). Through an interactive process of questions and answers, a profile of the user's intended item can be constructed. Numerous CRS formulations have been proposed (Chen et al., 2019; Christakopoulou et al., 2018, 2016). In this work, we investigate a prevalent CRS setting known as the multi-round conversational recommendation (Lei et al., 2020; Deng et al., 2021), where a CRS agent can ask a question or recommend an item in consecutive rounds of conversations. The conversation continues until the user accepts the recommendation (indicating a successful conversation) or quits the conversation (considered a failed conversation).

CRS fundamentally embodies a sequential decision making problem, for which numerous reinforcement learning (RL)-based solutions have been proposed (Lei et al., 2020; Chu et al., 2023). However, as the users only provide textual or binary responses (e.g., accepting or rejecting the inquired attributes), existing RL-based solutions heavily rely on heuristic reward functions that are manually defined to train CRS policies. These reward functions, such as those promoting attributes accepted by a user and penalizing those rejected, may not accurately reflect user intent due to their heuristic nature. This becomes problematic since the effectiveness of CRS policy learning largely depends on the quality of the pre-defined reward function: an inadequately designed reward function can lead to solutions that deviate significantly from optimality. Additionally, these arbitrary reward functions can inadvertently distort the modeling of conversation states, influencing the subsequent actions taken by the RL agent. Arguably, an effective reward function should promote actions that lead to more precise modeling of users' preferences. As a result, different attributes or items, including those rejected, can each uniquely contribute to user preference modeling.
As an example illustrated in Figure 1, even though _Heavy metal rock_ is rejected by the user, it still, to a certain extent, contributes to identifying the target item, _Hey Jude_. However, existing handcrafted heuristic reward functions fall short in delivering information at this granularity, as they assign uniform rewards to all accepted or rejected actions. This motivates us to _learn a reward function_ that enables more fine-grained policy learning.

Instead of manually defining reward functions, we introduce a principled approach to reward learning for CRS, where we learn an _intrinsic reward_ for each action taken by the agent utilizing the optimal rewards framework (Singh et al., 2010). This framework delineates the optimal intrinsic reward function as the one that, when employed by an RL agent, fosters behaviors that optimize the task-specific or _extrinsic rewards_ - in the case of CRS, successful recommendations.

Two notable technical challenges stand out when learning intrinsic rewards for CRS. First, explicit extrinsic rewards in CRS are extremely sparse, which complicates intrinsic reward learning. Although the agent interacts with the user in each round, the only clear extrinsic reward signal, namely whether the overall conversation is successful or not, is revealed by the user only at the conclusion of the conversation. The significance of each accepted or rejected attribute/item prior to the conversation's end remains ambiguous. For instance, an inquired attribute that is rejected by the user does not necessarily imply a negative reward for policy learning, as it can signify what the user is not looking for. Second, the assessment of CRS is multi-dimensional, entailing various factors that contribute to the overall effectiveness and user experience, such as recommendation quality and user effort. On the one hand, asking more questions may be necessary to accurately profile user preferences to facilitate a successful recommendation. On the other hand, reducing user effort in conversations (i.e., fewer conversation turns) is essential to ensure users' engagement and maintain their satisfaction. Balancing these factors is a delicate task.

To tackle these challenges and improve CRS from a reward learning perspective, we develop an online algorithm for learning intrinsic reward functions via multi-objective bi-level optimization. We name the proposed solution **CRSIRL**, meaning **CRS** with **I**ntrinsic **R**eward **L**earning. In the inner loop of CRSIRL, the policy is optimized with the learned intrinsic reward function. In the outer loop, the intrinsic reward function is updated to satisfy two specific criteria designed for CRS. The first criterion aims to maximize the sparse extrinsic reward, augmented by a reward shaping strategy to encourage actions that promote the target item as quickly as possible. The second criterion involves tailoring the learnt reward function to promote successful trajectories over failed ones. The results of our extensive experiments demonstrate that CRSIRL not only improves the success rate of CRS but also achieves it with shorter conversations.

## 2 Related Work

**Conversational Recommender Systems.** Christakopoulou et al. (2016) pioneered the concept of Conversational Recommender Systems (CRS). Their approach primarily focused on determining which items to solicit feedback on and applied off-the-shelf metrics such as the upper confidence bound (Auer, 2002) for this purpose.
This laid the groundwork for reinforcement learning (RL) based methods, which have recently become the prevalent solutions for CRS. For example, Sun and Zhang (2018) developed a policy network to decide whether to recommend an item or inquire about an item attribute at each conversation turn.

Figure 1: Motivating example of intrinsic reward learning.

However, these initial studies terminated the conversation upon making a recommendation, regardless of user acceptance. Lei et al. (2020) studied multi-round conversational recommendation, where CRS can ask a question or recommend an item in multiple rounds before the user accepts the recommendation or quits. This is also the setting of our work in this paper. To better address multi-round CRS, Lei et al. (2020) leveraged knowledge graphs to select more relevant attributes to ask across turns. Xu et al. (2021) extended (Lei et al., 2020) by revising user embeddings dynamically based on users' feedback on attributes and items, and Deng et al. (2021) unified the question selection module and the recommendation module in an RL-based CRS solution. However, all the aforementioned works depend on heuristically crafted reward functions, which may lead policies to deviate from the optimal solution. In this work, we propose to learn intrinsic rewards which can maximize the recommendation performance.

**Intrinsic Reward Learning in Reinforcement Learning.** Intrinsic reward learning has emerged as a promising approach to enhance the performance and efficiency of reinforcement learning algorithms. Singh et al. (2010) introduced the Optimal Reward Framework, which aims to find a good reward function that allows agents to solve a distribution of tasks, using exhaustive search. Pathak et al. (2017) introduced the concept of curiosity-driven intrinsic rewards, where the agent is rewarded for actions that lead to novel states, improving its ability to explore complex environments. Zheng et al. (2018) proposed a meta-gradient method named LIRPG to learn intrinsic rewards via a bi-level optimization framework. Zheng et al. (2020) extended LIRPG by learning intrinsic rewards on a distribution of tasks. Liu et al. (2022) developed another meta-gradient method to learn intrinsic rewards from trajectory preferences. In this work, we propose a novel intrinsic reward learning framework designed for CRS, where we learn intrinsic reward functions that satisfy multiple CRS-specific objectives from users' extremely sparse explicit reward feedback.

## 3 Preliminaries

In this section, we define the notations to be used in our technical discussions and some basic notions in multi-objective optimization.

### Problem Definition

Similar to traditional recommender systems, CRS serves a set of users \(\mathcal{U}\) with a set of items \(\mathcal{V}\); and we denote a specific user as \(u\) and an item as \(v\). Each item \(v\) is associated with a set of pre-defined attributes \(\mathcal{P}_{v}\). Attributes describe basic properties of the items, such as genres in movie recommendations and cuisine type in restaurant recommendations. We formulate the CRS problem using a Markov decision process (MDP) (Deng et al., 2021; Lei et al., 2020, 2020), which can be fully described by a tuple \((\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R})\). \(\mathcal{S}\) denotes the state space, which summarizes the conversation between the system and user so far.
\(\mathcal{A}\) denotes the action space for the system, which includes recommending a particular item or asking for feedback on a specific attribute. \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the state transition function, and \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is a reward function. With this formulation, a conversation in CRS can be represented as \(d=\{(a_{1},r_{1}),...(a_{T},r_{T})\}\), where \(T\) is the maximum number of allowed turns. A conversation (or an episode in the language of RL, which we will use interchangeably) terminates when: (1) the user accepts the recommended item; or (2) the CRS agent exhausts the maximum number of allowed turns. At each time step \(t\), the CRS agent, which follows a policy \(\pi_{\theta}(a_{t}|s_{t})\) parameterized by \(\theta\), selects an action \(a_{t}\) based on the current state \(s_{t}\). The training objective of a CRS policy is to maximize the expected cumulative rewards over the set of observed episodes \(\mathcal{D}\), i.e., minimizing the loss \[\mathcal{L}(\pi)=-\operatorname*{\mathbb{E}}_{d\sim P(\mathcal{D})}\big{[}\sum_{t=0}^{T}R_{t}\big{]}, \tag{1}\] where \(R_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{T-t^{\prime}}r(a_{t^{\prime}})\) is the accumulated reward from turn \(t\) to the final turn \(T\), and \(\gamma\in[0,1]\) is a discount factor to emphasize rewards collected in the near term.

Instead of using handcrafted reward functions \(\mathcal{R}\) as in previous works (Lei et al., 2020, Deng et al., 2021; Chu et al., 2023), we learn an intrinsic reward function defined as \(r_{\phi}^{in}(s,a)\) parameterized by \(\phi\) to enhance CRS policy learning. In this context, the original CRS-specific reward is referred to as the extrinsic reward, denoted as \(r^{ex}(s,a)\). The extrinsic reward is inherently sparse, as the only discernible and useful reward signal is the success or failure of an episode, with the intermediate actions' contributions remaining ambiguous. We assign a positive extrinsic reward at the conclusion of a successful episode and a negative reward otherwise. All intermediate actions are assigned a zero extrinsic reward.

### Multi-Objective Optimization

We utilize multi-objective optimization (MOO) to achieve the multi-dimensional goal of CRS, i.e., maximizing the success rate and reducing the length of conversations. MOO aims to simultaneously optimize multiple, possibly conflicting, objectives. This results in a trade-off among objectives, making the CRS problem more complex and challenging to solve. In these cases, the Pareto-optimal solutions represent different optimal trade-offs between the objectives (Deb and Deb, 2013). Consider \(M\) objective functions \(\{\mathcal{L}^{1},...,\mathcal{L}^{M}\}\) towards which a model parameterized by \(\theta\) is optimized. We specify the multi-objective optimization using a vector-valued loss \(\mathbb{L}\), \[\underset{\theta}{\text{min}}\,\mathbb{L}(\theta)=\underset{\theta}{\text{min}}\,\big{(}\mathcal{L}^{1}(\theta),...,\mathcal{L}^{M}(\theta)\big{)}^{\top} \tag{2}\] The goal of multi-objective optimization is achieving Pareto optimality.

**Definition 1** (Pareto optimality).:

(i) _A solution \(\theta\) dominates a solution \(\bar{\theta}\) if \(\mathcal{L}^{t}(\theta)\leq\mathcal{L}^{t}(\bar{\theta})\) for all objectives \(t\) and \(\mathbb{L}(\theta)\neq\mathbb{L}(\bar{\theta})\)._

(ii)
_A solution \(\theta^{*}\) is called Pareto optimal if there exists no solution \(\theta\) that dominates \(\theta^{*}\)._

The set of Pareto optimal solutions is called the Pareto front \(\mathcal{P}_{\theta}\).

## 4 Multi-Objective Intrinsic Reward Learning for CRS

In this section, we elaborate on the structure of CRSIRL, as illustrated in Figure 2. CRSIRL operates on two tiers of optimization: the inner optimization, which improves the policy using both the extrinsic reward and the learned intrinsic reward function, and the outer optimization, which refines the intrinsic reward function based on the policy assessment derived from the inner optimization. Given the absence of supervision for the intrinsic reward function, we establish the relationship between it and the refined policy via gradient descent in the inner optimization. Specifically, we compute the meta-gradient for the intrinsic reward function using the chain rule in the outer optimization.

In the outer optimization, we design a point-wise objective that drives the learnt intrinsic reward function to maximize the (shaped) extrinsic reward, through the use of hindsight reward shaping (HRS). This objective aids in pinpointing pivotal actions that significantly improve the target item's ranking, thereby shortening the conversation length. In parallel, we introduce a pair-wise objective that favors successful trajectories over unsuccessful ones, which assists in identifying actions that result in preferred conversations. This objective is named recommendation preference matching (RPM). Finally, we introduce a holistic multi-objective bi-level optimization framework that optimizes intrinsic rewards to meet both objectives.

Figure 2: Overview of CRSIRL, which consists of two modules, a policy parameterized by \(\theta\) and an intrinsic reward function parameterized by \(\phi\). The optimization of CRSIRL has two levels. In the inner level, a policy is trained to maximize the return defined by both intrinsic and extrinsic rewards. In the outer level, the intrinsic reward function is trained to optimize two CRS-specific objectives realized by the learnt policy's behaviors.

### Hindsight Reward Shaping

As we discussed before, the extrinsic reward is extremely sparse in CRS. The only clear and informative signal from the extrinsic reward is whether the conversation is successful, making it hard to judge the progress of user preference elicitation during the conversation. Reward shaping, as proposed by Ng et al. (1999), serves as a valuable tool for incorporating task-specific knowledge into the reward function. We leverage reward shaping within the outer loop of our model to imbue the process of intrinsic reward learning with more nuanced, task-specific guidance. We use the following hindsight reward shaping to augment the extrinsic reward, \[\tilde{r}^{ex}(s_{t},a_{t})=r^{ex}(s_{t},a_{t})+\gamma w(s_{t+1},v)-w(s_{t},v), \tag{3}\] where \(w\) is a scoring function, \(v\) is the target item and \(\gamma\) is a discount factor. \(\tilde{r}^{ex}(s_{t},a_{t})\) encourages actions which promote the target item; in turn, it helps shorten the conversation length. In our experiments, we use \(w=\log(\rho(s_{t},v)+1)\), where \(\rho(s_{t},v)\) is the rank of target item \(v\) under state \(s_{t}\).
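To make the shaping concrete, here is a minimal sketch of Eq. (3) in Python; it simply transcribes the formula, and the rank arguments standing in for \(\rho(s_{t},v)\) and \(\rho(s_{t+1},v)\) are hypothetical inputs from the recommender's ranking, not part of our implementation.

```python
import math

GAMMA = 0.999  # discount factor; this value matches the one used in our experiments

def w(rank: float) -> float:
    """Scoring function w = log(rho(s, v) + 1), with rho the rank of the target item v."""
    return math.log(rank + 1)

def shaped_extrinsic_reward(r_ex: float, rank_t: float, rank_t1: float) -> float:
    """Hindsight-shaped extrinsic reward of Eq. (3):
    r~ex(s_t, a_t) = r_ex(s_t, a_t) + gamma * w(s_{t+1}, v) - w(s_t, v).
    The sign convention for rho follows Eq. (3) as stated; since the shaping term is
    potential-based (Lemma 1 below), the optimal policy is unchanged either way."""
    return r_ex + GAMMA * w(rank_t1) - w(rank_t)

# Evaluated retroactively over a finished episode, once the target item is known:
delta = shaped_extrinsic_reward(0.0, rank_t=50, rank_t1=5)
```

Because the potential depends only on the target item's rank, the whole episode can be re-scored in hindsight once the target is revealed, which is exactly how HRS is applied in the outer loop.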
**Lemma 1**.: _Consider any reward shaping function \(\mathcal{F}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\). We say \(\mathcal{F}\) is **a potential-based** reward shaping function (PBRS) if there exists a real-valued function \(\Phi:\mathcal{S}\rightarrow\mathbb{R}\) satisfying_ \[\mathcal{F}(s,a,s^{\prime})=\gamma\Phi(s^{\prime})-\Phi(s). \tag{4}\] _Then \(\mathcal{F}\) being PBRS is a necessary and sufficient condition for it to guarantee the consistency of the optimal policy, i.e., the optimal policy of (\(\mathcal{S}\), \(\mathcal{A}\), \(\mathcal{T}\), \(\mathcal{R}+\mathcal{F}\)) is the same as that of (\(\mathcal{S}\), \(\mathcal{A}\), \(\mathcal{T}\), \(\mathcal{R}\))._

The proof is based on (Ng et al., 1999) and omitted in this paper. By matching the form of Eq. (3) against Eq. (4), we can conclude that the hindsight reward shaping satisfies the PBRS condition, and thus the optimal policy is consistent. The resulting objective induced by HRS is \[\mathcal{L}^{ex}(\theta)=-\mathbb{E}\big{[}\sum_{t=0}^{T}\tilde{R}^{ex}_{t}\big{]}, \tag{5}\] where \(\tilde{R}^{ex}_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{T-t^{\prime}}\tilde{r}^{ex}_{t^{\prime}}\). Note that the target item is unknown beforehand, so we can only apply HRS after the target item has been hit, which is why we call it _hindsight_. Otherwise, HRS degenerates to the original extrinsic reward.

### Recommendation Preference Matching

Even though the contributions of intermediate actions to a conversation are undefined in the extrinsic reward, it is still feasible to discern valuable intermediate actions that could potentially lead to a successful conversation, and the learned intrinsic reward should help us identify them. We realize this by contrasting successful and failed episodes under the learnt intrinsic reward: a preferred episode should have a higher likelihood under the optimal policy, compared to a less preferred one; and this optimal policy should be achieved by the correct reward function. Given a policy \(\pi_{\theta}\), the probability that conversation \(\tau^{0}\) is preferred over \(\tau^{1}\) is computed from the likelihoods of the trajectories, \[P_{\theta}\big{[}\tau^{0}\succ\tau^{1}\big{]}=\frac{\exp\sum_{t\in\tau^{0}}\log\pi_{\theta}(a_{t}|s_{t})}{\exp\sum_{t\in\tau^{0}}\log\pi_{\theta}(a_{t}|s_{t})+\exp\sum_{t\in\tau^{1}}\log\pi_{\theta}(a_{t}|s_{t})}, \tag{6}\] Assuming \(\tau^{0}\) is preferred over \(\tau^{1}\), the resulting loss function is given by \[\mathcal{L}^{p}(\theta)=-\mathbb{E}\Big{[}\sum_{\tau^{0}\succ\tau^{1}}\log P_{\theta}\big{[}\tau^{0}\succ\tau^{1}\big{]}\Big{]}, \tag{7}\] where \(\tau^{0},\tau^{1}\in\mathcal{B}\) are sampled from a buffer storing past trajectories. This follows the Bradley-Terry model (Bradley and Terry, 1952) for estimating score functions from pairwise preferences. In the context of CRS, the preference is naturally defined by whether the recommendation is successful or not; and among successful recommendations, we prefer the shorter one. We truncate the failed trajectory to match the length of the successful trajectory.

### Multi-Objective Bi-Level Optimization

The intrinsic reward function is expected to lead to a policy satisfying the above two objectives. This translates to a bi-level optimization procedure for policy learning: first update the policy with the learned intrinsic rewards, and then improve the intrinsic rewards to help the resulting policy better satisfy the above two objectives.
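Before formalizing this bi-level problem, note that the preference probability (6) and loss (7) reduce to a numerically stable log-softmax over trajectory log-likelihoods. A minimal sketch for a single preference pair follows (PyTorch is assumed here purely for autodiff; the paper does not mandate a framework):

```python
import torch

def rpm_loss(logp_succ: torch.Tensor, logp_fail: torch.Tensor) -> torch.Tensor:
    """Pair-wise RPM loss of Eqs. (6)-(7) for one pair with tau^0 (successful) preferred.

    logp_succ: per-turn log pi_theta(a_t | s_t) along the successful episode.
    logp_fail: per-turn log-probs along the failed episode; it is truncated
               below to the length of the successful one, as described above.
    """
    T = logp_succ.shape[0]
    scores = torch.stack([logp_succ.sum(), logp_fail[:T].sum()])
    # P[tau0 > tau1] = exp(s0) / (exp(s0) + exp(s1)); log-sum-exp avoids overflow.
    return -(scores[0] - torch.logsumexp(scores, dim=0))
```

Averaging this quantity over pairs drawn from the buffer \(\mathcal{B}\) yields the empirical \(\mathcal{L}^{p}(\theta)\).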
More formally, we define \[\min_{\phi}\ \mathbb{L}(\theta^{\prime}),\quad\text{s.t.}\quad\theta^{\prime}=\arg\min_{\theta}\mathcal{L}^{ex+in}(\theta,\phi), \tag{8}\] where \(\mathbb{L}(\theta^{\prime})=\left(\mathcal{L}^{ex}(\theta^{\prime}),\mathcal{L}^{p}(\theta^{\prime})\right)\) and \(\mathcal{L}^{ex+in}(\theta,\phi)\) is the negative cumulative reward calculated with the weighted sum \(r^{ex}+\lambda r_{\phi}^{in}\); \(\lambda\) is a hyper-parameter to balance the two rewards. In the inner loop, we optimize the policy with both the extrinsic reward and the learned intrinsic reward function. In the outer loop, we optimize the intrinsic reward function to minimize the vector-valued loss. To derive the gradients for optimization, we first build the connection between \(\theta\) and \(\phi\) in the inner loop, and then derive the gradients on \(\phi\) in the outer loop.

**Inner Loop: Optimizing \(\theta\), building the connection between \(\theta\) and \(\phi\).** We update \(\theta\) as follows, \[\theta^{\prime}=\theta-\eta\cdot\nabla_{\theta}\mathcal{L}^{ex+in}(\theta,\phi), \tag{9}\] where \(\nabla_{\theta}\mathcal{L}^{ex+in}(\theta,\phi)\) can be calculated by the policy gradient theorem (Sutton et al., 1999) and \(\eta\) is the learning rate used in the inner loop. In this way, the updated parameter \(\theta^{\prime}\) becomes a function of \(\phi\). With this connection built, we are able to compute the gradient of \(\phi\) by taking the gradient of the gradient on \(\theta^{\prime}\), i.e., the _meta-gradient_.

**Outer Loop: Optimizing \(\phi\).** In the outer loop, we optimize the vector-valued loss \(\mathbb{L}(\theta^{\prime})\) to satisfy the aforementioned two CRS-specific objectives. Even though we do not have supervision on \(\phi\), the gradient of \(\phi\) can still be derived using the chain rule, \[g(\phi)=\frac{\partial\mathbb{L}(\theta^{\prime})}{\partial\theta^{\prime}}\cdot\frac{\partial\theta^{\prime}}{\partial\phi}. \tag{10}\] Different from single-objective optimization, the first part of Eq. (10) is the derivative w.r.t. the multi-objective function \(\mathbb{L}(\theta^{\prime})\). Sener and Koltun (2018) adopt the multiple gradient descent algorithm (MGDA) (Desideri, 2012) to find a Pareto stationary point for a MOO problem. We follow their approach and solve the following optimization problem, \[\min_{\alpha\in[0,1]}\biggl\{\left\|\alpha\nabla_{\theta^{\prime}}\mathcal{L}^{ex}(\theta^{\prime})+(1-\alpha)\cdot\nabla_{\theta^{\prime}}\mathcal{L}^{p}(\theta^{\prime})\right\|_{2}^{2}\biggr\}, \tag{11}\] where \(\alpha\) has the following analytical solution, \[\alpha=\left[\frac{\left(\nabla_{\theta^{\prime}}\mathcal{L}^{p}(\theta^{\prime})-\nabla_{\theta^{\prime}}\mathcal{L}^{ex}(\theta^{\prime})\right)^{\top}\nabla_{\theta^{\prime}}\mathcal{L}^{p}(\theta^{\prime})}{\left\|\nabla_{\theta^{\prime}}\mathcal{L}^{ex}(\theta^{\prime})-\nabla_{\theta^{\prime}}\mathcal{L}^{p}(\theta^{\prime})\right\|_{2}^{2}}\right]_{+,\uparrow}, \tag{12}\] where \([\cdot]_{+,\uparrow}\) represents clipping to \([0,1]\) as \([a]_{+,\uparrow}=\max(\min(a,1),0)\). The resulting meta-gradient of \(\phi\) becomes \[g(\phi)=\alpha\cdot\frac{\partial\mathcal{L}^{ex}(\theta^{\prime})}{\partial\theta^{\prime}}\cdot\frac{\partial\theta^{\prime}}{\partial\phi}+(1-\alpha)\cdot\frac{\partial\mathcal{L}^{p}(\theta^{\prime})}{\partial\theta^{\prime}}\cdot\frac{\partial\theta^{\prime}}{\partial\phi}. \tag{13}\] Thus \(\phi\) is updated by \[\phi^{\prime}=\phi-\beta\cdot g(\phi), \tag{14}\] where \(\beta\) is the learning rate used in the outer loop.
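The closed-form weight (12) and the combined update (13)-(14) amount to only a few lines. A sketch over flattened gradient vectors follows; the two meta-gradients (the products \(\partial\mathcal{L}/\partial\theta^{\prime}\cdot\partial\theta^{\prime}/\partial\phi\)) are assumed to be supplied by an autodiff library, and the zero-denominator guard is our addition for the degenerate case of identical gradients:

```python
import numpy as np

def mgda_alpha(g_ex: np.ndarray, g_p: np.ndarray) -> float:
    """Closed-form minimizer of ||alpha*g_ex + (1 - alpha)*g_p||^2 over [0, 1] (Eq. 12).
    g_ex, g_p: flattened gradients of L^ex and L^p with respect to theta'."""
    diff = g_ex - g_p
    denom = float(diff @ diff)
    if denom == 0.0:                        # identical gradients: every alpha is optimal
        return 0.5
    alpha = float((g_p - g_ex) @ g_p) / denom
    return min(max(alpha, 0.0), 1.0)        # the clipping operation of Eq. (12)

def outer_step(phi, g_ex, g_p, meta_ex, meta_p, beta=1e-4):
    """Weight the two meta-gradients by alpha and update phi (Eqs. 13-14)."""
    alpha = mgda_alpha(g_ex, g_p)
    return phi - beta * (alpha * meta_ex + (1.0 - alpha) * meta_p)
```

With \(\alpha\) chosen this way, the combined direction is, whenever one exists, a common descent direction for both objectives (Desideri, 2012), which is what distinguishes this outer loop from a fixed weighting of the two losses.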
The outer-loop optimization can thus be viewed as an automatic trade-off between the two objectives, and the resulting intrinsic reward function is expected to strike a good balance between the two CRS-specific objectives.

**Training procedure.** We are now equipped to illustrate the complete learning solution for CRSIRL in Algorithm 1. In the inner loop, we first rollout an episode to calculate \(\mathcal{L}^{ex+in}\). In the outer loop, we also rollout an episode to calculate \(\mathcal{L}^{ex}\) and sample a pair from the trajectory buffer \(\mathcal{B}\) to calculate \(\mathcal{L}^{p}\). We run the inner loop and outer loop alternately until the model converges.

## 5 Experiments

In this section, we conduct extensive experiments on three widely-used CRS benchmarks to study the following research questions: (1) Can CRSIRL achieve better performance than state-of-the-art CRS solutions? (2) How does each proposed component contribute to the final performance of CRSIRL? (3) How does the degree of intrinsic rewards affect policy learning?

### Setup

**Datasets.** We evaluate CRSIRL on three multi-round CRS benchmarks [11, 12]. The **LastFM** dataset is for music artist recommendation. Lei et al. [12] manually grouped the original attributes into 33 coarse-grained attributes. The **LastFM*** dataset is the version where attributes are not grouped. The **Yelp*** dataset is for local business recommendation. We summarize their statistics in Table 1.

**User simulator.** Training and evaluating CRS through direct user interactions can be prohibitively expensive at scale. We address this by employing the user simulator approach from [11], simulating a conversation session for each observed user-item interaction pair \((u,v)\). In this simulation, item \(v\) is considered the target item, and its attribute set \(\mathcal{P}_{v}\) is treated as the oracle set of attributes preferred by user \(u\). The session begins with the simulated user specifying an attribute, randomly selected from \(\mathcal{P}_{v}\). This simulation adheres to the "System Ask, User Respond" paradigm in CRS, as described in [12].

**Baselines.** We consider a rich set of state-of-the-art CRS solutions. **Max Entropy** selects the attribute with maximum entropy based on the current state, or recommends the top-ranked item. **Abs Greedy** [13] continues to suggest items until it either makes a successful recommendation or reaches the maximum number of allowed attempts. **CRM** [23] is an RL-based solution. It integrates user preferences into a belief tracker, which then guides the decision-making process regarding when to ask about which attribute. **EAR** [11] proposes a three-stage RL solution consisting of estimation, action, and reflection. **SCPR** [11] reconceptualizes the CRS problem as an interactive path reasoning process within a user-item-attribute graph. It selects candidate attributes and items based on their relationship to the attributes the user has already given feedback on within this graph. **FPAN** [22] extends the EAR model by utilizing a user-item-attribute graph to enhance offline representation learning. User embeddings are revised dynamically based on users' feedback on items and attributes in the conversation. **UNICORN** [12] merges the conversation and recommendation components into a unified RL agent. To streamline the RL training process, it proposes two heuristic strategies for pre-selecting attributes and items at each turn.
**Evaluation metrics.** We follow previous works on multi-round CRS and evaluate the performance of CRS with the success rate at turn \(T\) (SR@\(T\)) and the average turns (AT) of conversations. SR@\(T\) is the ratio of episodes that end in a successful recommendation within \(T\) turns, while AT is the average number of turns over all conversations. We also report the two-level hierarchical normalized discounted cumulative gain [12], defined as \[hDCG@(T,K)=\sum_{t=1}^{T}\sum_{k=1}^{K}r(t,k)\bigg{[}\frac{1}{\log_{2}(t+2)}+\bigg{(}\frac{1}{\log_{2}(t+1)}-\frac{1}{\log_{2}(t+2)}\bigg{)}\frac{1}{\log_{2}(k+1)}\bigg{]},\] where \(T\) and \(K\) represent the number of conversation turns and recommended items in each turn, and \(r(t,k)\) denotes the relevance of the results at turn \(t\) and position \(k\). Intuitively, successful sessions with fewer turns are preferable for CRS. Also, the target item is expected to be ranked higher on the recommendation list at the success turn. We report \(hDCG@(15,10)\) by default.

\begin{table} \begin{tabular}{l r r r} \hline \hline & **LastFM** & **LastFM*** & **Yelp*** \\ \hline \#Users & 1,801 & 1,801 & 27,675 \\ \#Items & 7,432 & 7,432 & 70,311 \\ \#Attributes & 33 & 8,438 & 590 \\ \#Interactions & 76,693 & 76,693 & 1,368,606 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary statistics of datasets.

**Training details.** All datasets are split with a 7:1.5:1.5 ratio for training, validation, and testing. We use the Transformer-based state encoder proposed in (Chu et al., 2023). We adopt TransE (Bordes et al., 2013) to pretrain the node embeddings on the training set, and use the user simulator described before for online policy learning on the validation set. We first pretrain the policy with only the extrinsic reward using policy gradient and then apply CRSIRL to fine-tune the pretrained policy. The learning rates in the inner and outer loop are searched from \(\{1e^{-5},5e^{-5},1e^{-4}\}\) with the Adam optimizer. The coefficient of the intrinsic reward \(\lambda\) is searched from \(\{0.05,0.1,0.5,1.0\}\). The discount factor \(\gamma\) is set to 0.999. All experiments are run on an NVIDIA GeForce RTX 3080Ti GPU with 12 GB memory. Since the RL-based baselines rely on handcrafted rewards, we follow Lei et al. (2020a) and set (1) \(r_{\text{rec\_suc}}=1\) for a successful recommendation; (2) \(r_{\text{rec\_fail}}=-0.1\) for a failed recommendation; (3) \(r_{\text{ask\_suc}}=0.1\) when the inquired attribute is confirmed by the user; (4) \(r_{\text{ask\_fail}}=-0.1\) when the inquired attribute is dismissed by the user; (5) \(r_{\text{quit}}=-0.3\) when the user quits the conversation without a successful recommendation. We set the maximum turn \(T\) to 15 and the size \(K\) of the recommendation list to 10. We provide more implementation details in the supplementary material.

### Results & Analysis

We present the main results in Table 2. We can clearly observe that CRSIRL outperforms all baselines by a large margin. Both FPAN and EAR are policy-gradient-based methods, but they pretrain their policies via supervised learning on conversation histories generated by a rule-based strategy. This training approach biases policies towards pre-set rules, limiting the performance of policy learning on datasets with larger action spaces (like LastFM* and Yelp*), where more exploration is necessary. SCPR and UNICORN have relatively stable performance on all the datasets. Our CRSIRL outperforms all baselines significantly with its learned intrinsic rewards.
Rather than arbitrarily assigning the reward values, we dynamically optimize them in CRSIRL. Any action that contributes to a final successful recommendation should receive credit and thus be promoted by policy learning, regardless of whether it involves a rejected attribute or a failed recommendation. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{LastFM} & \multicolumn{3}{c}{LastFM*} & \multicolumn{3}{c}{Yelp*} \\ \cline{2-10} & SR@15 & AT & hDCG & SR@15 & AT & hDCG & SR@15 & AT & hDCG \\ \hline Abs Greedy & 0.222 & 13.48 & 0.073 & 0.635 & 8.66 & 0.267 & 0.189 & 13.43 & 0.089 \\ Max Entropy & 0.283 & 13.91 & 0.083 & 0.669 & 9.33 & 0.269 & 0.398 & 13.42 & 0.121 \\ \hline CRM & 0.325 & 13.75 & 0.092 & 0.580 & 10.79 & 0.224 & 0.177 & 13.69 & 0.070 \\ EAR & 0.429 & 12.88 & 0.136 & 0.595 & 10.51 & 0.230 & 0.182 & 13.63 & 0.079 \\ SCPR & 0.465 & 12.86 & 0.139 & 0.709 & 8.43 & 0.317 & 0.489 & 12.62 & 0.159 \\ FPAN & 0.630 & 10.16 & 0.224 & 0.667 & 7.82 & 0.407 & 0.236 & 12.77 & 0.116 \\ UNICORN & 0.535 & 11.82 & 0.175 & 0.788 & 7.58 & 0.349 & 0.520 & 11.31 & 0.203 \\ \hline **CRSIRL** & **0.772\({}^{\dagger}\)** & **10.12\({}^{\dagger}\)** & **0.231\({}^{\dagger}\)** & **0.913\({}^{\dagger}\)** & **6.79\({}^{\dagger}\)** & **0.431\({}^{\dagger}\)** & **0.622\({}^{\dagger}\)** & **10.61\({}^{\dagger}\)** & **0.228\({}^{\dagger}\)** \\ \hline \hline \end{tabular} \end{table} Table 2: Main results. For SR@15 and hDCG, higher is better. For AT, lower is better. \({}^{\dagger}\) represents the improvement over all baselines is statistically significant with \(p\)-value \(<0.01\). \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{LastFM} & \multicolumn{3}{c}{LastFM*} & \multicolumn{3}{c}{Yelp*} \\ \cline{2-10} & SR@15 & AT & hDCG & SR@15 & AT & hDCG & SR@15 & AT & hDCG \\ \hline PG & 0.724 & 10.42 & 0.217 & 0.882 & 7.41 & 0.401 & 0.598 & 10.95 & 0.186 \\ \(-\)HRS & 0.732 & 10.58 & 0.219 & 0.898 & 7.16 & 0.401 & 0.602 & 11.21 & 0.196 \\ \(-\)RPM & 0.754 & 10.14 & 0.227 & 0.904 & 6.89 & 0.415 & 0.606 & 10.81 & 0.213 \\ MTL & 0.768 & **10.09** & 0.224 & 0.908 & 6.92 & 0.426 & 0.613 & 10.73 & 0.207 \\ \hline **CRSIRL** & **0.772** & 10.12 & **0.231** & **0.913** & **6.79** & **0.431** & **0.622** & **10.61** & **0.228** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of different components of CRSIRL. ### Ablation Study **Contributions of each component in CRSIRL.** We evaluate different variants of CRSIRL to study the contributions of each proposed component. Firstly, we disable the fine-tuning with CRSIRL and directly report the results after the policy gradient pretraining with only extrinsic rewards, denoted as PG. Secondly, we remove HRS and RPM to evaluate their individual effectiveness. In both variants, the outer loop degenerates to a single objective optimization problem. Finally, we conduct an experiment where instead of updating the objectives with MOO, we treat the two objectives as distinct tasks and assign them equal weights, a process referred to as Multi-Task Learning (MTL). We present the results in Table 3. Interestingly, we observe that directly optimizing the extrinsic rewards with policy gradient already outperformed most of baselines. It is worth noting that PG uses sparser rewards than other baselines in Table 2 with manually-defined rewards for intermediate actions. However, the exploratory behavior of PG enables it to outperform these baselines. 
We can observe that without HRS, the AT metric degrades on all three datasets. HRS prefers actions which improve the rank of the target item, which is the most direct measure of an action's utility in CRS. Even though the asked questions could still be helpful without HRS, HRS provides explicit hints about how to ask the most useful questions, leading to a smaller AT. Besides, the performance decreases after removing RPM, which finds actions leading to successful recommendations by comparing successful and failed trajectories. Lastly, MTL shows a significant improvement compared to PG, and it occasionally outperforms CRSIRL (e.g., AT on LastFM). However, MTL has difficulty balancing the two objectives, generally resulting in worse performance than CRSIRL.

**Analysis of \(\lambda\).** We use the hyper-parameter \(\lambda\) to control the influence of intrinsic rewards, so it is important to study how it affects policy learning. In this experiment, we evaluate CRSIRL with \(\lambda=\{0.05,0.1,0.5,1.0\}\) on the LastFM dataset. The result is shown in Figure 3. We can clearly observe that the performance peaks when an appropriate \(\lambda\) is chosen. With a small \(\lambda\), the intrinsic reward is not strong enough to affect the final performance. However, a large value of \(\lambda\) can unfortunately impair performance. This is because the estimation errors in the intrinsic reward could overwhelm the extrinsic reward. Even though it is sparse, the extrinsic reward can help calibrate the intrinsic reward.

### Case Study

Additionally, we performed a qualitative study to analyze the learned intrinsic rewards of CRSIRL (shown in Figure 4) on the LastFM dataset. The natural language questions and user responses are generated by predefined templates. We observe that the intrinsic rewards depend not only on whether the user accepts or rejects the action, but also on how well the action contributes to the final recommendation. Even though the user accepts _pop_, the intrinsic reward for this action remains negative. This is because _pop_ is a very general attribute and contributes little to modeling the user's preference. Conversely, although _vocalist_ is rejected by the user, it still carries a small positive value as it aids in identifying the target artist. Finally, _Indie_ and _Punk_ are two attributes that are accepted and best describe the target artist, _Franz Ferdinand_ (a band known for _indie rock_ and _post-punk revival_). Consequently, they carry relatively large positive intrinsic rewards. This case shows that CRSIRL can provide more fine-grained reward signals, leading to better final performance.

Figure 3: Performance with different \(\lambda\).

Figure 4: Conversations generated by CRSIRL. The values of the learned intrinsic rewards are marked in red.

## 6 Conclusions

In this paper, we study an important but largely under-explored problem in CRS, i.e., reward function design. We present a principled solution for reward learning in CRS and formulate an online algorithm to learn intrinsic rewards via bi-level multi-objective optimization. In the inner loop of CRSIRL, we optimize the policy with the learned intrinsic reward function, and in the outer loop, we optimize the intrinsic reward function to satisfy two criteria designed for CRS: maximizing the success rate and minimizing the number of turns in conversations. The results on three CRS benchmarks demonstrate the effectiveness of the learned intrinsic rewards. CRSIRL sheds light on learning reward functions to improve CRS.
Currently, we consider two directly quantifiable objectives for CRS, i.e., success rate and conversation length. Other perspectives, such as user satisfaction (Liang et al., 2006) and fairness (Lin et al., 2022), are worth investigating and embedding into reward learning. Moreover, beyond template-based conversation generation, it is important to integrate CRSIRL with advanced natural language-based conversational agents, such as (Touvron et al., 2023; Zhang et al., 2023), to learn reward functions that satisfy multiple objectives favored by humans during natural language driven interactions, such as conversational persuasiveness, cohesiveness and explainability (Moon et al., 2019).

## 7 Acknowledgement

We thank the anonymous reviewers for their insightful comments. This work was partially supported by NSF IIS-2007492, IIS-2128019 and NSF IIS-1838615.
2305.19516
Deep learning inter-atomic potential for irradiation damage in 3C-SiC
We developed and validated an accurate inter-atomic potential for molecular dynamics simulation in cubic silicon carbide (3C-SiC) using a deep learning framework combined with smooth Ziegler-Biersack-Littmark (ZBL) screened nuclear repulsion potential interpolation. Comparisons of multiple important properties were made between the deep-learning potential and existing analytical potentials which are most commonly used in molecular dynamics simulations of 3C-SiC. Not only for equilibrium properties but also for significant properties of radiation damage such as defect formation energies and threshold displacement energies, our deep-learning potential gave closer predictions to DFT criterion than analytical potentials. The deep-learning potential framework solved the long-standing dilemma that traditional empirical potentials currently applied in 3C-SiC radiation damage simulations gave large disparities with each other and were inconsistent with ab-initio calculations. A more realistic depiction of the primary irradiation damage process in 3C-SiC can be given and the accuracy of classical molecular dynamics simulation for cubic silicon carbide can be expected to the level of quantum mechanics.
Yong Liu, Hao Wang, Linxin Guo, Zhanfeng Yan, Jian Zheng, Wei Zhou, Jianming Xue
2023-05-31T02:56:40Z
http://arxiv.org/abs/2305.19516v1
# Deep learning inter-atomic potential for irradiation damage in 3C-SiC

###### Abstract

We developed and validated an accurate inter-atomic potential for molecular dynamics simulation in cubic silicon carbide (3C-SiC) using a deep learning framework combined with smooth Ziegler-Biersack-Littmark (ZBL) screened nuclear repulsion potential interpolation. Comparisons of multiple important properties were made between the deep-learning potential and existing analytical potentials which are most commonly used in molecular dynamics simulations of 3C-SiC. Not only for equilibrium properties but also for significant properties of radiation damage such as defect formation energies and threshold displacement energies, our deep-learning potential gave closer predictions to DFT criterion than analytical potentials. The deep-learning potential framework solved the long-standing dilemma that traditional empirical potentials currently applied in 3C-SiC radiation damage simulations gave large disparities with each other and were inconsistent with ab-initio calculations. A more realistic depiction of the primary irradiation damage process in 3C-SiC can be given and the accuracy of classical molecular dynamics simulation for cubic silicon carbide can be expected to the level of quantum mechanics.

+ Footnote †: Corresponding author: [email protected]

## I Introduction

Cubic silicon carbide has been widely used for electronic and nuclear applications due to its outstanding mechanical properties, high thermal conductivity, chemical stability, and good radiation response [1; 2]. The mechanical and electrical properties of 3C-SiC degrade due to changes in its microstructure when it is subjected to high-energy neutrons in the nuclear environment. Understanding the primary irradiation process is of crucial importance for estimating the usable lifetime of this material. Ab-initio molecular dynamics (AIMD) with density functional theory (DFT) and classical molecular dynamics (CMD) are the main tools to simulate the primary irradiation damage process at the atomic level, beyond the limits of experimental techniques [3]. On the one hand, AIMD is accurate but computationally costly: it can only involve a few hundred atoms over trajectories several hundred picoseconds long [4]. Many thousands of atoms can be knocked out of their equilibrium positions by one energetic ion or neutron generated from a nuclear reaction. Therefore, AIMD cannot cover the atomic scale required to simulate the primary radiation damage process [5]. On the other hand, CMD is efficient enough to satisfy the computational demand of primary irradiation dynamics simulation, but the accuracy of the simulation results greatly depends upon the employed inter-atomic potential. The widely used potential functions for CMD simulations of silicon carbide materials and their applications are summarized in Table 1. Although the functional forms of different empirical analytic potentials differ, the processes of their development are basically the same. First, a mathematical function based on a physical understanding of interatomic interactions in the material is proposed, with a handful of global fitting parameters. Then, a series of labeled physical properties from experiments or ab-initio calculations is used to fit these adjustable parameters.
Finally, this fixed expression is used to predict the energies and forces of new configurations in MD simulations. Although all the potential functions in Table 1 take the three-body effect and bond-angle effect into account to describe the many-body interactions and the strong directionality of the covalent bonds in the material, the true interactions in silicon carbide are determined by complex many-body interactions.

\begin{table} \begin{tabular}{l l} Potentials & Applications \\ \hline Tersoff[7; 8; 9; 10; 11] & Thermal properties, Mechanical properties, Electrical properties, Polishing, Ion implantation, Crystal growth, Irradiation damage, Amorphization, Fatigue damage, Shock damage \\ Tersoff/ZBL[12] & Ion implantation, Irradiation damage \\ GW[13] & Irradiation damage \\ GW/ZBL[14] & Crystal growth, Irradiation damage \\ Vashishta[15] & Mechanical properties, Electrical properties, Deposition, Shock damage, Polishing \\ MEAM[16; 17] & Crystal growth, Thermal properties, Irradiation damage \\ EDIP[18; 19] & Mechanical properties \\ \end{tabular} \end{table} Table 1: The widely used empirical potentials for MD simulations of silicon carbide materials [6].

The ability of traditional analytical force fields to fit the corresponding potential energy surface is inherently limited by their relatively simple functional forms and few adjustable parameters. G. Lucas and L. Pizzagalli pointed out that the use of available empirical potentials is the largest source of error in calculating threshold displacement energies in 3C-SiC and called for the improvement of existing potentials [20]. G. D. Samolyuk's study shows that the most commonly used Tersoff and MEAM potentials for SiC are inconsistent with ab-initio calculations of defect energetics: the Tersoff potential predicts a very high interstitial formation energy and a high defect migration energy [21]. The GW-ZBL potential gives a more realistic description of defect formation energies but still overestimates the defect migration energy barrier [21]. Andrey Sarikov and co-workers obtained divergent simulation results from different potentials (including Tersoff, Vashishta [22] and an analytical bond order potential [11]) in their study of partial dislocations and stacking faults in 3C-SiC [23]. The inaccurate depiction of these key physical quantities undermines confidence in the correctness of MD simulations of radiation damage in 3C-SiC. A new potential that can accurately describe the inter-atomic interactions is urgently needed.

Recently, machine learning methods that combine DFT training data to build potential energy surfaces (PES) have developed rapidly [24; 25; 26; 27; 28; 29]. Compared with the construction of traditional empirical potentials, machine-learning potentials have a more powerful fitting ability due to their flexible, non-preset functional forms and abundant adjustable parameters [30]. Moreover, unlike empirical potentials, which only fit a subset of properties, machine-learning potentials can sample as many configurations as needed to train the PES. Owing to their more general expressions and more complete training data, machine-learning potentials can give a more accurate prediction of the PES and capture the underlying physical mechanisms. A variety of CMD simulations with DFT accuracy in different areas have been carried out with the help of machine-learning potentials [31; 32; 33; 34].
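For reference, the ZBL screened nuclear repulsion on which the hybrid potentials in Table 1 (and the DP-ZBL model developed below) rely has a standard closed form. The following is a minimal sketch using the usual universal-screening coefficients; the cosine switching window is only an illustrative assumption, as the actual DP-ZBL interpolation follows Ref. [35]:

```python
import math

# Universal ZBL screening function coefficients (Ziegler-Biersack-Littmark).
C = (0.18175, 0.50986, 0.28022, 0.02817)
D = (3.19980, 0.94229, 0.40290, 0.20162)
KE2 = 14.399645  # e^2 / (4*pi*eps0) in eV*Angstrom

def zbl_energy(r: float, z1: int, z2: int) -> float:
    """Screened Coulomb repulsion V(r) = (z1*z2*KE2/r) * phi(r/a); r in Angstrom, V in eV."""
    a = 0.46850 / (z1 ** 0.23 + z2 ** 0.23)  # universal screening length in Angstrom
    x = r / a
    phi = sum(c * math.exp(-d * x) for c, d in zip(C, D))
    return KE2 * z1 * z2 / r * phi

def switch(r: float, r1: float = 1.0, r2: float = 1.2) -> float:
    """Smooth weight: 1 (pure ZBL) below r1 and 0 (pure model) above r2; cosine form assumed."""
    if r <= r1:
        return 1.0
    if r >= r2:
        return 0.0
    t = (r - r1) / (r2 - r1)
    return 0.5 * (1.0 + math.cos(math.pi * t))

def blended_energy(r: float, e_model: float, z1: int = 14, z2: int = 6) -> float:
    """Hybrid short-range energy w(r)*E_ZBL + (1 - w(r))*E_model, e.g. for a Si-C pair."""
    s = switch(r)
    return s * zbl_energy(r, z1, z2) + (1.0 - s) * e_model
```

With Z = 14 (Si) and Z = 6 (C), `zbl_energy` generates the kind of short-range energy table described in Sec. II; hybrid schemes of this family differ mainly in how the blending weight between the repulsive core and the fitted potential is chosen.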
In the field of radiation damage, machine-learning potentials have also been developed to simulate irradiation damage processes in different materials such as fcc-aluminum [35], tungsten [29], silicon [36], and bcc-iron [37]. So far, most machine-learning potentials target single-element systems, because the number of configurations needed to train the model increases exponentially with the number of principal elements. In this work, we applied the DP-ZBL (Deep-learning Potential interpolated with ZBL) model [35] to train a deep-learning inter-atomic potential for 3C-SiC hybridized with the ZBL screened nuclear repulsion potential. To capture the correct physics when atoms come extremely close to each other, which happens frequently in the irradiation damage process because of the high kinetic energies of the atoms, the widely used ZBL screened nuclear repulsion potential [38] has been interpolated into the deep learning framework so that the short-range repulsive interaction between atoms is accurately described. We refer the reader to Ref. [35] for more details on the interpolation mechanism. Compared to the analytical empirical potentials, including Tersoff, MEAM, Vashishta, EDIP, and GW-ZBL, the DP-ZBL potential not only achieves DFT accuracy for near-equilibrium properties such as the lattice constant, elastic properties, equation of state, and phonon dispersion, but also gives a correct description of the short-range repulsive interaction. We concentrate on the correct prediction of defect formation energies and threshold displacement energies, because these physical quantities play decisive roles in the irradiation process and large disparities exist between studies using distinct inter-atomic potentials. The DP-ZBL model can resolve these controversies and yield a more realistic molecular dynamics simulation of radiation damage in 3C-SiC. ## II Method ### Training process The accuracy and transferability of DP models are determined by the quality of the training dataset, which should be complete enough to cover the target simulation space. To obtain a good description of the energies and forces of the dimers (Si-Si, Si-C, C-C), the elastic properties, the phonon dispersion, and the defect formation energies, the corresponding configurations recorded in Table 2 were taken as the initial training dataset. Data used to train the neural network were all generated by DFT calculations with the VASP code [39; 40; 41]. The generalized gradient approximation (GGA) with the PBE [42] exchange-correlation functional was used. The plane-wave cutoff energy was set to 600 eV, high enough to cover the large deformations occurring in the irradiation process. A consistent k-point spacing in the Brillouin zone (KSPACING = 0.15 Å\({}^{-1}\)) on a Gamma-centered grid was used for all configurations. Gaussian smearing with a width of 0.03 eV was applied to aid convergence. Spin-polarized calculations were performed to account for the possible spin polarization of various defect configurations. After the initial dataset was used to kick off the first round of training, an active-learning process was performed with DP-GEN [43] to sample additional configurations into the training dataset. The active-learning process was terminated once we validated that the potential energy surface was sufficiently accurate. In this work, we went through fifty active-learning iterations to obtain the final model, which brought the training dataset to 33898 configurations in total; the structure of one such iteration is illustrated in the sketch below. 
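The listing below is a runnable toy version of this query-by-committee loop, with a bootstrap committee of polynomial fits and a Lennard-Jones-like curve standing in for the deep-potential ensemble and the DFT labeler; only the selection rule, which keeps configurations on which the committee disagrees, mirrors DP-GEN's model-deviation criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pes = lambda r: (1.0 / r) ** 12 - 2.0 / r ** 6   # toy "DFT" pair energy

def fit_committee(X, y, n_models=4, deg=5):
    """Train an ensemble on bootstrap resamples of the current dataset
    (polynomial fits stand in for independently initialized deep potentials)."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        models.append(np.polyfit(X[idx], y[idx], deg))
    return models

X = rng.uniform(0.9, 1.6, 12)      # sparse initial data near equilibrium
y = true_pes(X)

for iteration in range(50):
    models = fit_committee(X, y)
    probe = rng.uniform(0.8, 2.5, 200)                 # "exploration" configs
    preds = np.stack([np.polyval(m, probe) for m in models])
    deviation = preds.std(axis=0)                      # committee disagreement
    selected = probe[deviation > 0.05][:10]            # model-deviation criterion
    if selected.size == 0:                             # committee agrees: stop
        break
    X = np.concatenate([X, selected])                  # "label with DFT" and add
    y = np.concatenate([y, true_pes(selected)])

print(f"stopped after {iteration + 1} iterations with {len(X)} training points")
```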
In the exploration stage, the NVT temperature was ramped from 300 K to 4000 K, and the environmental pressure was set to one atmosphere. ### Interpolation for short range repulsion We refer to our previous work, Ref. [35], for the details of the interpolation method between DP and ZBL. Meanwhile, R. E. Stoller's work in Ref. [44] provides a more systematic and effective procedure for bridging the equilibrium and short-range parts, which avoids many invalid tests. First, a table recording the energies of the Si-Si, Si-C, and C-C dimers at short range (0.001 Å to 1.200 Å, in steps of 0.001 Å) was generated from the ZBL formula. The dimer configurations of Si-Si, Si-C, and C-C in the range 0.5 Å to 5.0 Å, in steps of 0.05 Å, were calculated by DFT and added to the training dataset. Then the DP and ZBL potentials were smoothly joined in the interval (1.0 Å, 1.2 Å) so that ZBL plays its role at short range and the DP model dominates under equilibrium conditions. As shown in Figure 1, when the Si-C dimer separation is less than 1.0 Å, DP-ZBL is consistent with the pure ZBL potential; when the separation is larger than 1.2 Å, DP-ZBL gives results dominated by the DFT calculations; and for separations between 1.0 and 1.2 Å, DP-ZBL switches smoothly from ZBL to DFT. ## III Result We compared the energies calculated by the DP-ZBL model and the DFT method for all configurations in the training dataset. As shown in Fig. 2 and Fig. 3, the DP-ZBL predictions are consistent with the DFT calculations, as the points cluster around the y = x reference line. The root-mean-square errors (RMSEs) of the energies and forces are \(0.01\) meV/atom and \(0.16\) eV/Å, respectively, which are within the accuracy expected for typical DeePMD-kit training [45] given the ranges of energy and force in the dataset. Several near-equilibrium material properties were calculated with the DP-ZBL model, the DFT method, and four empirical potentials (Tersoff-ZBL, GW-ZBL, MEAM, and Vashishta) for comparison, as shown in Table 3: the lattice parameter a\({}_{0}\), elastic constants C\({}_{11}\), C\({}_{12}\), C\({}_{44}\), bulk modulus K\({}_{\rm H}\), Young's modulus E\({}_{\rm H}\), shear modulus G\({}_{\rm H}\) (all moduli in Hill notation), cohesive energy E\({}_{\rm coh}\), and defect formation energies. All DP-ZBL results are in excellent agreement with DFT data computed in this work or taken from other works. In contrast, most empirical potentials do not give good predictions for most properties. For instance, the Tersoff-ZBL potential underestimates the lattice constant, which \begin{table} \begin{tabular}{l l} Concerned properties & Configuration type \\ \hline Bulk properties & Equilibrium state \\ & Compressed \\ & Stretched \\ Thermo properties & Atom displaced \\ Elastic properties & Elastically distorted \\ Defect properties & Vacancy with strain \\ & Antisite with strain \\ & Tetrahedral interstitial with strain \\ Liquid phase & Frames of liquid trajectory \\ Short-range interactions & Dimers \\ Irradiation damage & Frames of PKA activation trajectory \\ \end{tabular} \end{table} Table 2: Configurations included in the initial training database for the corresponding properties. Figure 1: The calculated energy versus distance curve of the Si-C dimer from DFT, ZBL, and DP-ZBL. Figure 2: Comparison of energy predictions by DFT and DP for the final training set. Both axes represent the energy of the configuration divided by the total number of atoms in the configuration. 
is the most basic physical quantity in simulations. However, Tersoff-ZBL works well for the elastic constants C\({}_{11}\), C\({}_{12}\), and C\({}_{44}\). The GW-ZBL potential mispredicts almost the entire elastic response. Meanwhile, the MEAM potential shows modest errors, and the Vashishta potential underestimates the C\({}_{44}\) constant. The formation energy of defects in 3C-SiC is defined as follows: \[\mathrm{E_{f}=E_{defect}-E_{perfect}+n_{Si}\mu_{Si}+n_{C}\mu_{C}} \tag{1}\] where E\({}_{perfect}\) and E\({}_{defect}\) are the total energies of a perfect 3C-SiC supercell and of a supercell containing a defect, respectively. The integer n\({}_{Si}\) gives the number of Si atoms removed from (n\({}_{Si}\)\(>\)0) or added to (n\({}_{Si}\)\(<\)0) the perfect supercell, and n\({}_{C}\) follows the same logic. Here \(\mu_{Si}\) and \(\mu_{C}\) are the chemical potentials of the Si atom and the C atom in the 3C-SiC bulk environment, respectively. In this work, all defect formation energies were calculated for the Si-rich condition, which means the chemical potential of the Si atom in 3C-SiC is limited to that in the cubic silicon crystal. In this context, \(\mu_{Si}=\mu_{Si}\)(bulk) and \(\mu_{C}=\mu_{SiC}-\mu_{Si}\), where \(\mu_{SiC}\) is the chemical potential of a Si-C atom pair in 3C-SiC [46]. The results calculated by the DP-ZBL model match the DFT results well for all the defect configurations in this work. The GW-ZBL potential underestimates the defect formation energies of most configurations except the antisite of C replacing Si. The Tersoff-ZBL potential underestimates the defect formation energies of vacancies and antisites but overestimates those of the tetrahedral interstitials. Defect formation energy is a key physical quantity that reflects the accuracy of irradiation simulations. To sum up, the DP-ZBL model performs much better than the four listed empirical potentials in predicting the near-equilibrium properties. The equation of state curves of the 3C-SiC phase computed by the different empirical potentials, the DP-ZBL model, and the DFT method are illustrated in Figure 4. The DP-ZBL model reproduces the DFT results well, which indicates that the DP-ZBL potential is capable of covering strongly compressed and stretched conditions. By contrast, the Tersoff-ZBL and Vashishta potentials produce large errors relative to the DFT benchmark under compression. The GW-ZBL and Tersoff-ZBL potentials overestimate the potential energy when the system is stretched to 1.2 times its equilibrium state. The phonon dispersion curve of the 3C-SiC phase has been calculated along the high-symmetry directions \(\Gamma\)-X-K-\(\Gamma\)-L by our DP-ZBL model and the DFT method. The force constants were calculated by density functional perturbation theory using VASP for the DFT method and using phonoLAMMPS for the DP-ZBL model. The Phonopy package [50] was then used to compute the phonon dispersion relations. Nonmetallic crystals are polarized by atomic displacements, and the generated macroscopic field changes the force constants near the \(\Gamma\) point [51]. To take this into account, phonon frequencies at general q-points with the long-range dipole-dipole interaction were calculated by the method of Gonze et al. [52; 53]. 
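As an aside to the defect-energy comparison above, the bookkeeping behind Eq. (1) under the Si-rich convention can be made concrete in a few lines; all supercell energies and chemical potentials below are hypothetical placeholders, chosen only to illustrate the sign conventions.

```python
def formation_energy(E_defect, E_perfect, n_Si, n_C, mu_Si_bulk, mu_SiC):
    """E_f = E_defect - E_perfect + n_Si*mu_Si + n_C*mu_C  (Eq. 1).
    n > 0 counts atoms removed from the perfect supercell, n < 0 atoms added.
    Si-rich condition: mu_Si is pinned to bulk Si, and mu_C = mu_SiC - mu_Si."""
    mu_Si = mu_Si_bulk
    mu_C = mu_SiC - mu_Si
    return E_defect - E_perfect + n_Si * mu_Si + n_C * mu_C

# Hypothetical energies in eV, for illustration only.
E_perfect = -1000.00            # perfect 3C-SiC supercell
mu_Si, mu_SiC = -5.42, -15.06   # bulk Si atom; Si-C pair in 3C-SiC

print(formation_energy(-987.0, E_perfect, n_Si=1, n_C=0,    # Si vacancy: one Si removed
                       mu_Si_bulk=mu_Si, mu_SiC=mu_SiC))
print(formation_energy(-1005.0, E_perfect, n_Si=0, n_C=-1,  # C interstitial: one C added
                       mu_Si_bulk=mu_Si, mu_SiC=mu_SiC))
```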
The Born effective charges (Z\({}^{*}\)) and dielectric constant (\(\epsilon_{0}\)) calculated with the GGA functional (Z\({}^{*}_{\mathrm{Si}}=\) Z\({}^{*}_{\mathrm{C}}=2.69\), \(\epsilon_{0}=6.99\)) are in agreement with the experimental values (Z\({}^{*}_{\mathrm{Si}}=\) Z\({}^{*}_{\mathrm{C}}=2.69\) [54], \(\epsilon_{0}=6.52\) [55]). As shown in Fig. 5, both the acoustic branches and the optical branches of the six phonon modes generated by the DP-ZBL model are close to the DFT results. In addition, the results from both theoretical models match well with the experimental data measured by Serrano et al. [56] at room temperature using inelastic x-ray scattering (IXS) with a synchrotron radiation source, which means the crystal thermal response of 3C-SiC is well described by our DP-ZBL potential. Figure 3: Comparison of force predictions by DFT and DP for the final training set. The color bar indicates the density of data. Forces greater than 50 eV/Å are not shown in the figure. Figure 4: Equation of state of the 3C-SiC phase as computed by the different empirical potentials, the DP-ZBL model, and the DFT method. Threshold displacement energy (E\({}_{\mathrm{d}}\)) is defined as the minimum kinetic energy transferred to a lattice atom to displace it from its original Wigner-Seitz cell and form a stable Frenkel pair [57]. \(\mathrm{E_{d}}\) is a critical physical parameter for estimating damage production and predicting the defect profile under ion, neutron, and electron irradiation [58]. For example, \(\mathrm{E_{d}}\) is a key input in large-scale irradiation simulation packages such as SRIM and TRIM to determine implantation profiles in doping processes or to calculate damage accumulation in materials [59]. In this work, the \(\mathrm{E_{d}}\) values for both Si and C primary knock-on atoms (PKAs) along four typical low-index crystallographic directions, [100], [110], [111], and [\(\overline{111}\)], were calculated using different interatomic potentials for comparison. The simulations were performed at 300 K. A noncubic simulation box of a \(10\times 10\times 12\) supercell (9600 atoms) with periodic boundary conditions was used. Kinetic energies in 0.5 eV increments were progressively assigned to a specific PKA in the central area to find the minimum energy. The simulation system was relaxed in the canonical ensemble (NVT) for 10 ps at 300 K, followed by a cascade simulation in the microcanonical ensemble (NVE) for 10 ps. The Wigner-Seitz defect method was used to identify defects. From the calculation results summarized in Table 4, the \(\mathrm{E_{d}}\) values generated by our DP-ZBL are close to the DFT calculations carried out by Zhao et al. [60]. GW-ZBL, MEAM, and Vashishta diverge from the DFT calculations in multiple crystal directions. Tersoff performs better than the three other empirical potentials, but it is also out of line with the DFT values for the Si PKA in the [\(\overline{111}\)] direction and the C PKA in the [110] direction. After this comparison, it is clear that the DP-ZBL potential yields better \(\mathrm{E_{d}}\) values than the empirical potential functions. We then carried out cascade simulations involving \(60\times 60\times 60\) unit cells (1728000 atoms) using the different potentials widely applied to irradiation damage simulations, namely Tersoff-ZBL, MEAM, GW-ZBL, and our DP-ZBL. The simulated system was equilibrated for 10 ps with a timestep of 1 fs in the NVT ensemble at 300 K. 
Then a single Si atom in the central area was given a kinetic energy of 5.0 keV in the [135] direction to initialize the cascade, while the total momentum of the system was held at zero. The cascade evolved for 10 ps in the NVE ensemble, and during this period the timestep was adapted so that the distance traveled by the fastest particle in the system was less than 0.1 Å per timestep. Figure 5: Phonon dispersion curve of 3C-SiC calculated by our DP-ZBL model and the DFT method, together with the experimental data measured by Serrano et al. [56] using IXS. \begin{table} \begin{tabular}{c c c c c c c c c} Properties & DFT\({}_{\mathrm{Ref}}\) & DFT\({}_{\mathrm{current}}\) & DP-ZBL & Tersoff-ZBL & GW-ZBL & MEAM & Vashishta & EDIP \\ \hline a\({}_{0}\)(Å) & 4.38051 & 4.3784 & 4.3778 & 4.2796 & 4.3600 & 4.3595 & 4.3582 & 4.3624 \\ E\({}_{\mathrm{coh}}\) & -15.06241 & -15.0643 & -15.0630 & -12.68 & -12.82 & -12.86 & -12.68 & -12.67 \\ C\({}_{11}\)(GPa) & 383.91 & 380.8 & 375.3 & 445.7 & 265.2 & 396.5 & 390.1 & 396.8 \\ C\({}_{12}\)(GPa) & 127.61 & 126.8 & 124.0 & 138.7 & 219.3 & 147.1 & 142.8 & 140.5 \\ C\({}_{44}\)(GPa) & 239.51 & 240.1 & 223.1 & 220.0 & 101.1 & 135.6 & 136.9 & 170.3 \\ K\({}_{\mathrm{H}}\)(GPa) & 213.01 & 211.5 & 207.7 & 241.0 & 234.6 & 230.3 & 225.2 & 226.0 \\ E\({}_{\mathrm{H}}\)(GPa) & 432.81 & 431.4 & 413.9 & 452.2 & 156.5 & 330.6 & 330.1 & 372.5 \\ G\({}_{\mathrm{H}}\)(GPa) & 186.31 & 186.0 & 177.2 & 190.4 & 56.4 & 131.1 & 131.4 & 152.0 \\ v\({}_{\mathrm{H}}\) & 0.161 & 0.16 & 0.17 & 0.19 & 0.39 & 0.26 & 0.26 & 0.23 \\ V\({}_{\mathrm{Si}}\) & 7.752 & 7.72 & 7.66 & 8.24 & 6.89 & 4.90 & 12.73 & 4.60 \\ V\({}_{\mathrm{C}}\) & 4.092 & 4.21 & 4.10 & 3.76 & -0.84 & 1.06 & -3.38 & 1.22 \\ C\({}_{\mathrm{Si}}\) & 3.911 & 3.92 & 3.91 & 3.29 & 8.87 & 2.74 & 33.48 & 3.02 \\ Si\({}_{\mathrm{C}}\) & 3.291 & 3.35 & 3.31 & 4.90 & 0.74 & 3.84 & -3.32 & 2.04 \\ Si\({}_{\mathrm{TSi}}\) & 10.871 & 10.22 & 10.26 & 16.65 & 3.23 & 4.00 & -2.07 & 11.69 \\ Si\({}_{\mathrm{TC}}\) & 9.041 & 8.47 & 8.34 & 16.48 & 0.33 & 3.22 & -3.41 & 12.24 \\ C\({}_{\mathrm{TSi}}\) & 10.092 & 9.97 & 9.88 & 4.89 & 7.86 & 9.08 & 17.84 & 6.69 \\ C\({}_{\mathrm{TC}}\) & 11.102 & 10.96 & 10.86 & 7.89 & 8.22 & 3.05 & 21.16 & 8.29 \\ \end{tabular} \end{table} Table 3: Basic properties of 3C-SiC: lattice constant a\({}_{0}\), cohesive energy \(\mathrm{E_{coh}}\), elastic constants \(C_{11}\), \(C_{12}\) and \(C_{44}\), bulk modulus \(K_{\mathrm{H}}\) (Hill), Young’s modulus \(E_{\mathrm{H}}\) (Hill), shear modulus \(G_{\mathrm{H}}\) (Hill), Poisson’s ratio \(\mathrm{v_{H}}\) (Hill), and the formation energies of vacancies, antisites, and tetrahedral interstitials. To dissipate the heat generated by the PKA, an NVT thermostat at 300 K was applied to the boundary region (two lattice constants thick, about 8.8 Å). The Wigner-Seitz cell analysis method was used to count the defects with the OVITO software [61]; a minimal scripted version of this analysis is sketched below. The time dependence of the defect counts is shown in Figure 6. The peak defect count with the GW-ZBL potential is much higher than the results calculated with the other three potentials, and so is the number of residual defects. We infer that this is because the GW-ZBL potential underestimates the threshold displacement energies of both silicon and carbon atoms. More than 65% of the interstitials and vacancies recombine during the annealing process with the GW-ZBL and DP-ZBL potentials, while only 20% to 30% recombine with the two other potentials. The defect yields and ratios predicted by the four potentials differ markedly. 
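The Wigner-Seitz counting just described is straightforward to script through OVITO's Python interface. In this sketch the file names are hypothetical, and the reference frame is assumed to be the perfect lattice written out before the PKA was launched.

```python
from ovito.io import import_file
from ovito.modifiers import WignerSeitzAnalysisModifier

# Cascade trajectory and perfect-lattice reference (hypothetical file names).
pipeline = import_file("cascade.dump")
ws = WignerSeitzAnalysisModifier()
ws.reference.load("perfect_3c_sic.dump")   # occupancy counted per W-S cell
pipeline.modifiers.append(ws)

# A vacancy is an empty Wigner-Seitz cell; an interstitial is an extra
# occupant of an already-filled cell.
for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    print(frame,
          data.attributes["WignerSeitz.vacancy_count"],
          data.attributes["WignerSeitz.interstitial_count"])
```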
Among the different point defects, vacancies and interstitials of carbon and silicon are dominant both at the thermal peak and in the stable state, as shown in Figure 7. The DP-ZBL model predicts slightly more silicon vacancies and interstitials than carbon ones at the thermal peak, whereas carbon vacancies and interstitials dominate the residual defects after annealing. ## IV Conclusions In this work, a potential energy surface for silicon carbide was developed with our DP-ZBL model. Compared with the four most commonly used empirical interatomic potentials for SiC, the DP-ZBL potential not only gives better performance in predicting near-equilibrium properties, including the lattice constant, elastic coefficients, equation of state, phonon dispersion, and defect formation energies, but also depicts a more precise picture of irradiation damage. More accurate values of key irradiation parameters, such as the threshold displacement energy and the defect migration energy, can be obtained with the DP-ZBL potential. Furthermore, our work provides a feasible approach to resolving the primary irradiation damage process in covalent compound materials with ab-initio accuracy. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China (Grants No. 12135002, No. 12205269 and No. U20B2010), the fund of Science and Technology on Plasma Physics Laboratory (No. 22ZZJJ0601) and the Nuclear Energy Development Project. We are grateful for computing resources provided by the High Performance Computing Platform of Peking University. \begin{table} \begin{tabular}{c c c c c c c c} & DFT[60] & DP-ZBL & Tersoff-ZBL & GW-ZBL & MEAM & Vashishta & EDIP \\ \hline Si [100] & 41 & 33.5 & 47.0 & 20.5 & 36.5 & 29.5 & 42.0 \\ Si [110] & 50 & 47.0 & 41.0 & 23.5 & 26.5 & 22.5 & 42.0 \\ Si [111] & 21 & 23.0 & 26.0 & 12.5 & 25.5 & 46.0 & 21.5 \\ Si [\(\overline{111}\)] & 33 & 43.5 & 44.0 & 31.5 & 26.0 & 35.0 & 22.5 \\ C [100] & 18 & 15.0 & 15.5 & 11.5 & 23.0 & 47.5 & 15.5 \\ C [110] & 19 & 17.0 & 29.0 & 15.5 & 27.0 & 47.5 & 15.5 \\ C [111] & 17 & 15.5 & 23.5 & 8.0 & 19.5 & 150.5 & 15.0 \\ C [\(\overline{111}\)] & 50 & 43.5 & 53.0 & 23.0 & 33.5 & 15.5 & 16.0 \\ \end{tabular} \end{table} Table 4: Threshold displacement energies (in eV) calculated by DFT, DP-ZBL, and a range of empirical interatomic potentials. Figure 6: Comparison of the time dependence of vacancy production caused by a single 5 keV Si PKA. Figure 7: Peak (translucent) and stable (solid) numbers of defects caused by a single 5 keV Si PKA.
2309.13488
Epidemic Forecast Follies
We introduce a simple multiplicative model to describe the temporal behavior and the ultimate outcome of an epidemic. Our model accounts, in a minimalist way, for the competing influences of imposing public-health restrictions when the epidemic is severe, and relaxing restrictions when the epidemic is waning. Our primary results are that different instances of an epidemic with identical starting points have disparate outcomes and each epidemic temporal history is strongly fluctuating.
P. L. Krapivsky, S. Redner
2023-09-23T22:34:44Z
http://arxiv.org/abs/2309.13488v1
# Epidemic Forecast Follies ###### Abstract We introduce a simple multiplicative model to describe the temporal behavior and the ultimate outcome of an epidemic. Our model accounts, in a minimalist way, for the competing influences of imposing public-health restrictions when the epidemic is severe, and relaxing restrictions when the epidemic is waning. Our primary results are that different instances of an epidemic with identical starting points have disparate outcomes and each epidemic temporal history is strongly fluctuating. ## I Background Now that the most severe (we hope) manifestations of the Covid-19 epidemic have passed, one can't help but realize that many of the early forecasts of the Covid-19 epidemic toll were wildly inaccurate and inconsistent with each other. Moreover, individual forecasts could change dramatically over a period of a few days. For the USA, in particular, the earliest estimates for the Covid-19 epidemic death toll ranged from tens of thousands to many millions, with the current death toll (as of September 2023) reported to be 1.175 million out of a total of 108.5 million cases (all data taken from [1]). Perhaps even more striking are the huge fluctuations and the dramatically different time courses in the daily death rate in different countries. Figure 1: The reported time dependences of the daily Covid death rates (7-day moving average) for the (a) USA, (b) UK, (c) Brazil, (d) Italy, (e) Russia, and (f) France. These data cover the period from Feb. 15, 2020 until July 29, 2023 and are all taken from Ref. [1]. To illustrate these statements, Fig. 1 plots the reported daily death rates for the six countries in the world with populations greater than 60 million and with the largest total death rates. They are: USA (3.507 deaths/1000), UK (3.339/1000), Brazil (3.275/1000), Italy (3.174/1000), Russia (2.743/1000), and France (2.556/1000). For reference, the country with the largest reported total death rate is Peru (6.582/1000), while the world average is (0.887/1000). For many reasons, the accuracy of the data may vary widely from country to country, so that some of the numbers reported in Ref. [1], such as the suspicious smoothness of the data for Russia, should be interpreted with caution. One of the many confounding features of Covid-19 is asymptomatic transmission, in which the epidemic is spread by individuals who do not know that they are contagious. Partly because of this feature, a wide variety of increasingly sophisticated multi-compartment models were developed that build on the classic SIR and SIS models of epidemic spread. These models typically attempted to faithfully account for subpopulations in various stages of the disease and recovery, as well as the transitions between these stages. Models of this type gave rise to complex dynamical behaviors that could sometimes mirror reality in a specific setting or over a limited time range. However, embellishments of SIR- and SIS-type models still seem to be incomplete because of the difficulty in simultaneously accounting for both the disease dynamics and its interaction with social forces. The discrepancy between the observed wildly varying features of Covid-19 and the supposedly deterministic outcomes of SIR and SIS models is especially striking. In fact, the determinism of the SIR and SIS models is actually illusory. The SIR model, for example, is an inherently stochastic process [2; 3] that is characterized by the reproductive number \(R_{0}\). 
This quantity is defined as the average number of individuals to whom a single infected individual transmits the infection before this individual recovers. In the supercritical regime, \(R_{0}>1\), it is possible that the outbreak may quickly die out. This happy event occurs with probability \(R_{0}^{-1}\) if one individual was initially infected. Otherwise, the infection quickly spreads, and the behavior becomes effectively deterministic. Namely, a finite fraction \(c=c(R_{0})\) of individuals catch the disease, with \(c\) implicitly determined by the criterion [4] \[c+e^{-cR_{0}}=1 \tag{1}\] Conversely, if \(R_{0}<1\), the outbreak quickly dies out, so while the subcritical SIR process is still manifestly stochastic, it is not a threat to the population at large. The interesting and most strongly stochastic behavior emerges in critical SIR and SIS models [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. For the SIR model, in particular, the distribution of the number of infected individuals has a power-law tail. For a finite population of size \(N\), the critical SIR model does not lead to a pandemic, because the average number of individuals who contract the disease scales as \(N^{1/3}\). Here we argue that significant forecasting uncertainties are an integral feature of processes caused by the interplay between the dynamics of disease transmission and the social forces that arise in response to the epidemic. Each attribute alone typically leads to either exponential growth (due to disease transmission at early times) or to exponential decay (due to effective mitigation strategies). Within our model, the competition between these two exponential processes leads to a dynamics that is extremely sensitive to seemingly minor details. The basic mechanism in our modeling is that the reproductive number \(R_{0}\) can sometimes decrease, due to the imposition of public-health measures, such as social distancing, vaccinations, etc., and sometimes increase, because of the relaxation of these measures. Focusing only on the dynamics of the reproductive number serves as a useful proxy for the myriad of influences that control the true epidemic dynamics. Within this framework, we will determine the duration of an epidemic, the time dependence of the number of infected individuals, and the total number of individuals infected when an epidemic finally ends. All three quantities exhibit huge fluctuations that are reminiscent of the actual data. ## II Systematic mitigation As a preliminary, we first investigate what we term the systematic mitigation strategy. Here increasingly stringent controls are imposed as soon as an outbreak is detected, until the reproductive number \(R_{0}\) is reduced to below 1. Once \(R_{0}<1\), progressively fewer individuals are infected after each incubation period, so that the epidemic soon disappears. The condition \(R_{0}=1\) defines the end of the epidemic. Because society is complicated, with many competing social forces in play, we posit that it is not possible to reduce \(R_{0}\) instantaneously; rather, the reduction happens gradually. We therefore assume that after each successive incubation period \(R_{0}\) is decreased by a random factor \(r\) whose average value \(\left\langle r\right\rangle\) is less than 1. While additional individuals will become infected after \(R_{0}\) has been reduced to less than 1, their number decays exponentially with time and constitutes a negligible contribution to the total number of infections. 
Define \(R_{k}\) as the reproductive number in the \(k^{\text{th}}\) period. Then \(R_{k}\) is given by \[R_{k}=r_{k}\,R_{k-1}=r_{k}\,r_{k-1}\ldots r_{2}\,r_{1}\,R_{0}\,, \tag{2}\] where \(r_{k}\) is the value of the random variable \(r\) in the \(k^{\text{th}}\) period. The typical number of periods \(k\) until the reproductive number reaches 1 is determined by \(R_{0}\left\langle r\right\rangle^{k}=1\). In what follows, we assume that when the epidemic is first detected, the reproductive number is \(R_{0}=2.5\), and we take \(\langle r\rangle=0.95\) for illustration. With these conventions, \[k=\frac{\ln(1/R_{0})}{\ln\langle r\rangle}=\frac{\ln(1/2.5)}{\ln(0.95)}\approx 17.86\] Thus the epidemic typically disappears after 18 periods. However, because of the inherent randomness in the mitigation, with the reduction factor sometimes less than 0.95 and sometimes more than 0.95, the true epidemic dynamics can be very different, as illustrated in Fig. 2. We simulate the systematic mitigation strategy by starting with a single infected individual and reproductive number \(R_{0}=2.5\). We then choose a set of random numbers \(r_{1},r_{2},r_{3},\ldots\), each of which is uniformly distributed between 0.9 and 1, so that \(\langle r\rangle=0.95\). We first measure how long it takes until \(R_{k}\), the reproductive number in the \(k^{\rm th}\) period, is reduced to 1, which signals the end of the epidemic. We perform this same measurement for \(5\times 10^{6}\) different choices of the set of random numbers \(r_{1},r_{2},\ldots,r_{k}\). As shown in Fig. 2(a), the probability \(Q(k)\) that the epidemic is extinguished in \(k\) periods is peaked at roughly 18 periods, consistent with the naive estimate above. If one is lucky, that is, if most of the reduction factors \(r_{i}\) are close to 0.9, the epidemic can be extinguished in as little as 11 periods. If one is unlucky (many of the \(r_{i}\) are close to 1), the epidemic can last more than 30 periods. While the distribution of epidemic durations is fairly narrow, the size of an epidemic, namely, the total number \(I\) of people who were infected during the course of an epidemic, \[I=R_{0}+R_{1}+R_{2}+\ldots+R_{k}\,, \tag{3}\] can vary by several orders of magnitude. It is important to point out that the number of newly infected people is based on the assumption that this number is small compared to the total population size, so that the growth in the number of new infections is truly exponential. As shown in Fig. 2(b), while the most probable epidemic size is \(\approx 10^{4}\) (again starting with a single infected individual), there is a non-vanishing probability that the outbreak size can be as small as a few thousand or greater than \(10^{7}\). This large disparity illustrates how small changes in the way that the epidemic is mitigated can lead to huge changes in the outbreak size. More dramatically, suppose that the mitigation strategy is slightly less effective, so that the reproductive number is reduced at each period by a uniform random variable that lies in \([0.95,1]\) rather than in \([0.9,1]\). Now the epidemic can last between 22 and 55 periods, with a most probable duration of 36 periods. However, the epidemic size ranges between roughly \(10^{5}\) and \(10^{12}\), with a most probable size of roughly \(7\times 10^{7}\). The upper value is much larger than the world population, so the finiteness of the population would now provide the upper bound. 
Although this second epidemic lasts twice as long as the first one, it typically infects 7,000 times more people! We emphasize that the stochastic nature of the random variables \(r_{j}\) plays a decisive role. Very different behaviors emerge in the deterministic case [17]. We also mention that the systematic mitigation strategy is analytically tractable because of a close relation between the epidemic size in (3) and Kesten variables [18], which appear in probability theory, one-dimensional disordered systems, and various other subjects. We explain this connection in Appendix A and also present several analytical results that qualitatively agree with our numerical observations. As one example, we show that the slightly faster than exponential decay of \(P(I)\) suggested by Fig. 2 may be close to a factorial decay. Figure 2: Systematic mitigation: (a) The probability \(Q(k)\) that the epidemic lasts \(k\) periods. (b) The probability \(P(I)\) that the epidemic has ultimately infected \(I\) people (under the assumption that the initial epidemic size is one person). ## III Vacillating mitigation During the acute period of the pandemic in 2020-2021, there was considerable and even vitriolic debate about the efficacy of various mitigation strategies, or even about the utility of any mitigation. If the epidemic is severe, as quantified by the reproductive number \(R_{k}\) in the \(k^{\text{th}}\) period being substantially greater than 1, people may be more likely to accept restrictions on their behaviors, such as isolating, masking, vaccinating, etc., to reduce their risk of getting sick. These adaptations will reduce the reproductive number. If, however, the reproductive number becomes less than 1, then people will want to relax their vigilance and may also advocate for the opening of various public venues, such as schools, theaters, stadiums, etc. We model this tug-of-war between increased and decreased restrictions by what we term the vacillating mitigation strategy. The competition between epidemiology and social behavior was previously treated in more sophisticated models [19; 20]. We emphasize that our model is merely a proxy for the two competing influences of epidemiology and social behavior. The two competing steps of the vacillating strategy are the following: * Mitigation: if \(R_{k}>1\), decrease \(R_{k}\) by a factor \(r\) that is uniformly distributed in \([a,1]\), with \(a<1\). * Relaxation: if \(R_{k}<1\), change \(R_{k}\) by a factor \(s\) that is uniformly distributed in \([a,3-2a]\). The first option is the same as in the systematic mitigation strategy. We construct the second option by requiring that \(\langle s\rangle=1+\frac{1}{2}(1-a)\) and \(\langle r\rangle=1-\frac{1}{2}(1-a)\) are symmetrically located about 1. That is, the average decrease in \(R_{k}\) in a mitigation step equals the average increase in \(R_{k}\) in a relaxation step. This symmetrical construction seems appropriate for probing the long-term influence of vacillation on the dynamics. If the vacillating strategy were biased towards relaxation, \(R_{0}\) would remain greater than 1 and the entire planet would be infected. If this strategy were biased towards mitigation, the epidemic would be similar to that under systematic mitigation. Neither of these cases is interesting from the viewpoint of probing long-time behaviors. In this vacillating strategy, \(R_{k}\) varies between values greater than 1 and values less than 1; a minimal simulation of both strategies is sketched below. 
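In this sketch (our illustration, not the authors' code), we assume that each currently infected individual transmits to \(R_{k}\) others per incubation period, so that the per-period counts compound; with \(a=0.9\) the resulting magnitudes are broadly consistent with those quoted in the text.

```python
import numpy as np

rng = np.random.default_rng()

def systematic(R0=2.5, a=0.9):
    """Reduce R by a factor r ~ U[a,1] each period until R < 1."""
    R, new, total, k = R0, 1.0, 1.0, 0
    while R > 1.0:
        new *= R                    # infections in the current period
        total += new
        R *= rng.uniform(a, 1.0)
        k += 1
    return k, total

def vacillating(R0=2.5, a=0.9):
    """Mitigate (r ~ U[a,1]) when R > 1, relax (s ~ U[a,3-2a]) when R < 1;
    the epidemic ends once fewer than one person is infected in a period."""
    R, new, total, k = R0, 1.0, 1.0, 0
    while new >= 1.0:
        new *= R
        total += new
        R *= rng.uniform(a, 1.0) if R > 1.0 else rng.uniform(a, 3.0 - 2.0 * a)
        k += 1
    return k, total

for strategy in (systematic, vacillating):
    runs = np.array([strategy() for _ in range(2000)])
    print(f"{strategy.__name__}: median duration {np.median(runs[:, 0]):.0f} periods, "
          f"median size {np.median(runs[:, 1]):.3g}")
```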
Left unchecked, this vacillation would lead to an eternal epidemic. To avoid this unrealistic outcome, an important feature of the relaxation step is that the value of \(R_{k}\) can still decrease during a relaxation step, because \(a<1\). This possibility ensures that eventually fewer than one person will be infected in the current incubation period. We define this event as signaling the end of the epidemic. Figure 3: Representative trajectories for the number of people \(I(t)\) infected at time \(t\) for the vacillating mitigation strategy when starting with \(R_{0}=1\) and a single infected person. Figure 3 shows a few representative trajectories of the number of people infected \(I(t)\) as a function of time (incubation periods) from the same initial condition of a single infected person and \(R_{0}=2.5\). While there are some qualitative differences between the trajectories of Fig. 1 and the model outcomes, the important points that are common to the real data and the simulation results are the disparities among individual trajectories and the strongly fluctuating temporal behavior. For the vacillating strategy with the choice \(a=0.9\), the most likely duration of the epidemic is roughly \(400\) periods (Fig. 4(a)), compared to \(18\) periods for the systematic strategy. The probability that the epidemic lasts much longer than the most likely value decays exponentially with time. An even more dramatic feature of the vacillating strategy is the number of people that are ultimately infected. The most probable outcome is that \(3\times 10^{5}\) people are infected when the epidemic ends (Fig. 4(b)). However, the typical size of the epidemic can range from \(10^{4}\) to \(10^{8}\). Compared to the systematic mitigation strategy with a reduction factor uniformly in the range \([0.9,1]\), the epidemic now lasts roughly \(20\) times longer and infects a factor of \(30\) more individuals. ## IV Concluding remarks This work should not be construed to mean that public-health measures should be ignored. Indeed, the extremely rapid development of a vaccine that is effective against Covid-19 is an outstanding triumph of modern medical science. It should also be pointed out that some of the many forecasting models for Covid-19 were useful during the early stages of the pandemic. However, when social influences with competing viewpoints began to dictate individual and collective policy decisions, much of the predictive power of forecasting models was lost. We also emphasize that our simplistic model has little connection to the actual epidemiological and social processes that determine the spread of the epidemic and the changes in individual and collective behaviors in response to the epidemic. Nevertheless, our model seems to capture the tug of war between public-health mandates to control the spread of the disease and the social forces that often advocate for a more laissez-faire approach. Our main message is that there are huge uncertainties in predicting the time course of an epidemic, its ultimate duration, and the final outbreak size. This unpredictability seems to be intrinsic to the dynamics of epidemics where epidemiological influences occur in concert with social forces. In this setting, forecasting ambiguity is unavoidable. We thank J. M. Luck for helpful correspondence. This research was partially supported by NSF grant DMR-1910736. Figure 4: Vacillating mitigation: (a) The probability \(Q(k)\) that the epidemic lasts \(k\) periods. 
(b) The probability \(P(s)\) that the epidemic has ultimately infected \(s\) people (under the assumption that the initial epidemic size is one person). ## Appendix A Kesten variables We outline an analytical treatment of the systematic mitigation strategy. Since the terms in the sum in Eq. (3) decay exponentially in the number of factors in the product, we can replace the finite sum in (3) by the infinite sum \[R=1+r_{1}+r_{1}r_{2}+r_{1}r_{2}r_{3}+\ldots,\qquad R=I/R_{0}\,, \tag{10}\] because doing so changes the outcome by only a finite amount. Random variables defined by (10) are known as Kesten variables, which have fundamental implications [18; 21; 22] and a variety of applications [23; 24; 25; 26; 27; 28; 29]. We now show how to probe the probability distribution \(P(R)\). The definition of Kesten variables implies that \(P(R)\) satisfies the integral equation \[P(R)=\int dr\,\rho(r)\int dQ\,P(Q)\,\delta[R-1-rQ]\,. \tag{11}\] By performing the Laplace transform \[\widehat{P}(s)=\int_{1}^{\infty}dR\,P(R)\,e^{-sR}\,, \tag{12}\] the Laplace transform of the probability distribution \(P(R)\) can be expressed as \[\widehat{P}(s)=\int_{1}^{\infty}dQ\,P(Q)\int dr\,\rho(r)\,e^{-s-srQ}\,. \tag{13}\] As a simple example, let us treat the uniform distribution, \(\rho(r)=1\) when \(r\in[0,1]\). Then Eq. (13) becomes \[s\,e^{s}\widehat{P}(s)=\int_{1}^{\infty}dQ\,P(Q)\,\frac{1-e^{-sQ}}{Q}\,. \tag{14}\] Differentiating with respect to \(s\) we obtain \[\frac{d\Pi(s)}{ds}=s^{-1}e^{-s}\,\Pi(s),\qquad\Pi(s)=s\,e^{s}\,\widehat{P}(s)\,, \tag{15}\] whose solution is \[\widehat{P}(s)=|s|^{-1}\exp[-s-\gamma+\text{Ei}(-s)]\,, \tag{16}\] where \(\gamma=0.577\,215\ldots\) is the Euler constant and \(\text{Ei}(\cdot)\) is the exponential integral. Because this Laplace transform exists for all \(s\in\mathbb{R}\), \(P(R)\) decays faster than exponentially in \(R\) for \(R\to\infty\); this bound ensures that the Laplace transform (12) remains well-defined when \(s\to-\infty\). Using (16) we find \[\ln\widehat{P}(-\sigma)=\text{Ei}(\sigma)+\sigma-\ln\sigma-\gamma\,, \tag{17}\] which grows as \(\sigma^{-1}e^{\sigma}\) when \(\sigma\gg 1\). This limiting behavior leads to \[\ln P(R)\simeq-R\,\ln R \tag{18}\] for \(R\gg 1\). This is essentially a factorial decay: \(P(R)\propto 1/\Gamma(R)\), where \(\Gamma(\cdot)\) is the Euler gamma function. This behavior is consistent with the faster than exponential decay of \(P(R)\) observed in simulations (Fig. 2(b)). For the small-\(R\) behavior, we use the asymptotic \(\widehat{P}(s)\simeq s^{-1}\exp[-s-\gamma]\) for \(s\gg 1\) to give \(P(1)=e^{-\gamma}=0.561\,459\ldots\). This disagrees with the simulations (see Fig. 2(b)), where \(\rho(r)\) was chosen from a uniform distribution in \([a,1]\), with \(a=0.9\). The reason for this discrepancy is simple: when \(\rho(r)\) vanishes for \(r<a\), it is very unlikely to generate a value of \(R\) that is close to the minimum value \(R_{\text{min}}=(1-a)^{-1}\), because this requires each \(r_{i}\) to be close to \(a\). If the support \([a,b]\) of the distribution \(\rho(r)\) is not inside \([0,1]\), that is, \(b>1\), the Kesten variable still has a stationary distribution if \[\int_{a}^{b}dr\,\rho(r)\ln r<0 \tag{19}\] Here, the large-\(R\) behavior of \(P(R)\) is again algebraic [18], \(P(R)\sim R^{-\beta}\), where \(\beta\) is the smallest root of the equation \(\int_{a}^{b}dr\,\rho(r)r^{\beta-1}=1\) that also satisfies \(\beta>1\). 
For example, for an arbitrary distribution whose support is symmetric about \(r=1\) (so that it satisfies \(\rho(r)=\rho(2-r)\)), the requirement (19) is always obeyed, so the Kesten variable is stationary. Here the decay exponent is universal: \(\beta=2\). Thus, already the first moment \(\int dR\,RP(R)\) diverges. Mitigation strategies are necessarily successful when \(\rho(r)\) has its support inside \([0,1]\). For distributions \(\rho(r)\) defined on \([a,b]\) with \(b>1\), even if the stationarity requirement (19) is obeyed, the distribution of the outbreak size \(P(R)\) has an algebraic tail, which implies that a finite fraction of the population contracts the disease. While the emergence of these heavy-tailed distributions sparked interest [21; 22; 23; 24; 25; 18; 26] in Kesten variables, in the context of pandemics such a feature is to be avoided.
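Both regimes are easy to probe numerically. The sketch below samples the Kesten sum (10), truncating the infinite series once the running product of the \(r_{i}\) is negligible; the two uniform distributions are illustrative choices for the two cases discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

def kesten_sample(draw_r, tol=1e-12, max_terms=100_000):
    """One sample of R = 1 + r1 + r1*r2 + ..., truncated once the running
    product of the r's falls below tol."""
    R, prod = 1.0, 1.0
    for _ in range(max_terms):
        prod *= draw_r()
        R += prod
        if prod < tol:
            break
    return R

# Support inside [0,1] (U[0.9,1]): R is sharply peaked with a rapidly
# decaying (faster-than-exponential) tail.
s1 = [kesten_sample(lambda: rng.uniform(0.9, 1.0)) for _ in range(5000)]
print("U[0.9,1]:  mean %.2f  max %.2f" % (np.mean(s1), np.max(s1)))

# Support symmetric about r = 1 (U[0.5,1.5]): still stationary, but with an
# algebraic tail P(R) ~ R^(-2), so sample means are dominated by rare huge
# outbreaks and never settle down.
s2 = [kesten_sample(lambda: rng.uniform(0.5, 1.5)) for _ in range(5000)]
print("U[0.5,1.5]: mean %.2f  max %.2f" % (np.mean(s2), np.max(s2)))
```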
2310.06858
Design of JiuTian Intelligent Network Simulation Platform
This paper introduced the JiuTian Intelligent Network Simulation Platform, which can provide wireless communication simulation data services for the Open Innovation Platform. The platform contains a series of scalable simulator functionalities, offering open services that enable users to use reinforcement learning algorithms for model training and inference based on simulation environments and data. Additionally, it allows users to address optimization tasks in different scenarios by uploading and updating parameter configurations. The platform and its open services were primarily introduced from the perspectives of background, overall architecture, simulator, business scenarios, and future directions.
Lei Zhao, Miaomiao Zhang, Guangyu Li, Zhuowen Guan, Sijia Liu, Zhaobin Xiao, Yuting Cao, Zhe Lv, Yanping Liang
2023-09-28T07:02:39Z
http://arxiv.org/abs/2310.06858v1
# Design of JiuTian Intelligent Network Simulation Platform ###### Abstract This paper introduces the _JiuTian Intelligent Network Simulation Platform_1, which can provide wireless communication simulation data services for the Open Innovation Platform. The platform contains a series of scalable simulator functionalities, offering open services that enable users to use reinforcement learning algorithms for model training and inference based on simulation environments and data. Additionally, it allows users to address optimization tasks in different scenarios by uploading and updating parameter configurations. The platform and its open services are introduced primarily from the perspectives of background, overall architecture, simulators, business scenarios, and future directions. Wireless communication, simulation platform, emulators Footnote 1: JiuTian Intelligent Network Simulation Platform ## I Introduction The application of artificial intelligence technology to solve technical and application problems in the communication field and vertical industries has become a mainstream direction [1, 2, 3]. However, the communication network system is large and complex, with a long industrial chain, and the intersection of communication and AI poses significant challenges. Additionally, the communication industry lacks a flexible and adaptable real-world verification environment, which hinders the iterative testing needed for the development and validation of network AI capabilities. To address these challenges, the JiuTian Intelligent Network Simulation Platform (JINSP) provides an implementation solution for innovative smart networks, offering services in four dimensions: dynamic network simulation, model training, capability invocation, and capability evaluation. By creating virtual objects in the digital world to represent communication network entities, people, network services, and their topological relationships, it can effectively simulate, respond to, and predict the states and behaviors of various entities in the physical environment. This facilitates the incubation of breakthroughs in network intelligence technology. The JINSP provides various types of simulation services for research and application personnel in the "AI + communication" field and releases corresponding research tasks. The platform aims to construct a simulation environment of a certain scale for wireless communication networks, industry-specific networks, and similar systems. This environment consists of different business scenarios and can serve intelligent network operation, maintenance, optimization, and service provision, as well as the development, testing, and validation of AI capabilities for network element intelligence. ## II Related work The JINSP follows the design principles and implementation logic of commonly used AI platforms and simulation software in the industry. It is dedicated to providing an open-service platform for the wireless field. The following is a comparative analysis between the intelligent network simulation platform and several types of platforms currently existing in the industry. * **AI General Cloud Services:** The service capabilities of this type of platform are mostly built on its own core business, providing a wide range of interface-based applications for a general audience. Users can obtain inference results by calling the API of the AI cloud service platform. 
Some well-known examples include Google Cloud AI Platform, OpenAI API, Baidu AI Open Platform, Tencent Cloud AI Lab, and so on. The intelligent network simulation platform is likewise designed around existing core businesses, aiming to turn them into platform services and to support user interaction and open capabilities. * **Online Training Platforms:** This type of platform supports mainstream algorithm frameworks and provides GPU computing resources as well as some task datasets. Its capabilities cover academic areas such as deep learning, computer vision, and natural language processing. This category of platforms is favored by educators and researchers due to its superior hardware environment and user-friendly interface. However, these platforms primarily provide GPU computing services, and only a small portion of their datasets are applicable to communication research. They have a limited impact on research in the "AI + communication" field and still cannot address the issues of limited data and the lack of interactive network environments in communication research. * **Toolkits:** This type of platform does not rely on interfaces for online inference, nor does it provide a trainable platform. Instead, it offers the complete core product and functionality to the user. Users can download or reference the toolkit to create new instances locally, with each instance tailored to individual user needs. Users can then use this as a basis for training or inference. For example, with OpenAI's Gym [4] platform, developers can easily build and train intelligent agents [5, 6, 7] and evaluate their performance. * **Simulation Platforms:** This category of platforms focuses on simulation in specific vertical domains and does not inherently include AI capabilities [8]. The simulation is achieved through underlying engines. Some examples include Atoll, Planet, Exata, MATLAB, etc. Atoll and Planet specialize in specific types of network simulation, such as wireless network planning and satellite communication system design. Exata supports various network types and technologies, including wireless networks and sensor networks. These platforms are suitable for professional users in related fields. However, due to the lack of support for AI algorithms, their scalability and user flexibility are relatively low. MATLAB provides a rich set of mathematical and simulation toolboxes for various simulation modeling and performance evaluation tasks. It also offers the capability to incorporate extended algorithm functionalities. However, using MATLAB as a platform can be more challenging, as it requires a corresponding programming background. Compared to the various open platforms described above, the intelligent network simulation platform designed in this paper integrates online training and inference capabilities, dynamic simulation functionality, and future open API services. Additionally, it can provide specific AI capabilities for the particular business needs of the wireless communication vertical domain. ## III Framework The JINSP constructs different types of simulators and integrates them based on the requirements of business scenarios. This enables users to conduct online model training and inference for the corresponding open tasks. Users immerse themselves in the simulation platform environment and interact with it to invoke simulation capabilities and manage tasks. 
For instance, users can issue simulation configurations to invoke specific simulation capabilities and perform simulation functions. Subsequently, users can update the simulation configuration (e.g., antenna parameters, base station positions, etc.) based on the simulation results returned by the environment. They can iterate these invocations of simulator capabilities to accomplish online model training and inference. This process is similar to the concept of an intelligent agent [9, 10], which adapts its understanding based on the environment's configuration. This section provides a detailed introduction to the basic simulators and the process of combining them, as shown in Figure 1. ### _Basic Emulator_ * **User Behavior Simulation** involves trajectory generation and business generation. Trajectory generation utilizes generative algorithms such as Variational Autoencoders (VAE) [11, 12, 13] and Generative Adversarial Networks (GAN) [14, 15] to simulate user trajectories, generating latitude and longitude sequences at a daily level. Business generation addresses the synthesis of user-level business traffic. It is based on a knowledge-enhanced GAN model, which takes user behavior representation vectors and packet distribution characteristic vectors extracted from user traffic data packets as inputs for knowledge enhancement, and then employs a GAN to generate packet sequences. * **Large-Scale Channel Simulation** is based on the antenna pattern of any sub-beam, combined with the site location, antenna height, transmit power, mechanical azimuth, tilt angle, geographical terrain, etc., and simulates the large-scale channel for each grid point or user in a specific cell with any type of beam. This in turn supports the calculation of relevant metrics for base stations and terminals, such as Service Cell RSRP/SINR [16, 17] values and neighboring cell RSRP/SINR values, as well as the corresponding beam IDs for the service cell and neighboring cells. * **Channel Simulation** is based on real geographical environments; considering the regional characteristics of the simulation scenario (such as Indoor, UMi, UMa, etc.), wireless channel modeling is conducted across regions and multiple scenarios. Channel simulation achieves channel modeling by invoking the foundational simulator, which includes Large-Scale Channel Simulation, and combining it with small-scale fading (fast fading) simulation. In this process, the small-scale fading simulation employs a common statistical modeling approach, thoroughly simulating the multipath effects generated by processes such as direct propagation, reflection, diffraction, transmission, and diffuse scattering during the propagation of electromagnetic waves. Finally, the results of the large-scale fading model are combined with those of the small-scale fading model to generate the simulated channel matrix. * **Base Station/Terminal Simulation** provides wireless protocol stack processes and functional simulation for Radio Resource Control (RRC), Medium Access Control (MAC), and the Physical Layer (PHY) in system-level simulation [18]. The RRC layer simulation includes key processes such as cell access, handover/reselection, and wireless resource management. The MAC layer simulation, conducted separately on the base station and terminal sides, models critical processes such as wireless resource scheduling, MIMO [19], link adaptation, wireless resource mapping, uplink power control, and Hybrid Automatic Repeat reQuest (HARQ). 
The PHY layer simulation, guided by the functional processes in the above two layers, ultimately calculates system-level network performance metrics such as the number of connected users, cell load, and uplink/downlink transmission rates, among others. Additionally, the Base Station/Terminal Simulation module can be combined with map information, cell parameters, and beam configurations. With the aid of AI-calibrated path loss formulas, it computes the Service Cell RSRP/SINR values under any sub-beam antenna pattern in wireless coverage simulation scenarios. ### _Combination Emulator_ The individual basic simulators described in the previous section cover the fundamental capabilities within the network system, but on their own they may not meet the simulation requirements of complex business scenarios [20]. The platform therefore combines multiple basic simulators and provides unified services to meet the task-opening and online model training requirements of different business scenarios. Currently, the intelligent network simulation platform provides three types of combination simulators: * Real Environment Dynamic User Protocol Stack Simulation: This combination simulator is composed of user behavior simulation, base station simulation, and terminal simulation. It provides users with metrics such as RSRP, SINR, traffic, and rates at both user and cell granularity in a real environment. * Real Environment Dynamic User Coverage Simulation: To improve the efficiency of base station and terminal simulation, this simulator primarily invokes the physical layer simulation from the simulators above. Combined with user simulation, it outputs coverage metrics such as RSRP and SINR at both user and cell levels. * Link-Level Channel Simulation: This combination simulator utilizes the core capability provided by channel simulation. Combined with the base station and terminal physical layer protocol stack simulation, it provides users with frequency-domain channel response information at the Resource Element (RE) level. Fig. 1: Overall architecture of the Intelligent Network Simulation Platform, including the basic emulators, combination emulators, and business scenarios. In addition, the open service and AI blocks provide the underlying services for the JINSP. ## IV Business Scenario ### _Multi-objective antenna optimization_ Multi-objective antenna optimization aims to assist network operators in using the simulation platform's twin-modeling capability of real network conditions. This allows for a cost-effective understanding of how various network metrics evaluate under different cell SSB antenna weight parameters [21]. Consequently, it enables more convenient and efficient optimization across different regions and time periods, considering multiple optimization objectives. A wireless cellular network consists of multiple base stations, each with several cells. These cells achieve coverage through the transmission of Synchronization Signal Blocks (SSBs). In 5G, cells can adjust the azimuth, tilt angle, horizontal beamwidth, and vertical beamwidth of eight sub-beams to control the coverage range and radius of the cell. Users scan through the SSB beams and select the one with the strongest coverage signal. The simulator then derives cell access metrics and rates from factors such as the user's coverage signal and interference; a sketch of how a user-side optimization agent can drive this scenario is given below. 
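The following sketch illustrates only the configure-simulate-update loop described in Section III. All names here (`JinspEnv`, the metric keys, the reward weights) are hypothetical placeholders rather than the platform's published API, and the environment fabricates metrics instead of calling the real simulators.

```python
import numpy as np

class JinspEnv:
    """Hypothetical stand-in for the platform's combination simulator: it
    accepts per-cell beam configurations and returns region-wide metrics
    for one five-minute interval."""
    def __init__(self, n_cells):
        self.n_cells = n_cells

    def step(self, beam_configs):
        # A real client would upload the configuration to the platform and
        # read back simulated metrics; here we fabricate plausible numbers.
        rsrp = -90.0 + 5.0 * np.random.randn(self.n_cells)
        sinr = 10.0 + 3.0 * np.random.randn(self.n_cells)
        return {"ss_rsrp": rsrp.mean(), "ss_sinr": sinr.mean(),
                "traffic": float(np.random.rand())}

def reward(m, w=(0.4, 0.4, 0.2)):
    """Weighted multi-objective score; pushing the weights toward the RSRP
    term turns the task into a pure coverage-optimization problem."""
    return w[0] * m["ss_rsrp"] + w[1] * m["ss_sinr"] + w[2] * m["traffic"]

env = JinspEnv(n_cells=12)
# One row per cell: [azimuth, tilt, horizontal beamwidth, vertical beamwidth].
best_config = np.tile([60.0, 6.0, 65.0, 12.0], (env.n_cells, 1))
best = -np.inf

for step in range(288):                            # one day of 5-min intervals
    candidate = best_config + np.random.randn(*best_config.shape)
    metrics = env.step(candidate)                  # an RL policy would act here
    r = reward(metrics)
    if r > best:                                   # simple random search
        best, best_config = r, candidate
print("best score:", best)
```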
For the open task of multi-objective antenna optimization, which takes user mobility into account, the task involves adjusting the sub-beams of each cell, including azimuth, tilt angle, horizontal beamwidth, and vertical beamwidth. The objective is to jointly optimize the metrics [22, 23] of all cells within the region, allowing for the optimization of SS-RSRP, SS-SINR, total number of users, total traffic volume, and rates at a five-minute granularity. This optimization task aims to achieve the overall best performance in a region encompassing \(n\) cells and \(k\) users. Additionally, by controlling the weighting coefficients, the optimization task can be tailored to focus on specific metrics, transforming the problem into a coverage optimization problem. The output of this task is the azimuth, tilt angle, horizontal beamwidth, and vertical beamwidth of each cell within the region every five minutes, ensuring that the various metrics mentioned above reach their optimal values across the entire area within a specified time frame.

### _CSI_

In large-scale MIMO systems, base stations (BS) are typically equipped with as many as several hundred active antennas, serving multiple user equipments (UEs). Accurate Channel State Information (CSI) [24] is crucial for harnessing the performance gains of large-scale MIMO. The CSI for the downlink channel needs to be obtained by UEs through downlink pilot estimation and then transmitted back to the base station through a feedback link. The base station utilizes the received feedback information for tasks like precoding and other adaptive transmission optimizations. Deep learning-based methods can address the CSI compression feedback problem in large-scale MIMO systems, efficiently and accurately enhancing feedback precision, and many promising research directions in this area are worthy of further investigation. This task aims to harness the feature extraction and information compression capabilities of AI. It involves training an AI model using channel characteristic information. This model is then used to compress the channel characteristic information at the UE side. The compressed information is transmitted through the channel and recovered at the receiving end. The base station uses the recovered information for tasks like precoding and other adaptive transmission optimizations. The ultimate goal of CSI compression feedback is to restore the channel state information with as little loss as possible under a fixed compression bit requirement. Smaller compression bit budgets mean that fewer resources are used for transmission; however, they can lead to lower precision in model recovery and thus lower feedback performance. This task requires designing a model under a specific feedback bit vector, with the objective of achieving the highest possible restoration accuracy with the smallest possible number of compression bits.
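As a rough illustration of this compression-feedback loop, the sketch below trains a small autoencoder on simulated channel samples: the UE-side encoder maps a flattened channel matrix to a low-dimensional codeword (a stand-in for the fixed feedback bit vector; a real system would quantize it), and the BS-side decoder reconstructs the CSI under an NMSE objective. All dimensions, layer sizes, and names are our assumptions, not the platform's models.

```python
import torch
import torch.nn as nn

N_ANT, N_SUB, CODE_DIM = 32, 64, 64   # antennas, subcarriers, codeword size
D = 2 * N_ANT * N_SUB                 # real + imaginary parts, flattened

encoder = nn.Sequential(nn.Linear(D, 512), nn.ReLU(), nn.Linear(512, CODE_DIM))
decoder = nn.Sequential(nn.Linear(CODE_DIM, 512), nn.ReLU(), nn.Linear(512, D))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

h = torch.randn(16, D)  # a batch of (stand-in) simulated channel samples
for _ in range(200):
    h_hat = decoder(encoder(h))      # UE compresses, BS reconstructs
    nmse = (((h - h_hat) ** 2).sum(1) / (h ** 2).sum(1)).mean()
    opt.zero_grad()
    nmse.backward()
    opt.step()
print(f"final NMSE: {nmse.item():.4f}")
```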
## V Future Directions

The platform starts from the interdisciplinary integration of communication and artificial intelligence, selects key technologies and application issues of concern to academia and industry, and analyzes and extracts the simulation environment capabilities necessary for developing algorithms for each of these problems. It has constructed an intelligent network simulation service that integrates functions such as user behavior simulation, wireless network coverage simulation, base station protocol stack simulation, and channel simulation, forming a systematic network simulation environment. Through simulation performance optimization, capability combination, and interface encapsulation, the platform provides a good user experience for the external service opening of simulated network environments and data. It also contributes important infrastructure to the integration and development of communication and artificial intelligence technologies. To provide stronger support for technological breakthroughs and promote industry development, the platform still needs to continuously build strength in the following directions. First, with the development of generative artificial intelligence and large-scale model technologies, artificial intelligence has gradually made significant breakthroughs in many areas that were once considered unsuitable or unfeasible. A future focus of the platform is how to use multimodal communication network big data and large-scale model technology to build network simulation capabilities. Second, digital twinning is an important evolution direction of 6G communication technology. The platform will continue to expand the business scenarios covered by its simulation capabilities, achieve end-to-end full-link simulation, and accelerate the construction and opening of trial networks within existing networks. It will realize the coexistence of the simulation environment and the real network, providing a reliable basis for demonstrating the technical feasibility of digital twin networks. Third, to meet the personalized research and production needs of users from all walks of life, the platform will continuously improve the flexibility and freedom of the network simulation environment. It will support the configuration of more parameters, such as user behavior, network equipment, and simulation models, while actively promoting the decoupling of simulation capabilities and the design of unified interfaces. It will mobilize industry partners to contribute basic simulation capabilities and build a flexible, composable network simulation environment. It will promote open-source release and open access of high-quality resources in the industry, establish a collaborative and innovative industry ecosystem, and provide strong support for the national network power strategy.
2310.20121
Ling-CL: Understanding NLP Models through Linguistic Curricula
We employ a characterization of linguistic complexity from psycholinguistic and language acquisition research to develop data-driven curricula to understand the underlying linguistic knowledge that models learn to address NLP tasks. The novelty of our approach is in the development of linguistic curricula derived from data, existing knowledge about linguistic complexity, and model behavior during training. By analyzing several benchmark NLP datasets, our curriculum learning approaches identify sets of linguistic metrics (indices) that inform the challenges and reasoning required to address each task. Our work will inform future research in all NLP areas, allowing linguistic complexity to be considered early in the research and development process. In addition, our work prompts an examination of gold standards and fair evaluation in NLP.
Mohamed Elgaar, Hadi Amiri
2023-10-31T01:44:33Z
http://arxiv.org/abs/2310.20121v1
# Ling-CL: Understanding NLP Models through Linguistic Curricula ###### Abstract We employ a characterization of linguistic complexity from psycholinguistic and language acquisition research to develop data-driven curricula to understand the underlying linguistic knowledge that models learn to address NLP tasks. The novelty of our approach is in the development of linguistic curricula derived from data, existing knowledge about linguistic complexity, and model behavior during training. By analyzing several benchmark NLP datasets, our curriculum learning approaches identify sets of linguistic metrics (indices) that inform the challenges and reasoning required to address each task. Our work will inform future research in all NLP areas, allowing linguistic complexity to be considered early in the research and development process. In addition, our work prompts an examination of gold standards and fair evaluation in NLP.

## 1 Introduction

Linguists devised effective approaches to determine the linguistic complexity of text data (Wolfe-Quintero et al., 1998; Bulte and Housen, 2012; Housen et al., 2019). There is a spectrum of _linguistic complexity indices_ for English, ranging from lexical diversity (Malvern et al., 2004; Yu, 2010) to word sophistication (O'Dell et al., 2000; Harley and King, 1989) to higher-level metrics such as readability, coherence, and information entropy (van der Sluis and van den Broek, 2010). These indices have not been fully leveraged in NLP. We investigate the explicit incorporation of the linguistic complexity of text data into the training process of NLP models, aiming to uncover the linguistic knowledge that models learn to address NLP tasks. Figure 1 shows the data distribution and accuracy trend of RoBERTa-large (Liu et al., 2019) against the linguistic complexity index "verb variation" (the ratio of distinct verbs). This analysis is conducted on ANLI (Nie et al., 2020) validation data, where balanced accuracy scores are computed for individual bins separately. The accuracy trend indicates that verb variation can describe the difficulty of ANLI samples to the model. In addition, the data distribution illustrates potential _linguistic disparity_ in ANLI; see §3.4. To reveal the linguistic knowledge NLP models learn during their training, we will employ known linguistic complexity indices to build _multiview linguistic curricula_ for NLP tasks. A curriculum is a training paradigm that schedules data samples in a meaningful order for iterative training, e.g., by starting with easier samples and gradually introducing more difficult ones (Bengio et al., 2009). Effective curricula improve learning in humans (Tabibian et al., 2019; Nishimura, 2018) and machines (Bengio et al., 2009; Kumar et al., 2010; Zhou et al., 2020; Castells et al., 2020). Curriculum learning has been found effective in many NLP tasks (Settles and Meeder, 2016; Amiri et al., 2017; Platanios et al., 2019; Zhang et al., 2019; Amiri, 2019; Xu et al., 2020; Lalor and Yu, 2020; Jafarpour et al., 2021; Kreutzer et al., 2021; Agrawal and Carpuat, 2022; Maharana and Bansal, 2022). A multiview curriculum is a curriculum able to integrate multiple difficulty scores simultaneously and leverage their collective value (Vakil and Amiri, 2023).

Figure 1: Data distribution and trend of model accuracy against the linguistic index _verb variation_ computed on ANLI (Nie et al., 2020) validation data. Samples with greater verb variation are more complex and also harder for the model to classify.
Such linguistic indices can inform difficulty estimation and linguistic curriculum development for NLP tasks. We assume there exists a subset of linguistic complexity indices that are most influential to learning an NLP task by a particular model. To identify these indices for each model and NLP task, we derive a weight factor \(\rho_{i}\in[-1,1]\) for each linguistic index that quantifies how well the index estimates the _true_ difficulty of data samples to the model, determined by model instantaneous loss against _validation_ data. By learning these weight factors, we obtain precise estimations that shed light on the core linguistic complexity indices that each model needs at different stages of its training to learn an NLP task. In addition, these estimates can be readily used for linguistic curriculum development, e.g., by training models with linguistically easy samples (with respect to the model) and gradually introducing linguistically challenging samples. To achieve the above goals, we should address two gaps in the existing literature. First, existing curricula are often limited to a single criterion of difficulty and are not applicable to multiview settings. However, difficulty is a condition that can be realized from multiple perspectives, can vary across a continuum for different models, and can dynamically change as the model improves. Second, existing approaches quantify the difficulty of data based on instantaneous _training_ loss. However, training loss provides noisy estimates of sample difficulty due to data _memorization_ (Zhang et al., 2017; Arpit et al., 2017) in neural models. We will address both issues as part of this research. The contributions of this paper are: * incorporating human-verified linguistic complexity information to establish an effective scoring function for assessing the difficulty of text data with respect to NLP models, * deriving linguistic curricula for NLP models based on the linguistic complexity of data and model behavior during training, and * identifying the core sets of linguistic complexity indices that most contribute to learning NLP tasks by models. We evaluate our approach on several NLP tasks that require significant linguistic knowledge and reasoning to be addressed. Experimental results show that our approach can uncover latent linguistic knowledge that is most important for addressing NLP tasks. In addition, our approach obtains consistent performance gains over competing models. Source code and data are available at [https://github.com/CLU-UML/Ling-CL](https://github.com/CLU-UML/Ling-CL).
```
0: \(\mathcal{D}_{train}\), \(\mathcal{D}_{val}\), Model \(\Theta\), Optimizer \(g\), Loss function \(f\), Curriculum \(C\)
1:  step \(\gets 0\)
2:  \(\rho\leftarrow\) random initialization
3:  while step \(<\) total_steps do
4:    training_batch \(\leftarrow\) SampleBatch(step, \(\mathcal{D}_{train}\))
5:    loss \(\leftarrow\) ComputeLoss(training_batch, \(\Theta\))
6:    ling \(\leftarrow\) GetLinguisticFeatures(training_batch)
7:    difficulty \(\leftarrow\) CalculateDifficulty(ling, \(\rho\))
8:    confidence \(\leftarrow\) DetermineConfidence(step, difficulty)
9:    weighted_loss \(\leftarrow\) loss \(\otimes\) confidence
10:   \(\Theta\leftarrow\) UpdateModel(weighted_loss, \(\Theta\))
11:   if step % validation_step = 0 then
12:     \(l\leftarrow\) ComputeLoss(\(\mathcal{D}_{val}\), \(\Theta\))
13:     ling \(\leftarrow\) GetLinguisticFeatures(\(\mathcal{D}_{val}\))
14:     for \(\rho_{i}\in\rho\) do
15:       \(\rho_{i}\leftarrow\) pearson(\(l\), ling[:, \(i\)])
16:     endfor
17:   endif
18:   step \(\leftarrow\) step \(+1\)
19: endwhile
```
**Algorithm 1** Correlation Method

## 2 Multiview Linguistic Curricula

We present a framework for multiview curriculum learning using linguistic complexity indices. Our framework estimates the importance of various linguistic complexity indices, aggregates the resulting importance scores to determine the difficulty of samples for learning NLP tasks, and develops novel curricula for training models using complexity indices. The list of all indices used is in Appendix A.

### Linguistic Index Importance Estimation

#### 2.1.1 Correlation Approach

Given linguistic indices \(\{\mathbf{X}_{j}\}_{j=1}^{k}\) of \(n\) data samples, where \(k\) is the number of linguistic indices and \(\mathbf{X}_{j}\in\mathbb{R}^{n}\), we start by standardizing the indices \(\{\mathbf{Z}_{j}=\frac{\mathbf{X}_{j}-\mu_{j}}{\sigma_{j}}\}_{j=1}^{k}\). We consider importance weight factors for indices \(\{\rho_{j}\}_{j=1}^{k}\), which are randomly initialized at the start of training. At every validation step, the weights are estimated on the _validation_ dataset by computing Pearson's correlation coefficient between loss and the linguistic indices of the validation samples, \(\rho_{j}=r(\mathbf{l},\mathbf{Z}_{j})\), where \(r\) is the correlation function and \(\mathbf{l}\in\mathbb{R}^{n}\) is the loss of the validation samples. The correlation factors are updated periodically. It is important to use validation loss as opposed to training loss because the instantaneous loss of _seen_ data might be affected by memorization in neural networks (Zhang et al., 2017; Arpit et al., 2017; Wang et al., 2020), whereas _unseen_ data points more accurately represent the difficulty of samples for a model. Algorithm 1 presents the correlation approach.

#### 2.1.2 Optimization Approach

Let \(\mathbf{Z}\in\mathbb{R}^{n\times k}\) be the matrix of \(k\) linguistic indices computed for \(n\) validation samples and \(\mathbf{l}\in\mathbb{R}^{n}\) indicate the corresponding loss vector of the validation samples. We find the optimal weights for linguistic indices to best approximate validation loss: \[\boldsymbol{\rho}^{*}=\arg\min_{\boldsymbol{\rho}}\quad\|\mathbf{l}-\mathbf{Z}\boldsymbol{\rho}\|_{2}^{2}+\lambda_{\rho}\|\boldsymbol{\rho}\|_{1}, \tag{1}\] where \(\lambda_{\rho}\in\mathbb{R}\) and \(\boldsymbol{\rho}^{*}\in\mathbb{R}^{k}\) is jointly optimized over all indices.
The index that best correlates with loss can be obtained as follows: \[i^{*}=\arg\min_{i}\|\mathbf{l}-\mathbf{Z}_{*i}\boldsymbol{\rho}_{i}\|_{2}^{2}, \tag{2}\] where \(\mathbf{Z}_{*i}\) denotes the \(i^{th}\) column of \(\mathbf{Z}\). Algorithm 2 presents this approach. We note that the main distinction between the correlation and optimization approaches lies in their scope: the correlation approach operates at the index level, whereas the optimization approach uses the entire set of indices.

#### 2.1.3 Scoring Linguistic Complexity

We propose two methods for aggregating linguistic indices \(\{X_{j}\}^{k}\) and their corresponding importance factors \(\{\rho_{j}\}^{k}\) into a linguistic complexity score. The first method selects the linguistic index with the maximum importance score at each timestep: \[S_{i}=Z_{i\hat{j}},\quad\hat{j}=\arg\max_{j}\rho_{j}, \tag{3}\] which provides insight into the specific indices that determine complexity for the model. The second method computes a weighted average of linguistic indices, which serves as a difficulty score: \[S_{i}=\frac{\sum_{j}\rho_{j}\mathbf{Z}_{ij}}{\sqrt{\sum_{j}\rho_{j}^{2}}}, \tag{4}\] where \(S_{i}\in\mathbb{R},(\mu_{S_{i}},\sigma_{S_{i}})=(0,1),\) is an aggregate of the linguistic complexity indices of the input text. If an index \(\mathbf{Z}_{j}\) is negatively correlated with loss, \(\rho_{j}\) will be negative, and \(\rho_{j}\mathbf{Z}_{j}\) will be positively correlated with loss. Therefore, \(S_{i}\) is an aggregate complexity that is positively correlated with loss, and using a weighted average causes the indices that are most highly correlated with loss to contribute most to \(S_{i}\).

```
0: \(\mathcal{D}_{train}\), \(\mathcal{D}_{val}\), Model \(\Theta\), Optimizer \(g\), Optimizer \(h\), Loss function \(f\), [Optional] Curriculum \(C\)
1:  step \(\gets 0\)
2:  \(\rho\leftarrow\) random initialization
3:  while step \(<\) total_steps do
4:    training_batch \(\leftarrow\) SampleBatch(step, \(\mathcal{D}_{train}\))
5:    loss \(\leftarrow\) ComputeLoss(training_batch, \(\Theta\))
6:    ling \(\leftarrow\) GetLinguisticFeatures(training_batch)
7:    difficulty \(\leftarrow\) CalculateDifficulty(ling, \(\rho\))
8:    confidence \(\leftarrow\) DetermineConfidence(step, difficulty)
9:    weighted_loss \(\leftarrow\) loss \(\otimes\) confidence
10:   \(\Theta\leftarrow\) UpdateModel(weighted_loss, \(\Theta\))
11:   if step % validation_step = 0 then
12:     \(l\leftarrow\) ComputeLoss(\(\mathcal{D}_{val}\), \(\Theta\))
13:     ling \(\leftarrow\) GetLinguisticFeatures(\(\mathcal{D}_{val}\))
14:     \(\rho\leftarrow\arg\min_{\rho}\|\text{ling}\cdot\rho-l\|_{2}^{2}+\lambda_{\rho}\|\rho\|_{1}\)
15:   endif
16:   step \(\leftarrow\) step \(+1\)
17: endwhile
```
**Algorithm 2** Optimization Method
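As a minimal sketch of the correlation-based importance update and the aggregation step above (ours, with toy data; the released implementation is in the linked repository), the snippet below computes per-index Pearson correlations against validation loss and aggregates indices by Eq. (3) or Eq. (4). The optimization route of Eq. (1) could instead fit \(\rho\) with an L1-regularized regression such as `sklearn.linear_model.Lasso`.

```python
import numpy as np

def update_importance(val_loss, Z):
    """Correlation method: Pearson correlation between validation loss
    and each column of the standardized index matrix Z (n x k)."""
    return np.array([np.corrcoef(val_loss, Z[:, j])[0, 1]
                     for j in range(Z.shape[1])])

def aggregate(Z, rho, method="weighted"):
    """Eq. (3): index with maximum importance; Eq. (4): weighted average."""
    if method == "max":
        return Z[:, np.argmax(rho)]
    return Z @ rho / np.sqrt(np.sum(rho ** 2))

# Toy demo with assumed shapes: 200 validation samples, 5 indices.
rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 5))        # standardized linguistic indices
val_loss = 0.8 * Z[:, 2] + 0.1 * rng.standard_normal(200)  # index 2 drives loss
rho = update_importance(val_loss, Z)
S = aggregate(Z, rho)                    # aggregate difficulty scores
print(np.argmax(rho), np.round(rho, 2))  # index 2 should dominate
```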
### Linguistic Curriculum

We evaluate the quality of the weighted linguistic indices as a difficulty score and introduce three new curricula based on moving logistic (Richards, 1959) and Gaussian functions; see Figure 2.

#### 2.2.1 Time-varying Sigmoid

We develop a time-varying sigmoid function to produce weights (Eq. 5). The sigmoid function assigns a low weight to samples with small difficulty scores and a high weight to larger difficulty scores. Weights are used to emphasize or de-emphasize the loss of different samples. For this purpose, we use the training progress \(t\in[0,1]\) as a shift parameter to move the sigmoid function to the left throughout training, so that samples with a small difficulty score are assigned a higher weight in the later stages of training. By the end of training, all samples are assigned a weight close to 1. Additionally, we add a scale parameter \(\beta\in[1,\infty)\) that controls the growth rate of the weight (upper bounded by \(1\)) for all samples: \[w(S_{i},t;\beta)=\frac{1}{1+\exp(-S_{i}-t\cdot\beta)}. \tag{5}\] The sigmoid function saturates as the absolute value of its input increases. To account for this issue, our aggregated linguistic index input follows the standard scale, with mean zero and variance one, in Equations (3) and (4).

#### 2.2.2 Moving Negative-sigmoid

The positive sigmoid function assigns greater weights to samples with a large value of \(S\), which are linguistically more complex. In order to establish a curriculum that starts with easy samples and gradually proceeds to harder ones, we use a negative sigmoid function: \[w(S_{i},t;\beta)=\frac{1}{1+\exp(S_{i}-t\cdot\beta)}. \tag{6}\] Figure 2 illustrates the process of the time-varying positive and negative sigmoid functions. Over the course of training, larger intervals of linguistic complexity are assigned full confidence, until the end of training when almost all samples have a confidence of one and are fully used in training.

#### 2.2.3 Time-varying Gaussian Function

We hypothesize that training samples that are not too hard and not too easy are the most useful in training, and should receive the most focus. In fact, samples that are too easy or too hard may contain artifacts that are harmful to training, may contain noise, and may not be generalizable to the target task. Therefore, we use a Gaussian function to prioritize learning from medium-difficulty samples. The function starts with a variance of \(1\), and scales up during the course of training so that the easier and harder samples, having lower and higher linguistic complexity values, respectively, are assigned increasing weights and are learned by the end of training. We propose the following function: \[w(S_{i},t;\gamma)=\exp(\frac{-S_{i}^{2}}{2(1+t\cdot\gamma)}), \tag{7}\] where \(\gamma\) is the growth rate of the variance and \(t\) is the training progress; see Figure 2.

#### 2.2.4 Weighting-based Curriculum

We define a curriculum by weighting sample losses according to their confidence. Samples that are most useful for training receive higher weights, and those that are redundant or noisy receive smaller weights. Weighting the losses effectively causes the gradient update direction to be dominated by the samples that the curriculum considers most useful. Weights \(w\) are computed using either Equation 5, 6, or 7: \[\mathcal{L}=\frac{1}{\sum_{i}w(S_{i},t;\beta)}\sum_{i}w(S_{i},t;\beta)\cdot \ell_{i}, \tag{8}\] where \(\ell_{i}\) is the loss of sample \(i\), \(t\) is the current training progress, and \(\mathcal{L}\) is the average weighted loss.
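The following short sketch (ours, not from the released code) implements the three weighting functions and the weighted loss of Eqs. (5)-(8), to make their behavior over training progress \(t\) concrete.

```python
import numpy as np

def sigmoid_weight(S, t, beta=5.0):
    """Eq. (5): time-varying positive sigmoid (emphasizes hard samples)."""
    return 1.0 / (1.0 + np.exp(-S - t * beta))

def neg_sigmoid_weight(S, t, beta=5.0):
    """Eq. (6): time-varying negative sigmoid (easy-first curriculum)."""
    return 1.0 / (1.0 + np.exp(S - t * beta))

def gaussian_weight(S, t, gamma=5.0):
    """Eq. (7): widening Gaussian centered on medium-difficulty samples."""
    return np.exp(-S ** 2 / (2.0 * (1.0 + t * gamma)))

def weighted_loss(losses, S, t, weight_fn=neg_sigmoid_weight):
    """Eq. (8): confidence-weighted average of per-sample losses."""
    w = weight_fn(S, t)
    return np.sum(w * losses) / np.sum(w)

# Toy check: with standardized difficulty scores, easy-first weights grow
# toward 1 for every sample as training progress t goes from 0 to 1.
S = np.linspace(-3, 3, 7)
for t in (0.0, 0.5, 1.0):
    print(t, np.round(neg_sigmoid_weight(S, t), 2))
```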
### Reducing Redundancy in Indices

We have curated a list of 241 linguistic complexity indices. In the case of a text-pair input (e.g., NLI), we concatenate the indices of the two text inputs, for a total of 482. Our initial data analysis reveals significant correlation among these indices in their estimation of linguistic complexity. To optimize computation, avoid redundancy, and ensure no single correlated index skews the complexity aggregation approach (§2.1.3), we propose two methods to select a diverse and distinct set of indices for our study. We consider the choice of using the full indices or filtering them as a hyper-parameter. In the first approach, for each linguistic index, we split the dataset into \(m\) partitions based on the index values (similar to Figure 1). Next, using a trained No-CL (§3.3) model, we compute the accuracy for each partition. Then, we find the first-order accuracy trend across these partitions. Linguistic indices with a pronounced slope describe great variance in the data and are considered for our study; we select the top 30% of indices, reducing their count from 482 to 144 for text-pair inputs. In the second approach, we compute pair-wise correlations between all indices. Then, we group highly correlated indices, as shown in Figure 3. From each cluster, we select a representative index, aiming to prevent correlated indices from dominating the aggregation approach and to eliminate redundancy. This method narrows our focus to the following 16 key indices: 1) type-token ratio (TTR), 2) semantic richness, 3) ratio of verbs to tokens, 4) mean TTR of all \(k\)-word segments, 5) total number of verbs, 6) number of unique words, 7) adverbs per sentence, 8) number of unique words in the first \(k\) tokens, 9) ratio of nouns to verbs, 10) semantic noise, 11) lexical sophistication, 12) verb sophistication, 13) clauses per sentence, 14) average SubtlexUS CDlow value per token, 15) adjective variation, 16) ratio of unique verbs. Please refer to Appendix A for definitions of and references to the indices.

Figure 2: At the beginning of training, the sigmoid function with the lowest opacity is used. It is the right-most curve in (a), the left-most curve in (b), and the middle curve in (c). Then, as training progresses, the function is shifted using the parameter \(t\) in (5)-(7), causing samples with a higher complexity to be assigned higher confidence if (a) is used, samples with a lower complexity to be assigned higher confidence if (b) is used, and samples with medium complexity to be assigned higher confidence if (c) is used.

## 3 Experiments

### Datasets

We evaluate NLP models in learning the tasks of the following datasets: * **SNLI**: Stanford Natural Language Inference (Bowman et al., 2015). The task is to classify a pair of sentences by the relation between them as one of entailment, neutral, or contradiction. * **CoLA**: Corpus of Linguistic Acceptability (Warstadt et al., 2019). It is a task of classifying sentences as grammatical vs. ungrammatical. * **ANLI**: Adversarial Natural Language Inference (Nie et al., 2020). This NLI dataset was created with a model in the loop, by only adding samples to the dataset that fool the model. We train only on the ANLI training set of 162k samples. * **SST-2**: Stanford Sentiment Treebank (Socher et al., 2013). The task is to predict the sentiment of a given sentence as positive or negative. * **RTE**: Recognizing Textual Entailment (Wang et al., 2018). The task is to determine if a given sentence is entailed by another given sentence. * **AN-Pairs**: Adjective-noun pairs from the Cambridge ESOL First Certificate in English (FCE) exams (Kochmar and Briscoe, 2014). The task is to detect if an adjective-noun pair, including pairs that are typically confusing to language learners, is used correctly in the context of a sentence.
* **GED**: Grammatical Error Detection (Yannakoudakis et al., 2011). The task is to identify grammar errors at the word level in given sentences.

### Difficulty Scoring Functions

The curriculum learning approaches in §2.2 use difficulty scores or compute confidence to quantify sample difficulty in order to rank sentences. We use two difficulty scores: aggregate linguistic complexity, **Ling** (see §2.1.3), and **Loss** (Xu et al., 2020; Wu et al., 2021; Zhou et al., 2020). We take the loss from a proxy model (No-CL in §3.3) by recording all sample losses twice per epoch during training and computing the sample-wise average.

Figure 3: Eliminating redundancy in linguistic indices. (a) shows the Pearson's correlation coefficient between each pair of linguistic indices. (b) is created by reordering the rows and columns of (a), such that mutually correlated indices are clustered into blocks using hierarchical clustering (Kumar et al., 2000). Best seen in color; lighter areas indicate greater correlations among index pairs or groups.

### Baselines

We consider a no-curriculum baseline as well as several recent curriculum learning approaches. * **No-CL**: no-curriculum uses standard random mini-batch sampling from the whole dataset without sample weighting. * **Sampling** (Bengio et al., 2009) uses the easiest subset of the dataset at each stage of training. Instead of randomly sampling a mini-batch from the whole dataset, a custom data sampler is created that provides the subset consisting of the easiest \(\alpha\)% of data when training progress is at \(\alpha\)%. Sampling is the same as **Competence**-based curriculum learning with a linear competence function. * **SL-CL & WR-CL** (Platanios et al., 2019) is a curriculum learning approach that defines a time-varying function of the model's competence (defined as the fraction of training data that the model uses at every step) and a difficulty score of the data. At each iteration, a mini-batch is sampled from the subset of data with difficulty smaller than the model's competence--a pre-defined non-linear function. The model employs sentence length (SL-CL) and word rarity (WR-CL) as difficulty measures. * **SuperLoss** (Castells et al., 2020) uses instantaneous loss to compute task-agnostic sample confidence. It emphasizes easy samples and de-emphasizes hard samples based on the global average loss as the difficulty threshold. * **Concat** (Lee et al., 2021) concatenates linguistic indices to language embeddings before classification. Lee et al. (2021) and Meng et al. (2020) reported low performance as a result of appending features to embeddings. However, our approach succeeds in utilizing concatenated features. * **Data Selection** (Mohiuddin et al., 2022) is an online curriculum learning approach. It evaluates the training data at every epoch and uses loss as the difficulty score. It selects the middle 40% of samples according to difficulty. We compare the above models against our approaches, **Ling-CL**, which aggregates linguistic indices using weighted-average or max-index aggregation and applies different curriculum strategies: sigmoid, negative-sigmoid, and Gaussian weighting, as well as sampling and competence-based approaches; see §3.3. We test variants of our approach with the correlation method, optimization method, and index filtering. We report results of the _max_ aggregation (§2.1.3) approach as it performs better than the weighted average and is computationally cheaper.
**Loss-CL** computes loss as a difficulty score by recording the losses of samples during the training of No-CL. The loss during the early stages of training, generated by an under-trained model, is a good measure of the relative difficulty of both training and validation samples.

### Evaluation Metrics

Linguistic disparity can be quantified by the extent of _asymmetry_ in the probability distribution of the linguistic complexity of samples in a dataset; e.g., see Figure 1 in §1. A natural solution for evaluating models is to group samples based on their linguistic complexity. Such grouping is crucial because if easy samples are overrepresented in a dataset, then models can obtain unrealistically high performance on that dataset. Therefore, we propose to partition datasets based on a difficulty metric (linguistic index or loss) and compute the balanced accuracy of different models on the resulting groups. This evaluation approach reveals great weaknesses in models, and in benchmark datasets or tasks that seemed almost "solved", such as the complex task of NLI.

### Experimental Settings

We use the transformer model _roberta-base_ (Liu et al., 2019) from (Wolf et al., 2020), run each experiment with at least two random seeds, and report the average performance. We use the AdamW (Loshchilov and Hutter, 2018) optimizer with a learning rate of \(1\times 10^{-5}\), a batch size of 16, and a weight decay of \(1\times 10^{-2}\) for all models. The model checkpoint with the best validation accuracy is used for the final evaluation. In NLI tasks with a pair of text inputs, the indices of both texts are used. For Ling-CL, we optimize the choice of index importance estimation method and aggregation method. For the baselines, we optimize the parameters of SuperLoss (\(\lambda\) and the moving average method), and the two parameters of the SL-CL and WR-CL models for each dataset. For data selection, we use a warm-up period of \(20\)% of the total training iterations.

### Enhanced Linguistic Performance

Table 1 shows the performance of different models when test samples are grouped based on word rarity. The results show that the performance of the baseline models severely drops compared to standard training (No-CL). In contrast, our Ling-CL approach results in a 4.5-point absolute improvement in accuracy over the best-performing baseline averaged across tasks, owing to its effective use of linguistic indices. Appendix D shows the overall results on the entire test sets, and results when test samples are grouped based on their loss; we use loss because it is a widely-used measure of difficulty in curriculum learning. These groupings allow for a detailed examination of the model's performance across samples of varying difficulty, providing insights into the strengths and weaknesses of models. For example, the performance on SNLI varies from 89.8 to 90.6. However, when word rarity is used to group data based on difficulty, the performance drops significantly, ranging from 74.4 to 83.6, indicating the importance of the proposed evaluation measure. We observe that such grouping does not considerably change the performance on ANLI, which indicates the high quality of the dataset. In addition, it increases model performance on AN-Pairs and GED, which indicates a greater prevalence of harder examples in these datasets. On average, the optimization approach outperforms the correlation approach by 1.6% \(\pm 1.9\%\) accuracy in our experiments.
Also notably, on average, the argmax index aggregation outperforms the weighted average by 1.9% \(\pm 1.9\%\), and the filtered indices outperform the full list of indices by 1.4% \(\pm 1.1\%\).

### Learning Dynamics for NLP Tasks

**Identification of Key Linguistic Indices.** We analyze the linguistic indices that most contribute to learning NLP tasks. For this purpose, we use the evaluation approach described in §3.4 for computing balanced accuracy according to linguistic indices. Table 2 shows the top three important linguistic indices for each dataset as identified by our optimization algorithm using the Gaussian curriculum. Importance is measured by the average \(\rho\) value. Early, middle, and late divide the training progress into three equal thirds. The top index in the early stage is the index with the highest average \(\rho\) during the first 33.3% of training. The top indices are those that most accurately estimate the true difficulty of samples, as they should highly correlate with validation loss. Table 2 shows that different indices are important for different tasks. This means that it is not possible to use a single set of linguistic indices as a general text difficulty score; rather, important indices can be identified for each task, which can be achieved by our index importance estimation approach (§2.1) and evaluation metric (§3.4).

**Analysis of Linguistic Indices for Grammar Tasks.** We consider the grammar tasks for analysis. For AN-Pairs (adjective-noun pairs), during the early stage, the top indices are the number of tokens per sentence, the age of acquisition (AoA) of words, and the mean length of sentences. This is meaningful because longer sentences might introduce modifiers or sub-clauses that can create ambiguity or make it more challenging to discern the intended adjective-noun relationship accurately.
Regarding AoA, words that are acquired later in life or belong to more specialized domains might pose challenges in accurately judging the correct usage of adjective-noun pairs because of their varying degrees of familiarity and the potential difficulty associated with specific vocabulary choices.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline & **ANLI** & **COLA** & **RTE** & **SNLI** & **SST2** & **AN-Pairs** & **GED** & **Average** \\ \hline **Ling-CL [NegSig]** & 59.3 \(\pm\) 2.55 & 72.4 \(\pm\) 0.40 & **79.1** \(\pm\) 8.47 & 82.8 \(\pm\) 8.35 & 92.2 \(\pm\) 0.22 & 79.1 \(\pm\) 1.55 & 75.3 \(\pm\) 0.67 & 77.2 \(\pm\) 3.17 \\ **Ling-CL [Gauss]** & **60.9** \(\pm\) 1.41 & **73.0** \(\pm\) 0.02 & 77.2 \(\pm\) 8.08 & 83.5 \(\pm\) 3.93 & 92.4 \(\pm\) 0.27 & 82.9 \(\pm\) 1.24 & 75.5 \(\pm\) 0.41 & **77.9** \(\pm\) 2.83 \\ **Ling-CL [Sig]** & 58.1 \(\pm\) 0.17 & 64.6 \(\pm\) 8.91 & 78.7 \(\pm\) 8.57 & 83.0 \(\pm\) 8.48 & 92.3 \(\pm\) 0.01 & 82.3 \(\pm\) 0.03 & **75.9** \(\pm\) 0.10 & 76.4 \(\pm\) 3.92 \\ \hline **Loss-CL [NegSig]** & 59.0 \(\pm\) 0.31 & 55.6 \(\pm\) 0.64 & 68.1 \(\pm\) 1.59 & 75.1 \(\pm\) 0.05 & 91.6 \(\pm\) 0.26 & 76.4 \(\pm\) 5.70 & 75.1 \(\pm\) 1.44 & 71.6 \(\pm\) 1.43 \\ **Loss-CL [Sig]** & 49.7 \(\pm\) 9.58 & 56.6 \(\pm\) 0.37 & 66.8 \(\pm\) 0.28 & **83.6** \(\pm\) 8.37 & 90.9 \(\pm\) 0.42 & 81.4 \(\pm\) 0.61 & 73.3 \(\pm\) 0.29 & 71.8 \(\pm\) 2.85 \\ **Loss-CL [Gauss]** & 49.4 \(\pm\) 11.67 & 57.0 \(\pm\) 1.29 & 67.2 \(\pm\) 1.41 & 75.1 \(\pm\) 0.52 & 91.8 \(\pm\) 0.12 & 80.5 \(\pm\) 2.08 & 74.5 \(\pm\) 0.07 & 70.8 \(\pm\) 2.37 \\ \hline **Sampling** & 49.9 \(\pm\) 10.00 & 64.6 \(\pm\) 8.89 & 67.9 \(\pm\) 0.03 & 83.2 \(\pm\) 8.72 & 91.5 \(\pm\) 0.07 & 82.6 \(\pm\) 3.93 & 73.8 \(\pm\) 1.23 & 73.4 \(\pm\) 4.7 \\ **Competence** & 50.1 \(\pm\) 11.27 & 63.4 \(\pm\) 0.08 & 68.8 \(\pm\) 0.64 & 74.7 \(\pm\) 0.06 & 91.6 \(\pm\) 0.03 & **84.0** \(\pm\) 1.14 & 74.1 \(\pm\) 0.39 & **72.4** \(\pm\) 3.23 \\ **SL-CL** & 50.3 \(\pm\) 10.05 & 55.8 \(\pm\) 0.06 & 67.7 \(\pm\) 1.27 & 82.6 \(\pm\) 8.35 & **93.1** \(\pm\) 0.00 & 81.6 \(\pm\) 0.72 & 75.2 \(\pm\) 0.26 & 72.3 \(\pm\) 2.96 \\ **WR-CL** & 50.9 \(\pm\) 9.80 & 56.1 \(\pm\) 0.53 & 68.4 \(\pm\) 0.73 & 74.5 \(\pm\) 0.16 & 91.5 \(\pm\) 0.16 & 80.1 \(\pm\) 0.81 & 75.2 \(\pm\) 0.17 & 71.0 \(\pm\) 1.77 \\ **SuperLoss** & 39.5 \(\pm\) 0.14 & 56.9 \(\pm\) 0.69 & 69.6 \(\pm\) 0.50 & 75.2 \(\pm\) 0.14 & 91.7 \(\pm\) 0.26 & 77.8 \(\pm\) 1.89 & 74.2 \(\pm\) 0.15 & 69.3 \(\pm\) 0.54 \\ **Concat** & 51.3 \(\pm\) 9.83 & 64.3 \(\pm\) 8.03 & 71.4 \(\pm\) 0.51 & 75.2 \(\pm\) 0.24 & 91.9 \(\pm\) 0.14 & 81.8 \(\pm\) 1.66 & 73.3 \(\pm\) 0.91 & 72.8 \(\pm\) 3.05 \\ **Data Selection** & 46.8 \(\pm\) 6.12 & 55.1 \(\pm\) 1.71 & 66.6 \(\pm\) 1.49 & 74.4 \(\pm\) 0.49 & 91.5 \(\pm\) 0.30 & 79.6 \(\pm\) 1.03 & 75.5 \(\pm\) 0.52 & 69.9 \(\pm\) 1.67 \\ **No-CL** & 51.7 \(\pm\) 8.21 & 57.0 \(\pm\) 0.22 & 70.0 \(\pm\) 0.48 & 83.3 \(\pm\) 8.42 & 83.7 \(\pm\) 8.22 & 82.1 \(\pm\) 0.51 & 74.0 \(\pm\) 0.14 & 71.1 \(\pm\) 3.74 \\ \hline \end{tabular} \end{table} Table 1: Balanced accuracy by linguistic index (word rarity). Accuracy is the metric for all datasets except CoLA and GED; CoLA uses Matthews correlation and GED uses the \(F_{\beta=0.5}\) score. Ling-CL uses the aggregate linguistic complexity we create as a difficulty score, and Loss-CL uses the average loss of a sample throughout a full training run.
During the middle stage, AoA increases in importance and remains challenging to the model, and the number of adverbs per sentence rises in rank and joins the top three indices. In the context of adjective-noun pairs, the presence of multiple adverbs in a sentence can potentially affect the interpretation and intensity of the adjective's meaning. This is because adverbs often modify verbs, adjectives, or other adverbs in sentences. In addition, depending on the specific adverbs used, they may enhance, weaken, or alter the intended relationship between the adjective and the noun. Moreover, the presence of several adverbs can simply introduce potential challenges in identifying and correctly interpreting the relationship between adjectives and nouns due to increasing syntactic complexity. In the third stage, the number of adverbs per sentence becomes the top index, while AoA and the number of tokens per sentence drop out of the top three. In the early stage, AoA and the number of tokens have \(\rho\) values of 0.168 and 0.164, respectively. In the late stage, they drop to 0.11 and 0.13, while the number of adverbs per sentence is 0.138 early and increases to 0.181 in the late stage. We see that indices may become dominant not only by increasing their \(\rho\) value but also by waiting for other indices to drop once they have been learned by the model. Therefore, Ling-CL can determine the order in which to learn linguistic indices, and then learn them sequentially. Regarding GED, noun variation is the dominant index throughout the training process. Such variation is important because it affects syntactic agreement, subject-verb agreement, modifier placement, and determiner selection. These factors affect grammatical consistency and coherence within the sentence structure, leading to the importance of noun variation throughout the training process.

\begin{table} \begin{tabular}{l c c c} & **Early** & **Middle** & **Late** \\ \hline \multirow{3}{*}{**AN-Pairs**} & \# Tokens per sentence & Lemmas age of acquisition & \# Adverbs per sentence \\ & Lemmas age of acquisition & \# Tokens per sentence & Corrected TTR \\ & Mean sentence length & \# Adverbs per sentence & Nouns to adjective ratio \\ \hline \multirow{3}{*}{**GED**} & \multicolumn{3}{c}{Corrected noun variation} \\ & \# Tokens per sentence & \# Nouns per sentence & \# Tokens per sentence \\ & \# Nouns per sentence & \# Tokens per sentence & \# Nouns per sentence \\ \hline \multirow{2}{*}{**RTE**} & Ratio of Adverbs to Verbs (P) & \multirow{2}{*}{\begin{tabular}{c} RATE \\ \end{tabular} } \\ & Ratio of Subordinating Conjunctions to Verbs (P) & \multirow{2}{*}{\begin{tabular}{c} Adverb Variation (P) \\ Adverbs per sentence (P) \\ \end{tabular} } \\ & Verb sophistication (P) & & Adverbs per sentence (P) \\ \hline \multirow{3}{*}{**ANLI**} & Lexical verb variation (P) & \multicolumn{2}{c}{Function words per sentence (H)} \\ & Unique Entities (P) & \multicolumn{2}{c}{Log Tokens per log sentences} \\ & Unique Entities per token (P) & \\ \hline \multirow{3}{*}{**SST-2**} & \# Complex nominals & \multicolumn{2}{c}{\multirow{2}{*}{\begin{tabular}{c} \\ Noun variation \\ \end{tabular} }} \\ & Ratio of nouns to verbs & \multicolumn{2}{c}{Verb variation} \\ \hline \multirow{3}{*}{**CoLA**} & \# Function words & \multicolumn{2}{c}{\# Coordinating Conjunctions} \\ & Number of T-units & \multicolumn{2}{c}{T-units per sentence} \\ \hline \multirow{3}{*}{**SNLI**} & \multicolumn{2}{c}{Lemmas age of acquisition (P)} \\ & Linear Write Formula Score (P) & \multicolumn{2}{c}{Gunning Fog Count Score (P)} \\ \hline \end{tabular} \end{table} Table 2: Top three important linguistic indices at each stage of learning. For datasets with a premise (P) and hypothesis (H), they are indicated in parentheses.
Figure 4: The progression of the estimated importance factors \(\rho\) and balanced accuracy for groups of linguistic indices.

**Dominant Indices for the CoLA Task.** Regarding CoLA, the number of function words and the number of coordinating conjunctions are the dominant indices at the early stage, and at the middle and late stages of training, respectively. These words are crucial in establishing the syntactic structure of a sentence. They directly contribute to agreement and references, coherence, and adherence to grammar rules. We note that T-units (independent/main clauses with their associated subordinate clauses) are higher-order linguistic constituents that provide information about the dependency relations between sub-constituents and the overall coherence of sentences. Indices related to T-units are among the top three crucial indices.

**Trends and Relationships between \(\rho\) and Balanced Accuracy.** We use the GED dataset (§3.1) to analyze the trends of \(\rho\) throughout training, and the relation between \(\rho\) and balanced accuracy. Figure 4 shows the progression of \(\rho\) alongside the progression of balanced accuracy for selected linguistic indices. This figure is produced using No-CL. We observe across several indices that \(\rho\) is high when balanced accuracy is low, indicating that the index is challenging to the model and is therefore used for learning with a high \(\rho\), which decreases as the index is learned. However, Figure 4(a) shows that \(\rho\) does not necessarily decrease when balanced accuracy increases. In this case, it means that the model is performing relatively well on the index, but the index remains predictive of loss. So, although the average performance increased, the variation in performance among different values of the index remains high. We find that numerous indices follow the same trend of \(\rho\). In Appendix B, we propose a method for clustering \(\rho\) to effectively uncover patterns and similarities in the learning of different indices. However, further analysis of the dynamics of \(\rho\) is the subject of our future work. In addition, we find that the rank of the top indices is almost constant throughout training. This quality may be useful in creating an approach that gathers the index rankings early on and utilizes them for training. Appendix E lists influential indices by their change in \(\rho\) across stages of training. We note that the "number of input sentences" index is the least important metric because the index is almost constant across samples--75% of the samples consist of a single sentence in the datasets.

## 4 Conclusion and Future Work

We propose a new approach to linguistic curriculum learning. Our approach estimates the importance of multiple linguistic indices and aggregates them, provides effective difficulty estimates through correlation and optimization methods, and introduces novel curricula for using difficulty estimates, to uncover the underlying linguistic knowledge that NLP models learn during training. Furthermore, we present a method for a more accurate and fair evaluation of computational models for NLP tasks according to linguistic indices.
Furthermore, the estimated importance factors present insights about each dataset and NLP task, the linguistic challenges contained within each task, and the factors that most contribute to model performance on the task. Further analysis of such learning dynamics for each NLP task will shed light on the linguistic capabilities of computational models at different stages of their training. Our framework and the corresponding tools serve as a guide for assessing linguistic complexity for various NLP tasks and uncovering the learning dynamics of the corresponding NLP models during training. While we conducted our analysis on seven tasks and extracted insights on the key indices for each task, NLP researchers have the flexibility to either build on our results or apply our approach to other NLP tasks to extract relevant insights. Promising areas for future work include investigations into deriving optimal linguistic curricula tailored to each NLP task; examining and enhancing the linguistic capabilities of different computational models, particularly with respect to linguistically complex inputs; and developing challenge datasets that carry a fair distribution of linguistically complex examples for various NLP tasks. In addition, future work could study why specific indices are important, how they connect to the linguistic challenges of each task, and how different linguistic indices jointly contribute to learning a target task. We expect other aggregation functions, such as log-average, exponential-average, and probabilistic selection of the maximum, to be effective approaches for difficulty estimation based on validation loss. Finally, other variations of the proposed Gaussian curriculum could be investigated for model improvement.

## 5 Limitations

Our work requires the availability of linguistic indices, which in turn requires expert knowledge. Such availability requirements may not be fulfilled in many languages. Nevertheless, some linguistic complexity indices are language-independent, such as the commonly-used "word rarity" measure, which facilitates extending our approach to other languages. Moreover, our approach relies on the effectiveness of specific linguistic complexity indices for the target tasks and datasets employed for evaluation; different linguistic complexity indices may not capture all aspects of linguistic complexity and may yield different results for the same task or dataset. In addition, the incorporation of linguistic complexity indices and the generation of data-driven curricula can introduce additional computational overhead during the training process. Finally, our approach does not provide insights into the interactions between linguistic indices during training.
2309.14481
Strange Expectations in Affine Weyl Groups
Our main result is a generalization, to all affine Weyl groups, of P. Johnson's proof of D. Armstrong's conjecture for the expected number of boxes in a simultaneous core. This extends earlier results by the second and third authors in simply-laced type. We do this by modifying and refining the appropriate notion of the "size" of a simultaneous core. In addition, we provide combinatorial core-like models for the coroot lattices in classical type and type $G_2$.
Eric Nathan Stucky, Marko Thiel, Nathan Williams
2023-09-25T19:25:09Z
http://arxiv.org/abs/2309.14481v1
# Strange expectations in affine Weyl groups ###### Abstract. Our main result is a generalization, to all affine Weyl groups, of P. Johnson's proof of D. Armstrong's conjecture for the expected number of boxes in a simultaneous core. This extends earlier results by the second and third authors in simply-laced type. We do this by modifying and refining the appropriate notion of the "size" of a simultaneous core. In addition, we provide combinatorial core-like models for the coroot lattices in classical type and type \(G_{2}\). 2020 Mathematics Subject Classification: Primary 05E15; Secondary 20F55, 13F60

## 1. Introduction

### Motivation

Macdonald's celebrated affine denominator formula \[\prod_{\alpha\in\widetilde{\Phi}^{+}}(1-e^{-\alpha})^{\operatorname{mult}(\alpha)}=\sum_{w\in\widetilde{W}}(-1)^{\ell(w)}e^{w(\rho)-\rho}\] specializes to many famous identities, including Euler's pentagonal number theorem, Jacobi's triple product identity, and Dyson's identity for Ramanujan's \(\tau\)-function [10, 11]. One such specialization--for simply-laced types--is the equality \[\prod_{i=1}^{\infty}c(x^{i})=\left(\prod_{i=1}^{\infty}\frac{1}{1-x^{hi}}\right)^{n}\sum_{q\in\mathcal{Q}}x^{\left(\frac{h}{2}q-\rho,q\right)}, \tag{1}\] where \(h\) is the _Coxeter number_ and \(c(x)\) is the characteristic polynomial of a _Coxeter element_. There is a version for all types, which Macdonald refers to but omits in [10]:¹ \[\sum_{q\in\mathcal{Q}}x^{\left(\frac{h}{2}q-\rho,q\right)}=\prod_{i=1}^{\infty}\left[(1-x^{i})^{n_{s}}(1-x^{ri})^{n_{\ell}}\left(\prod_{\alpha\in\Phi_{s}}(1-x^{i}\omega^{\operatorname{ht}(\alpha)})\right)\left(\prod_{\alpha\in\Phi_{\ell}}(1-x^{ri}\omega^{\operatorname{ht}(\alpha)})\right)\right], \tag{2}\] where \(n_{s}\) and \(n_{\ell}\) count the number of short and long roots, \(\omega\) is a primitive \(h\)th root of unity, \(r\) is the ratio of the length of a long root to a short root, \(\Phi_{s}\) and \(\Phi_{\ell}\) are the sets of short and long roots, and \(\operatorname{ht}(\alpha)\) is the height of the root \(\alpha\).

Footnote 1: At the end of [10, Section 8], Macdonald writes "When \(R\) contains roots of different lengths, the formula corresponding to [Equation (1)] is more complicated, and we shall not reproduce it here."

### Partitions

Recall that an _integer partition_ is a non-increasing sequence of positive integers \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{k})\). The _Ferrers diagram_ of an integer partition \(\lambda\) (under the English convention) is a top-left justified subset of \(\mathbb{N}\times\mathbb{N}\) with \(\lambda_{i}\) boxes in the \(i\)-th row (counting from the top). The _hook_ of a given box in a Ferrers diagram consists of the box itself, together with the boxes to its right in the same row and the boxes below it in the same column. An example is given in Figure 1. An _\(a\)-core_ is an integer partition with no hook of length \(a\). For example, in type \(A\), Equation (1) can be interpreted as the beautiful combinatorial formula \[\prod_{i=1}^{\infty}\frac{1}{1-x^{i}}=\left(\prod_{i=1}^{\infty}\frac{1}{1-x^{ai}}\right)^{a}\sum_{q\in\mathsf{core}(a)}x^{\mathsf{size}(q)}. \tag{3}\] An _\((a,b)\)-core_ is a partition that is simultaneously an \(a\)-core and a \(b\)-core. For \(a\) and \(b\) relatively prime, it turns out that there are only finitely many \((a,b)\)-cores: \[\big{|}\mathsf{core}(a,b)\big{|}=\frac{1}{a+b}\binom{a+b}{b}.\] For \(\lambda\) a partition, write \(\lambda^{\intercal}\) for its conjugate and \(\mathsf{size}(\lambda)\) for the number of its boxes.
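As a quick illustration of these definitions (a sketch of ours, not part of the paper), the following Python snippet computes hook lengths and tests the \(a\)-core condition, reproducing the example of Figure 1.

```python
def conjugate(lam):
    """Column lengths of the Ferrers diagram (the conjugate partition)."""
    return [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []

def hook_lengths(lam):
    """Hook length of each box (i, j): boxes to the right, below, plus itself."""
    conj = conjugate(lam)
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def is_a_core(lam, a):
    # "No hook of length a"; by a classical fact this is equivalent to
    # no hook length being divisible by a.
    return all(h % a != 0 for h in hook_lengths(lam))

lam = (5, 3, 1, 1)
print(sorted(hook_lengths(lam), reverse=True))  # [8, 5, 5, 4, 2, 2, 2, 1, 1, 1]
print(is_a_core(lam, 3), is_a_core(lam, 4))     # True False, as in Figure 1
```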
The starting point for a number of recent investigations has been Armstrong's conjecture on the average number of boxes in an \((a,b)\)-core, and in a self-conjugate \((a,b)\)-core [1, 1], which can be thought of as a sort of finite version of Equation (3).

**Theorem 1.1** ([18]).: _For \(\gcd(a,b)=1\),_ \[\operatorname*{\mathbb{E}}_{\lambda\in\mathsf{core}(a,b)}(\mathsf{size}(\lambda))=\frac{(a-1)(b-1)(a+b+1)}{24}=\operatorname*{\mathbb{E}}_{\begin{subarray}{c}\lambda\in\mathsf{core}(a,b)\\ \lambda=\lambda^{\intercal}\end{subarray}}(\mathsf{size}(\lambda)).\]

Both equalities in Theorem 1.1 were proven by Johnson using weighted Ehrhart theory [18]; the second equality was first proven by Chen, Huang, and Wang [1]. In [19], we generalized Armstrong's conjecture and Johnson's proof of the first equality to all _simply-laced_ affine Weyl groups, thereby giving a sort of finite analogue of Equation (1). In the present paper, we find and--in the words of Macdonald--_reproduce_ the generalization to all affine Weyl groups, giving a finite version of Equation (2).
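Theorem 1.1 is easy to check by brute force for small parameters; the sketch below (ours) enumerates all \((3,4)\)-cores by filtering partitions of size at most \((a^{2}-1)(b^{2}-1)/24\) (the known maximum size of an \((a,b)\)-core) and confirms that there are \(\frac{1}{7}\binom{7}{3}=5\) of them with average size \(\frac{2\cdot 3\cdot 8}{24}=2\), both over all cores and over the self-conjugate ones.

```python
from fractions import Fraction

def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    return tuple(sum(p > j for p in lam) for j in range(lam[0])) if lam else ()

def hook_lengths(lam):
    conj = conjugate(lam)
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def is_core(lam, m):
    return all(h % m != 0 for h in hook_lengths(lam))

a, b = 3, 4
max_size = (a * a - 1) * (b * b - 1) // 24   # largest possible (a,b)-core
cores = [lam for n in range(max_size + 1) for lam in partitions(n)
         if is_core(lam, a) and is_core(lam, b)]
print(len(cores))                                  # 5
print(Fraction(sum(map(sum, cores)), len(cores)))  # 2, matching Theorem 1.1
sc = [lam for lam in cores if conjugate(lam) == lam]
print(Fraction(sum(map(sum, sc)), len(sc)))        # 2 as well
```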
### Combinatorial Models of Coroot Lattices

The set of \(a\)-cores under the action of the affine symmetric group \(\widetilde{\mathfrak{S}}_{a}\) is a well-studied combinatorial model for the coroot lattice \(\mathcal{Q}_{a}^{{}^{\vee}}\) of type \(A_{a-1}\). Indeed, for all affine Weyl groups \(\widetilde{W}=\widetilde{W}(X_{n}):=W\ltimes\mathcal{Q}_{X_{n}}^{{}^{\vee}}\), there is a well-known \(\widetilde{W}\)-equivariant map from the group to the coroot lattice \(\widetilde{W}\to\mathcal{Q}_{X_{n}}^{{}^{\vee}}\) given by \(\widetilde{w}\mapsto\widetilde{w}(0)\), which restricts to a \(\widetilde{W}\)-equivariant bijection on the cosets \(\widetilde{W}/W\). Thus, combinatorial models for \(\mathcal{Q}_{X_{n}}^{{}^{\vee}}\) also give models for \(\widetilde{W}/W\), with representatives usually taken to be dominant affine elements. In type \(A_{a-1}\), these correspondences give \(\widetilde{\mathfrak{S}}_{a}\)-equivariant bijections \[\begin{split}\mathsf{core}(a)\leftrightarrow\mathcal{Q}_{A_{a-1}}^{{}^{\vee}}&\leftrightarrow\widetilde{\mathfrak{S}}_{a}/\mathfrak{S}_{a}\\ \lambda\leftrightarrow q_{\lambda}&\leftrightarrow\widetilde{w}_{\lambda}.\end{split} \tag{4}\] We describe the first of these bijections \((\lambda\leftrightarrow\mathcal{Q}_{A_{a-1}}^{{}^{\vee}})\) in detail in Section 3.1.

**Remark 1.2**.: Here and throughout, all actions of \(\widetilde{W}\) on various sets are left-actions. In particular, this means that given a reduced expression \(a_{1}\cdots a_{k}\), the corresponding simple transpositions act in _decreasing_ order of the indices (that is, we "read" right to left).

To produce similar combinatorial models for the quotients \(\widetilde{W}/W\) of other classical types \((X_{n}\in\{A_{n},B_{n},C_{n},D_{n}\})\), we may embed \(\mathcal{Q}_{X_{n}}^{{}^{\vee}}\) into an appropriate type \(A\) coroot lattice. Figure 2 illustrates these in rank \(2\), as well as a similar model for \(X_{n}=G_{2}\).

Figure 1. The partition \(\lambda=(5,3,1,1)\), with boxes labeled by the lengths of their hooks. It is a \(3\)-core (since it has no hooks of size \(3\)) but not a \(4\)-core.

Under the correspondence between \(a\)-cores and \(\mathcal{Q}_{a}^{\vee}\) of Equation (4), the set of \((a,b)\)-cores turns out to be exactly those coroot points that sit inside a certain affine transformation \(\mathcal{S}(b)\) of the fundamental alcove, which includes a \(b\)-fold dilation, called the \(b\)-Sommers region (see Definition 6.1). The natural generalization of \(\mathsf{core}(a,b)\) to any affine Weyl group is therefore the intersection of the coroot lattice \(\mathcal{Q}_{X_{n}}^{\vee}\) with its \(b\)-Sommers region, so that \(\mathsf{core}(a,b)=\mathsf{core}(A_{a-1},b)\). In other words, \[\mathsf{core}(X_{n},b):=\mathcal{Q}_{X_{n}}^{\vee}\cap\mathcal{S}_{X_{n}}(b). \tag{5}\]

### Previous Work

Under the bijections of Equation (4), we noticed in [13] that the number of boxes in \(\lambda\) could be computed from the coroot \(q_{\lambda}\) as described above, or from the inversion set of \(\widetilde{w}_{\lambda}^{-1}\), where \(\mathsf{inv}(\widetilde{w})=\widetilde{\Phi}^{+}\cap\widetilde{w}(-\widetilde{\Phi}^{+})\). More precisely:

**Proposition 1.3** ([13, Proposition 6.4 & Corollary 6.7]).: _Let \(\lambda\) be an \(a\)-core and \(\rho^{\vee}\) be the sum of the fundamental coweights in type \(A_{a-1}\). Then_ \[\mathsf{size}(\lambda)=\sum_{\alpha+k\delta\in\mathsf{inv}(\widetilde{w}_{\lambda}^{-1})}k=\left\langle\frac{a}{2}q_{\lambda}-\rho^{\vee},q_{\lambda}\right\rangle.\]

It was natural to consider the corresponding statistic in _any_ affine Weyl group \(\widetilde{W}(X_{n})\) acting on \(V\), restricting to a certain finite set of coroots \(\mathsf{core}(X_{n},b)\) (defined above in Equation (5), in analogy with simultaneous \((a,b)\)-cores). The latter two authors showed that for simply-laced Weyl groups, the result mirrored Theorem 1.1.

**Theorem 1.4** ([13, Theorem 1.10]).: _Let \(X_{n}\) be a simply-laced Cartan type with Coxeter number \(h\), and let \(b\) be coprime to \(h\). Then_ \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}(q))=\frac{n(b-1)(h+b+1)}{24}.\]

When applied to \(X_{n}=A_{a-1}\) (so that \(n=a-1\) and \(h=a\)), this result gives a proof of the left equality of Theorem 1.1 for the expected size of simultaneous \((a,b)\)-cores. But since self-conjugate cores are a combinatorial model for coroots in the _non-simply-laced_ type \(C_{n}\), we were unable to similarly specialize Theorem 1.4 to conclude the right equality of Theorem 1.1 for the expected size of a self-conjugate simultaneous core.

### Improved Size Statistic

In this paper, we describe a modification of the \(\mathsf{size}\) statistic that incorporates the lengths of the roots. This appears advantageous over the original statistic of [13]; we are able to apply the Ehrhart-theoretic techniques of P. Johnson outside of simply-laced type.

Figure 2. \(3\)-cores in types \(A_{2}\) and \(G_{2}\), and self-conjugate \(4\)-cores in type \(C_{2}\).

Normalize root systems so that the highest root \(\tilde{\alpha}\) satisfies \(\langle\tilde{\alpha},\tilde{\alpha}\rangle=2\), and write \(r\) for the ratio of the length of a long root to that of a short root. For \(\widetilde{w}\in\widetilde{W}/W\), define \[\mathsf{size}^{\vee}(\widetilde{w}):=\left(\sum_{\begin{subarray}{c}\alpha+k\delta\in\mathsf{inv}(\widetilde{w}^{-1})\\ \alpha\text{ long}\end{subarray}}k\right)+r\left(\sum_{\begin{subarray}{c}\alpha+k\delta\in\mathsf{inv}(\widetilde{w}^{-1})\\ \alpha\text{ short}\end{subarray}}k\right). \tag{6}\] This recovers the original statistic \(\mathsf{size}\) in simply-laced type, but disagrees in non-simply-laced type when \(r>1\). A similar statistic was independently considered in [10]. Using a bijection analogous to those of Equation (4), we interpret \(\mathsf{size}^{\vee}\) as a statistic on the combinatorial models of Section 5. For instance, we show that \(\mathsf{size}^{\vee}\) in type \(C_{n}\) corresponds to the number of boxes in the corresponding self-conjugate \(2n\)-core (see Figure 2).
Following the same strategy as in [14], we find an affine Weyl group element that maps \(\mathcal{S}(b)\) to a \(b\)-fold dilation of the fundamental alcove (correctly modifying the \(\mathsf{size}^{\vee}\) statistic), and then apply Ehrhart theory to compute the expected value of \(\mathsf{size}^{\vee}\) on \(\mathsf{core}(X_{n},b)\). **Theorem 1.5**.: _For \(X_{n}\) an irreducible rank \(n\) Cartan type with root system \(\Phi\),_ \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}^{\vee} (q))=\frac{rg^{\vee}}{h}\frac{n(b-1)(h+b+1)}{24},\] _where \(h\) is the Coxeter number of \(X_{n}\), \(g^{\vee}\) is the dual Coxeter number for \(\Phi^{\vee}\), and \(r\) is the ratio of the length of a long root to the length of a short root in \(\Phi\)._ The extra factor of \(\frac{rg^{\vee}}{h}\) is _invisible_ in the simply-laced case, where \(\Phi^{\vee}=\Phi\), \(g^{\vee}=h\), and \(r=1\). As an immediate application of Theorem1.5, we conclude both equalities in Theorem1.1 by specializing to these types. Interestingly, although the expected number of boxes in a simultaneous core and in a self-conjugate simultaneous core happen to be the same, the formulas have quite different interpretations: the factor of \(a-1\) corresponds to the dimension \(n\) for ordinary simultaneous cores, but to \(g^{\vee}\) in the self-conjugate case. We prove Theorem1.5 for non-simply-laced types in Section7, after some setup, including a careful definition of \(\mathcal{S}(b)\). Along the way, we briefly generalize other results from [14], including in particular Theorem6.3 concerning the maximum size. ## 2. Background We give a brief account of most of the notation used in the remainder of the paper for objects associated to affine root systems. For definitions and greater detail, we refer the reader to standard references (e.g. [12]) or to the previous paper of the second and third author [14, Section 2]. ### Root Systems Let \(V\) be a Euclidean space of dimension \(n\), and \(\Phi\subseteq V\) be an irreducible crystallographic root system in \(V\) of type \(X_{n}\). We often suppress the \(X_{n}\) notation when there is only one root system under consideration. Denote a system of simple roots by \(\Delta=\{\alpha_{1},\dots,\alpha_{n}\}\), and the corresponding positive roots by \(\Phi^{+}\). For any \(\alpha\in\Phi\), we may write \(\alpha\) in the basis of simple roots as \(\alpha=\sum_{i=1}^{n}a_{i}\alpha_{i}\), where the coefficients \(a_{i}\) are either all nonnegative or all nonpositive. The _height_ of \(\alpha\) is the sum of the coefficients: \(\operatorname{ht}(\alpha):=\sum_{i=1}^{n}a_{i}\). Notice that \(\operatorname{ht}(\alpha)>0\) if and only if \(\alpha\in\Phi^{+}\) and \(\operatorname{ht}(\alpha)=1\) if and only if \(\alpha\in\Delta\). There is a unique root \(\tilde{\alpha}\) of maximal height called the _highest root_ of \(\Phi\), and we denote its coefficients by \(c_{i}\), that is, \(\tilde{\alpha}=\sum_{i=1}^{n}c_{i}\alpha_{i}\in\Phi\). In addition, the _Coxeter number_ of \(\Phi\) is \(h:=1+\operatorname{ht}(\tilde{\alpha})=1+\sum_{i=1}^{n}c_{i}\). For a root \(\alpha\in\Phi\), define its _coroot_ as \(\tilde{\alpha}:=\frac{2\alpha}{\|\alpha\|^{2}}\). Define the _dual root system_ of \(\Phi\) as \(\Phi^{\vee}:=\{\alpha^{\vee}:\alpha\in\Phi\}\). 
It is itself an irreducible crystallographic root system, and hence also has a highest root \(\widetilde{\gamma}\); note that although \(\widetilde{\gamma}\) is by definition the coroot of some \(\alpha\in\Phi\), this \(\alpha\) is typically not the highest root \(\tilde{\alpha}\). Writing \(\widetilde{\gamma}=\sum_{i=1}^{n}d_{i}\alpha_{i}^{\vee}\) as a sum of the simple coroots in \(\Phi^{\vee}\), then we define the _dual Coxeter number_\(g^{\vee}:=1+\sum_{i=1}^{n}d_{i}\). Define the _coroot lattice_\(\mathcal{Q}^{\vee}\) of \(\Phi\) as the lattice in \(V\) generated by \(\Phi^{\vee}\). Finally, let \((\omega_{1}^{\vee},\omega_{2}^{\vee},\ldots,\omega_{n}^{\vee})\) be the basis that is dual to the basis \((\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\) of \(V\) consisting of the simple roots, so that \(\langle\omega_{i}^{\vee},\alpha_{j}\rangle=\delta_{i,j}\). Then \(\omega_{1}^{\vee},\omega_{2}^{\vee},\ldots,\omega_{n}^{\vee}\) are the _fundamental coweights_. They are a basis of the _coweight lattice_ \[\Lambda^{\vee}:=\{x\in V:\langle x,\alpha\rangle\in\mathbb{Z}\text{ for all }\alpha\in\Phi\}\] of \(\Phi\), which contains \(\mathcal{Q}^{\vee}\) as a sublattice. The sum of these basis elements, \(\rho^{\vee}=\sum_{i=1}^{n}\omega_{i}^{\vee}\), will be of particular importance. For notational convenience, we define \(\omega_{0}^{\vee}:=0\). **Convention 2.1**.: We normalize the inner product \(\langle\cdot,\cdot\rangle\) on \(V\) so that \(\langle\tilde{\alpha},\tilde{\alpha}\rangle=2\) and call \(\alpha\in\Phi\) a _long root_ if \(\langle\alpha,\alpha\rangle=2\). In particular, all long roots are their own coroots. A _short root_ is a root with \(\langle\alpha,\alpha\rangle<2\). If the system has short roots \(\alpha\), then \(\alpha^{\vee}=r\alpha\) for an integer \(r\in\{2,3\}\) independent of \(\alpha\). If the system does not have short roots, it is called _simply-laced_. Note that \(\Phi^{\vee}\) is itself a root system, but not subject to Convention 2.1. ### Affine Weyl Groups and Affine Root Systems The _Weyl group_\(W\) associated to a root system \(\Phi\) is the subgroup of \(\operatorname{GL}(V)\) generated by the _simple reflections_ \[s_{i}=s_{\alpha_{i}}:x\mapsto x-2\frac{\langle\alpha_{i},x\rangle}{\langle \alpha_{i},\alpha_{i}\rangle}\alpha_{i}\] for \(\alpha_{i}\in\Delta\). The corresponding _affine Weyl group_\(\widetilde{W}\) is the subgroup of distance-preserving transformations on \(V\) generated by the simple reflections \(\{s_{\alpha}\}_{\alpha\in\Delta}\) together with the additional _affine simple reflection_ \[s_{0}:x\mapsto x-((\widetilde{\alpha},x)-1)\widetilde{\alpha}.\] One readily checks that the affine Weyl group \(\widetilde{W}\) acts on both \(\mathcal{Q}^{\vee}\) and \(\Lambda^{\vee}\). For any \(y\in V\), there is an associated translation \(t_{y}:x\mapsto x+y\). If we identify \(\mathcal{Q}^{\vee}\) with the corresponding group of translations acting on \(V\), then \(\widetilde{W}\) may be written as the semidirect product \(\widetilde{W}=W\ltimes\mathcal{Q}^{\vee}\). For \(\widetilde{w}\in\widetilde{W}\) we will use the notation \(\widetilde{w}=w\cdot t_{q}\) to denote this semidirect product decomposition. 
This decomposition gives a bijection \(W\backslash\widetilde{W}\to\mathcal{Q}^{\vee}\) given by \(\widetilde{w}\mapsto q\), but we will make frequent use of the following more interesting bijection: **Theorem 2.2**.: _The map \(\widetilde{W}=W\ltimes\mathcal{Q}^{\vee}\to\mathcal{Q}^{\vee}\) defined by \(\widetilde{w}\mapsto\widetilde{w}(0)\) descends to a \(\widetilde{W}\)-equivariant bijection on the cosets \(\widetilde{W}/W\)._ Proof.: Evidently the first map is \(\widetilde{W}\)-equivariant, and because \(g\in W\) implies \(g(0)=0\), we have that \(\widetilde{w}\) and \(\widetilde{w}g\) have the same image. Hence we have a well-defined equivariant map on cosets, and evidently \(q\mapsto t_{q}W\) is its inverse, as desired. The _affine root system_ is defined by \(\widetilde{\Phi}=\Phi\times\mathbb{Z}\), and--writing \(\delta\) for a formal variable to keep track of the coefficient of \(\mathbb{Z}\)--we use the notation \(\alpha+k\delta\) for a typical element of \(\widetilde{\Phi}\). The root system \(\Phi\) embeds in \(\widetilde{\Phi}\) by writing \(\alpha_{i}\) as \(\alpha_{i}+0\cdot\delta\), and we define \(\alpha_{0}:=-\tilde{\alpha}+\delta\). The affine Weyl group \(\widetilde{W}\) acts on \(\widetilde{\Phi}\) by \[\widetilde{w}\cdot(\alpha+k\delta):=w(\alpha)+(k-\langle\alpha,q\rangle)\delta,\] where \(\widetilde{w}=w\cdot t_{q}\). **Definition 2.3**.: Given a reduced word \(\widetilde{\mathbf{w}}=s_{i_{1}}s_{i_{2}}\cdots s_{i_{\ell}}\) for \(\widetilde{w}\in\widetilde{W}\), we define its _inversion sequence_ \[\mathsf{inv}(\widetilde{\mathbf{w}})=\beta_{1}+k_{1}\delta,\beta_{2}+k_{2} \delta,\ldots,\beta_{\ell}+k_{\ell}\delta,\] where \(\beta_{j}+k_{j}\delta\) are the affine roots \((s_{i_{1}}\cdots s_{i_{j-1}})(\alpha_{i_{j}})\). There may be many reduced expressions--and hence many inversion sequences--for a given \(\widetilde{w}\in\widetilde{W}\), but these differ only by a reordering: they record the affine hyperplanes that separate \(w(\mathcal{A})\) from the fundamental alcove \(\mathcal{A}\). ## 3. Core Partitions and the Type A Coroot Lattice As discussed in the introduction, there is a close relation between the coroot lattice for type \(A_{n}\) and certain kinds of partitions. Much of the work in this section is well-known [1, 10, 11], with the exception of the \(size_{i}^{\vee}\)-refinement of Proposition 3.4. We also refer the reader to the recent preprint [10] and to our previous FPSAC abstract on this work [10]. ### Coroots and Cores In type \(A_{a-1}\), one choice of simple roots is \(\alpha_{i}:=e_{i+1}-e_{i}\) for each \(1\leq i<a\). Then the highest root is \(\widetilde{\alpha}=e_{a}-e_{1}\), and the coroot lattice2 is \(\mathcal{Q}_{a}^{\vee}=\mathcal{Q}_{A_{a-1}}^{\vee}:=\{q=(q_{1},q_{2}\dots,q_ {a})\in\mathbb{Z}^{a}:\sum_{i=1}^{a}q_{i}=0\}\,.\) Footnote 2: For safety—even though roots and coroots can be identified in type \(A\)—we already throw in the distinguishing check. An integer partition \(\lambda\) can be characterized by its _boundary word_--a bi-infinite sequence of _beads_, which are either \(\bullet\)s or \(\infty\), that begins with an infinite sequence of only \(\bullet\)s and ends with an infinite sequence of only \(\circ\)s. This word encodes the boundary of \(\lambda\) (in English notation) by detailing the steps taken when traversing from bottom left to top right: \(\bullet\)s representing steps up and \(\circ\)s representing steps right. 
For example, the boundary word for the partition on the left of Figure 3 is read from south-west to north-east as \(\dots\bullet\bullet\circ\bullet\bullet\circ\bullet\circ\circ\bullet\circ\circ\dots\). Partitioning the boundary word into consecutive subsequences of length \(a\) and stacking them vertically gives the \(a\)_-abacus_ representation of \(\lambda\). This is illustrated in the middle of Figure 3. Finally, an \(a\)-abacus is called _balanced_ if we can draw a horizontal line between two rows with as many \(\circ\)s above the line as \(\bullet\)s below; every partition has a unique representation as a balanced \(a\)-abacus. An integer partition \(\lambda\) is an \(a\)-core if and only if its \(a\)-abacus representation is _flush_--that is, if each of the vertical "runners" of the abacus consists of an infinite sequence of only \(\bullet\)s followed by an infinite sequence of only \(\circ\)s. A flush, balanced \(a\)-abacus encodes a _coroot_ as the \(a\)-tuple of signed distances from beneath the lowest \(\bullet\) in each runner to the line witnessing the balanced condition--the balanced condition ensures that these distances sum to zero. We will say that a bead is at _level_\(\ell\) if the distance from beneath the bead to the line witnessing the balanced condition is \(\ell\); note that this means that levels increase when reading down the abacus. This is illustrated on the right of Figure 3. By the discussion above, \(\mathcal{Q}_{a}^{\vee}\) is in bijection with the set of \(a\)-cores \(\mathsf{core}(a)\). **Definition 3.1**.: For \(q\in\mathcal{Q}_{a}^{\vee}\), we write \(\lambda_{q}\) for the \(a\)-core obtained by building the flush, balanced \(a\)-abacus with levels of the lowest \(\bullet\) in each runner given by the coordinates of \(q\), and then reading this as the boundary word of a partition; for \(\lambda\in\mathsf{core}(a)\), we write \(q_{\lambda}\) for the corresponding coroot in \(\mathcal{Q}_{a}^{\vee}\) obtained by reading the boundary word of \(\lambda\), producing the corresponding \(a\)-core, and then reading off the levels of the lowest \(\bullet\) in each runner. The action of the affine symmetric group \(\widetilde{\mathfrak{S}}_{a}=\widetilde{W}(A_{a-1})\) on \(\mathcal{Q}_{a}^{\vee}\) is generated by the usual simple reflections \(s_{i}\) interchanging the \(i^{\mathrm{th}}\) and \((i+1)^{\mathrm{st}}\) positions, along with the additional affine simple reflection \(s_{0}\): \[s_{i}(q_{1},\dots,q_{i},q_{i+1},\dots,q_{a}) =(q_{1},\dots,q_{i+1},q_{i},\dots,q_{a}),\text{ and }\] \[s_{0}(q_{1},\dots,q_{a}) =(q_{a}+1,\dots,q_{1}-1).\] We can translate this action of \(\widetilde{\mathfrak{S}}_{a}\) to the set of \(a\)-cores [11, Section 2.7][10]. We think of a partition as an order ideal in \(\mathbb{N}\times\mathbb{N}\) (top-left justified), where each \((i,j)\in\mathbb{N}\times\mathbb{N}\) is indexed by its _content_\((i-j)\operatorname{mod}a\). For \(0\leq i<a\), let the simple reflection \(s_{i}\) act on a partition by toggling all possible boxes with content \(i\) mod \(n\)--that is, adding all possible missing boxes with content \(i\) which produce a valid Young diagram, or removing all possible present boxes with content \(i\) which produce a valid Young diagram. This extends to an action of the full affine symmetric group \(\widetilde{\mathfrak{S}}_{a}\) on \(a\)-cores. 
**Theorem 3.2**.: _The action of the affine symmetric group \(\widetilde{\mathfrak{S}}_{a}\) is preserved under the bijection between \(\mathcal{Q}_{a}^{\vee}\) and \(\mathsf{core}(a)\) of Definition 3.1. That is, for \(0\leq i<a\), \(q\in\mathcal{Q}_{a}^{\vee}\), and \(\lambda\in\mathsf{core}(a)\), we have_ \[s_{i}(q)=s_{i}(\lambda_{q})\text{ and }s_{i}(\lambda)=s_{i}(q_{\lambda}).\] ### Two Size Statistics For \(\lambda\) a partition, write: \(\lambda^{\intercal}\) for its conjugate; \(\mathsf{size}_{i}(\lambda)\) for the the number of boxes in \(\lambda\) with content \(i\operatorname{mod}a\); and \(\mathsf{size}(\lambda)\) for the total number of its boxes. Under the bijection between coroots and \(a\)-cores, we can interpret these definitions in the language of the coroot lattice. For \(q=(q_{1},\dots,q_{a})\in\mathcal{Q}_{a}^{\vee}\), write \(q^{\intercal}:=(-q_{a},\dots,-q_{1})\) and define \[\mathsf{size}_{i}^{\vee}(q):=\left\langle\frac{1}{2}q-\omega_{i}^{\vee},q \right\rangle\text{ and }\mathsf{size}^{\vee}(q):=\sum_{i=1}^{a-1}\mathsf{size}_{i}^{\vee}(q)= \left\langle\frac{a}{2}q-\rho^{\vee},q\right\rangle.\] Recall that in type \(A_{a-1}\) (up to the usual normalization that the sum of the entries ought to be zero), we have for \(0\leq i<a\): \[\omega_{i}^{\vee} =\sum_{j=1}^{i}e_{i}=(\underbrace{1,1,\dots,1}_{i\text{ ones}},0,0,\dots,0)\] \[\rho^{\vee} =\sum_{i=1}^{a-1}\omega_{i}^{\vee}=(a-1,a-2,\dots,1,0).\] Recall that we previously defined \(\omega_{0}=0\), which agrees with this description. Note that we are able to safely ignore the normalization on \(\omega_{i}^{\vee}\) and \(\rho^{\vee}\) because it is still enforced on the coroot \(q\) when computing \(\mathsf{size}_{i}^{\vee}\). For instance, \[\left\langle\frac{1}{2}q-(\omega_{i}^{\vee}+t(1,\dots,1)),q\right\rangle= \mathsf{size}_{i}^{\vee}(q)-t\langle(1,\dots,1),q\rangle=\mathsf{size}_{i}^{ \vee}(q).\] **Example 3.3**.: Continuing the example from Figure 3, the coroot \(q=(0,2,-2)\in\mathcal{Q}_{3}^{\vee}\) corresponds to the \(3\)-core \(\lambda=\begin{array}{|c|c|c|c|}\hline 0&1&2&0&1\\ \hline 2&0&1\\ \hline 1&\\ \hline 0&\end{array}\). This \(\lambda\) has four boxes with content \(0\operatorname{mod}3\), four with content \(1\operatorname{mod}3\), and two with content \(2\operatorname{mod}3\). We compute \[\mathsf{size}_{0}^{\vee}(q) =\left\langle\frac{1}{2}(0,2,-2),(0,2,-2)\right\rangle=4=\mathsf{ size}_{0}(\lambda),\] \[\mathsf{size}_{1}^{\vee}(q) =\left\langle\frac{1}{2}(0,2,-2)-(1,0,0),(0,2,-2)\right\rangle=4= \mathsf{size}_{1}(\lambda),\] \[\mathsf{size}_{2}^{\vee}(q) =\left\langle\frac{1}{2}(0,2,-2)-(1,1,0),(0,2,-2)\right\rangle=2 =\mathsf{size}_{2}(\lambda).\] This correspondence holds in general, as follows. **Proposition 3.4**.: _For any \(a\)-core \(\lambda\) and \(q=q_{\lambda}\), then \(\lambda^{\intercal}=\lambda_{q^{\intercal}}\) and \(\mathsf{size}^{\vee}(\lambda)=\mathsf{size}(q)\). In fact, for any \(0\leq i<a\), \(\mathsf{size}_{i}(\lambda)=\mathsf{size}_{i}^{\vee}(q)\)._ Proof.: The statement about conjugation follows by observing that the boundary word of a partition and its conjugate are related by reversing and interchanging \(\bullet\leftrightarrow\circ\). Write \(q=(q_{1},\ldots,q_{a})\) and \(\lambda=\lambda_{q}\). 
We compute directly that \[\left\langle\frac{a}{2}q-\omega_{i}^{\vee},q\right\rangle =\sum_{j=1}^{i}\left(\frac{1}{2}q_{j}-1\right)q_{j}+\sum_{j=i+1}^ {a}\frac{1}{2}q_{j}^{2}\] \[=\sum_{j=1}^{i}\left(\frac{(q_{j}-1)q_{j}}{2}-\frac{q_{j}}{2} \right)+\sum_{j=i+1}^{a}\left(\frac{(q_{j}+1)q_{j}}{2}-\frac{q_{j}}{2}\right)\] \[=\sum_{j=1}^{i}\frac{q_{j}(q_{j}-1)}{2}+\sum_{j=i+1}^{a}\frac{(q_ {j}+1)q_{j}}{2}.\] Observing that this is a sum over runners of certain triangular numbers, it would suffice to show the boxes of content \(i\) in \(\lambda\) may be partitioned so each bead on runner \(j\) at level \(\ell\) corresponds to: \[\begin{cases}\ell\text{ boxes}&\ell>0,\text{ black bead, }j\leq i,\\ -\ell\text{ boxes}&\ell\leq 0,\text{ white bead, }j\leq i,\\ \ell-1\text{ boxes}&\ell>0,\text{ black bead, }j>i,\\ -\ell+1\text{ boxes}&\ell\leq 0,\text{ white bead, }j>i.\end{cases}\] To do this, begin by partitioning the diagram beneath the main diagonal; that is, into the boxes with negative content and non-negative content. A black bead on runner \(j\) at level \(\ell>0\) represents a vertical edge on the boundary of \(\lambda\), and every box not beneath the main diagonal is in the same row as some such edge. In that row, on or above the main diagonal, there is one box each of content \(0,1,2,\ldots,(\ell-1)a+j-1\). Counting the number of such boxes with content \(i\bmod a\), we see there are \(\ell-1\) of them if \(j\leq i\), or \(\ell\) if \(j>i\). Similarly, a white bead on runner \(j\) at level \(\ell\leq 0\) represents a horizontal edge on the boundary of \(\lambda\), and every box beneath the main diagonal is in the same row as some such edge. In that row, beneath the main diagonal, there is one box each of content \(-1,-2,\ldots,(\ell-1)a+j\). Counting the number of such boxes with content \(i\bmod a\), we see there are \(-\ell\) of them if \(j\leq i\), or \(-\ell+1\) if \(j>i\). ## 4. Size Statistics in General Type We now turn to the general definition of the size statistic for affine Weyl groups--note that when we leave type \(A\), we do not have a uniform combinatorial interpretation (although see the next Section 5 for interpretations in the other classical types \(B_{n},C_{n},D_{n}\) and in type \(G_{2}\)). **Definition 4.1**.: Fix \(\widetilde{w}\in\widetilde{W}\) and a reduced word \(\widetilde{\mathbf{w}}^{-1}=a_{1}\cdots a_{\ell}\) for \(\widetilde{w}^{-1}\), with inversion sequence \(\mathsf{inv}(\widetilde{\mathbf{w}}^{-1})=\beta_{1}+k_{1}\delta,\beta_{2}+k_{2 }\delta,\ldots,\beta_{\ell}+k_{\ell}\delta.\) For any \(i\in\{0,1,\ldots,n\}\) with corresponding simple reflection \(s_{i}\) and simple root \(\alpha_{i}\), define \[\mathsf{size}_{i}^{\vee}(\widetilde{\mathbf{w}})=\frac{2}{\langle\alpha_{i}, \alpha_{i}\rangle}\sum_{\begin{subarray}{c}1\leq j\leq\ell\\ a_{j}=s_{i}\end{subarray}}k_{j}.\] **Example 4.2**.: Continuing Example 3.3, the coroot \(q=(0,2,-2)=-2\alpha_{2}^{\vee}\) corresponds to the the coset containing \(\widetilde{w}=s_{1}s_{0}s_{1}s_{2}s_{1}s_{0}\in\widetilde{A}_{2}\). 
We compute the inversion sequence for the reduced word \(\widetilde{\mathbf{w}}^{-1}=s_{0}s_{1}s_{2}s_{1}s_{0}s_{1}\) representing \(\widetilde{w}^{-1}\): \[-\widetilde{\alpha}+1\cdot\delta,\ -\alpha_{2}+1\cdot\delta,\ -\widetilde{ \alpha}+2\cdot\delta,\ -\alpha_{1}+1\cdot\delta,\ -\widetilde{\alpha}+3\cdot\delta,\ -\alpha_{2}+2\cdot\delta.\] We observe that \(\mathsf{size}_{0}(\widetilde{\mathbf{w}})=1+3=4\), \(\mathsf{size}_{1}(\widetilde{\mathbf{w}})=1+1+2=4\), and \(\mathsf{size}_{2}(\widetilde{\mathbf{w}})=2\), agreeing with the previously-computed \(\mathsf{size}_{i}(q)\) and \(\mathsf{size}_{i}(\lambda_{q})\). Definition 4.1 turns out to not depend on the reduced word chosen for \(\widetilde{w}^{-1}\), or on the choice of coset representative. **Proposition 4.3**.: _Let \(\widetilde{w}\), \(\widetilde{w}^{\prime}\) represent the same coset of \(\widetilde{W}\), and let \(\widetilde{\mathbf{w}}\) and \(\widetilde{\mathbf{w}}^{\prime}\) be any two reduced words for those elements. Then \(\mathsf{size}_{i}^{\vee}(\widetilde{\mathbf{w}})=\mathsf{size}_{i}^{\vee}( \widetilde{\mathbf{w}}^{\prime})\)._ Proof.: We first show that \(\mathsf{size}_{i}^{\vee}\) is constant when \(\widetilde{w}=\widetilde{w}^{\prime}\). Since the set of reduced words of \(\widetilde{w}\) are connected under braid moves, it suffices to show this when \(\widetilde{\mathbf{w}}\) and \(\widetilde{\mathbf{w}}^{\prime}\) differ by a single braid move. In that case, we have one of 1. \(\widetilde{\mathbf{w}}=\cdots(s_{i}s_{j}s_{i})\cdots\) and \(\widetilde{\mathbf{w}}^{\prime}=\cdots(s_{j}s_{i}s_{j})\cdots\), 2. \(\widetilde{\mathbf{w}}=\cdots(s_{i}s_{j}s_{i}s_{j})\cdots\) and \(\widetilde{\mathbf{w}}^{\prime}=\cdots(s_{j}s_{i}s_{j}s_{i})\cdots\), or 3. \(\widetilde{\mathbf{w}}=\cdots(s_{i}s_{j}s_{i}s_{j}s_{i}s_{j})\cdots\) and \(\widetilde{\mathbf{w}}^{\prime}=\cdots(s_{j}s_{i}s_{j}s_{i}s_{j}s_{i})\cdots\), corresponding to a braid move of type \(A_{2},B_{2}\), or \(G_{2}\). In each case, the order of the corresponding roots in the rank two parabolic subgroup is reversed; since the positions of the \(s_{i}\) are also reversed in cases (ii) and (iii), these are immediate. And in case (i), the statement follows because the roots in the rank two parabolic (of type \(A_{2}\)) are of the form \(\alpha,\alpha+\beta,\beta\). Finally, if \(\widetilde{w}=\widetilde{v}s_{i}\) for \(i\neq 0\), then the affine roots \(\mathsf{inv}(\widetilde{w}^{-1})\) are simply \(\alpha_{i}\) and \(s_{i}(\beta_{i})+k_{i}\delta\) for \(\beta_{i}+k_{i}\delta\in\mathsf{inv}(\widetilde{v}^{-1})\). By induction, the size of \(\widetilde{w}\) is invariant under right-multiplication by \(W\)-elements, as needed. For any coset representative \(\widetilde{w}\), we may therefore define the statistic \(\mathsf{size}^{\vee}\) on \(\widetilde{W}/W\) as \[\mathsf{size}^{\vee}(\widetilde{w}):=\sum_{i=0}^{n}\mathsf{size}_{i}^{\vee}( \widetilde{w})=\left(\sum_{\begin{subarray}{c}\alpha+k\delta\in\mathsf{inv}( \widetilde{w}^{-1})\\ \alpha\text{ long}\end{subarray}}k\right)+r\left(\sum_{\begin{subarray}{c} \alpha+k\delta\in\mathsf{inv}(\widetilde{w}^{-1})\\ \alpha\text{ short}\end{subarray}}k\right).\] **Definition 4.4**.: Recall that we expand the highest root as a sum of simples as \(\widetilde{\alpha}=\sum_{i=1}^{n}c_{i}\alpha_{i}\), and set \(c_{0}:=1\). 
For \(q\in\mathcal{Q}^{\vee}\), define \[\mathsf{size}_{i}^{\vee}(q)=\Big{\langle}\frac{c_{i}}{2}q-\omega_{i}^{\vee},q \Big{\rangle}.\] Because \(\frac{2}{\langle\alpha_{i},\alpha_{i}\rangle}=1\) if \(\alpha_{i}\) is long, and is \(r\) if \(\alpha_{i}\) is short, and because \(\sum_{i=0}^{n}c_{i}=h\) and \(\sum_{i=0}^{n}\omega_{i}^{\vee}=\rho^{\vee}\), we obtain \[\mathsf{size}^{\vee}(q)=\sum_{i=0}^{n}\mathsf{size}_{i}^{\vee}(q)=\left\langle \frac{h}{2}q-\rho^{\vee},q\right\rangle.\] **Theorem 4.5**.: _For \(\widetilde{w}=wt_{q}\in\widetilde{W}=W\ltimes\mathcal{Q}^{\vee}\), we have \(\mathsf{size}_{i}^{\vee}(\widetilde{w})=\mathsf{size}_{i}^{\vee}(w(q))\) and \(\mathsf{size}^{\vee}(\widetilde{w})=\mathsf{size}^{\vee}(w(q))\)._ Note that \(w(q)=wt_{q}(0)\), and so this theorem states that \(\mathsf{size}_{i}^{\vee}\) and \(\mathsf{size}^{\vee}\) are preserved under the equivariant bijection defined in Theorem 2.2. Proof.: It suffices to prove the statement for \(\mathsf{size}_{i}^{\vee}\). Let \(j\neq 0\) and let \(i\in\{0,1,\ldots,n\}\). We compute \(\mathsf{size}_{i}^{\vee}(s_{j}w(q))\): \[\mathsf{size}_{i}^{\vee}(s_{j}w(q)) =\left\langle\frac{c_{i}}{2}s_{j}w(q)-\omega_{i}^{\vee},s_{j}w(q)\right\rangle\] \[=\left\langle\frac{c_{i}}{2}\left[w(q)-\langle\alpha_{j}^{\vee},w (q)\rangle\alpha_{j}\right]-\omega_{i}^{\vee},\left[w(q)-\langle\alpha_{j}^{ \vee},w(q)\rangle\alpha_{j}\right]\right\rangle\] \[=\left\langle\frac{c_{i}}{2}q-\omega_{i}^{\vee},w(q)\right\rangle +\left\langle\alpha_{j}^{\vee},w(q)\right\rangle\cdot\langle\omega_{i}^{\vee },\alpha_{j}\rangle\] \[=\mathsf{size}_{i}^{\vee}(w(q))+\begin{cases}\langle\alpha_{j}^ {\vee},w(q)\rangle&\text{if }i=j\\ 0&\text{if }i\neq j\end{cases}.\] Similarly, we compute \(\mathsf{size}_{i}^{\vee}(s_{0}w(q))=\mathsf{size}_{i}^{\vee}(w(q))+ \begin{cases}1-\langle\widetilde{\alpha},w(q)\rangle&\text{if }i=0\\ 0&\text{if }i\neq 0\end{cases}\). We now argue by induction on the length of an affine element \(\widetilde{w}\), with base case coming from the identity \(e\leftrightarrow 0\) giving \(\mathsf{size}_{i}^{\vee}(e)=\mathsf{size}_{i}^{\vee}(0)=0\). Consider now an affine element \(\widetilde{w}=w\cdot t_{-q}\) of length \(\ell-1\), a reduced expression \(\widetilde{w}=a_{1}a_{2}\cdots a_{\ell-1}=w\cdot t_{q}\), and simple transposition \(s_{j}\). The result follows by comparing \(\mathsf{size}_{i}^{\vee}(s_{j}\widetilde{w})\) to the computation above. First note that the inversion sequence for \((s_{j}\widetilde{w})^{-1}\) agrees with that of \(\widetilde{w}^{-1}\) with an additional last entry, \(\widetilde{w}(\alpha_{j})\). For \(j\neq 0\), we have \(\widetilde{w}(\alpha_{j})=w(\alpha_{j})+\langle\alpha_{j},q\rangle\delta\), while if \(j=0\), then \(\widetilde{w}(-\widetilde{\alpha}+\delta)=w(-\widetilde{\alpha})+(1-\langle \widetilde{\alpha},q\rangle)\delta\). When \(j\neq i\) this last entry does not affect the computation of \(\mathsf{size}_{i}^{\vee}\). Otherwise \(\mathsf{size}_{i}^{\vee}(s_{j}\widetilde{w})\) includes the above coefficient of \(\delta\) in the sum, but with \(\alpha_{j}\) instead of \(\alpha_{j}^{\vee}\). This distinction does not matter if \(\alpha_{j}\) is a long root since \(\alpha_{j}=\alpha_{j}^{\vee}\) and (similarly for \(\widetilde{\alpha}\)). But if \(\alpha_{j}\) is short, then difference between the root and coroot in the formulas properly introduces the required scaling factor of \(r\). ## 5. 
Combinatorial Models In this section we describe combinatorial models for the affine Weyl groups of classical type, as well as \(G_{2}\), recovering and extending some results in [1, 21]. In each case, the models are obtained by exhibiting a suitable equivariant embedding from \(\mathcal{Q}_{X_{n}}^{\vee}\) into a type-\(A\) coroot lattice \(\mathcal{Q}_{m}^{\vee}\). Hence, the objects of these models are partitions, and we interpret the statistic \(\mathsf{size}^{\vee}\), as well as its refinements \(\mathsf{size}_{i}^{\vee}\), in terms of the partitions. Unfortunately, we do not know of such an embedding for the remaining (exceptional) types \(F_{4}\), \(E_{6},E_{7}\), and \(E_{8}\), and so we leave open the problem of finding similar combinatorial models for them. ### Type C The simple roots for \(C_{n}\) are \(\alpha_{i}:=\frac{1}{\sqrt{2}}(e_{i}-e_{i+1})\) for \(1\leq i<n\) and \(\alpha_{n}:=\sqrt{2}e_{n}\). Hence, the coroot lattice is \(\mathcal{Q}_{C_{n}}^{\vee}=\sqrt{2}\mathbb{Z}^{n}.\) The action of \(\widetilde{W}(C_{n})\) on \(\mathcal{Q}_{C_{n}}^{\vee}\) is given explicitly by \[s_{i}(x_{1},\ldots,x_{i},x_{i+1},\ldots,x_{n}) =(x_{1},\ldots,x_{i+1},x_{i},\ldots,x_{n})\text{ for }1\leq i<n,\] \[s_{n}(x_{1},\ldots,x_{n}) =(x_{1},\ldots,-x_{n}),\text{ and }\] \[s_{0}(x_{1},\ldots,x_{n}) =(\sqrt{2}-x_{1},\ldots,x_{n}).\] We embed the type \(C_{n}\) coroot lattice into the coroot lattice for \(\widetilde{\mathfrak{S}}_{2n}\) by \[\iota:\mathcal{Q}_{C_{n}}^{\vee} \hookrightarrow\mathcal{Q}_{2n}^{\vee},\] \[(x_{1},\ldots,x_{n}) \mapsto\frac{1}{\sqrt{2}}(x_{1},\ldots,x_{n},-x_{n},\ldots,-x_{1}).\] Evidently, \(\iota(\mathbf{x})=\iota(\mathbf{x})^{\intercal}\) for all \(\mathbf{x}\in\mathcal{Q}_{C_{n}}^{\vee}\), and therefore self-conjugate \(2n\)-cores serve as a combinatorial model for the cores of type \(C_{n}\). It is a particularly well-behaved model because \(\iota\) is an isometry (that is, \(\left\langle\iota(\mathbf{x}),\iota(\mathbf{y})\right\rangle=\left\langle \mathbf{x},\mathbf{y}\right\rangle\)) and also our definition of \(\mathsf{size}^{\vee}\) agrees with the number of boxes of the corresponding partitions, as we show in Theorem 5.2. Moreover, it is straightforward to check that the simple reflections of \(\widetilde{W}(C_{n})\) agree with the following \(\widetilde{\mathfrak{S}}_{2n}\)-elements acting on the \(\iota\)-embedded coroot lattice: \[s_{i} \Leftrightarrow s_{i}^{A}s_{2n-i}^{A}\text{ for }1\leq i<n,\text{ while}\] \[s_{n} \Leftrightarrow s_{n}^{A},\text{ and }\] \[s_{0} \Leftrightarrow s_{0}^{A}.\] **Example 5.1**.: The inversion sequence for \(\widetilde{W}(C_{2})\) and \(\widetilde{\mathsf{w}}^{-1}=s_{1}s_{0}s_{2}s_{1}s_{0}\) (so that \(\widetilde{w}=t_{-\alpha_{1}^{\vee}}s_{2}\)) is \[-\widetilde{\alpha}+\delta,-\alpha_{1}-\alpha_{2}+\delta,-\widetilde{\alpha}+ 2\delta,-\alpha_{2}+\delta,-\alpha_{1}-\alpha_{2}+2\delta\] and because \(\alpha_{1}\) is short and \(r=2\), we have that \(\mathsf{size}_{1}^{\vee}(\widetilde{\mathsf{w}})=2\cdot(1+2)=6\). On the other hand, observe that \(\widetilde{w}\) corresponds to \(q=\sqrt{2}(-1,1)\in\mathcal{Q}_{C_{2}}^{\vee}\). 
Since \(\omega_{1}^{\vee}=\sqrt{2}(1,0)\) and \(c_{1}=2\), we compute \[\mathsf{size}_{1}^{\vee}(q)=\left\langle\sqrt{2}(-1,1)-\sqrt{2}(1,0),\sqrt{2 }(-1,1)\right\rangle=2\left\langle(-1,1)-(1,0),(-1,1)\right\rangle=6.\] Moreover, the corresponding \(4\)-core has \(6\) boxes with content \(1\) or \(3\) mod \(4\): \[\lambda_{(-1,1,1,-1)}=\begin{array}{|c|c|c|}\hline 0&1&2&3\\ \hline 3&0&1\\ \hline 2&3\\ \hline 1&\hline\end{array}.\] Figure 4. The affine Dynkin diagrams, with vertex \(i\) corresponding to the affine simple reflection \(s_{i}\). **Theorem 5.2**.: _The map \(\mathbf{x}\mapsto\lambda_{\iota(\mathbf{x})}\) is a \(\widetilde{W}(C_{n})\)-equivariant bijection between the \(C_{n}\) coroot lattice and self-conjugate \((2n)\)-cores. Moreover, for any \(\mathbf{x}\in\mathcal{Q}^{\vee}_{C_{n}}\),_ \[\mathsf{size}^{\vee}_{i}(\mathbf{x})=\begin{cases}\mathsf{size}^{\vee}_{i}( \lambda_{\iota(\mathbf{x})})+\mathsf{size}^{\vee}_{2n-i}(\lambda_{\iota( \mathbf{x})})&\text{if }1\leq i<n\\ \mathsf{size}^{\vee}_{i}(\lambda_{\iota(\mathbf{x})})&\text{if }i=0,n\end{cases},\] _and hence \(\mathsf{size}^{\vee}(\mathbf{x})=\mathsf{size}^{\vee}(\lambda_{\iota( \mathbf{x})})\)._ Proof.: Note that the map is well-defined because by definition, \(\iota(\mathbf{x})^{\intercal}=\iota(\mathbf{x})\) and thus by Proposition 3.4 we have \(\lambda_{\iota(\mathbf{x})}\) is self-conjugate. It is evidently injective, and it is easy to see that every self-conjugate partition is in the image as well. Equivariance follows from the straightforward check above and Theorem 3.2. Thus it remains to prove that it preserves the statistic \(\mathsf{size}^{\vee}\). Begin by observing the following: \[\iota(\omega_{i}^{\vee})=((\omega_{i}^{A})^{\vee}+(\omega_{2n-i}^{A})^{\vee}),\quad 1\leq i<n,\qquad\text{and}\qquad\iota(\omega_{i}^{\vee})=(\omega_{i}^{A})^{ \vee},\quad i=0,\,n.\] where \((\omega_{i}^{A})^{\vee}\) is the \(i^{\text{th}}\) fundamental coweight in type \(A_{2n-1}\) (and \((\omega_{0}^{A})^{\vee}=0\) as always). Write \(\lambda_{i}\) for the number of boxes in \(\lambda_{\iota(\mathbf{x})}\) with content equal to \(i\bmod 2n\). The content of Proposition 3.4 is that \(\lambda_{i}=\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{i}^{A})^{\vee}, \iota(\mathbf{x})\right\rangle\). Thus, for all \(1\leq i<n\), we have \[\mathsf{size}^{\vee}_{i}(\mathbf{x}) =\left\langle\frac{c_{i}}{2}\mathbf{x}-\omega_{i}^{\vee}, \mathbf{x}\right\rangle\] \[=\left\langle\mathbf{x}-\omega_{i}^{\vee},\mathbf{x}\right\rangle\] \[=\left\langle\iota(\mathbf{x})-(\omega_{i}^{A})^{\vee}-(\omega_{2 n-i}^{A})^{\vee},\iota(\mathbf{x})\right\rangle\] \[=\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{i}^{A})^{\vee },\iota(\mathbf{x})\right\rangle+\left\langle\frac{1}{2}\iota(\mathbf{x})-( \omega_{2n-i}^{A})^{\vee},\iota(\mathbf{x})\right\rangle\] \[=\lambda_{i}+\lambda_{2n-i},\] and for \(i=0\) or \(i=n\) the calculation is similar, but because \(c_{i}=1\), we obtain \[\mathsf{size}^{\vee}_{i}(\mathbf{x})=\left\langle\frac{1}{2}\mathbf{x}-\omega _{i}^{\vee},\mathbf{x}\right\rangle=\left\langle\frac{1}{2}\iota(\mathbf{x})- (\omega_{i}^{A})^{\vee},\iota(\mathbf{x})\right\rangle=\lambda_{i}.\] Finally, summing over all \(i\) yields the claim for \(\mathsf{size}^{\vee}\). ### Type B The simple roots for \(B_{n}\) are \(\alpha_{i}:=e_{i}-e_{i+1}\) for \(1\leq i<n\) and \(\alpha_{n}:=e_{n}\). 
Hence, the coroot lattice is \[\mathcal{Q}^{\vee}_{B_{n}}=\left\{\mathbf{x}\in\mathbb{Z}^{n}:\sum_{i=1}^{n}x _{i}\equiv 0\operatorname{mod}2\right\}.\] The action of \(\widetilde{W}(B_{n})\) on \(\mathcal{Q}^{\vee}_{B_{n}}\) is given explicitly by \[s_{i}(x_{1},\ldots,x_{i},x_{i+1},\ldots,x_{n}) =(x_{1},\ldots,x_{i+1},x_{i},\ldots,x_{n})\text{ for }1\leq i<n,\] \[s_{n}(x_{1},\ldots,x_{n}) =(x_{1},\ldots,-x_{n}),\text{ and }\] \[s_{0}(x_{1},x_{2},\ldots,x_{n}) =(1-x_{2},1-x_{1}\ldots,x_{n}).\] We may embed the \(B_{n}\) coroot lattice into the coroot lattice for \(\widetilde{\mathfrak{S}}_{2n}\) using essentially the same \(\iota\) as for \(C_{n}\), namely \(\iota:(x_{1},\ldots,x_{n})\mapsto(x_{1},\ldots,x_{n},-x_{n},\ldots,-x_{1})\). However, because we no longer have the normalization factor, this \(\iota\) fails to be an isometry; rather, \(\left\langle\iota(\mathbf{x}),\iota(\mathbf{y})\right\rangle=2\langle \mathbf{x},\mathbf{y}\rangle\). Nevertheless, it is again straightforward to mimic the action of \(\widetilde{W}(B_{n})\) using \(\widetilde{\mathfrak{S}}_{2n}\) by \[s_{i} \mapsto s_{i}^{A}s_{2n-i}^{A}\text{ for }1\leq i<n,\text{ while}\] \[s_{n} \mapsto s_{n}^{A},\text{ and}\] \[s_{0} \mapsto s_{0}^{A}s_{1}^{A}s_{2n-1}^{A}s_{0}^{A}.\] We thus obtain a combinatorial model for \(\mathcal{Q}_{B_{n}}^{{}^{\vee}}\). **Theorem 5.3**.: _The map \(\mathbf{x}\mapsto\lambda_{\iota(\mathbf{x})}\) is a \(\widetilde{W}(B_{n})\)-equivivarant bijection between the \(B_{n}\) coroot lattice and self-conjugate \(2n\)-cores with an even number of boxes on the main diagonal. For \(\mathbf{x}\in\mathcal{Q}_{B_{n}}^{{}^{\vee}}\),_ \[\mathsf{size}_{i}^{{}^{\vee}}(\mathbf{x})=\frac{1}{2}\begin{cases}\mathsf{ size}_{i}^{{}^{\vee}}(\lambda_{\iota(\mathbf{x})})+\mathsf{size}_{2n-i}^{{}^{ \vee}}(\lambda_{\iota(\mathbf{x})})&\text{if }1<i\leq n\\ \mathsf{size}_{1}^{{}^{\vee}}(\lambda_{\iota(\mathbf{x})})+\mathsf{size}_{2n -1}^{{}^{\vee}}(\lambda_{\iota(\mathbf{x})})-\mathsf{size}_{0}^{{}^{\vee}}( \lambda_{\iota(\mathbf{x})})&\text{if }i=1\\ \mathsf{size}_{0}^{{}^{\vee}}(\lambda_{\iota(\mathbf{x})})&\text{if }i=0\end{cases},\] _and hence \(\mathsf{size}^{{}^{\vee}}(\mathbf{x})=\frac{1}{2}\left(\mathsf{size}^{{}^{ \vee}}(\lambda_{\iota(\mathbf{x})})-\mathsf{size}_{0}^{{}^{\vee}}(\lambda_{ \iota(\mathbf{x})})\right)\)._ Proof.: As in Theorem 5.2 we have that \(\lambda_{\iota(\mathbf{x})}\) is self-conjugate. Moreover, for any partition \(\lambda\), the side length of its Durfee square is the number of black beads that lie below the line in its abacus diagram that witnesses the fact that it is balanced. By definition of \(\lambda_{\iota(\mathbf{x})}\), this must be \(|x_{1}|+|x_{2}|+\cdots+|x_{n}|\), and since \(\mathbf{x}\in\mathcal{Q}_{B_{n}}^{{}^{\vee}}\), this number must be even. Thus \(\lambda_{\iota(\mathbf{x})}\) has an even number of elements along its main diagonal, as desired. This shows that the map is well-defined. Bijectivity and equivariance follow in a manner analogous to Theorem 5.2. It remains to prove that it has the claimed effect on \(\mathsf{size}^{{}^{\vee}}\). We begin as before with the following observation: \[\iota(\omega_{i}^{{}^{\vee}})=((\omega_{i}^{A})^{{}^{\vee}}+(\omega_{2n-i}^{A })^{{}^{\vee}}),\quad 1\leq i\leq n,\qquad\text{and}\qquad\iota(\omega_{0}^{{}^{ \vee}})=(\omega_{0}^{A})^{{}^{\vee}}.\] Write \(\lambda_{i}\) for the number of boxes in \(\lambda_{\iota(\mathbf{x})}\) with content equal to \(i\bmod 2n\). 
By Proposition 3.4, this is \(\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{i}^{A})^{{}^{\vee}},\iota( \mathbf{x})\right\rangle\). Thus, for all \(2\leq i<n\), \[\mathsf{size}_{i}^{{}^{\vee}}(\mathbf{x}) =\left\langle\mathbf{x}-\omega_{i}^{{}^{\vee}},\mathbf{x}\right\rangle\] \[=\frac{1}{2}\left\langle\iota(\mathbf{x})-(\omega_{i}^{A})^{{}^{ \vee}}-(\omega_{2n-i}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\] \[=\frac{1}{2}\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{i}^{ A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle+\frac{1}{2}\left\langle\frac{1}{2} \iota(\mathbf{x})-(\omega_{2n-i}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\] \[=\frac{\lambda_{i}+\lambda_{2n-i}}{2}.\] We obtain the calculations for \(\mathsf{size}_{0}^{{}^{\vee}}(\mathbf{x})\) and \(\mathsf{size}_{n}^{{}^{\vee}}(\mathbf{x})\) in a manner analogous to type \(C_{n}\). But when \(i=1\), we observe somewhat different behavior because \(c_{1}=1\): \[\mathsf{size}_{i}^{{}^{\vee}}(\mathbf{x}) =\frac{1}{2}\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{1}^{ A})^{{}^{\vee}}-(\omega_{2n-1}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\] \[=\frac{1}{2}\left(\left\langle\frac{1}{2}\iota(\mathbf{x})-( \omega_{1}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle+\left\langle\frac{1} {2}\iota(\mathbf{x})-(\omega_{2n-1}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\] \[\qquad\qquad-\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega_{ 0}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\right)\] \[=\frac{\lambda_{1}+\lambda_{2n-1}-\lambda_{0}}{2}.\] As usual, the last equality follows by applying Proposition 3.4. By comparing the results of these calculations to the definition of \(\mathsf{size}_{i}^{{}^{\vee}}(\lambda_{\iota(\mathbf{x})}\) and summing over \(i\), we obtain the desired equalities. ### Type D The simple roots for \(D_{n}\) are \(\alpha_{i}:=e_{i}-e_{i+1}\) for \(1\leq i<n\) and \(\alpha_{n}:=e_{n-1}+e_{n}\). The highest root is \(\widetilde{\alpha}:=e_{1}+e_{2}\), and the coroot lattice is the same as for \(B_{n}\), namely \[\mathcal{Q}^{{}^{\vee}}_{D_{n}}=\left\{\mathbf{x}\in\mathbb{Z}^{n}:\sum_{i=1}^ {n}x_{i}=0\,\mathrm{mod}\,2\right\}.\] The action of \(\widetilde{W}(D_{n})\) on \(\mathcal{Q}^{{}^{\vee}}_{D_{n}}\) is given explicitly by \[s_{i}^{D}(x_{1},\ldots,x_{i},x_{i+1},\ldots,x_{n}) =(x_{1},\ldots,x_{i+1},x_{i},\ldots,x_{n})\text{ for }1\leq i<n,\] \[s_{n}^{D}(x_{1},\ldots,x_{n-1},x_{n}) =(x_{1},\ldots,-x_{n},-x_{n-1}),\text{ and }\] \[s_{0}^{D}(x_{1},x_{2},\ldots,x_{n}) =(1-x_{2},1-x_{1},\ldots,x_{n}).\] We again embed the \(D_{n}\) coroot lattice into the coroot lattice for \(\widetilde{\mathfrak{S}}_{2n}\) using the same \(\iota\) as in type \(B\) and mimic the action of \(\widetilde{W}(D_{n})\) using \(\widetilde{\mathfrak{S}}_{2n}\) by \[s_{i} \mapsto s_{i}^{A}s_{2n+1-i}^{A}\text{ for }1\leq i<n,\text{ while }\] \[s_{n} \mapsto s_{n}^{A}s_{n-1}^{A}s_{n+1}^{A}s_{n}^{A},\text{ and }\] \[s_{0} \mapsto s_{0}^{A}s_{1}^{A}s_{2n-1}^{A}s_{0}^{A}.\] We obtain a combinatorial model for \(\mathcal{Q}^{{}^{\vee}}_{D_{n}}\). **Theorem 5.4**.: _The map \(\mathbf{x}\mapsto\lambda_{\iota(\mathbf{x})}\) is a \(\widetilde{W}(D_{n})\)-equivurant bijection between the \(D_{n}\) coroot lattice and self-conjugate \(2n\)-cores with an even number of boxes on the main diagonal. 
For \(\mathbf{x}\in\mathcal{Q}^{{}^{\vee}}_{D_{n}}\),_ \[\mathsf{size}^{{}^{\vee}}_{i}(\mathbf{x})=\frac{1}{2}\begin{cases}\mathsf{ size}^{{}^{\vee}}_{i}(\lambda_{\iota(\mathbf{x})})+\mathsf{size}^{{}^{\vee}}_{2n-i}( \lambda_{\iota(\mathbf{x})})&\text{if }1<i<n-1\\ \mathsf{size}^{{}^{\vee}}_{n-1}(\lambda_{\iota(\mathbf{x})})+\mathsf{size}^{{ }^{\vee}}_{n+1}(\lambda_{\iota(\mathbf{x})})-\mathsf{size}^{{}^{\vee}}_{n}( \lambda_{\iota(\mathbf{x})})&\text{if }i=n-1\\ \mathsf{size}^{{}^{\vee}}_{1}(\lambda_{\iota(\mathbf{x})})+\mathsf{size}^{{} ^{\vee}}_{2n-1}(\lambda_{\iota(\mathbf{x})})-\mathsf{size}^{{}^{\vee}}_{0}( \lambda_{\iota(\mathbf{x})})&\text{if }i=1\\ \mathsf{size}^{{}^{\vee}}_{i}(\lambda_{\iota(\mathbf{x})})&\text{if }i=0,n \end{cases},\] _and hence \(\mathsf{size}^{{}^{\vee}}(\mathbf{x})=\frac{1}{2}\left(\mathsf{size}^{{}^{ \vee}}(\lambda_{\iota(\mathbf{x})})-\mathsf{size}^{{}^{\vee}}_{0}(\lambda_{ \iota(\mathbf{x})})-\mathsf{size}^{{}^{\vee}}_{n}(\lambda_{\iota(\mathbf{x}) })\right)\)._ Proof.: Since \(\mathcal{Q}^{{}^{\vee}}_{B}=\mathcal{Q}^{{}^{\vee}}_{D}\), the first statement is automatic from Theorem 5.3. As usual, to prove the map has the claimed effect on \(\mathsf{size}^{{}^{\vee}}\), we begin with the following observation: \[\iota(\omega^{{}^{\vee}}_{i}) =(\omega^{A}_{i})^{{}^{\vee}}+(\omega^{A}_{2n-i})^{{}^{\vee}} 1\leq i<n-1\] \[\iota((\omega^{A}_{n-1})^{{}^{\vee}}) =(\omega^{A}_{n-1})^{{}^{\vee}}-(\omega^{A}_{n})^{{}^{\vee}}+( \omega^{A}_{n+1})^{{}^{\vee}}\] \[\iota(\omega^{{}^{\vee}}_{n}) =(\omega^{A}_{n})^{{}^{\vee}}.\] Therefore, the computation of \(\mathsf{size}^{{}^{\vee}}_{i}\) is identical to the type \(B_{n}\) for all \(i\) except for \(i=n-1,n\). When \(i=n\) it is analogous to the type \(C_{n}\) computation, and when \(i=n-1\) we compute: \[\mathsf{size}^{{}^{\vee}}_{n-1}(\mathbf{x}) =\frac{1}{2}\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega^{A}_ {n-1})^{{}^{\vee}}+(\omega^{A}_{n})^{{}^{\vee}}-(\omega^{A}_{n+1})^{{}^{\vee}}, \iota(\mathbf{x})\right\rangle\] \[=\frac{1}{2}\left(\left\langle\frac{1}{2}\iota(\mathbf{x})-( \omega^{A}_{n-1})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle-\left\langle\frac{1 }{2}\iota(\mathbf{x})-(\omega^{A}_{n})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\] \[\qquad\qquad+\left\langle\frac{1}{2}\iota(\mathbf{x})-(\omega^{A} _{n+1})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle\right)\] \[=\frac{\lambda_{n-1}-\lambda_{n}+\lambda_{n+1}}{2}.\] where as usual \(\lambda_{i}\) is the number of boxes in \(\lambda_{\iota(\mathbf{x})}\) with content \(i\) mod \(2n\) ### Type \(\mathbf{G_{2}}\) Following the usual construction, we consider \(G_{2}\) as acting on the orthogonal complement of the line \(\operatorname{span}_{\mathbb{R}}(1,1,1)\) in \(\mathbb{R}^{3}\). The simple roots for \(G_{2}\) can be taken to be \[\alpha_{1}:=(1,-1,0)\text{ and }\alpha_{2}:=\tfrac{1}{3}(-1,2,-1).\] With these conventions, the coroot lattice is \(\mathcal{Q}_{G_{2}}^{{}^{\vee}}=\mathcal{Q}_{3}^{{}^{\vee}}\). That is, the type \(G_{2}\) coroot lattice coincides with the coroot lattice for \(\widetilde{\mathfrak{S}}_{3}\). Therefore, the map \(\mathbf{x}\mapsto\lambda_{\mathbf{x}}\) gives a bijection between \(\mathcal{Q}_{G_{2}}^{{}^{\vee}}\) and \(3\)-cores. 
The action of \(\widetilde{W}(G_{2})\) on \(\mathcal{Q}_{G_{2}}^{{}^{\vee}}\) is given explicitly by \[s_{1}(x_{1},x_{2},x_{3}) =(x_{2},x_{1},x_{3}),\] \[s_{2}(x_{1},x_{2},x_{3}) =(-x_{3},-x_{2},-x_{1}),\text{ and}\] \[s_{0}(x_{1},x_{2},x_{3}) =(x_{3}+1,x_{2},x_{1}-1).\] We may therefore emulate the action of \(\widetilde{G}_{2}\) using \(\widetilde{\mathfrak{S}}_{3}\) by \[s_{1}(\mathbf{x}) =s_{1}^{A}(\mathbf{x}),\] \[s_{2}(\mathbf{x}) =\mathbf{x}^{\intercal},\text{ and}\] \[s_{0}(\mathbf{x}) =s_{0}^{A}(\mathbf{x}).\] As in [1], we obtain a combinatorial model for \(\mathcal{Q}_{G_{2}}^{{}^{\vee}}\). **Theorem 5.5**.: _The map \(q\mapsto\lambda_{q}\) is a \(\widetilde{W}(G_{2})\)-equivarant bijection between the \(G_{2}\) coroot lattice and \(3\)-cores: \(s_{1}^{G}\) acts on a 3-core by adding or removing all boxes of content \(1\), \(s_{0}^{G}\) acts similarly on boxes of content \(0\), and \(s_{2}^{G}\) acts by conjugation. For \(\mathbf{x}\in\mathcal{Q}_{G_{2}}^{{}^{\vee}}\),_ \[\mathsf{size}_{i}^{{}^{\vee}}(\mathbf{x})=\begin{cases}\mathsf{size}_{0}^{{}^{ \vee}}(\lambda_{\mathbf{x}})&\text{if }i=0\\ \mathsf{size}_{0}^{{}^{\vee}}(\lambda_{\mathbf{x}})+\mathsf{size}_{1}^{{}^{ \vee}}(\lambda_{\mathbf{x}})+\mathsf{size}_{2}^{{}^{\vee}}(\lambda_{\mathbf{ x}})&\text{if }i=1\\ 3\cdot\mathsf{size}_{0}^{{}^{\vee}}(\lambda_{\mathbf{x}})-\mathsf{size}_{0}^{{}^{ \vee}}(\lambda_{\mathbf{x}})&\text{if }i=2\end{cases},\] _and hence \(\mathsf{size}_{i}^{{}^{\vee}}(q)=\mathsf{size}^{{}^{\vee}}(\lambda_{q})+3 \cdot\mathsf{size}_{2}^{{}^{\vee}}(\lambda_{q})\)._ Proof.: The bijectivity and equivariance of the map is the content of Theorem 3.2. It remains to prove its effect on \(\mathsf{size}^{{}^{\vee}}\). Because \[\omega_{1}^{{}^{\vee}}=(\omega_{1}^{A})^{{}^{\vee}}+(\omega_{2}^{A})^{{}^{ \vee}}\qquad\text{and}\qquad\omega_{2}^{{}^{\vee}}=3(\omega_{2}^{A})^{{}^{ \vee}},\] we have \[\mathsf{size}_{0}^{{}^{\vee}}(\mathbf{x}) =\left\langle\frac{1}{2}\mathbf{x}-\omega_{0}^{{}^{\vee}},\mathbf{ x}\right\rangle=\left\langle\frac{1}{2}\mathbf{x},\mathbf{x}\right\rangle =\lambda_{0}\] \[\mathsf{size}_{1}^{{}^{\vee}}(\mathbf{x}) =\left\langle\frac{3}{2}\mathbf{x}-\omega_{1}^{{}^{\vee}},\mathbf{ x}\right\rangle=\left\langle\frac{3}{2}\iota(\mathbf{x})-(\omega_{1}^{A})^{{}^{ \vee}}-(\omega_{2}^{A})^{{}^{\vee}},\iota(\mathbf{x})\right\rangle =\lambda_{1}+\lambda_{2}+\lambda_{0}\] \[\mathsf{size}_{2}^{{}^{\vee}}(\mathbf{x}) =\left\langle\mathbf{x}-\omega_{2}^{{}^{\vee}},\mathbf{x}\right\rangle =\left\langle\iota(\mathbf{x})-3(\omega_{2}^{A})^{{}^{\vee}},\iota( \mathbf{x})\right\rangle =3\lambda_{2}-\lambda_{0},\] where \(\lambda_{i}\) is as usual the number of boxes in \(\lambda_{\mathbf{x}}\) with content \(i\) mod \(3\). ## 6. Simultaneous Cores Recall that an \((a,b)\)-core is a partition which is both \(a\)-core and \(b\)-core. In [11, Lemma 3.1], Johnson showed that among the set of \(a\)-cores, the \((a,b)\)-cores are precisely those that satisfy certain simple inequalities on the heights of their runners. Thus, when considering them as elements of \(\mathcal{Q}_{A_{a-1}}^{{}^{\vee}}\), these inequalities imply that they are lattice points of a polytope in \(\mathbb{R}^{a}\). In fact this polytope is a simplex, previously been considered by Sommers [15], which we describe now. Recall that \(\Phi=\Phi(X_{n})\) is a root system with irreducible Cartan type \(X_{n}\) and Coxeter number \(h\). For \(1\leq i<h\), write \(\Phi_{i}\) to denote the set of (positive) roots of height \(i\). 
**Definition 6.1**.: For \(b\) coprime to \(h\), write \(b=t_{b}h+r_{b}\) with \(t_{b},r_{b}\in\mathbb{Z}_{\geq 0}\) and \(0<r_{b}<h\). We define the \(b\)-_Sommers region_ \[\mathcal{S}_{X_{n}}(b):=\left\{x\in V:\begin{array}{cc}\langle x,\alpha \rangle\geq-t_{b}&\text{for all }\alpha\in\Phi_{r_{b}}\text{ and }\\ \langle x,\alpha\rangle\leq t_{b}+1&\text{for all }\alpha\in\Phi_{h-r_{b}} \end{array}\right\}.\] (We write \(\mathcal{S}(b)\) when the root system is clear from context.) As in Equation (5), a natural generalization of \(\mathsf{core}(a,b)\) to any affine Weyl group is the intersection of the coroot lattice \(\mathcal{Q}^{\vee}_{X_{n}}\) with \(\mathcal{S}_{X_{n}}(b)\), so that \(\mathsf{core}(a,b)=\mathsf{core}(A_{a-1},b)\). ### The Sommers Region and the Fundamental Alcove We would like to perform the \(\mathsf{size}^{\vee}\)-weighted enumeration of \(\mathsf{core}(X_{n},b)\) using Ehrhart theory. Unfortunately, the family \[\{\mathcal{S}_{X_{n}}(b):\gcd(b,h)=1\}\] does not consist of dilations of a fixed polytope--but this difficulty can be circumvented. Define, for any \(x\in V\), the statistic \[\mathsf{size}^{(b)}(x):=\frac{h}{2}\left(\left\|x-\frac{b\rho^{\vee}}{h} \right\|^{2}-\left\|\frac{\rho^{\vee}}{h}\right\|^{2}\right). \tag{7}\] Notice that when \(q\in\mathcal{Q}^{\vee}\), \(w\in W\), and \(b=1\), we have \(\mathsf{size}^{(1)}(w(q))=\mathsf{size}^{\vee}(w(q))=\mathsf{size}^{\vee}(wt_ {q})\) (recalling Theorem 4.5). We recall from [17, SS4] that there is a unique element \(\widetilde{w}_{b}\in\widetilde{W}\) such that \(\frac{b}{h}\rho^{\vee}=\widetilde{w}_{b}(\frac{\rho^{\vee}}{h})\), and that left-multiplication by this element maps \(\mathcal{S}(b)\) onto the \(b\)-fold dilation of the fundamental alcove \(\mathcal{A}:=\{x\in V:\langle x,\alpha\rangle\geq 0\text{ for all }\alpha\in\Delta\text{ and }\langle x,\tilde{\alpha} \rangle\leq 1\}\). It also respects the lattice points in the following sense: **Theorem 6.2**.: _For \(b\) coprime to \(h\), the following holds as an equality of multisets:_ \[\left\{\mathsf{size}^{\vee}(q):q\in\mathsf{core}(X_{n},b)\right\}=\left\{ \mathsf{size}^{(b)}(q):q\in b\mathcal{A}\cap\mathcal{Q}^{\vee}_{X_{n}}\right\}.\] Proof.: We first note that since \(\widetilde{w}_{b}\) maps \(\mathcal{S}(b)\) onto \(b\mathcal{A}\), and also \(\widetilde{w}_{b}\in\widetilde{W}\) and thus is a \(\mathcal{Q}^{\vee}\)-preserving bijection, it restricts to a bijection \(\mathsf{core}(\widetilde{W},b)\to b\mathcal{A}\cap\mathcal{Q}^{\vee}\). Write \(\widetilde{w}_{b}=wt_{q_{0}}\); then \(\widetilde{w}_{b}=t_{\frac{b\rho^{\vee}}{h}}wt_{-\frac{\rho^{\vee}}{h}}\). Since \(\|\cdot\|\) is \(W\)-invariant, for \(q\in\mathsf{core}(\widetilde{W},b)\): \[\mathsf{size}^{(b)}(\widetilde{w}_{b}(q)) =\frac{h}{2}\left(\left\|t_{\frac{b}{h}\rho^{\vee}}wt_{\frac{1}{h} \rho^{\vee}}(q)-\frac{b\rho^{\vee}}{h}\right\|^{2}-\left\|\frac{\rho^{\vee}}{ h}\right\|^{2}\right)\] \[=\frac{h}{2}\left(\left\|w\left(q-\frac{\rho^{\vee}}{h}\right) \right\|^{2}-\left\|\frac{\rho^{\vee}}{h}\right\|^{2}\right)=\mathsf{size}^{ \vee}(q).\qed\] ### Maximum Size Theorem 6.2 is a primary tool in computing the expected size of simultaneous cores in the next section. As a simpler application, we proceed as in [17] to determine the maximum \(\mathsf{size}^{\vee}\) of a simultaneous core (extending that result to the non-simply-laced types). 
**Theorem 6.3**.: _For \(\widetilde{W}\) an irreducible affine Weyl group with \(\gcd(h,b)=1\),_ \[\max_{q\in\mathcal{S}(b)\cap\mathcal{Q}^{\vee}}(\mathsf{size}^{\vee}(q))=\frac {rg^{\vee}}{h}\frac{n(b^{2}-1)(h+1)}{24}.\] _Moreover, this maximum is attained by a unique point \(q_{*}\in\mathcal{S}(b)\)._ Proof.: We claim that the maximum is obtained at \(q_{*}=\widetilde{w}_{b}^{-1}(0)\), where \(\widetilde{w}_{b}\in\widetilde{W}\) is the same element as used in Theorem 6.2. First note that since \(\widetilde{w}_{b}\) maps \(\mathcal{S}(b)\) bijectively to \(b\mathcal{A}\), this \(q_{*}\) is indeed in \(\mathsf{core}(\widetilde{W},b)=\mathcal{S}(b)\cap\mathcal{Q}\). Since \(\widetilde{w}_{b}\) maps \(\mathsf{size}^{\vee}\) to \(\mathsf{size}^{(b)}\), we will show the equivalent statement that \(0\) is the unique element of \(b\mathcal{A}\cap\mathcal{Q}^{\vee}\) of maximum \(\mathsf{size}^{(b)}\). Since \(\left\|\frac{\rho^{\vee}}{h}\right\|^{2}\) is a constant, it suffices to maximize \(Q(\mathbf{x})=\left\|x-b\frac{\rho^{\vee}}{h}\right\|^{2}\). But the fact that \(0\) maximizes \(Q(\mathbf{x})\) over \(b\mathcal{A}\) is known; it follows for instance from the "very strange" formula of Kac (cf. [12, Equation (0.9)]). Moreover, \(Q(\mathbf{x})\) is a strictly convex function, so it can only be maximized at a vertex of the convex polytope \(b\mathcal{A}\). However, no other vertices of \(b\mathcal{A}\) are in \(\mathcal{Q}^{\vee}\), which implies that \(0\) is the only point in \(b\mathcal{A}\cap\mathcal{Q}^{\vee}\) of maximum \(\mathsf{size}^{(b)}\). Moreover, we may explicitly compute \(\mathsf{size}(q_{*})\) as follows. \[\mathsf{size}^{(b)}(0)=\frac{h}{2}\left\|b\frac{\rho^{\vee}}{h}\right\|^{2}- \frac{h}{2}\left\|\frac{\rho^{\vee}}{h}\right\|^{2}=(b^{2}-1)\frac{1}{2h} \left\langle\rho^{\vee},\rho^{\vee}\right\rangle.\] The desired formula then follows from the explicit computation of \(\left\langle\rho^{\vee},\rho^{\vee}\right\rangle\), a dual version of the "strange formula" of Freudenthal and de Vries: **Theorem 6.4** (see [15, Section 4]).: _For \(\widetilde{W}\) an irreducible affine Weyl group,_ \[\left\langle\rho^{\vee},\rho^{\vee}\right\rangle=rg^{\vee}\frac{n(h+1)}{12}.\qed\] Although Theorem 6.3 proves that \(\mathsf{size}^{\vee}(\widetilde{w}_{b}^{-1}(0))\) is the maximum that the statistic \(\mathsf{size}^{\vee}\) can take on \(\mathsf{core}(\widetilde{W},b)\), more is true in type \(A_{n}\), where J. Vandehey shows that the largest \((a,b)\)-core contains all other \((a,b)\)-cores as subdiagrams (see [14, 15]). Previously the second and third authors conjectured [13, Conjecture 6.14] that the inversion set of \(\widetilde{w}_{b}\) contains the inversion sets of all other affine elements corresponding to elements of \(\mathsf{core}(\widetilde{W},b)\). We here extend that conjecture to the non-simply-laced types as well. **Conjecture 6.5**.: The element \(\widetilde{w}_{b}\) is maximal in the weak order on \(\widetilde{W}/W\) among all dominant elements \(\{\widetilde{w}\in\widetilde{W}/W:\widetilde{w}^{-1}(0)\in\mathcal{S}(b)\}\). 
### Simultaneous Combinatorial Models One of the major advantages to our improvement from simply-laced to general affine Weyl groups is the ability to incorporate type \(C_{n}\), whose cores lying in \(\mathcal{S}_{C_{n}}(b)\) also have a natural combinatorial model: **Theorem 6.6**.: _The map \(\mathbf{x}\mapsto\lambda_{\iota(\mathbf{x})}\) of Theorem 5.2 restricts to a bijection from \(\mathcal{S}_{C_{n}}(b)\cap\mathcal{Q}_{C_{n}}^{\vee}\) to self-conjugate \((2n,b)\)-cores._ Proof.: Recall that a \(2n\)-core is self-conjugate if and only if \(q_{\lambda}=(q_{\lambda})^{\intercal}\). In type \(C_{n}\), the roots of height \(r=2k-1\) are \(\frac{1}{\sqrt{2}}(e_{i}-e_{i+r})\) for all \(i\leq n-r\), as well as \(\frac{1}{\sqrt{2}}(e_{n-i+1}+e_{n-r+i})\) for \(1\leq i\leq k\). Write \(\iota(\mathbf{x})=(y_{1},\ldots y_{2n})\). Thus \(\left\langle\mathbf{x},\alpha\right\rangle\geq-t\) for all \(\alpha\in\Phi_{r}(C_{n})\) if and only if \[y_{i}-y_{i+r} \geq-t 1\leq i\leq n-r\] \[-y_{2n+1-i}+y_{2n+1-i-r} \geq-t 1\leq i\leq n-r\] \[y_{n-i+1}+y_{n-r+i} \geq-t 1\leq i\leq\frac{r+1}{2}\] \[-y_{n+i}-y_{n+1+r-i} \geq-t 1\leq i\leq\frac{r+1}{2}\] By definition of \(\iota_{C}\) we have that \(y_{j}=-y_{2n+1-j}\) for all \(j\). Thus under the assumption that \(\mathbf{y}=\mathbf{y}^{\intercal}\), the above inequalities are precisely the system \(y_{i}-y_{i+r}\geq-t\) for all \(1\leq i\leq 2n-r\), which is to say, \(\left\langle\iota(\mathbf{x}),\alpha\right\rangle\geq-t\) for all \(\alpha\in\Phi_{r}(A_{2n-1})\). In a similar way, under the same assumption \(\left\langle\mathbf{x},\alpha\right\rangle\leq t+1\) for all \(\alpha\in\Phi_{n-r}(C_{n})\) is equivalent to \(\left\langle\iota_{C}(\mathbf{x}),\alpha\right\rangle\leq t+1\) for all \(\alpha\in\Phi_{n-r}(A_{2n-1})\). Therefore, the target of our map, restricted to \(\mathcal{S}_{C_{n}}(b)\cap\mathcal{Q}_{C_{n}}^{\vee}\), is indeed \(\mathcal{S}_{A_{n}}(b)\cap\mathcal{Q}_{A_{2n-1}}^{\vee}\). Moreover, the map \((y_{1},\ldots,y_{2n})\mapsto\frac{1}{\sqrt{2}}(y_{1},\ldots,y_{n})\) is a well-defined inverse. The above theorem permits us to understand the left-hand side of Theorem 6.3 and Theorem 1.5 as the maximum and expected \(\mathsf{size}^{\vee}\) of self-conjugate simultaneous cores. In particular, this thus recovers "half" of the Chen-Huang-Wang result (that is, the case when \(a\) is even). Unfortunately, this combinatorial interpretation appears to be limited to type \(C_{n}\). Even for the other classical types, the maps \(\iota\) from Section 4 do not map the Sommers regions into type-\(A\) Sommers regions. Since there are many possible embeddings of \(\mathcal{Q}^{\vee}_{X_{n}}\) into \(\mathcal{Q}^{\vee}_{m}\) for various \(m\), it is possible that such a deficiency may be overcome. It would already be interesting to understand combinatorial conditions on the \(3\)-cores that do lie in the image of the \(G_{2}\) Sommers region. ## 7. 
Expected Size of Simultaneous Cores We are now ready to prove Theorem 1.5: **Theorem 1.5**.: _For \(X_{n}\) an irreducible rank \(n\) Cartan type with root system \(\Phi\),_ \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}^{\vee} (q))=\frac{rg^{\vee}}{h}\frac{n(b-1)(h+b+1)}{24},\] _where \(h\) is the Coxeter number of \(X_{n}\), \(g^{\vee}\) is the dual Coxeter number for \(\Phi^{\vee}\), and \(r\) is the ratio of the length of a long root to the length of a short root in \(\Phi\)._ We do this by computing the left-hand side explicitly for each type \(X_{n}\), which by Theorem 6.2 is \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}^{\vee} (q))=\frac{1}{|\mathsf{core}(X_{n},b)|}\sum_{q\in\mathsf{core}(X_{n},b)} \mathsf{size}^{\vee}(q)=\frac{1}{|b\mathcal{A}\cap\mathcal{Q}^{\vee}|}\sum_{q \in b\mathcal{A}\cap\mathcal{Q}^{\vee}}\mathsf{size}^{(b)}(q).\] The denominator was explicitly and nearly-uniformly calculated by Haiman [10]. To compute the sum, we first record the vertices of the fundamental alcove \(\mathcal{A}\): they are \(\Gamma:=\{0\}\cup\left\{\frac{\omega_{i}^{\vee}}{c_{i}}:1\leq i\leq n\right\}\), where we recall that the \(c_{i}\) are defined by \(\widetilde{\alpha}=\sum_{i=1}^{n}c_{i}\alpha_{i}\). As in [12], we proceed by translating the problem to the coweight lattice. Define the _extended affine Weyl group_ by \(\widetilde{W}_{\text{ex}}:=W\ltimes\Lambda^{\vee}\), and write the group of automorphisms for \(b\mathcal{A}\) as \(b\Omega:=\{\widetilde{w}\in\widetilde{W}_{\text{ex}}:\widetilde{w}(b \mathcal{A})=b\mathcal{A}\}\). These groups are isomorphic for all \(b\), and in particular have constant order that we denote by \(f\). **Proposition 7.1** ([12, Theorem 2.5 & Lemma 6.11]).: _For any \(b\) coprime to the Coxeter number \(h\) in type \(X_{n}\):_ 1. _The action of_ \(b\Omega\) _on_ \(b\mathcal{A}\cap\Lambda^{\vee}\) _is free._ 2. _Each_ \(b\Omega\) _orbit of_ \(b\mathcal{A}\cap\Lambda^{\vee}\) _contains exactly one element of_ \(b\mathcal{A}\cap\mathcal{Q}^{\vee}=\mathsf{core}(X_{n},b)\)_._ 3. _For any_ \(\widetilde{w}\in b\Omega\) _and any_ \(\omega\in\Lambda^{\vee}\)_,_ \(\mathsf{size}^{(b)}(\omega)=\mathsf{size}^{(b)}(\widetilde{w}\cdot\omega)\)__ Using this, we finish the translation from \(\mathcal{Q}^{\vee}\) to \(\Lambda^{\vee}\): \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}^{\vee} (q))=\frac{1}{|b\mathcal{A}\cap\mathcal{Q}^{\vee}|}\cdot\frac{1}{f}\sum_{q\in b \mathcal{A}\cap\Lambda^{\vee}}\mathsf{size}^{(b)}(q). \tag{8}\] We now recall the relevant Ehrhart-theoretic tools. For any degree-\(r\) polynomial \(F:\mathbb{R}^{n}\to\mathbb{R}\), its _weighted lattice point enumerator_ over \(\Lambda^{\vee}\) is \(\mathcal{A}_{F}(b):=\sum\limits_{q\in b\mathcal{A}\cap\Lambda^{\vee}}F(q)\). This \(\mathcal{A}_{F}(b)\) is a quasipolynomial in \(b\), of degree \(n+r\) and period \(c=c(\widetilde{X}_{n}):=\mathsf{lcm}(c_{1},\dots,c_{n})\), where the \(c_{i}\) are again the denominators of the vertices of \(\mathcal{A}\). As \(\mathsf{size}^{(b)}\) changes with \(b\), Ehrhart theory appears to be inapplicable--however, a judicious rewriting shows that this is not the case. 
**Proposition 7.2**.: _The weighted lattice point enumerator \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)\) is a quasipolynomial in \(b\) of degree \(n+2\) and period \(c(\widetilde{X}_{n}):=\mathsf{lcm}(c_{1},\dots,c_{n})\)._ Proof.: Notice that \(\mathsf{size}_{b}^{\vee}(x)=\frac{h}{2}\left\lVert x\right\rVert^{2}-b\langle x, \rho^{\vee}\rangle+(b^{2}-1)\frac{\left\lVert\bar{\rho}\right\rVert^{2}}{2h}.\) Thus we find that \[\mathcal{A}_{\mathsf{size}^{(b)}}(b)=\tfrac{h}{2}\mathcal{A}_{\|\cdot\|^{2}}(b)- b\mathcal{A}_{\langle\cdot,\rho^{\vee}\rangle}(b)+(b^{2}-1)\mathcal{A}_{\frac{ \left\lVert\rho^{\vee}\right\rVert^{2}}{2h}}(b)\] is a quasipolynomial in \(b\) of degree \(n+2\) and period \(c(\widetilde{X}_{n})\). Therefore, to complete the proof of Theorem 1.5, we may compute the quasipolynomial on for all components that contain a residue \(b\operatorname{mod}c_{X_{n}}\) that is coprime to \(h\). For the exceptional types, this is already a finite and computationally feasible calculation. Types \(A\) and \(D\) are simply-laced, and so their calculation was already completed in [17]. Thus, we proceed along similar lines for types \(B\) and \(C\). In what follows, we write \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)_{i}\) to mean the polynomial which agrees with \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)\) for all \(b\equiv i\operatorname{mod}c(\widetilde{X}_{n})\). Relevant data to complete these computations for the irreducible root systems is provided in Figure 5. ### Types B and C For either \(X_{n}=B_{n}\) or \(X_{n}=C_{n}\) we have \(c(\widetilde{X}_{n})=2\) and exponents \(1,3,\ldots,2n-1\). Since all \(b\) in the theorem statement are coprime to \(h\), they must be odd, and so it suffices to compute only the polynomial \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)_{1}\). This polynomial has degree \(n+2\). Recall the fact that the interior of \(e_{i}\mathcal{A}\) contains no points of \(\Lambda_{X_{n}}^{\vee}\) for all exponents \(e_{i}\), because they are strictly less than \(h\) (see e.g. [17, Section 7.4]). Thus, by Ehrhart reciprocity we conclude that \(\mathcal{A}_{\mathsf{size}^{(b)}}(-e_{i})=0\), and because in types \(B_{n}\) and \(C_{n}\) the exponents are the odd integers \(1,3,\ldots,2n-1\), we thus we have \(n\) roots of our desired polynomial. Moreover, it is easy to compute that \(\mathcal{A}_{\mathsf{size}}(1)=0\), and therefore (c.f. [17, Proposition 7.5]) we also have \(\mathcal{A}_{\mathsf{size}^{(b)}}(-h-1)=0\). Hence we know all \(n+2\) of our desired polynomial's roots, that is: \[\mathcal{A}_{\mathsf{size}^{(b)}}(b)_{1}=\kappa(b-1)(b+2n+1)\prod_{j=1}^{n}(b+ e_{j}).\] where the leading coefficient \(\kappa\) depends only on \(n\) and whether we are in types \(B_{n}\) or \(C_{n}\). Importantly, this formula holds for any odd \(b\), not merely those \(b\) which are coprime to \(h=2n\). Thus we can determine it by explicitly calculating \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)\) at \(b=3\), as follows: \[\kappa=\frac{\mathcal{A}_{\mathsf{size}^{(3)}}(3)}{(3-1)(3+2n+1)\prod_{j=1}^{n }(3+(2j-1))}=\frac{1}{2^{n+2}(n+2)!}\mathcal{A}_{\mathsf{size}^{(3)}}(3).\] Figure 5. The type \(X_{n}\), Coxeter number \(h\), dual Coxeter number \(g^{\vee}\), exponents \(e_{i}\), coefficients of the highest root \(c_{i}\), index of connection \(f\), ratio of long to short root \(r\), and gcd of the \(c_{i}\) for the irreducible root systems. From here, we recall Haiman's enumeration [10]: \(|b\mathcal{A}\cap\mathcal{Q}^{\vee}|=\frac{1}{|W|}\prod_{j=1}^{n}(b+e_{j})\), whenever \(b\) and \(h\) are coprime. 
Thus, we rewrite Equation (8) as the quadratic polynomial: \[\operatorname*{\mathbb{E}}_{q\in\operatorname{\mathsf{core}}(X_{n },b)}(\mathsf{size}^{\vee}(q)) =\frac{2^{n}n!\kappa}{f}(b-1)(b+2n+1)\] \[=\frac{2^{n}n!}{2\cdot 2^{n+2}(n+2)!}\mathcal{A}_{\mathsf{size}^{ (3)}}(3)\cdot(b-1)(b+h-1)\] \[=\mathcal{A}_{\mathsf{size}^{(3)}}(3)\cdot\frac{(b-1)(b+h-1)}{8(n +1)(n+2)}\] Thus, to prove the formula from Theorem 1.5, what remains to be verified is that \[\mathcal{A}_{\mathsf{size}^{(3)}}(3)=\frac{rg^{\vee}}{3h}\cdot n(n+1)(n+2)= \begin{cases}\frac{1}{3}(n+1)^{2}(n+2)&\text{in type }B_{n},\\ \frac{1}{3}(n+1)(n+2)(2n-1)&\text{in type }C_{n}.\end{cases}\] #### 7.1.1. Type B Beginning in type \(B_{n}\), observe that since \(\sum_{j=1}^{n}a_{j}\omega_{j}^{\vee}=(\sum_{j=1}^{n}a_{j},\sum_{i=2}^{n}a_{j}, \ldots,a_{n})\) and \(\widetilde{\alpha}=(1,1,0,0,\ldots,0)\), the coweight points in \(3\mathcal{A}\) are those points for which \(a_{1}+2(\sum_{j=2}^{n}a_{j})\leq 3\), which are the \(2n+2\) points \[\{0,\omega_{1}^{\vee},2\omega_{1}^{\vee},3\omega_{1}^{\vee}\}\cup\{\omega_{j} ^{\vee},\omega_{1}^{\vee}+\omega_{j}^{\vee}\}_{j=2}^{n}.\] We compute that for \(2\leq j\leq n\) \[\mathsf{size}^{(3)}(\omega_{j}^{\vee}) =\frac{n}{(2n)^{2}}\left(\left(\|2n\omega_{j}^{\vee}-3\rho^{\vee} \|\right)^{2}-\|\rho^{\vee}\|^{2}\right)\] \[=\frac{1}{4n}\left(\sum_{i=1}^{j}\left(2n-3(n+1-i)\right)^{2}+ \sum_{i=j+1}^{n}\left(3(n+1-i)\right)^{2}-\frac{n(n+1)(2n+1)}{6}\right)\] \[=\frac{(2n+2-3j)(2n+1-3j)}{6}.\] A similar computation also shows that \(\mathsf{size}^{(3)}(\omega_{1}^{\vee}+\omega_{j}^{\vee})=\frac{(2n+2-3j)(2n+1 -3j)}{6}\) and therefore \[\sum_{j=2}^{n}\mathsf{size}^{(3)}(\omega_{j}^{\vee})+\mathsf{size}^{(3)}( \omega_{1}^{\vee}+\omega_{j}^{\vee})=\frac{(n-2)(n-1)^{2}}{3}.\] The remaining points give contributions of the following form: \[\mathsf{size}^{(3)}(k\omega_{1}^{\vee}) =\frac{n}{(2n)^{2}}\left(\left(2kn-3n\right)^{2}+\sum_{i=2}^{n}( 3(n+1-i))^{2}-\sum_{i=1}^{n}(n+1-i)^{2}\right)\] \[=\frac{1+3n-9kn+3k^{2}n+2n^{2}}{3}.\] We thus compute that \(\sum_{k=0}^{3}\mathsf{size}^{(3)}(k\omega_{1}^{\vee})=\frac{4(1+2n^{2})}{3}\). Putting these together, we obtain \[\mathcal{A}_{\mathsf{size}^{(3)}}(3)=\frac{n-2n-1^{2}}{3}+\frac{4(1+2n^{2})}{3 }=\frac{1}{3}(n+1)^{2}(n+2)\] which concludes the proof of Theorem 1.5 for type \(B_{n}\). #### 7.1.2. 
Type C Turning to type \(C_{n}\), we have \(\widetilde{\alpha}=\frac{2}{\sqrt{2}}e_{1}\) and \[\sum_{j=1}^{n}a_{j}\omega_{j}^{{}^{\vee}}=\sqrt{2}\left(\frac{a_{n}}{2}+\sum_{j =1}^{n-1}a_{j},\ \frac{a_{n}}{2}+\sum_{j=2}^{n}a_{j},\ldots,\ \frac{a_{n}}{2}+a_{n-1},\ \frac{a_{n}}{2}\right)\] So the coweight points in \(3\mathcal{A}\) are those points for which \(2(\sum_{j=1}^{n-1}a_{j})+a_{n}\leq 3\), which are the \(2n+2\) points \[\{0,\omega_{n}^{{}^{\vee}},2\omega_{n}^{{}^{\vee}},3\omega_{n}^{{}^{\vee}}\} \cup\{\omega_{j}^{{}^{\vee}},\omega_{n}^{{}^{\vee}}+\omega_{j}^{{}^{\vee}}\}_ {j=1}^{n-1}.\] Using the fact that \(\rho^{{}^{\vee}}=\sum_{j=1}^{n}\omega_{j}^{{}^{\vee}}=\frac{\sqrt{2}}{2}(2n-1,2n-3,2n-5,\ldots,1)\), we compute that for \(1\leq j\leq n-1\) \[\mathsf{size}^{(3)}(\omega_{j}^{{}^{\vee}}) =\frac{n}{(2n)^{2}}\left(\left(\left\|2n\omega_{j}^{{}^{\vee}}-3 \rho^{{}^{\vee}}\right\|\right)^{2}-\left\|\rho^{{}^{\vee}}\right\|^{2}\right)\] \[=\frac{1}{4n}\left(\sum_{i=1}^{j}2\big{(}2n-\frac{3}{2}(2n+1-2i) \big{)}^{2}+\sum_{i=j+1}^{n}\frac{\left(3(2n+1-2i)\right)^{2}}{2}-\frac{4n^{3} -n}{6}\right)\] \[=\frac{(2n-3j)^{2}-1}{3}.\] A similar computation also shows that \(\mathsf{size}^{(3)}(\omega_{n}^{{}^{\vee}}+\omega_{j}^{{}^{\vee}})=\frac{(n-3 j)^{2}-1}{3}\) and therefore \[\sum_{j=1}^{n-1}\mathsf{size}^{(3)}(\omega_{j}^{{}^{\vee}})+\mathsf{size}^{(3) }(\omega_{n}^{{}^{\vee}}+\omega_{j}^{{}^{\vee}})=\frac{(n-2)(n-1)(2n+1)}{3}.\] The remaining points give contributions of the following form: \[\mathsf{size}^{(3)}(k\omega_{n}^{{}^{\vee}}) =\frac{n}{(2n)^{2}}\left(\sum_{i=1}^{n}2\left(kn-\frac{3}{2}(2n+ 1-2i)\right)^{2}-\sum_{i=1}^{n}\frac{(2n+1-i)^{2}}{2}\right)\] \[=\frac{-2+8n^{2}-9kn^{2}+3k^{2}n^{2}}{6}.\] We thus compute that \(\sum_{k=0}^{3}\mathsf{size}^{(3)}(k\omega_{1}^{{}^{\vee}})=\frac{2(5n^{2}-2)} {3}\). Putting these together, we obtain \[\mathcal{A}_{\mathsf{size}^{(b)}}(b)=\frac{1}{3}(n+1)(n+2)(2n-1),\] which concludes the proof of Theorem 1.5 for type \(C_{n}\). ### Types \(\mathbf{F_{4}}\) and \(\mathbf{G_{2}}\) Finally, we return to the exceptional types. Since we had previously handled the simply-laced types \(E_{6},E_{7}\), and \(E_{8}\)[17, Section 7.6], it remains only to confirm our formula for \(F_{4}\) and \(G_{2}\). Since \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)\) is a polynomial in each residue class mod \(m(\widetilde{X}_{n})\) (here, \(12\) for \(F_{4}\) and \(6\) for \(G_{2}\)) of degree \(n+2\), we can simply compute \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)\) for enough values of \(b\) in each relevant residue class and perform Lagrange interpolation. For all relevant residue classes in both groups there is exactly one exponent \(e_{j}\) and so "enough" values of \(b\) means \(6\) for \(F_{4}\), and \(4\) for \(G_{2}\). When performing this computation in SAGE, we see that the polynomials \(\mathcal{A}_{\mathsf{size}^{(b)}}(b)_{i}\) coincide for the relevant residue classes. Namely: \[\mathcal{A}_{\mathsf{size}^{(b)}}(b)=\begin{cases}\frac{1}{18432}(b-1)(b+1)(b+5 )(b+7)(b+11)(b+13)&\text{in type $F_{4}$},\\ \frac{1}{144}(b-1)(b+1)(b+5)(b+7)&\text{in type $G_{2}$}.\end{cases}\] for all \(b\) coprime to \(h\) (that is, in both cases, for \(b\equiv 1,5\bmod 6\)). 
Dividing these polynomials by \(f\) and \(|b\mathcal{A}\cap t_{b}^{\vee}|\) as in Equation (8), we obtain a formula for the expected \(\mathsf{size}^{\vee}\) of a core that agrees with \[\operatorname*{\mathbb{E}}_{q\in\mathsf{core}(X_{n},b)}(\mathsf{size}^{\vee} (\lambda_{q}))=\frac{rg^{\vee}}{h}\frac{n(b-1)(h+b+1)}{24}.\] This completes the proof of Theorem 1.5 for \(F_{4}\) and \(G_{2}\), and thus for all types. ## Acknowledgements We thank Benjamin Cotton for help drawing Figure 2. The first author was partially supported by NSF grant 1601961, NSF grant 1745638, and Czech Science Foundation grant 21-00420M. The third author was partially supported by NSF grant 2246877.
2309.17188
Analytic regularity of global solutions for the b-equation
In this paper, we delve into the $b$-family of equations and explore regularity properties of its global solutions. Our findings reveal that, irrespective of the real choice of the constitutive parameter, when the initial datum is confined to an analytic Gevrey function the resulting global solution is analytic in both temporal and spatial variables.
Priscila Leal da Silva
2023-09-29T12:35:40Z
http://arxiv.org/abs/2309.17188v1
# Analytic regularity of global solutions for the \(b\)-equation ###### Abstract In this paper, we delve into the \(b\)-family of equations and explore regularity properties of its global solutions. Our findings reveal that, irrespective of the real choice of the constitutive parameter, when the initial datum is confined to an analytic Gevrey function the resulting global solution is analytic in both temporal and spatial variables. **Keywords:** Global analytic solutions, Gevrey spaces, Holm-Staley equation, \(b\)-equation **2020 Mathematics Subject Classification:** 35A01; 35A02; 35A20. ## 1 Introduction This paper is concerned with the \(b\)-family of equations \[u_{t}=-uu_{x}-\partial_{x}\Lambda^{-2}\left(\frac{b}{2}u^{2}+\frac{3-b}{2}u_{ x}^{2}\right), \tag{1.1}\] where \(b\) is a real number and \(\Lambda^{-2}\) denotes the inverse of the Helmholtz operator \(\Lambda^{2}=1-\partial_{x}^{2}\) acting on Sobolev spaces, and the analytic regularity of its global solutions. Equation (1.1) was first considered in [8, 13, 14] with applications in hydrodynamics for \(b\neq-1\) and its most prominent members are the Camassa-Holm [3] equation (\(b=2\)) and the Degasperis-Procesi [7] equation (\(b=3\)), the only two cases where there exists an infinite number of linearly independent conserved vectors. The classical theory for global well-posedness of non-local equations of the type (1.1) can be attributed to Constantin and Escher [4]. Their work demonstrated the global well-posedness of the Camassa-Holm equation in \(C([0,\infty),H^{3}(\mathbb{R}))\) for an initial datum \(u_{0}(x):=u(0,x)\in H^{3}(\mathbb{R})\), subject to the condition that the initial momentum \(m_{0}(x):=u_{0}(x)-u_{0}^{\prime\prime}(x)\) does not change sign. Their proof heavily relied on the conservation of energy expressed in terms of the usual norm in \(H^{1}(\mathbb{R})\). For the Degasperis-Procesi equation, Liu and Yin [16] provided a different proof that does not rely on conservation of energy, as its solutions do not conserve the \(H^{1}(\mathbb{R})\)-norm. Instead, they accomplished a similar result for \(u_{0}\in H^{s}(\mathbb{R})\), where \(s>3/2\), by estimating the \(L^{\infty}(\mathbb{R})\) norm. For arbitrary choices of \(b\), however, the conservation of energy does not hold. In fact, one of the few conserved quantities is the \(L^{1}(\mathbb{R})\)-norm of \(m_{0}(x)\) when it is either non-negative or non-positive. However, relying on this norm is often insufficient for establishing well-posedness and extending local solutions of (1.1) to a global scope, presenting a significant challenge. In the papers [11, 16], the unique global strong solutions of equation (1.1) were proven to exist for an initial datum \(u_{0}(x)\in H^{s}(\mathbb{R})\), with \(s>3/2\), but only for the case when \(b>0\). Global well-posedness for the scenario where \(b=0\) was obtained in [6]. Moreover, in [9], Escher and Yin extended the global solutions to the case where \(b=-1/2n\), where \(n\) is any positive integer. Recent contributions by Freire in [10] further extended the analysis of global solutions to encompass any \(b\in\mathbb{R}\), thus providing a complete and comprehensive understanding of the existence and uniqueness of global solutions of (1.1). 
Building upon [5] and incorporating the recent findings from [10], we investigate (1.1) with initial data belonging to the Gevrey class \(G^{\sigma,s}(\mathbb{R})\) of functions \(f\in L^{2}(\mathbb{R})\) such that the norm \[\|f\|_{G^{\sigma,s}}=\left(\int_{\mathbb{R}}e^{2\sigma|\xi|}(1+\xi^{2})^{s}| \hat{f}(\xi)|^{2}d\xi\right)^{1/2} \tag{1.2}\] remains finite, with \(\sigma>0\) and \(s\in\mathbb{R}\). Here, \(\hat{f}(\xi)\) denotes the Fourier transform of \(f\), and for \(s\geq 0\), the continuous embedding \(G^{\sigma,s}(\mathbb{R})\hookrightarrow H^{\infty}(\mathbb{R})\) holds, where \(H^{\infty}(\mathbb{R}):=\bigcap\limits_{s\geq 0}H^{s}(\mathbb{R})\). In [5], the author showed that when the initial data lies in the Gevrey space \(G^{1,s}(\mathbb{R})\) of analytic functions, with \(s>5/2\), the solution of (1.1) with \(b=0\) is globally analytic in both variables. The primary objective of this paper is to extend this crucial result to encompass the entire family (1.1) for \(b\in\mathbb{R}\) by establishing the following theorem. **Theorem 1.1**.: _Given \(u_{0}(x)\in G^{1,s}(\mathbb{R})\), for \(s>3/2\), such that \(m_{0}(x)\geq 0\) or \(m_{0}(x)\leq 0\), there exists a unique global analytic solution \(u\in C^{\omega}([0,\infty)\times\mathbb{R})\) of (1.1)._ For this purpose, we will prove some auxiliary local well-posedness results for analytic initial datum and combine these results with the global well-posedness proved in [10] and the Kato-Masuda machinery [15] in order to complete the proof. Theorem 1.1 represents a notable extension of the existing literature. While [2] has established the analytic regularity of solutions for the Camassa-Holm equation, the current theorem transcends the limitations imposed by the (in general) lack of conservation laws and rendering it independent of the specific choice of \(b\) in (3.1). This, in turn, presents the challenge of enhancing the radius of spatial analyticity (as elaborated in Section 3) with a polynomial lower bound for it. The paper is structured as follows: * in Section 2, we present the auxiliary spaces and results required to establish (analytic in time) local well-posedness in Gevrey and Himonas-Misiolek [12] spaces; * Section 3 utilizes the Kato-Masuda [15] machinery to obtain spatial analytic solutions, along with determining the radius of spatial analyticity; * Lastly, in Section 4, we employ space embeddings and the results from Sections 2-3 to finalize the proof of Theorem 1.1. ## 2 Function spaces and local well-posedness In this section we present the remaining function spaces needed and also the auxiliary local results. Besides Gevrey spaces, we will consider the Himonas-Misiolek space [12] \[E^{\sigma,m}(\mathbb{R})=\{f\in C^{\infty}(\mathbb{R});\|f\|_{E^{\sigma,m}}= \sup\limits_{j\in\mathbb{Z}_{+}}\frac{\sigma^{j}(j+1)^{2}}{j!}\|\partial_{x}^ {j}f\|_{H^{2m}}<\infty\}\] for \(\sigma>0\) and \(m\) being a positive integer. Let \(F(u)=-uu_{x}-\partial_{x}\Lambda^{-2}\left(\frac{b}{2}u^{2}+\frac{3-b}{2}u_{x}^{ 2}\right)\) represent the right-hand side of equation (1.1) and consider initial data \(u_{0}\in G^{\sigma_{0},s}(\mathbb{R})\) and \(v_{0}\in E^{\sigma_{0},m}(\mathbb{R}),\) where \(s>3/2\), \(m\geq 2\) and \(0<\sigma_{0}\leq 1\). 
If \(\sigma\in(0,\sigma_{0})\), \(u_{i}\in G^{\sigma,s}(\mathbb{R})\) and \(v_{i}\in E^{\sigma,m}(\mathbb{R})\), for \(i=1,2\), are such that \[\|u_{i}-u_{0}\|_{G^{\sigma,m}}<\|u_{0}\|_{G^{\sigma_{0},s}},\quad\|v_{i}-v_{0} \|_{E^{\sigma,m}}<\|v_{0}\|_{E^{\sigma_{0},m}},\forall i,\] then by employing a standard argument with the use of algebra properties of Gevrey (Lemma 3 in [1]) and Himonas-Misiolek (Theorem 2.1 in [12]) spaces, along with their corresponding derivatives estimates, see Lemma 2.4 in [12], Lemma 2 in [1] and Lemma 2.2 in [5], we can derive the following estimates: \[\|F(u_{0})\|_{G^{\sigma_{0},s}}\leq\frac{M_{1}(\|u_{0}\|_{G^{ \sigma_{0},s}},b)}{\sigma-\sigma_{0}},\quad\|F(v_{0})\|_{E^{\sigma_{0},s}} \leq\frac{M_{2}(\|v_{0}\|_{E^{\sigma_{0},m}},b)}{\sigma-\sigma_{0}}\|v_{0}\|_ {E^{\sigma,m}}\] \[\|F(u_{1})-F(u_{2})\|_{G^{\sigma_{0},s}}\leq\frac{L_{1}(\|u_{0}\| _{G^{\sigma_{0},s}},b)}{\sigma-\sigma_{0}}\|u_{1}-u_{2}\|_{G^{\sigma,s},},\] \[\|F(v_{1})-F(v_{2})\|_{E^{\sigma_{0},s}}\leq\frac{L_{2}(\|v_{0}\| _{E^{\sigma_{0},m}},b)}{\sigma-\sigma_{0}}\|v_{1}-v_{2}\|_{E^{\sigma,m}},\] for some positive \(M_{1},M_{2},L_{1}\) and \(L_{2}\) depending only on \(b\), the respective initial data and parameters of the spaces under consideration. From the Autonomous Ovsyanikov Theorem (see Theorem 1 in [1]), we prove the following two results: **Proposition 2.1**.: _Given \(u_{0}\in G^{\sigma_{0},s}(\mathbb{R})\), with \(s>3/2\) and \(\sigma_{0}\in(0,1]\), there exists \(T=T(\|u_{0}\|_{G^{\sigma_{0},s}})>0\) such that for every \(\sigma\in(0,\sigma_{0})\) the initial value problem (1.1) has a unique solution \(u\in C^{\omega}([0,T(1-\sigma)),G^{\sigma,s}(\mathbb{R}))\)._ **Proposition 2.2**.: _Given \(u_{0}\in E^{\sigma_{0},m}(\mathbb{R})\), with \(\sigma_{0}\in(0,1]\) and \(m\geq 2\), then there exists \(\epsilon=\epsilon(\|u_{0}\|_{E^{\sigma_{0},m}})>0\) and a unique solution \(u\in C^{\omega}([0,\epsilon],E^{\sigma,m}(\mathbb{R}))\) of (1.1) for any \(\sigma\in(0,\sigma_{0}).\)_ While Proposition 2.2 will be used for general values of \(\sigma_{0}\), our only interest in Proposition 2.1 is regarding the case \(\sigma_{0}=1\) as the corresponding Gevrey spaces consist of analytic functions. Moreover, observe that in Proposition 2.2 we can extend the existence interval to \([-\epsilon,\epsilon]\) by making a reflection of \(t\) and \(x\). ## 3 Radius of spatial analyticity In this section we will prove spatial analyticity of the solution by making use of the Kato-Masuda machinery, see Theorem 1 in [12]. But before we need to build the settings in which we will work. In [10], the author showed that for any choice of \(b\in\mathbb{R}\), given an initial datum \(u_{0}(x)\in H^{n+2}(\mathbb{R})\), for any positive integer \(n\in\mathbb{N}_{0}:=\{0,1,2,3,\dots\}\), such that \(m_{0}(x)\geq 0\) or \(m_{0}(x)\leq 0\), then there exists a unique global solution \(u\in C([0,\infty),H^{n+2}(\mathbb{R}))\) of (1.1). On the one hand, since \(G^{1,s}(\mathbb{R})\hookrightarrow H^{\infty}(\mathbb{R})\), then any initial datum \(u_{0}\in G^{1,s}(\mathbb{R})\), with \(s>3/2\) is in \(H^{n+2}(\mathbb{R})\) for any \(n\), guaranteeing the existence of global solutions \(u\in C([0,\infty),\bigcap\limits_{n\in\mathbb{N}_{0}}H^{n+2}(\mathbb{R}))\). On the other hand, fixed \(n\in\mathbb{Z}\), we have \(H^{n+1}(\mathbb{R})\hookrightarrow H^{s}(\mathbb{R})\hookrightarrow H^{n}( \mathbb{R})\) for any real number \(s\in[n,n+1]\). 
This means that if \(u(t,\cdot)\in\bigcap\limits_{n\in\mathbb{N}_{0}}H^{n}(\mathbb{R})\), then \(u(t,\cdot)\in H^{\infty}(\mathbb{R})\). In the particular case \(n=1,\) the solution \(u(t,\cdot)\) belongs to \(H^{3}(\mathbb{R})\subset H^{2}(\mathbb{R})\subset H^{1}(\mathbb{R})\subset L^{2 }(\mathbb{R})\) and then we conclude that \(u(t,\cdot)\in\bigcap\limits_{n\in\mathbb{N}_{0}}H^{n}(\mathbb{R})\), leading to \(u(t,\cdot)\in H^{\infty}(\mathbb{R})\). This summarises the following result. **Proposition 3.1**.: _Given an initial datum \(u_{0}(x)\in G^{1,s}(\mathbb{R})\), with \(s>3/2\), such that \(m_{0}(x)\in L^{1}(\mathbb{R})\cap H^{1}(\mathbb{R})\) is either non-negative or non-positive, then there exist a unique solution \(u\in C([0,\infty),H^{\infty}(\mathbb{R}))\) of (1.1)._ We now proceed with the Kato-Masuda theorem. Given \(u_{0}\in G^{1,s}(\mathbb{R})\), with \(s>3/2\), let \(u\) be the solution whose existence is guaranteed by Proposition 3.1 and define \(X=H^{m+2}(\mathbb{R}),Z=H^{m+5}(\mathbb{R})\), with \(m\geq 2\), so that \(F:Z\to X\) is continuous. For \(T>0\) fixed, let \(\mu=1+\max\{\|u\|_{H^{2}},t\in[0,T]\}\) and \(O=\{v\in Z;\|v\|_{H^{2}}<\mu\}\). For \(m\geq 0\) and \(\sigma\in\mathbb{R}\) to be better specified, let \[\Phi_{\sigma,m}(v)=\frac{1}{2}\|v\|_{\sigma,2,m}^{2}:=\frac{1}{2}\sum_{j=0}^{ m}\frac{1}{(j!)^{2}}e^{2\sigma j}\|\partial_{x}^{j}u\|_{H^{2}}^{2}, \tag{3.1}\] for \(v\in O\). With this norm, we define the Kato-Masuda space \(A(r)\), for \(r>0\), as the set of functions that can be analytically extended to a function on a strip of width \(r\), endowed with the norm \(\|v\|_{\sigma,2}=\lim_{m\to\infty}\|v\|_{\sigma,2,m}\) for every \(\sigma\in\mathbb{R}\) such that \(e^{\sigma}<r\). We have a similar and useful embedding \(G^{\sigma,s}(\mathbb{R})\hookrightarrow A(\sigma)\hookrightarrow H^{\infty}( \mathbb{R})\) for the Kato-Masuda space which will be considered in the next section. For \(D\Phi_{\sigma}(v)F(v):=\left<F(v)\,,\,D\Phi_{\sigma}(v)\right>_{H^{2}}\), where \(D\) denotes the Frechet derivative and \(v\in O\), we can use the triangle inequality to write \[|D\Phi_{\sigma,m}(v)F(v)|\leq \left|\sum_{j=0}^{m}\frac{e^{2\sigma j}}{(j!)^{2}}\langle\partial _{x}^{j}v\,,\,\partial_{x}^{j}(vv_{x})\rangle_{H^{2}}\right|+\frac{|b|}{2} \left|\sum_{j=0}^{m}\frac{e^{2\sigma j}}{(j!)^{2}}\langle\partial_{x}^{j}v\,, \,\partial_{x}^{j+1}\Lambda^{-2}v^{2}\rangle_{H^{2}}\right|\] \[+\frac{|3-b|}{2}\left|\sum_{j=0}^{m}\frac{e^{2\sigma j}}{(j!)^{2 }}\langle\partial_{x}^{j}v\,,\,\partial_{x}^{j+1}\Lambda^{-2}v^{2}_{x} \rangle_{H^{2}}\right|.\] By using the estimates (6.14)-(6.16) of [2], we conclude that for \(A(p)=(32+16|b|+64|3-b|)p\) and \(B(p,q)=(64+32|b|+256|3-b|)(1+p)q^{1/2}\) we can bound the previous term by \[|D\Phi_{\sigma,m}(v)F(v)|\leq A(\|v\|_{H^{2}})\Phi_{\sigma,m}(v)+B(\|v\|_{H^{2}},\Phi_{\sigma,m}(v)) \partial_{\sigma}\Phi_{\sigma,m}(v).\] For \(K=A(\mu),\beta(p)=Kp\) and \(\alpha(p)=B(\mu,p)\), we conclude that \[|D\Phi_{\sigma,m}(v)F(v)|\leq \beta(\Phi_{\sigma,m}(v))+\alpha(\Phi_{\sigma,m}(v))\partial_{ \sigma}\Phi_{\sigma,m}(v),\] and then Kato-Masuda Theorem (see Theorem 1 in [15]) yields that for \(t\in[0,T]\) the unique solution \(u(t)\) belongs to \(A(r(t))\), where \(r(t)=e^{\sigma(t)}\) and \(\sigma(t)=\gamma-\lambda(e^{A(\mu)t/2}-1)\), with \(\gamma<0\) fixed and \(\lambda\) depending only \(b\), the initial datum and \(\sigma_{0}\). 
This completes the proof of the following result: **Proposition 3.2**.: _Given \(u_{0}\in G^{1,s}(\mathbb{R})\), with \(s>3/2\), suppose that \(m_{0}\in L^{1}(\mathbb{R})\cap H^{1}(\mathbb{R})\) does not change sign and let \(u\in C([0,\infty);H^{\infty}(\mathbb{R}))\) be the unique solution to the initial value problem of (1.1). Then for every \(T>0\) there exists \(r(T)>0\) such that \(u\in C([0,T];A(r(T)))\)._ We can now proceed with the proof of Theorem 1.1. ## 4 Proof of Theorem 1.1 Once we have proved Propositions 2.1-3.2, the proof of Theorem 1.1 reduces to the space embeddings considered previously [2, 5]. From now on fix an arbitrary initial datum \(u_{0}(x)\in G^{1,s}(\mathbb{R})\), with \(s>3/2\), and let \(u\in C([0,\infty),H^{\infty}(\mathbb{R}))\) be the unique global solution obtained through Propositions 3.1-3.2. * **Proving that \(u\in C^{\omega}([0,T],A(\sigma(T)))\) for some \(T>0\) and \(\sigma(T)\):** Given an initial datum \(u_{0}(x)\in G^{1,s}(\mathbb{R})\), with \(s>3/2\), let \(\tilde{u}\in C^{\omega}([0,\tilde{T}(1-\sigma)),G^{\sigma,s}(\mathbb{R}))\) be the unique solution obtained from Proposition 2.1, where \(\sigma\in(0,1)\) and \(\tilde{T}>0\). By defining \(T=\tilde{T}(1-\sigma)/2\), we obtain \(\sigma(T)=1-2T/\tilde{T}\), while the embeddings \(G^{\sigma(T),s}(\mathbb{R})\hookrightarrow A(\sigma(T))\hookrightarrow H^{ \infty}(\mathbb{R})\) yield \(\tilde{u}\in C^{\omega}([0,T],A(\sigma(T)))\subset C^{\omega}([0,T],H^{\infty }(\mathbb{R}))\). Consequently, Proposition 3.2 guarantees that \(\tilde{u}=u\) for \(t\in[0,T]\) and \(u\in C^{\omega}([0,T],A(\sigma(T)))\). With this, we proved that the solution \(u\) is locally analytic in time and is \(C^{\infty}(\mathbb{R})\) in space. * **Proving that the analytic lifespan cannot be finite:** In the interval \([0,T]\) constructed in the previous item, let \[T^{*}=\sup\{T>0,u\in C^{\omega}([0,T],A(\sigma(T))),\text{ for some }\sigma(T)>0\}\] and assume that \(T^{*}\) is finite. As a consequence of the embedding \(A(\sigma(T))\hookrightarrow H^{\infty}(\mathbb{R})\), Proposition 3.2 establishes that the initial datum \(u(T^{*})\) lies within \(A(r)\). By choosing \(\sigma_{0}<\min\{1,r/e\}\), the inverse of Lemma 5.1 from [2] shows that \(u(T^{*})\) belongs to \(E_{\sigma_{0},m}(\mathbb{R})\) for \(m\geq 2\) and \(\sigma_{0}\in(0,1]\). From Proposition 3.1, we deduce the existence of a unique solution \(\tilde{u}\in C^{\omega}([0,\epsilon],E_{\delta,m}(\mathbb{R}))\) for \(0<\delta<\sigma_{0}\), satisfying the initial condition \(\tilde{u}(0)=u(T^{*})\). Moreover, utilizing the embeddings \(E_{\delta,m}(\mathbb{R})\hookrightarrow A(\delta)\hookrightarrow H^{\infty}( \mathbb{R})\), we can conclude that \(\tilde{u}\in C([0,\epsilon];H^{\infty}(\mathbb{R}))\), implying \(\tilde{u}(t)=u(T^{*}+t)\) for \(t\in[0,\epsilon]\). If we let \(s:=T^{*}+t\), then \(u(s)=\tilde{u}(s-T^{*}),\) for \(s\in[T^{*},T^{*}+\epsilon]\), that is, \[u\in C^{\omega}([T^{*},T^{*}+\epsilon];E_{\delta,m}(\mathbb{R}))\subset C^{ \omega}([T^{*},T^{*}+\epsilon];A(\delta)).\] Let \(T>0\) be a real number satisfying \(T^{*}-\epsilon<T<T^{*}\) so that the solution \(u\) belongs to \(C^{\omega}([0,T];A(\sigma(T)))\) for some \(\sigma(T)>0\). 
By defining \(\tilde{\sigma}=\min\{\delta,\sigma(T)\}\), then \[u\in C^{\omega}([0,T];A(\tilde{\sigma})),\quad\text{and}\quad u\in C^{\omega} ([T^{*}-\epsilon,T^{*}+\epsilon];A(\tilde{\sigma})),\] which consequently says that \(u\in C^{\omega}([0,T^{*}+\epsilon];A(\tilde{\sigma}))\) and \(T^{*}\) cannot be the supremum. As a result of the contradiction, \(T^{*}\) must be infinite and, for every \(T>0\), there exists \(r(T)>0\) such that \(u\in C^{\omega}([0,T];A(r(T)))\). * **Conclusion of the proof:** The proof of Theorem 1.1 is concluded by using a result proved by Barosctihi, Himonas and Petronilho in [2], see page 752, stating that the previous item suffices to prove that \(u\in C^{\omega}([0,\infty)\times\mathbb{R})\). ## Acknowledgements This work was supported by the Royal Society under a Newton International Fellowship (reference number 201625) and by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (process number 308884/2022-1). Finally, the author would like to thank Professor Igor Leite Freire for all the fruitful discussions on the topic.
2309.16296
Production properties of deuterons, helions and tritons via an analytical nucleon coalescence method in Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
We improve a nucleon coalescence model to include the coordinate-momentum correlation in nucleon joint distributions, and apply it to Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV to study production properties of deuterons ($d$), helions ($^3$He) and tritons ($t$). We give formulas of the coalescence factors $B_2$ and $B_3$, and naturally explain their behaviors as functions of the collision centrality and the transverse momentum per nucleon $p_T/A$. We reproduce the transverse momentum spectra, averaged transverse momenta and yield rapidity densities of $d$, $^3$He and $t$, and find the system effective radius obtained in the coalescence production of light nuclei behaves similarly to Hanbury Brown-Twiss interferometry radius. We particularly give expressions of yield ratios $d/p$, $^3$He$/d$, $t/p$, $^3$He$/p$, $d/p^{2}$, $^3$He$/p^3$, $t/^3$He and argue their nontrivial behaviors can be used to distinguish production mechanisms of light nuclei.
Rui-Qin Wang, Yan-Hao Li, Jun Song, Feng-Lan Shao
2023-09-28T09:46:47Z
http://arxiv.org/abs/2309.16296v1
Production properties of deuterons, helions and tritons via an analytical nucleon coalescence method in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV ###### Abstract We improve a nucleon coalescence model to include the coordinate-momentum correlation in nucleon joint distributions, and apply it to Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV to study production properties of deuterons (\(d\)), helions (\({}^{3}\)He) and tritons (\(t\)). We give formulas of the coalescence factors \(B_{2}\) and \(B_{3}\), and naturally explain their behaviors as functions of the collision centrality and the transverse momentum per nucleon \(p_{T}/A\). We reproduce the transverse momentum spectra, averaged transverse momenta and yield rapidity densities of \(d\), \({}^{3}\)He and \(t\), and find the system effective radius obtained in the coalescence production of light nuclei behaves similarly to Hanbury Brown-Twiss interferometry radius. We particularly give expressions of yield ratios \(d/p\), \({}^{3}\)He/\(d\), \(t/p\), \({}^{3}\)He/\(p\), \(d/p^{2}\), \({}^{3}\)He/\(p^{3}\), \(t/^{3}\)He and argue their nontrivial behaviors can be used to distinguish production mechanisms of light nuclei. pacs: 25.75.-q, 25.75.Dw, 27.10.+h ## I Introduction In ultra-relativistic heavy ion collisions, light nuclei such as deuterons (\(d\)), helions (\({}^{3}\)He) and tritons (\(t\)) are a special group of observables [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. They are composite clusters and their production mechanisms are still under debate so far. As most of them are formed at the late stage of the system evolution, light nuclei are considered as sensitive probes of the fireball freeze-out properties [1; 2; 3; 4; 5]. The study of light nuclei production can help understand many fundamental issues in relativistic heavy ion collision physics, e.g., the hadronization mechanism [6], the structure of the quantum chromodynamics phase diagram [7; 8; 9; 10; 11] and the search for diharyons and other molecular states [12; 13], etc. In recent decades, the production of light nuclei in ultra-relativistic heavy ion collisions has always attracted much attention both in experiment [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] and in theory [25; 26; 27; 28; 29; 30; 31]. The STAR experiment at the BNL Relativistic Heavy Ion Collider (RHIC) and the ALICE experiment at the CERN Large Hadron Collider (LHC) have collected a wealth of data on light nuclei production. These data exhibit some fascinating features [18; 19; 20; 21; 22; 23; 24]. In theory two production mechanisms, the thermal production mechanism [32; 33; 34; 35; 36; 37] and the coalescence mechanism [37; 28; 36; 38; 39; 40; 41; 42; 43; 44], have proved to be successful in describing light nuclei formation. In addition, transport scenario [45; 46; 47; 48; 49; 50; 51] is employed to study how light nuclei evolve and survive during the hadronic system evolution. The coalescence mechanism, in which light nuclei are assumed to be produced by the coalescence of the jacent nucleons in the phase space, possesses its unique characteristics. In order to see whether, if so, to what extent, these characteristics depend on the particular coalescence model used in obtaining these characteristics, we in our previous works [52; 53; 54] developed an analytic description for the production of different species of light nuclei in the coalescence picture with the assumption of the coordinate-momentum factorization. 
The obtained analytic formulas clearly show the relationships of light nuclei with primordial nucleons and effects of different factors on light nuclei production such as the whole hadronic system scale as well as the sizes of the formed light nuclei. In Refs. [52; 53], we applied the analytic coalescence model to Au-Au collisions at RHIC energies to successfully explain the transverse momentum spectra, yield rapidity densities, averaged transverse momenta and yield correlations of different light nuclei. We also applied it to pp, p-Pb and Pb-Pb collisions at LHC to study the behavior of the coalescence factor \(B_{A}\)[54], and found it can naturally explain the relatively weak \(p_{T}\) dependence of \(B_{A}\) in pp and p-Pb collisions. In Pb-Pb collisions it gave qualitative growth of \(B_{A}\) against \(p_{T}\), but growth extent was underestimated. It is urgently necessary to give quantitative explanations for \(B_{A}\) and further explore intrinsic properties of light nuclei production in heavy ion collisions with such high collision energy at the LHC. In this work, we extend the nucleon coalescence model to include the coordinate-momentum correlation originating possibly from the collective flows [55], the temperature gradients [56], etc., and apply it to Pb-Pb collisions at LHC to study the production of light nuclei. One main goal of this article is to bring to light the characteristics originating from the nucleon coalescence itself and to discriminate influences of different factors in heavy ion collisions with so high collision energy on light nuclei production. We study coalescence factors (\(B_{2}\) and \(B_{3}\)), transverse momentum (\(p_{T}\)) spectra, averaged transverse momenta (\(\langle p_{T}\rangle\)), yield rapidity densities (\(dN/dy\)) and yield ratios of different species of light nuclei. We find the nucleon coalescence model including the coordinate-momentum correlation can well describe the light nuclei production in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. We also find the system effective radius obtained in the coalescence production of light nuclei behaves similarly to Hanbury Brown-Twiss (HBT) interferometry radius. The paper is organized as follows. In Sec. II we give an introduction to the nucleon coalescence model. In Sec. III we study coalescence factors \(B_{2}\) and \(B_{3}\), and discuss their behaviors as functions of the collision centrality and the transverse momentum per nucleon. In Sec. IV, we study the \(p_{T}\) spectra, averaged transverse momenta, yield rapidity densities and yield ratios of \(d\), \({}^{3}\)He and \(t\). In Sec. V we give our summary. ## II The nucleon coalescence model In this section we extend the nucleon coalescence model in our previous works [52; 53; 54] to include the coordinate-momentum correlation in nucleon joint distributions. We present formulism of two nucleons coalescing into \(d\) and that of three nucleons coalescing into \({}^{3}\)He and \(t\). For \(t\), the deduction process is the same as that of \({}^{3}\)He and we do not repeat the display and only give the final formula. We start from a hadronic system produced at the final stage of the evolution of high energy heavy ion collision and suppose light nuclei are formed via the nucleon coalescence. 
The three-dimensional momentum distribution of the produced deuterons \(f_{d}(\mathbf{p})\) and that of relations \(f_{{}^{3}\text{He}}(\mathbf{p})\) are \[f_{d}(\mathbf{p}) = N_{pn}\int d\mathbf{x}_{1}d\mathbf{x}_{2}d\mathbf{p}_{1}d\mathbf{p}_{2}f_{pn}^{(n)}(\mathbf{x}_{1 },\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2}) \tag{1}\] \[\times\mathcal{R}_{d}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2 },\mathbf{p}),\] \[f_{{}^{3}\text{He}}(\mathbf{p}) = N_{ppn}\int d\mathbf{x}_{1}d\mathbf{x}_{2}d\mathbf{x}_{3}d\mathbf{p}_{1}d\mathbf{p}_{2 }d\mathbf{p}_{3}\] (2) \[\times f_{ppn}^{(n)}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3};\mathbf{p}_{1}, \mathbf{p}_{2},\mathbf{p}_{3})\] \[\times\mathcal{R}_{{}^{3}\text{He}}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_ {3};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}).\] Here \(f_{pn}^{(n)}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2})\) is the normalized joint coordinate-momentum distribution of proton-neutron pairs and \(f_{ppn}^{(n)}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3})\) is that of three-nucleon clusters. \(N_{pn}=N_{p}N_{n}\) is the number of all possible \(pn\)-pairs and \(N_{ppn}=N_{p}(N_{p}-1)N_{n}\) is that of all possible \(ppn\)-clusters. \(N_{p}\) is the proton number and \(N_{n}\) is the neutron number in the considered hadronic system. \(\mathcal{R}_{d}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p})\) and \(\mathcal{R}_{{}^{3}\text{He}}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3};\mathbf{p}_{1},\mathbf{p }_{2},\mathbf{p}_{3},\mathbf{p})\) are kernel functions. Here and from now on we use boldface type to distinguish three-dimensional vectors. Taking into account constraints from the momentum conservation and intrinsic quantum numbers of light nuclei, we rewrite kernel functions in the following forms as in Refs. [52; 53; 54; 57] \[\mathcal{R}_{d}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p })=g_{d}\mathcal{R}_{d}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2}) \delta(\sum_{i=1}^{2}\mathbf{p}_{i}-\mathbf{p}), \tag{3}\] \[\mathcal{R}_{{}^{3}\text{He}}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}; \mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p})=g_{{}^{3}\text{He}}\] \[\times\mathcal{R}_{{}^{3}\text{He}}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2}, \mathbf{x}_{3};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3})\delta(\sum_{i=1}^{3}\mathbf{p}_{i}- \mathbf{p}), \tag{4}\] where the spin degeneracy factors \(g_{d}=3/4\) and \(g_{{}^{3}\text{He}}=1/4\). The Dirac \(\delta\) functions guarantee the momentum conservation in the coalescence process. The remaining \(\mathcal{R}_{d}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p}_{2})\) and \(\mathcal{R}_{{}^{3}\text{He}}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3};\mathbf{p}_{1 },\mathbf{p}_{2},\mathbf{p}_{3})\) can be solved from the Wigner transformation as adopting the wave function of a spherical harmonic oscillator as in Refs. [58; 59]. 
They are as follows \[\mathcal{R}_{d}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2};\mathbf{p}_{1},\mathbf{p} _{2})=8e^{-\frac{(\mathbf{x}_{1}^{\prime}-\mathbf{x}_{2}^{\prime})^{2}}{\sigma_{d}^{2} }}e^{-\frac{\mathbf{x}_{1}^{\prime}(\mathbf{x}_{1}^{\prime}+\mathbf{x}_{2}^{\prime})^{2}}{ \sigma_{d}^{2}\sigma_{d}^{2}}}, \tag{5}\] \[\mathcal{R}_{{}^{3}\text{He}}^{(x,p)}(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x} _{3};\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3})=82^{-\frac{(\mathbf{x}_{1}^{\prime}-\mathbf{x}_ {2}^{\prime})^{2}}{\sigma_{d}^{2}\sigma_{d}^{2}}}e^{-\frac{(\mathbf{x}_{1}^{\prime} +\mathbf{x}_{2}^{\prime}-\mathbf{x}_{2}^{\prime})^{2}}{\sigma_{d}^{3}\sigma_{\text{He} }}}\] \[\times e^{-\frac{\sigma_{{}^{3}\text{He}}^{(x_{1}^{\prime}-\mathbf{x}_ {2}^{\prime})^{2}}}{2\sigma_{d}^{2}\sigma_{d}^{2}}}e^{-\frac{\mathbf{x}_{1}^{ \prime}-\mathbf{x}_{2}^{\prime}(\mathbf{x}_{1}^{\prime}+\mathbf{x}_{2}^{\prime})^{2}}{ \sigma_{d}^{3}\sigma_{\text{He}}^{2}}}. \tag{6}\] The superscript \({}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{\prime}{}^{ \prime}{}^{\prime} have \[f_{\rm He}(\mathbf{p})=8^{2}g_{\rm 3He}N_{ppn}\left(\frac{\hbar^{2}c^{2} \pi}{\sqrt{3}\sigma_{\rm 3He}^{2}}\right)^{3}\gamma^{2}\times\] \[\int d\mathbf{x}_{1}d\mathbf{x}_{2}d\mathbf{x}_{3}f_{ppn}^{(n)}(\mathbf{x}_{1}, \mathbf{x}_{2},\mathbf{x}_{3};\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3})e^{- \frac{\sigma_{1}^{2}-\sigma_{2}^{2}}{\sigma_{3He}^{2}}}e^{-\frac{\sigma_{1}^{ 2}+\sigma_{2}^{2}-\sigma_{3He}^{2}}{\sigma_{3He}^{2}}}. \tag{10}\] Changing coordinate integral variables in Eq. (9) to be \(\mathbf{X}=\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\) and \(\mathbf{r}=\mathbf{x}_{1}-\mathbf{x}_{2}\), and those in Eq. 
(10) to be \(\mathbf{Y}=(\mathbf{x}_{1}+\mathbf{x}_{2}+\mathbf{x}_{3})/\sqrt{3}\), \(\mathbf{r}_{1}=(\mathbf{x}_{1}-\mathbf{x}_{2})/\sqrt{2}\) and \(\mathbf{r}_{2}=(\mathbf{x}_{1}+\mathbf{x}_{2}-2\mathbf{x}_{3})/\sqrt{6}\), we have \[f_{d}(\mathbf{p})=8g_{d}N_{ppn}\left(\frac{\hbar c\sqrt{\pi}}{ \sigma_{d}}\right)^{3}\gamma\int dXd\mathbf{r}f_{ppn}^{(n)}(\mathbf{X},\mathbf{r},\frac{ \mathbf{p}}{2},\frac{\mathbf{p}}{2})e^{-\frac{\sigma_{2}^{2}}{\sigma_{3He}^{2}}}, \tag{11}\] \[f_{\rm He}(\mathbf{p})=8^{2}g_{\rm 3He}N_{ppn}\left(\frac{\hbar^{2}c^{2} \pi}{\sqrt{3}\sigma_{\rm 3He}^{2}}\right)^{3}\gamma^{2}\] \[\times\int d\mathbf{Y}d\mathbf{r}_{1}d\mathbf{r}_{2}f_{ppn}^{(n)}(\mathbf{Y},\bm {r}_{1},\mathbf{r}_{2};\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3})e^{- \frac{\sigma_{2}^{2}}{\sigma_{3He}^{2}}}e^{-\frac{\sigma_{2}^{2}}{\sigma_{3He }^{2}}}. \tag{12}\] Considering the nucleon strong interaction and the nucleon coalescence are local, we neglect the effect of collective motion on the center of mass coordinate and assume it is factorized in nucleon joint distributions, i.e., \[f_{pn}^{(n)}(\mathbf{X},\mathbf{r},\frac{\mathbf{p}}{2},\frac{\mathbf{p}}{2})=f_{ pn}^{(n)}(\mathbf{X})f_{pn}^{(n)}(\mathbf{r},\frac{\mathbf{p}}{2},\frac{\mathbf{p}}{2}), \tag{13}\] \[f_{ppn}^{(n)}(\mathbf{Y},\mathbf{r}_{1},\mathbf{r}_{2};\frac{\mathbf{p}}{3}, \frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3})=f_{ppn}^{(n)}(\mathbf{Y})f_{ppn}^{(n)}(\mathbf{r}_{ 1},\mathbf{r}_{2};\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3}). \tag{14}\] Then we have \[f_{d}(\mathbf{p})=8g_{d}N_{ppn}\left(\frac{\hbar c\sqrt{\pi}}{ \sigma_{d}}\right)^{3}\gamma\int d\mathbf{r}f_{pn}^{(n)}(\mathbf{r},\frac{\mathbf{p}}{2}, \frac{\mathbf{p}}{2})e^{-\frac{\sigma_{2}^{2}}{\sigma_{2}^{2}}}, \tag{15}\] \[f_{\rm He}(\mathbf{p})=8^{2}g_{\rm 3He}N_{ppn}\left(\frac{\hbar^{2}c^{2} \pi}{\sqrt{3}\sigma_{\rm 3He}^{2}}\right)^{3}\gamma^{2}\] \[\times\int d\mathbf{r}_{1}d\mathbf{r}_{2}f_{ppn}^{(n)}(\mathbf{r}_{1},\mathbf{r} _{2};\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3})e^{-\frac{\sigma_{2} ^{2}}{\sigma_{3He}^{2}}}e^{-\frac{\sigma_{2}^{2}}{\sigma_{3He}^{2}}}. \tag{16}\] We adopt the frequently-used gaussian form for the relative coordinate distribution as in such as Ref. [61], i.e., \[f_{pn}^{(n)}(\mathbf{r};\frac{\mathbf{p}}{2},\frac{\mathbf{p}}{2})=\frac{1}{ \left[\pi CR_{f}^{2}(\mathbf{p})\right]^{3/2}}e^{-\frac{\mathbf{r}^{2}}{\sigma_{3}^{2} \sigma_{3}^{2}}}f_{pn}^{(n)}(\mathbf{p},\frac{\mathbf{p}}{2},\frac{\mathbf{p}}{2}), \tag{17}\] \[f_{ppn}^{(n)}(\mathbf{r}_{1},\mathbf{r}_{2};\frac{\mathbf{p}}{3},\frac{\mathbf{p} }{3},\frac{\mathbf{p}}{3})=\frac{1}{\left[\pi^{2}C_{1}C_{2}R_{f}^{4}(\mathbf{p}) \right]^{3/2}}-\frac{\frac{\sigma_{1}^{2}}{C_{1}R_{f}^{2}\sigma_{3}^{2}}}{ \times e^{-\frac{\sigma_{2}^{2}}{C_{2}R_{f}^{2}\sigma_{3}^{2}}}}\] \[\times e^{-\frac{\sigma_{2}^{2}}{C_{2}R_{f}^{2}\sigma_{3}^{2}}}f_{ ppn}^{(n)}(\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3}). \tag{18}\] Here \(R_{f}(\mathbf{p})\) is the effective radius of the source system at the light nuclei freeze-out, and it generally depends on the momentum of the light nuclei [62; 63; 64]. Considering relations between \(\mathbf{r}\), \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) with \(\mathbf{x}_{1}\), \(\mathbf{x}_{2}\) and \(\mathbf{x}_{3}\), \(C_{1}\) equals to \(C/2\) and \(C_{2}\) equals to \(2C/3\). So there is only one distribution width parameter \(C\) to be determined, and it is set to be 4 the same as that in Ref. [61]. 
With instantaneous coalescence in the rest frame of \(pn\)-pair or \(ppn\)-cluster, i.e., \(\Delta t^{\prime}=0\), we get the Lorentz transformation \[\mathbf{r}=\mathbf{r}^{\prime}+(\gamma-1)\frac{\mathbf{r}^{\prime}\cdot\mathbf{ \beta}}{\beta^{2}}\mathbf{\beta}, \tag{19}\] where \(\mathbf{\beta}\) is the three-dimensional velocity vector of the center-of-mass frame of \(pn\)-pair or \(ppn\)-cluster in the laboratory frame. Substituting Eqs. (17) and (18) into Eqs. (15) and (16) and using Eq. (19) to integrate from relative coordinate variables, we can obtain \[f_{d}(\mathbf{p})=\frac{8g_{d}(\sqrt{\pi}hc)^{3}\gamma}{\left[CR_{f}^{ 2}(\mathbf{p})+\sigma_{d}^{2}\right]\sqrt{C[R_{f}(\mathbf{p})/\gamma]^{2}+\sigma_{d}^{2}}}f _{pn}(\frac{\mathbf{p}}{2},\frac{\mathbf{p}}{2}), \tag{20}\] \[f_{\rm He}(\mathbf{p})=\frac{8^{2}g_{\rm 3He}(\pi\hbar^{2}c^{2})^{3} \gamma^{2}}{3\sqrt{\left[\frac{c}{2}R_{f}^{2}(\mathbf{p})+\sigma_{\rm 3He}^{2}\right]\sqrt{\frac{c}{2}[R_{f}(\mathbf{p})/\gamma]^{2}+\sigma_{\rm 3He}^{2}}}}\] \[\times\frac{\left[\frac{2C}{3}R_{f}^{2}(\mathbf{p})+\sigma_{\rm 3He}^{2}\right]\sqrt{\frac{2C}{3}[R_{f}(\mathbf{p})/\gamma]^{2}+\sigma_{\rm 3He}^{2}}}{ \times f_{ppn}(\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3},\frac{\mathbf{p}}{3}). \tag{21}\] Ignoring correlations between protons and neutrons, we have the three-dimensional momentum distributions of light nuclei as \[f_{d}(\mathbf{p})=\frac{8g_{d}(\sqrt{\pi}hc)^{3}\gamma}{\left[CR_{f}^{2}(\mathbf{p})+ \sigma_{d}^{2}\right]\sqrt{C[R_{f}(\mathbf{p})/\gamma]^{2}+\sigma_{d}^{2}}}f_{p}( \frac{\mathbf{p}}{2})f_{n}(\frac{\mathbf{p}}{2}),\] (22) \[f_{\rm He}(\mathbf{p})=\frac{8^{2}g_{\rm He}(\pi\hbar^{2}c^{2})^{3} \gamma^{2}}{3\sqrt{\left[\frac{c}{2}R_{f}^{2}(\mathbf{p})+\sigma_{\rm 3He}^{2}\right]\sqrt{\frac \[\times\frac{1}{\sqrt{\frac{C}{2}[R_{f}(p_{T})/\gamma]^{2}+\sigma_{{}_{ \rm He}}^{2}}\sqrt{\frac{2C}{3}[R_{f}(p_{T})/\gamma]^{2}+\sigma_{{}_{\rm He}}^{ 2}}}\] \[\times f_{p}^{(inv)}(\frac{p_{T}}{3})f_{p}^{(inv)}(\frac{p_{T}}{3})f_{ n}^{(inv)}(\frac{p_{T}}{3}). \tag{25}\] Here \(m_{d}\) is the mass of the \(d\) and \(m_{{}^{2}{\rm He}}\) is that of the \({}^{3}\)He. For tritons, we similarly have \[f_{t}^{(inv)}(p_{T})=\frac{192\sqrt{3}g_{t}(\pi\hbar^{2}c^{2})^{ 3}}{m_{t}^{2}\left[\frac{C}{2}R_{f}^{2}(p_{T})+\sigma_{t}^{2}\right]\left[\frac {2C}{3}R_{f}^{2}(p_{T})+\sigma_{t}^{2}\right]}\] \[\times\frac{1}{\sqrt{\frac{C}{2}[R_{f}(p_{T})/\gamma]^{2}+\sigma_ {t}^{2}}\sqrt{\frac{2C}{3}[R_{f}(p_{T})/\gamma]^{2}+\sigma_{t}^{2}}}\] \[\times f_{p}^{(inv)}(\frac{p_{T}^{T}}{3})f_{n}^{(inv)}(\frac{p_{T }}{3})f_{n}^{(inv)}(\frac{p_{T}}{3}), \tag{26}\] where \(g_{t}=1/4\) and \(\sigma_{t}=R_{t}=1.7591\) fm [60]. \(m_{t}\) is the mass of the \(t\). Eqs. (24-26) show relationships of light nuclei with primordial nucleons in momentum space in the laboratory frame. They can be used to calculate coalescence factors, yield rapidity densities and \(p_{T}\) spectra of light nuclei in high energy collisions, especially in heavy ion collisions at the LHC where the coupling effect of coordinate and momentum may be intenser due to stronger collective motions and larger temperature gradients. We will show their applications in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV in the following sections. 
## III Results of coalescence factors The coalescence factor \(B_{A}\) is defined as \[B_{A}=f_{d,{}^{3}{\rm He},f}^{(inv)}(p_{T})\left[\left(f_{p}^{(inv)}(\frac{p_ {T}}{A})\right)^{2}\left(f_{n}^{(inv)}(\frac{p_{T}}{A})\right)^{A-Z}\right], \tag{27}\] where \(A\) is the mass number and \(Z\) is the charge of the light nuclei. \(B_{A}\) is a key link between the formed light nuclei and the primordial nucleons, and folds important kinetic and dynamical information of the coalescence process. Intuitively unfolding \(B_{A}\) and a quantitative explanation for its centrality and \(p_{T}\)-dependent behaviors in heavy ion collisions at the LHC are necessary. Substituting Eqs. (24-26) into Eq. (27), we respectively have for \(d\), \({}^{3}\)He and \(t\) \[B_{2}(p_{T})=\frac{32g_{d}(\sqrt{\pi}\hbar c)^{3}}{m_{d}\left[ CR_{f}^{2}(p_{T})+\sigma_{d}^{2}\right]\sqrt{C[R_{f}(p_{T})/\gamma]^{2}+\sigma_{d}^{2}}}, \tag{28}\] \[B_{3}(p_{T})=\frac{192\sqrt{3}g_{t}(\pi\hbar^{2}c^{2})^{3}}{m_{{} ^{2}{\rm He}}^{2}\left[\frac{C}{2}R_{f}^{2}(p_{T})+\sigma_{{}_{\rm He}}^{2} \right]\left[\frac{2C}{3}R_{f}^{2}(p_{T})+\sigma_{{}_{\rm He}}^{2}\right]}\] \[\times\frac{1}{\sqrt{\frac{C}{2}[R_{f}(p_{T})/\gamma]^{2}+\sigma_ {{}_{\rm He}}^{2}}\sqrt{\frac{2C}{3}[R_{f}(p_{T})/\gamma]^{2}+\sigma_{{}_{\rm He }}^{2}}},\] (29) \[B_{3}(p_{T})=\frac{192\sqrt{3}g_{t}(\pi\hbar^{2}c^{2})^{3}}{m_{t }^{2}\left[\frac{C}{3}R_{f}^{2}(p_{T})+\sigma_{t}^{2}\right]\left[\frac{2C}{ 3}R_{f}^{2}(p_{T})+\sigma_{t}^{2}\right]}\] \[\times\frac{1}{\sqrt{\frac{C}{2}[R_{f}(p_{T})/\gamma]^{2}+\sigma_ {t}^{2}}\sqrt{\frac{2C}{3}[R_{f}(p_{T})/\gamma]^{2}+\sigma_{t}^{2}}}. \tag{30}\] The above equations clearly show that \(B_{2}\) and \(B_{3}\) depend on the masses \(m_{d,{}^{3}{\rm He},f}\), the spin degeneracy factors \(g_{d,{}^{3}{\rm He},f}\) and the sizes of light nuclei via \(\sigma_{d,{}^{3}{\rm He},f}\). The Lorentz contraction factor \(\gamma\), resulting from setting nucleon coalescence criteria in the rest frame of the nucleon pair or three-nucleon cluster rather than in the laboratory frame, affects the \(p_{T}\)-dependent behaviors of \(B_{2}\) and \(B_{3}\). This has been studied in Ref. [54]. The other influencing factor for \(p_{T}\)-dependent behaviors of \(B_{2}\) and \(B_{3}\) is the \(R_{f}(p_{T})\), which is also closely related with centrality-dependent behaviors of \(B_{2}\) and \(B_{3}\). To further compute \(B_{2}\) and \(B_{3}\), the specific form of \(R_{f}(p_{T})\) is necessary. In heavy ion collisions at CERN-SPS energies, it has been found that \(R_{f}(p_{T})\) adopted as the femtoscopic radius can describe the \(d\) production well [42]. If this still holds at LHC energies, the dependence of \(R_{f}(p_{T})\) on centrality and \(p_{T}\) should factorize into a linear dependence on the cube root of the pseudorapidity density of charged particles (\(dN_{ch}/d\eta)^{1/3}\) and a power-law dependence on the transverse mass of the formed light nucleus \(m_{T}\)[64]. So we get \[R_{f}(p_{T})=a*\left(\frac{dN_{ch}}{d\eta}\right)^{1/3}*\left(\sqrt{p_{T}^{2}+m _{d,{}^{3}{\rm He},f}^{2}}\right)^{b}, \tag{31}\] where \(a\) and \(b\) are free parameters. Their values in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV are (0.67,-0.25) for \(d\) and Figure 1: The \(B_{2}\) of \(d\) as a function of \(p_{T}/2\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [18; 65] and different solid lines are theoretical results. 
Different dotted lines are results with the coordinate-momentum factorization assumption in Ref. [54]. (0.60,-0.25) for \({}^{3}\)He and \(t\), which are determined by reproducing the data of the \(p_{T}\) spectra of \(d\) in 0-10% centrality and that of \({}^{3}\)He in 0-20% centrality. Here \(b\) is set to be centrality independent, which is consistent with that in hydrodynamics [66] and that in STAR measurements of two-pion interferometry in central and simi-central Au-Au collisions [67]. \(a\) is also centrality independent. Precise experimental measurements of HBT femtoscopic radius for nucleons in the future can crosscheck the scaling behaviors of \(R_{f}\) as functions of \(dN_{ch}/d\eta\) and \(p_{T}\). We use the data of \(dN_{ch}/d\eta\) in Ref. [68] to get \(R_{f}(p_{T})\), and then compute \(B_{2}\) and \(B_{3}\). Fig. 1 shows \(B_{2}\) of \(d\) as a function of the transverse momentum scaled by the mass number \(p_{T}/2\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [18; 65] and different solid lines are our theoretical results of the current nucleon coalescence model. Different dotted lines are results from Ref. [54] where the assumption of the coordinate-momentum factorization was adopted. From Fig. 1, one can see from central to peripheral collisions, \(B_{2}\) increases. This is due to the decreasing scale of the hadronic system, which makes it easier for a \(pn\)-pair to recombine into a deuteron. For the same centrality, \(B_{2}\) increase as a function of \(p_{T}/2\). This increase behavior results on one hand from the Lorentz contraction factor \(\gamma\)[54]. On the other hand, it results from the decreasing \(R_{f}\) with increasing momentum. The rising behavior of the experimental data as a function of \(p_{T}/2\) from central to peripheral collisions can be quantitatively described by the current nucleon coalescence model. Fig. 2 (a) shows \(B_{3}\) of \({}^{3}\)He as a function of \(p_{T}/3\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [18] and different solid lines are our theoretical results. Different dotted lines are results from Ref. [54] where the assumption of the coordinate-momentum factorization was adopted. Similarly as \(B_{2}\), experimental data of \(B_{3}\) for \({}^{3}\)He also exhibits a rising trend as a function of \(p_{T}/3\), which is reproduced well by the current nucleon coalescence model from central to peripheral collisions. Predictions of \(B_{3}\) for \(t\) in Fig. 2 (b) show similar trend as that of \({}^{3}\)He, which can be tested by future experimental measurements. Compared the current results denoted by solid lines with those in Ref. [54] denoted by dotted lines in Fig. 1 and Fig. 2 (a), one can see the improved nucleon coalescence model can better describe the slopes of \(B_{2}\) and \(B_{3}\). At the end of this section, we want to emphasize that the centrality and momentum dependent behaviors of \(B_{2}\) and \(B_{3}\) in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV are simultaneously explained by the improved nucleon coalescence model. The influencing factors of \(B_{2}\) and \(B_{3}\) are explicitly unfolded, as shown in Eqs. (28-30). Some other models based on transport approach are also used to study behaviors of \(B_{4}\) in heavy ion collisions at the high LHC energies [47; 69; 70; 71]. 
All the results from these different models can help cross understand production properties of light nuclei from different aspects. Figure 2: The \(B_{3}\) of (a) \({}^{3}\)He and (b) \(t\) as a function of \(p_{T}/3\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [18] and different solid, dashed and dash-dotted lines are our theoretical results. Different dotted lines in panel (a) are results of \({}^{3}\)He with the coordinate-momentum factorization assumption in Ref. [54]. Results of \(p_{T}\) spectra In this section, we use the nucleon coalescence model to study the \(p_{T}\) spectra of light nuclei in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. We first introduce the nucleon \(p_{T}\) spectra. We then compute the \(p_{T}\) spectra of \(d\), \({}^{3}\)He and \(t\). We finally calculate the averaged transverse momenta \(\langle p_{T}\rangle\), the yield rapidity densities \(dN/dy\) and yield ratios of different light nuclei. ### The \(p_{T}\) spectra of primordial nucleons The \(p_{T}\) spectra of primordial nucleons are necessary inputs for computing \(p_{T}\) distributions of light nuclei in the nucleon coalescence model. We here use the blast-wave model to get \(p_{T}\) distribution functions of primordial protons by fitting the experimental data of prompt (anti)protons in Ref. [68]. The blast-wave function [72] is given as \[\frac{d^{2}N}{2\pi p_{T}dp_{T}dy}\propto\int_{0}^{R}rdrm_{T}I_{0} \left(\frac{p_{T}sinh\rho}{T_{kin}}\right)K_{1}\left(\frac{m_{T}cosh\rho}{T_{ kin}}\right), \tag{32}\] where \(r\) is the radial distance in the transverse plane and \(R\) is the radius of the fireball. \(m_{T}\) is the transverse mass of the proton. \(I_{0}\) and \(K_{1}\) are the modified Bessel functions, and the velocity profile \(\rho=tanh^{-1}[\beta_{s}(\frac{r}{R})^{n}]\). The surface velocity \(\beta_{s}\), the kinetic freeze-out temperature \(T_{kin}\) and \(n\) are fitting parameters. Fig. 3 shows the \(p_{T}\) spectra of prompt protons plus antiprotons in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [68], and different lines are the results of the blast-wave model. The \(p_{T}\) spectra in different centralities are scaled by different factors for clarity as shown in the figure. For the primordial neutron \(p_{T}\) spectra, we adopt the same as those of primordial protons as we focus on light nuclei production at midrapidity at so high LHC energy that the isospin symmetry is well satisfied. We in the following use these nucleon results from the blast-wave model to compute the productions of different light nuclei. ### The \(p_{T}\) spectra of light nuclei With Eq. (24), we first calculate the \(p_{T}\) spectra of deuterons in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV in \(0-10\%\), \(10-20\%\), \(20-40\%\), \(40-60\%\) and \(60-80\%\) centralities. Different lines scaled by different factors for clarity in Fig. 4 are our theoretical results. Symbols with error bars are experimental data from the ALICE collaboration [18]. From Fig. 4, one can see the \(p+n\) coalescence can well reproduce the available data from central to peripheral Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. We then study the \(p_{T}\) spectra of \({}^{3}\)He and \(t\) in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV in \(0-20\%\) and \(20-80\%\) centralities. Different lines in Fig. 
### The \(p_{T}\) spectra of light nuclei

With Eq. (24), we first calculate the \(p_{T}\) spectra of deuterons in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV in 0-10%, 10-20%, 20-40%, 40-60% and 60-80% centralities. Different lines in Fig. 4, scaled by different factors for clarity, are our theoretical results. Symbols with error bars are experimental data from the ALICE collaboration [18]. From Fig. 4, one can see that \(p+n\) coalescence can well reproduce the available data from central to peripheral Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV.

We then study the \(p_{T}\) spectra of \({}^{3}\)He and \(t\) in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV in 0-20% and 20-80% centralities. Different lines in Fig. 5 (a) are our theoretical results for \({}^{3}\)He, which agree with the available data denoted by filled symbols [18] within experimental uncertainties. In the low \(p_{T}<2\) GeV/c region where the data are absent, our theoretical results show different trends in different centralities: a slight increase in 0-20% centrality but a decrease in 20-80% centrality. This difference is caused by the competition between the \(p_{T}\) distributions of nucleons and \(R_{f}(p_{T})\) in our model. With increasing \(p_{T}\), the decreasing nucleon \(p_{T}\) distributions suppress \({}^{3}\)He production while the decreasing \(R_{f}(p_{T})\) enhances it. In central 0-20% collisions, the nucleon \(p_{T}\) distributions decrease very weakly or stay nearly invariant for \(p_{T}<0.6\) GeV/c, so the decreasing \(R_{f}(p_{T})\) as a function of \(p_{T}\) makes the \(p_{T}\) spectra of \({}^{3}\)He increase for \(p_{T}<2\) GeV/c. In 20-80% centrality, although the decreasing \(R_{f}(p_{T})\) still pushes the \(p_{T}\) spectra of \({}^{3}\)He upward as a function of \(p_{T}\), the clearly decreasing \(p_{T}\) distributions of nucleons for \(p_{T}<0.6\) GeV/c dominate, producing the decreasing behavior of the \(p_{T}\) spectra of \({}^{3}\)He. Future experimental measurements in the low \(p_{T}\) region can test the pattern of \(R_{f}(p_{T})\) and the coalescence production mechanism for \({}^{3}\)He. The dashed line and dash-dotted line in Fig. 5 (b) are predictions for \(t\) in the 0-20% and 20-80% centralities, respectively.

Figure 3: The \(p_{T}\) spectra of prompt protons plus antiprotons in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols with error bars are experimental data [68], and different lines are the results of the blast-wave model.

Figure 4: The \(p_{T}\) spectra of deuterons in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols are experimental data [18] and different lines are the theoretical results.

### Averaged transverse momenta and yield rapidity densities of light nuclei

We here study the averaged transverse momenta \(\langle p_{T}\rangle\) and yield rapidity densities \(dN/dy\) of \(d\), \({}^{3}\)He and \(t\). Our theoretical results are put in the fourth and sixth columns of Table 1. Experimental data in the third and fifth columns are from Ref. [18]. Theoretical results for \(d\) and \({}^{3}\)He are consistent with the corresponding data within the experimental uncertainties. Predictions for \(t\) are provided for future experimental measurements. A clear decreasing trend for both \(\langle p_{T}\rangle\) and \(dN/dy\) from central to peripheral collisions is observed. This is because in more central collisions more energy is deposited in the midrapidity region and the collective evolution lasts longer.
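For reference, both Table 1 quantities follow from a \(p_{T}\) spectrum by simple numerical integration: \(dN/dy\) is the \(p_{T}\) integral of \(d^{2}N/(dp_{T}dy)\), and \(\langle p_{T}\rangle\) is its first moment. The toy \(m_{T}\)-exponential deuteron spectrum below is an assumption chosen only to illustrate the arithmetic.

```python
import numpy as np

pT = np.linspace(0.0, 6.0, 601)
m_d = 1.876                                        # deuteron mass in GeV
f = pT * np.exp(-np.sqrt(pT**2 + m_d**2) / 0.35)   # toy d^2N/(dpT dy)

dN_dy = np.trapz(f, pT)                   # zeroth moment: yield density
mean_pT = np.trapz(pT * f, pT) / dN_dy    # first moment / zeroth moment
print(f"dN/dy = {dN_dy:.4f},  <pT> = {mean_pT:.3f} GeV/c")
```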
### Yield ratios of light nuclei

Yield ratios of light nuclei are characteristic probes of production mechanisms and contain intrinsic production correlations among different light nuclei. In this subsection, we study three groups of yield ratios. One is the two-particle ratios such as \(d/p\), \({}^{3}\)He/\(d\), \(t/p\) and \({}^{3}\)He/\(p\). The second group includes \(d/p^{2}\) and \({}^{3}\)He/\(p^{3}\), which represent the probability of any nucleon pair coalescing into a \(d\) and that of any \(ppn\)-cluster coalescing into a \({}^{3}\)He. The last is \(t/^{3}\)He, which exhibits interesting behaviors as functions of \(p_{T}\) and the collision centrality. From Eqs. (24-26) we approximately have the \(p_{T}\)-integrated yield ratios

\[\frac{d}{p}\propto\frac{N_{p}}{\langle R_{f}\rangle^{3}\left(C+\frac{\sigma_{d}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{C}{\langle\gamma\rangle^{2}}+\frac{\sigma_{d}^{2}}{\langle R_{f}\rangle^{2}}}}, \tag{33}\]

\[\frac{{}^{3}\text{He}}{d}\propto\frac{N_{p}\left(C+\frac{\sigma_{d}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{C}{\langle\gamma\rangle^{2}}+\frac{\sigma_{d}^{2}}{\langle R_{f}\rangle^{2}}}}{\langle R_{f}\rangle^{3}\left(\frac{C}{2}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{C}{2\langle\gamma\rangle^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}}\left(\frac{2C}{3}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{2C}{3\langle\gamma\rangle^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}}}\]
\[\approx\frac{2^{3/2}N_{p}}{\langle R_{f}\rangle^{3}\left(\frac{2C}{3}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{2C}{3\langle\gamma\rangle^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}}}\times\left\{1+\Delta\epsilon^{2}\left[\frac{1}{1+\frac{C\langle R_{f}\rangle^{2}}{(\sqrt{2}\sigma_{{}^{3}\text{He}})^{2}}}+\frac{1/2}{1+\frac{C\langle R_{f}\rangle^{2}/\langle\gamma\rangle^{2}}{(\sqrt{2}\sigma_{{}^{3}\text{He}})^{2}}}\right]\right\}, \tag{34}\]

\[\frac{t}{p}\propto\frac{N_{p}^{2}}{\langle R_{f}\rangle^{6}\left(\frac{C}{2}+\frac{\sigma_{t}^{2}}{\langle R_{f}\rangle^{2}}\right)\left(\frac{2C}{3}+\frac{\sigma_{t}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{C}{2\langle\gamma\rangle^{2}}+\frac{\sigma_{t}^{2}}{\langle R_{f}\rangle^{2}}}\sqrt{\frac{2C}{3\langle\gamma\rangle^{2}}+\frac{\sigma_{t}^{2}}{\langle R_{f}\rangle^{2}}}}, \tag{35}\]

\[\frac{{}^{3}\text{He}}{p}\propto\frac{N_{p}^{2}}{\langle R_{f}\rangle^{6}\left(\frac{C}{2}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}\right)\left(\frac{2C}{3}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}\right)\sqrt{\frac{C}{2\langle\gamma\rangle^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}}\sqrt{\frac{2C}{3\langle\gamma\rangle^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{\langle R_{f}\rangle^{2}}}}. \tag{36}\]

Figure 5: The \(p_{T}\) spectra of (a) \({}^{3}\)He and (b) \(t\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Symbols are experimental data [18] and different lines are the theoretical results.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{\(\langle p_{T}\rangle\)} & \multicolumn{2}{c}{\(dN/dy\)} \\ \cline{3-6} & Centrality & Data & Theory & Data & Theory \\ \hline \(d\) & 0-10\% & \(2.12\pm 0.00\pm 0.09\) & 2.19 & \((9.82\pm 0.04\pm 1.58)\times 10^{-2}\) & \(11.38\times 10^{-2}\) \\ & 10-20\% & \(2.07\pm 0.01\pm 0.10\) & 2.12 & \((7.60\pm 0.04\pm 1.25)\times 10^{-2}\) & \(7.55\times 10^{-2}\) \\ & 20-40\% & \(1.92\pm 0.00\pm 0.11\) & 1.95 & \((4.76\pm 0.02\pm 0.82)\times 10^{-2}\) & \(4.28\times 10^{-2}\) \\ & 40-60\% & \(1.63\pm 0.01\pm 0.09\) & 1.62 & \((1.90\pm 0.01\pm 0.41)\times 10^{-2}\) & \(1.71\times 10^{-2}\) \\ & 60-80\% & \(1.29\pm 0.01\pm 0.14\) & 1.28 & \((0.51\pm 0.01\pm 0.14)\times 10^{-2}\) & \(0.42\times 10^{-2}\) \\ \({}^{3}\)He & 0-20\% & \(2.83\pm 0.05\pm 0.45\) & 2.95 & \((2.76\pm 0.09\pm 0.62)\times 10^{-4}\) & \(2.60\times 10^{-4}\) \\ & 20-80\% & \(2.65\pm 0.06\pm 0.45\) & 2.18 & \((5.09\pm 0.24\pm 1.36)\times 10^{-5}\) & \(5.14\times 10^{-5}\) \\ \hline \(t\) & 0-20\% & \(---\) & 2.97 & \(---\) & \(2.77\times 10^{-4}\) \\ & 20-80\% & \(---\) & 2.20 & \(---\) & \(5.84\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Averaged transverse momenta \(\langle p_{T}\rangle\) and yield rapidity densities \(dN/dy\) of \(d\), \({}^{3}\)He and \(t\) in different centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Experimental data in the third and fifth columns are from Ref. [18]. Theoretical results are in the fourth and sixth columns.
Figure 6: Yield ratios (a) \(d/p\), (b) \({}^{3}\mathrm{He}/d\), (c) \(t/p\) and (d) \({}^{3}\mathrm{He}/p\) as a function of \(dN_{ch}/d\eta\) in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV. Filled circles are experimental data [18] and open circles connected with solid lines to guide the eye are the theoretical results.

The angle brackets denote averaged values. Note that in the approximate equality in Eq. (34), we ignore the difference of \(\langle R_{f}\rangle\) and that of \(\langle\gamma\rangle\) for \(d\) and \({}^{3}\)He and ignore the higher-order terms in \(\Delta\epsilon^{2}\), where \(\Delta\epsilon^{2}=[\sigma_{d}^{2}-(\sqrt{2}\sigma_{{}^{3}\text{He}})^{2}]/(\sqrt{2}\sigma_{{}^{3}\text{He}})^{2}<1\). Eqs. (33-36) show that the centrality-dependent behaviors of these two-particle ratios are closely related to the nucleon density \(N_{p}/\langle R_{f}\rangle^{3}\), to \(\sigma_{d}/\langle R_{f}\rangle\) and to \(\langle\gamma\rangle\). From peripheral to central collisions, i.e., with increasing \(dN_{ch}/d\eta\), \(\langle R_{f}\rangle\) and \(\langle\gamma\rangle\) increase, and the suppressions of these ratios coming from the \(\sigma_{d,^{3}\mathrm{He},t}/\langle R_{f}\rangle\) terms are weakened.
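A small numerical look at the \(d/p\) proportionality of Eq. (33) (normalization omitted) shows how the \(\sigma\)-dependent suppression factors weaken as \(\langle R_{f}\rangle\) grows. The values of \(C\), \(\sigma_{d}\), \(\langle\gamma\rangle\) and \(N_{p}\) below are illustrative assumptions, and \(N_{p}\) is held fixed here even though it in fact grows with centrality.

```python
import numpy as np

C, sigma_d, gamma, N_p = 1.0, 3.2, 1.5, 30.0   # illustrative assumptions

def d_over_p(Rf):
    """Right-hand side of Eq. (33), up to the overall constant."""
    supp = (C + sigma_d**2 / Rf**2) * np.sqrt(C / gamma**2 + sigma_d**2 / Rf**2)
    return N_p / (Rf**3 * supp)

for Rf in (3.0, 5.0, 8.0):
    # suppression factor 'supp' approaches C*sqrt(C)/gamma as Rf grows
    print(f"R_f = {Rf}:  d/p up to a constant = {d_over_p(Rf):.5f}")
```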
The yield ratio \(t/^{3}\)He provides another test of the coalescence production of light nuclei. With Eqs. (25) and (26), we have its \(p_{T}\)-dependent function as

\[\frac{t}{{}^{3}\text{He}}(p_{T})=\frac{\left[\frac{C}{2}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{R_{f}^{2}(p_{T})}\right]\left[\frac{2C}{3}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{R_{f}^{2}(p_{T})}\right]}{\left[\frac{C}{2}+\frac{\sigma_{t}^{2}}{R_{f}^{2}(p_{T})}\right]\left[\frac{2C}{3}+\frac{\sigma_{t}^{2}}{R_{f}^{2}(p_{T})}\right]}\times\frac{\sqrt{\frac{C}{2\gamma^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{R_{f}^{2}(p_{T})}}\sqrt{\frac{2C}{3\gamma^{2}}+\frac{\sigma_{{}^{3}\text{He}}^{2}}{R_{f}^{2}(p_{T})}}}{\sqrt{\frac{C}{2\gamma^{2}}+\frac{\sigma_{t}^{2}}{R_{f}^{2}(p_{T})}}\sqrt{\frac{2C}{3\gamma^{2}}+\frac{\sigma_{t}^{2}}{R_{f}^{2}(p_{T})}}}\]
\[\approx 1+\frac{\Delta\sigma^{2}}{\sigma_{t}^{2}}\left\{\frac{1}{1+\frac{C}{2\sigma_{t}^{2}}R_{f}^{2}(p_{T})}+\frac{1}{1+\frac{2C}{3\sigma_{t}^{2}}R_{f}^{2}(p_{T})}+\frac{1/2}{1+\frac{C}{2\sigma_{t}^{2}\gamma^{2}}R_{f}^{2}(p_{T})}+\frac{1/2}{1+\frac{2C}{3\sigma_{t}^{2}\gamma^{2}}R_{f}^{2}(p_{T})}\right\}. \tag{39}\]

Here \(\Delta\sigma^{2}=\sigma_{{}^{3}\text{He}}^{2}-\sigma_{t}^{2}\) and we ignore the higher-order terms in the small quantity \(\Delta\sigma^{2}/\sigma_{t}^{2}\). Eq. (39) shows that \(t/^{3}\)He is always larger than one and approaches one when \(R_{f}\rightarrow\infty\). The smaller \(R_{f}\) is, the larger the deviation of \(t/^{3}\)He from one. With increasing \(p_{T}\), \(\gamma\) increases and \(R_{f}\) decreases, so \(t/^{3}\)He should increase. Fig. 8 (a) shows our predictions for \(t/^{3}\)He as a function of \(p_{T}\) in the 0-20% and 20-80% centralities in Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV, both of which exhibit increasing behaviors. The \(p_{T}\)-integrated yield ratio \(t/^{3}\)He as a function of \(dN_{ch}/d\eta\) is shown in Fig. 8 (b), and it has a decreasing trend. This is because a larger \(dN_{ch}/d\eta\), i.e., a larger \(R_{f}\), brings \(t/^{3}\)He down closer to one. Predictions of \(t/^{3}\)He in the nucleon coalescence model thus give non-flat behaviors as functions of \(p_{T}\) and \(dN_{ch}/d\eta\). This is due to the different relative production suppression between \({}^{3}\)He and \(t\) at different hadronic system scales. This feature is very different from that in the thermal model, where the expectation for this ratio is one [30]. It can therefore be used to distinguish production mechanisms of \({}^{3}\)He and \(t\).

## V Summary

To get intuitive understandings of the production properties of light nuclei in heavy ion collisions at the LHC, we improved a nucleon coalescence model analytically to include the coordinate-momentum correlation in nucleon joint distributions. We derived the momentum distributions of \(d\), \({}^{3}\)He and \(t\), obtained relationships of light nuclei with primordial nucleons in momentum space in the laboratory frame, and gave formulas for the coalescence factors \(B_{2}\), \(B_{3}\) and the yield ratios \(d/p\), \({}^{3}\mathrm{He}/d\), \(t/p\), \({}^{3}\mathrm{He}/p\), \(d/p^{2}\), \({}^{3}\mathrm{He}/p^{3}\), \(t/^{3}\mathrm{He}\).

We applied the improved nucleon coalescence model to Pb-Pb collisions at \(\sqrt{s_{NN}}=2.76\) TeV to study the production of different light nuclei. We first investigated \(B_{2}\) and \(B_{3}\) and gave quantitative explanations for their interesting behaviors as functions of the collision centrality and of \(p_{T}/A\). We then studied the centrality dependence of the \(p_{T}\) spectra, yield rapidity densities and averaged transverse momenta of \(d\), \({}^{3}\)He and \(t\), with the \(p_{T}\) distributions of kinetic freeze-out protons obtained from the blast-wave model. We finally studied the yield ratios \(d/p\), \({}^{3}\mathrm{He}/d\), \(t/p\), \({}^{3}\mathrm{He}/p\), \(d/p^{2}\), \({}^{3}\mathrm{He}/p^{3}\), \(t/^{3}\mathrm{He}\) and discussed their behaviors as functions of the collision centrality and of \(p_{T}\). We found that the nucleon coalescence model including the coordinate-momentum correlation can well reproduce the available experimental data.
We furthermore found that the system effective radius obtained in the coalescence production of light nuclei exhibits behaviors similar to the HBT interferometry radius. We especially argued that the nontrivial behaviors of the yield ratios are valuable probes of the production mechanisms of light nuclei.

## Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants No. 12175115 and No. 12375074, the Natural Science Foundation of Shandong Province, China, under Grant No. ZR2020MA097, and the Higher Educational Youth Innovation Science and Technology Program of Shandong Province under Grant No. 2020KJJ004.
2306.17719
Detection-Recovery and Detection-Refutation Gaps via Reductions from Planted Clique
The Planted Dense Subgraph (PDS) problem is a prototypical problem with a computational-statistical gap. It also exhibits an intriguing additional phenomenon: different tasks, such as detection or recovery, appear to have different computational limits. A detection-recovery gap for PDS was substantiated in the form of a precise conjecture given by Chen and Xu (2014) (based on the parameter values for which a convexified MLE succeeds) and then shown to hold for low-degree polynomial algorithms by Schramm and Wein (2022) and for MCMC algorithms by Ben Arous et al. (2020). In this paper, we demonstrate that a slight variation of the Planted Clique Hypothesis with secret leakage (introduced in Brennan and Bresler (2020)) implies a detection-recovery gap for PDS. In the same vein, we also obtain a sharp lower bound for refutation, yielding a detection-refutation gap. Our methods build on the framework of Brennan and Bresler (2020) to construct average-case reductions mapping secret leakage Planted Clique to appropriate target problems.
Guy Bresler, Tianze Jiang
2023-06-30T15:02:47Z
http://arxiv.org/abs/2306.17719v1
# Detection-Recovery and Detection-Refutation Gaps via Reductions from Planted Clique

###### Abstract

The Planted Dense Subgraph (PDS) problem is a prototypical problem with a computational-statistical gap. It also exhibits an intriguing additional phenomenon: different tasks, such as detection or recovery, appear to have different computational limits. A _detection-recovery gap_ for PDS was substantiated in the form of a precise conjecture given by Chen and Xu (2014) (based on the parameter values for which a convexified MLE succeeds), and then shown to hold for low-degree polynomial algorithms by Schramm and Wein (2022) and for MCMC algorithms by Ben Arous et al. (2020). In this paper we demonstrate that a slight variation of the Planted Clique Hypothesis with _secret leakage_ (introduced in Brennan and Bresler (2020)) implies a detection-recovery gap for PDS. In the same vein, we also obtain a sharp lower bound for refutation, yielding a detection-refutation gap. Our methods build on the framework of Brennan and Bresler (2020) to construct average-case reductions mapping secret leakage Planted Clique to appropriate target problems.

36th Annual Conference on Learning Theory, Proceedings of Machine Learning Research vol 195:1-40, 2023.

## 1 Introduction

The last decade has witnessed a dramatic shift in our understanding of the fundamental limits of high-dimensional statistics problems. Rather than the _statistical limit_ being the most relevant quantity governing the minimum amount of data or signal strength needed to solve a problem, it has emerged that for many problems of central importance there is a distinct and often much larger _computational limit_ at which computationally efficient algorithms begin to succeed. Berthet and Rigollet (2013) showed how a _statistical-computational gap_ for a binary variant of sparse PCA follows via reduction from the planted clique hardness conjecture (Conjecture 1), spurring intense research activity (see, e.g., Brennan and Bresler (2020) and references therein). In this paper we investigate how the computational complexity of different tasks, including detection, recovery, and refutation, can vary even for the same statistical model. The phenomena of interest are exemplified by the Planted Dense Subgraph (PDS) problem, defined next.

Planted Dense Subgraph (PDS). A sample from the distribution \(\mathsf{PDS}(n,k,p,q)\) is obtained by:

1. Sample \(G\sim G(n,q)\), an Erdos-Renyi graph with edge density \(q\).
2. Select a subset \(S\) of vertices uniformly among the \(\binom{n}{k}\) subsets of size \(k\).
3. Re-sample edges with both endpoints in \(S\) independently, including each with probability \(p>q\).

The _detection_ (or decision) problem is to decide, given a graph \(G\), between the two hypotheses

\[H_{0}:G\sim G(n,q)\quad\text{and}\quad H_{1}:G\sim\mathsf{PDS}(n,k,p,q). \tag{1}\]

The _recovery_ problem is to (exactly) find the planted support \(S\). (Weaker notions of recovery can be found in Section 1.4.) The special case of \(\mathsf{PDS}\) where \(p=1\) is known as the Planted Clique (\(\mathsf{PC}\)) problem. Let \(G(n,k,p)=\mathsf{PDS}(n,k,1,p)\). We denote by \(\mathsf{PC}_{D}(n,k,p)\) the problem of deciding between

\[H_{0}:G\sim G(n,p)\quad\text{and}\quad H_{1}:G\sim G(n,k,p)\,.\]

Both detection and recovery have efficient (polynomial-time) algorithms whenever \(k=\Omega(\sqrt{n})\) (Alon et al. (1998)), but a growing body of evidence (Barak et al. (2019); Feldman et al. (2017)) suggests that these problems become hard for clique size \(k=n^{\beta}\) with \(\beta<1/2\).
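Before stating the hardness conjecture formally, a minimal sampler for \(\mathsf{PDS}(n,k,p,q)\) may help fix ideas; it follows the three sampling steps above literally, returning a symmetric 0/1 adjacency matrix without self-loops.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pds(n, k, p, q):
    A = (rng.random((n, n)) < q).astype(np.uint8)   # step 1: G(n, q)
    S = rng.choice(n, size=k, replace=False)        # step 2: planted subset
    A[np.ix_(S, S)] = (rng.random((k, k)) < p)      # step 3: re-sample within S
    A = np.triu(A, 1)                               # keep the upper triangle
    return A + A.T, S                               # and symmetrize

A, S = sample_pds(n=200, k=30, p=0.8, q=0.5)
```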
**Conjecture 1** (PC Conjecture): _Fix constant \(p\in(0,1)\). Suppose that \(\{A_{n}\}\) is a sequence of randomized polynomial time algorithms \(A_{n}:G_{n}\to\{0,1\}\) and \(k_{n}\) is a sequence of positive integers satisfying \(\limsup_{n\to\infty}\log_{n}k_{n}<\frac{1}{2}\). If \(G\) is an instance of \(\mathsf{PC}_{D}(n,k,p)\), then_

\[\liminf_{n\to\infty}\left(\mathbb{P}_{H_{0}}\left[A_{n}(G)=1\right]+\mathbb{P}_{H_{1}}\left[A_{n}(G)=0\right]\right)\geq 1.\]

In our work, we will use a (stronger) variation of this assumption proposed by Brennan and Bresler (2020), where some structural information of the planted clique is assumed (the _secret leakage_). See Section 4 and the associated discussion.

### Computational feasibility of \(\mathsf{PDS}\)

\(\mathsf{PDS}\) Detection. Feasibility of detection in \(\mathsf{PDS}\) is described by a _phase diagram_ (see Fig. 1) indicating for each possible parameter choice whether the problem is: (1) information-theoretically impossible, (2) solvable in principle but computationally hard, or (3) solvable in polynomial time. Complete phase diagrams were shown by reduction from \(\mathsf{PC}_{D}\) in the regime \(q=\Theta(1)\) by Ma and Wu (2015)1, for the sparse regime \(p=cq\) with constant \(c\) and \(q=1/\mathrm{poly}(n)\) by Hajek et al. (2015a), and by Brennan et al. (2018) for a general regime interpolating between the two. Despite the similarity between \(\mathsf{PC}_{D}\) and \(\mathsf{PDS}_{D}\), it is non-trivial to construct reductions that are tight against algorithms, since \(\mathsf{PDS}_{D}\) exhibits a trade-off between subgraph size and signal strength.

Footnote 1: In the regime \(q=\Theta(1)\), \(\mathsf{PDS}\) is easily seen to be computationally equivalent, up to log factors in the parameter values, to the Gaussian matrix model with corresponding means.

In all of the above parameter regimes, whenever \(k=\omega(\sqrt{n})\) the optimal polynomial-time test \(T_{\mathrm{sum}}\) simply compares the total number of edges to a threshold. A second moment calculation shows that

\[T_{\mathrm{sum}}\text{ succeeds w.h.p. if }\quad\frac{k^{4}(p-q)^{2}}{n^{2}q(1-q)}=\omega(1)\,.\]

By its nature, success of the sum statistic yields no information whatsoever about the location of the planted dense subgraph. What can be said about recovery?

PDS Recovery. The best currently-known algorithms (such as spectral, semi-definite programming, and low-degree polynomials) for _recovery_ turn out to require a dramatically higher signal strength (Chen and Xu (2014); Hajek et al. (2016)). The following conjecture posits that this signal strength is optimal for the recovery problem (Chen and Xu (2014); Hajek et al. (2015)).

**Conjecture 2** (PDS recovery conjecture): _Suppose \(G_{n}\sim\mathsf{PDS}(n,k_{n},p_{n},q_{n})\). If \(k=\omega(\sqrt{n})\) and_

\[\limsup\log_{n}\frac{k^{2}(p-q)^{2}}{nq(1-q)}<0,\]

_then no polynomial algorithm \(\mathcal{A}:G\to\binom{[n]}{k}\) can achieve exact recovery of \(\mathsf{PDS}\) asymptotically._

The lower bound in this conjecture has been shown for restricted classes of algorithms: by Schramm and Wein (2022) for low-degree polynomials and by Ben Arous et al. (2020) for Markov Chain Monte Carlo algorithms.
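For concreteness, here is a sketch of the sum test \(T_{\mathrm{sum}}\) described above; the parameters in the usage example are chosen only so that the test's success condition holds comfortably.

```python
import numpy as np

def t_sum(A, n, k, p, q):
    """Reject H0 when the edge count exceeds the midpoint between the
    null expectation and the planted expectation."""
    threshold = q * n * (n - 1) / 2 + (p - q) * k * (k - 1) / 4
    return int(A.sum() / 2 > threshold)   # 1 = "planted", 0 = "null"

rng = np.random.default_rng(1)
n, k, p, q = 400, 80, 0.7, 0.5            # k = 80 is well above sqrt(n) = 20
A0 = np.triu((rng.random((n, n)) < q).astype(int), 1); A0 += A0.T
print("null instance rejected?", t_sum(A0, n, k, p, q))
```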
**Recovery lower bound via reduction?** Lower bounds have been shown for a wide variety of detection problems via reduction from \(\mathsf{PC}\), and for the majority of these problems recovery is algorithmically feasible in the same parameter regime (to within a constant factor) in which detection is algorithmically feasible. Yet for problems where recovery seems strictly harder than detection, demonstrating a detection-recovery gap via reduction from \(\mathsf{PC}\) has remained elusive. Attempts in this direction include those of Cai et al. (2017), showing hardness for a matrix model with highly correlated entries (different from the independent edges in \(\mathsf{PDS}^{*}\)), and Brennan and Bresler (2020), who showed that the conjectured recovery lower bound follows from the \(\mathsf{PC}\) conjecture for a _semirandom_ variant of \(\mathsf{PDS}\) where an adversary may "helpfully" remove edges outside of the dense subgraph. The main question motivating our work is: _Can a detection-recovery gap be shown for Planted Dense Subgraph via reduction?_

A first conceptual challenge is that, as shown by Alon et al. (2007), detection and recovery for \(\mathsf{PC}\) are _equivalent_. What this means is that the detection-recovery gap appearing in \(\mathsf{PDS}\) is inherent to \(\mathsf{PDS}\), and in particular, we cannot simply map from \(\mathsf{PC}\) detection and \(\mathsf{PC}\) recovery separately. In fact, our reductions will still map to detection problems (with implications for recovery). But we cannot simply map to the \(\mathsf{PDS}\) detection hypotheses \(\mathsf{PDS}_{D}\): otherwise, we would be mapping a conjecturally hard instance of \(\mathsf{PC}\) to an easy instance of \(\mathsf{PDS}_{D}\)! Our goal in this paper is considerably more modest than to refute the planted clique conjecture, so we must find another way.

### Contributions

In this work, we will utilize the insight that by constructing different statistical models with similar underlying properties, tailored to corresponding inference tasks, we can go beyond the simple detection boundary to prove tighter results. Our main contributions are:

* We present the first reduction-based evidence of a computational _detection-recovery gap_ (Corollary 11) for recovering the hidden community in planted dense subgraph, via an average-case reduction from Planted Clique with secret leakage (Conjecture 4).
* We show how detection hardness for the two-community Imbalanced Stochastic Block Model (ISBM), shown by reduction from Conjecture 4 by Brennan and Bresler (2020), can be used to obtain a log-optimal lower bound on refuting dense \(k\)-subgraphs in \(G(n,p)\) and Gaussian principal submatrices with large mean. This matches the algorithm-specific results of Barak et al. (2019) and Jones et al. (2021) and shows a reduction-based _detection-refutation gap_.
* Combining our results with existing reductions yields analogous results also for other average-case planted models such as Gaussian biclustering and biased sparse PCA. This yields detection-recovery gaps for these problems and answers a question from Brennan et al. (2018).
* Finally, we also give insight into the relationships between the statistical boundaries for the above problems, including showing a nearly sharp limit on refuting densest \(k\)-subgraphs in Erdos-Renyi graphs via a novel reduction from recovery.
### Reductions and Other Evidence for Hardness

Average-case Reductions. We will define an (average-case) reduction (in total variation) from two source distributions \(P_{0},P_{1}\) to a target pair \(Q_{0},Q_{1}\) as a (random) polynomial-time computable map \(\Phi\) such that the pushforwards satisfy \(d_{\mathrm{TV}}(\Phi(P_{i}),Q_{i})=o(1)\) for \(i=0,1\). The implication is that if \(P_{0},P_{1}\) are computationally hard to distinguish, then the same holds true for \(Q_{0}\) versus \(Q_{1}\): any poly-time algorithm \(\mathcal{A}\) for the latter task would yield a poly-time algorithm \(\mathcal{A}\circ\Phi\) for the former by composing with the reduction, contradicting the presumed hardness of \(P_{0}\) versus \(P_{1}\). While reductions form the bread and butter of complexity theory, there is a general sentiment in the community that average-case reductions are notoriously delicate. Such reductions must not only map to a valid problem instance, they must precisely map entire probability distributions. The upside is that reductions can give the strongest possible evidence for computational hardness, and moreover, they demonstrate a connection between two formerly disparate problems which is often of interest independent of hardness. We refer to Brennan and Bresler (2020) for a review of the reductions literature.

Figure 1: The pictures above (left: Detection vs Refutation; right: Detection vs Recovery) concern \(\mathsf{PDS}(n,k,p,q)\) when \(p,q\) are bounded away from 0 and 1, and \(k\in\widetilde{\Theta}(n^{\beta}),D_{KL}(p\|q)\in\widetilde{\Theta}(n^{-\alpha})\), where E denotes easy, H (computationally) hard, and I (statistically) intractable; the orange EH region (computationally easy to detect but hard to refute/recover) contains our main results. Our statistical EI and computational EH characterizations of refutation (left) in this density regime are both novel. The orange-white region on the right denotes the conjectural EH regime, of which we close the orange part and leave the white part open.

Algorithm-specific hardness. There have been numerous results showing lower bounds for classes of algorithms, and we mention a few of the results that relate to PDS. In Barak et al. (2019), a lower bound for refutation of large cliques in \(G(n,\frac{1}{2})\) was shown for the Sum-of-Squares hierarchy. Schramm and Wein (2022); Rush et al. (2022) sharply characterize the power of low-degree polynomials for recovery in PDS. The overlap gap framework, introduced in Gamarnik and Sudan (2014), connects algorithmic infeasibility with properties of the solution space geometry (see also Gamarnik (2021); Gamarnik and Zadik (2019)). Other relevant results include that of Feldman et al. (2017) on the statistical query model, analyzed in the case of a bipartite "samples" version of PC, and Brennan et al. (2021), relating the power of low-degree polynomials and the statistical query model.

### Inference tasks beyond decision

Denote by \(H_{0}\) the null hypothesis (usually an Erdos-Renyi graph), and by \(H_{1}\) a graph distribution with a planted structure with support \(v\in\{0,1\}^{n}\). Consider a valuation function \(\mathsf{val}\) on graphs such that \(\mathbb{P}_{H_{0}}(\mathsf{val}(G)<\delta-\epsilon)\) and \(\mathbb{P}_{H_{1}}(\mathsf{val}(G)>\delta+\epsilon)\) are both \(1-o_{n}(1)\). In the case of PDS, \(\mathsf{val}\) is the densest-\(k\)-subgraph density.
Consider the following:

**Refutation** A refutation algorithm with success probability \(p\) is a (randomized) algorithm \(\mathcal{A}\) supported on all graphs of size \(n\) such that:

* If \(\mathsf{val}(G)>\delta+\epsilon\), then \(\mathcal{A}(G)=1\).
* For \(G\sim\mathbb{P}_{H_{0}}(\,\cdot\,|\mathsf{val}(G)<\delta-\epsilon)\), it outputs \(\mathcal{A}(G)=0\) with probability at least \(p\).

**Recovery** Let \(\pi\) be a distribution over size-\(k\) planted supports \(v\in\{0,1\}^{n}\), and for each \(v\) let \(P_{v}\) be a distribution over planted graphs. Let \(G\sim P=\mathbb{E}_{v\sim\pi}P_{v}\). A recovery blackbox \(\mathcal{A}:G\to\{0,1\}^{n}\) is said to achieve:

1. _Partial recovery:_ if \(\mathbb{E}[v^{T}\mathcal{A}(G)]=\Omega(\|v\|_{1})\).
2. _Weak recovery:_ if \(\mathbb{E}[v^{T}\mathcal{A}(G)]=\|v\|_{1}-o(\|v\|_{1})\).
3. _Exact (precise) recovery:_ if \(\mathbb{P}[\mathcal{A}(G)=v]=\Omega(1)\).

In most models we consider, these variants of recovery only differ by sub-polynomial factors (via reductions in Appendix C). We further remark that a refutation algorithm is only evaluated on the input distribution \(H_{0}\), whereas a recovery algorithm is only evaluated on the distribution \(H_{1}\). The latter fact was leveraged by Schramm and Wein (2022), and both will be crucial to our proofs.

**Lemma 3** (Informal, see Lemma 22): _For any \(\widetilde{H}_{0}\) that does not have a \(k\)-subgraph with density above \(\frac{p+q}{2}\) with high probability, weak recovery oracles nontrivially distinguish \(\widetilde{H}_{0},H_{1}=\mathsf{PDS}\)._

### Planted Clique and Secret Leakage

We require a slight modification of the planted clique conjecture, proposed by Brennan and Bresler (2020): instead of a uniformly located clique, the clique is sampled according to some distribution \(\rho\) over the \(\binom{n}{k}\) possible clique positions. One may interpret this as a form of _secret leakage_, whereby some information about the clique position has been revealed to the algorithm. The form of secret leakage we will use in our reductions is \(k\)-PC\({}_{D}(n,k,p)\), where there is some fixed (known) partition \(E\) of \([n]\) into \(k\) equally-sized subsets, and under \(H_{1}\) the planted set is obtained by selecting exactly one node uniformly from each part. We refer to the corresponding hardness assumption as the \(k\)-PC conjecture.

**Conjecture 4** (\(k\)-PC Conjecture): _Fix constant \(p\in(0,1)\). Suppose that \(\{A_{n}\}\) is a sequence of randomized polynomial time algorithms \(A_{n}:G_{n}\to\{0,1\}\) and \(k_{n}\) is a sequence of positive integers satisfying \(\limsup_{n\to\infty}\log_{n}k_{n}<\frac{1}{2}\). Then if \(G\) is an instance of \(k\)-PC\({}_{D}(n,k,p)\), it holds that_

\[\liminf_{n\to\infty}\left(\mathbb{P}_{H_{0}}\left[A_{n}(G)=1\right]+\mathbb{P}_{H_{1}}\left[A_{n}(G)=0\right]\right)\geq 1.\]

We refer to Brennan and Bresler (2020) for a general leakage PC conjecture and supporting evidence. When the amount of leaked information is small enough, both low-degree polynomials and statistical query algorithms succeed only above the same \(\sqrt{n}\) clique size as in ordinary PC.
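A minimal sampler of the \(H_{1}\) distribution of \(k\)-PC\({}_{D}(n,k,p)\), following the fixed-partition definition above, is sketched below (the contiguous-blocks partition is an arbitrary choice of the known partition \(E\)).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_kpc(n, k, p):
    """H1 of k-PC_D(n, k, p): one clique vertex drawn uniformly from each
    of the k parts of a fixed partition of [n] into parts of size n/k."""
    m = n // k
    A = (rng.random((n, n)) < p).astype(np.uint8)
    # part i is the contiguous block {i*m, ..., (i+1)*m - 1}
    clique = np.array([i * m + rng.integers(m) for i in range(k)])
    A[np.ix_(clique, clique)] = 1          # plant the clique
    A = np.triu(A, 1)
    return A + A.T, clique

A, clique = sample_kpc(n=400, k=20, p=0.5)
```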
**Remark 5** (Binomial planted set): _In the literature it is sometimes assumed that the planted set is of fixed size \(k\), and other times it is of binomial size (where each node is planted with probability \(k/n\) independently). We use a fixed size \(k\) and note that all of our (hardness) results extend to corresponding binomial versions by virtue of the closeness of the hypergeometric and binomial distributions in appropriate parameter regimes (which can be understood as an instance of a finite de Finetti type theorem, Diaconis and Freedman (1980)). In particular, one may carry out a reduction by keeping a random \(o(n)\)-sized fraction of the nodes and discarding the rest._

## 2 Reduction Techniques Overview

### Selecting hypotheses

As discussed in Section 1.1, we cannot map to the standard two PDS hypotheses. A key insight from Section 1.4 is that while detection concerns both \(H_{0}\) and \(H_{1}\), all other tasks deal with only one of the two hypotheses. Specifically, for any pair of hypotheses with distributions satisfying the val criteria, recovery algorithms are only evaluated on an input distributed according to \(H_{1}\) and not \(H_{0}\). To this end, we are free to select a qualifying "quiet" hypothesis \(\widetilde{H}_{0}\) that is not Erdos-Renyi, yielding a harder decision task and hence stronger recovery lower bounds. Similarly, for refutation we may map to an \(\widetilde{H}_{1}\) that is different from the standard \(H_{1}\). Now, suppose that we want to map from the two hypotheses in PC to \(\widetilde{H}_{0},H_{1}\) in a target graph problem such that \(H_{1}\) is PDS (so that a recovery blackbox enables us to test between \(\widetilde{H}_{0}\) and \(H_{1}\)). We have the following naturally competing constraints:

1. For a recovery blackbox to achieve detection, \(\mathsf{val}(G)|_{H_{0}}\) has to be small with high probability, suggesting that \(\widetilde{H}_{0}\) has to be _far_ from \(H_{1}\) with respect to some metric.
2. We need to construct a reduction. From a data-processing inequality perspective, this means that \(\widetilde{H}_{0}\) has to be _closer_ to \(H_{1}\) than the distance between the source hypotheses.

It turns out that for recovery, the correct \(\widetilde{H}_{0}\) is extremely hard to find (Appendix B in Schramm and Wein (2022)), and even for good \(\widetilde{H}_{0}\) candidates, constructing a tight reduction seems challenging. However, we will show that by changing \(H_{0}\) to simply match the first moment of \(H_{1}\), one can obtain an \(\widetilde{H}_{0}\) realizing a non-trivial gap between detection and recovery in PDS, while still being feasible for us to map to from PC. Changing \(H_{0}\) as we do here was also analyzed for the case of low-degree polynomials by Schramm and Wein (2022). Note that Brennan and Bresler (2020), in their result on semirandom PDS, modified \(H_{1}\) rather than \(H_{0}\), and we will use this same reduction to demonstrate a detection-refutation gap. We define the following models:

Mean-corrected Planted Dense Subgraph (\(\mathsf{PDS}^{*}\)). Consider \(\mathsf{PDS}\) with \(H_{0}\) modified to prevent success of the obvious first moment test. Consider edge strengths \(q<p_{0}<p\) and size \(k\) such that

\[p_{0}=q+\gamma=p-\Big{(}\frac{n^{2}}{k^{2}}-1\Big{)}\gamma\]

and define \(\mathsf{PDS}^{*}(n,k,p,q)\) as hypothesis testing between

\[H_{0}:G\sim G(n,p_{0})\quad\text{and}\quad H_{1}:G\sim\mathsf{PDS}(n,k,p,q). \tag{2}\]

Imbalanced Stochastic Block Model (ISBM). Consider a two-community Stochastic Block Model \(\text{ISBM}(n,k,P_{11},P_{12},P_{22})\) to be the graph model generated by sampling \(S_{1}\sim\binom{[n]}{k}\) and \(S_{2}=[n]\setminus S_{1}\). Connect nodes \(u\in S_{i},v\in S_{j}\) with probability \(P_{ij}=P_{ji}\).
Moreover, we force the degree constraints _on each node_

\[n\cdot P_{0}=k\cdot P_{11}+(n-k)\cdot P_{12}=k\cdot P_{12}+(n-k)\cdot P_{22}\]

and formulate the decision problem \(\text{ISBM}_{D}\) as (let \(r=n/k\)):

\[H_{0}:G\sim G(n,P_{0}),\quad H_{1}:G\sim\text{ISBM}(n,r,P_{11},P_{12},P_{22}). \tag{3}\]

This model can be considered as a mean-field analogue of recovering a first community in a general balanced \(r\)-block SBM model (keeping one block while averaging out the rest).2

Footnote 2: Note that both models contain a dense subgraph (high val), and \(\mathsf{PDS}^{*}\) is just a translated \(\mathsf{PDS}\).

### Signal transformation

We start our reduction by viewing our problem as a _planted bits_ problem, which is simply a vector \(v\sim\mathrm{Bern}(q)^{\otimes n}\) with planted bits \(v_{I}\sim\mathrm{Bern}(p)\), with a different bias, at the index set \(I\subseteq[n]\). Concretely, because of the one-clique-vertex-per-partition assumption of \(k\)-\(\mathsf{PC}\), each \(\frac{n}{k}\times\frac{n}{k}\) block of the adjacency matrix has _a single_ planted 1 entry. All of the reductions we consider can be viewed as mapping a set of planted bits to another desired target set of planted bits with a larger planted size and specific biases. The difficulty at the core is thus the following: _how to transform the planted bits distribution with unknown location to a desired target distribution while not losing the signal-to-noise ratio_ (measured by the KL-divergence) between planted and null bits or the size of the planted location \(I\) (Brennan et al. (2019)), so that the target instance remains at the threshold of algorithmic feasibility.

As in Brennan and Bresler (2020), we will use Gaussian distributions as intermediate steps in transforming from \(k\)-\(\mathsf{PC}\). While Bernoulli data are challenging to transform non-trivially without signal loss, we will leverage the nice behavior of Gaussians under linear maps, enabling us to carefully control the added noise within the transformation (as discussed in the next subsection). To see the approximate equivalence between Gaussians and Bernoulli variables, we note that a Gaussian \(\mathcal{N}(\mu,1)\) can be readily mapped to \(\mathrm{Bern}(\Phi(\mu))\), where \(\Phi\) is the Gaussian CDF, by thresholding at 0. If \(\mu\ll 1\), the KL-divergence decreases only by a numerical constant factor independent of \(\mu\). In the other direction, a rejection sampling procedure can map a pair of Bernoulli variables to a pair of Gaussians with little information loss:

**Lemma 6** (Gaussian Rejection Kernels - Ma and Wu (2015); Brennan et al. (2018)): _Let \(R\) be a parameter and suppose that \(0<q<p\leq 1\) and \(\min(q,1-q,p-q)=\Omega(1)\). Suppose that \(\mu<\left(1\wedge\frac{\delta}{2\sqrt{6\log R+2\log(p-q)^{-1}}}\right)\) where \(\delta=\min\left\{\log\left(\frac{p}{q}\right),\log\left(\frac{1-q}{1-p}\right)\right\}\). Then there exists a map \(\texttt{rk}(\cdot)\), computable in \(\text{poly}(R)\) time, such that the push-forward maps satisfy_

\[d_{\mathrm{TV}}\big{(}\texttt{rk}(\text{Bern}(p)),\mathcal{N}(\mu,1)\big{)}=O\left(R^{-3}\right)\quad\text{and}\quad d_{\mathrm{TV}}\big{(}\texttt{rk}(\text{Bern}(q)),\mathcal{N}(0,1)\big{)}=O\left(R^{-3}\right).\]
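A quick numerical check of the thresholding claim above: mapping \(\mathcal{N}(\mu,1)\) to \(\mathrm{Bern}(\Phi(\mu))\) by thresholding at 0 loses only a constant factor in KL-divergence for small \(\mu\) (the factor tends to \(2/\pi\)).

```python
import numpy as np
from scipy.stats import norm

def kl_bern(a, b):
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

for mu in (0.05, 0.1, 0.2):
    kl_gauss = mu**2 / 2                     # KL(N(mu,1) || N(0,1))
    kl_thresh = kl_bern(norm.cdf(mu), 0.5)   # KL(Bern(Phi(mu)) || Bern(1/2))
    print(mu, round(kl_thresh / kl_gauss, 3))  # ratio -> 2/pi ~ 0.637
```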
Now that we have a Gaussian signal with planted mean, we apply a _rotation_ (treating the entire matrix as a vector). Specifically, in Brennan and Bresler (2020) the following process, Bern-Rotations, was introduced to transform an instance of \(\mathcal{N}(v,I_{\ell})\) where \(v\in\mathbb{R}^{\ell}\) contains the signal.

1. We right-multiply the Gaussian vector by a _design matrix_ \(A\in\mathbb{R}^{\ell\times m}\), which yields \(vA+\mathcal{N}(0,A^{T}A)\). Denote the square of the top singular value of \(A\) by \(\lambda=\sigma^{2}(A)\).
2. To the re-scaled result vector \(\mathcal{N}(\lambda^{-1/2}vA,A^{T}A/\lambda)\), we can add Gaussian noise \(\mathcal{N}(0,I-A^{T}A/\lambda)\), independent of \(v\), to get exactly \(\mathcal{N}(\frac{vA}{\sigma(A)},I_{m})\), which has unit variance.

In short, _we transform signals as mean vectors of isotropic Gaussian distributions by rotating the space and paying an extra whitening noise to produce an isotropic distribution again._

**Lemma 7** (Dense Bernoulli Rotations - Lemma 8.1 in Brennan and Bresler (2020)): _Let \(m\) and \(\ell\) be positive integers and let \(A\in\mathbb{R}^{\ell\times m}\) be a matrix with singular values all at most \(\lambda>0\). Let \(R\), \(0<q<p\leq 1\) and \(\mu\) be as in Lemma 6. Let \(\mathcal{A}\) denote Bern-Rotations applied with rejection kernel parameter \(R\), Bernoulli biases \(0<q<p\leq 1\), output dimension \(m\), matrix \(A\) with singular value upper bound \(\lambda\) and mean parameter \(\mu\). Then \(\mathcal{A}\) runs in \(\text{poly}(\ell,R)\) time and_

\[d_{\mathrm{TV}}\left(\mathcal{A}\left(\text{\rm PB}(\ell,i,p,q)\right),\,\mathcal{N}\left(\mu\lambda^{-1}\cdot A_{i},I_{m}\right)\right) =O\left(\ell\cdot R^{-3}\right)\]
\[d_{\mathrm{TV}}\left(\mathcal{A}\left(\text{\rm Bern}(q)^{\otimes\ell}\right),\,\mathcal{N}\left(0,I_{m}\right)\right) =O\left(\ell\cdot R^{-3}\right)\]

_for all \(i\in[\ell]\), where \(A_{i}\) is the \(i\)th row of \(A\) and \(\text{\rm PB}(\ell,i,p,q)\) is the distribution on \(\{0,1\}^{\otimes\ell}\) where the \(i\)th bit is sampled from \(\mathrm{Bern}(p)\) and all others from \(\mathrm{Bern}(q)\) independently._

As noted earlier, with the \(k\)-PC constraint we have \(r^{2}\) different blocks, given by the partition, where each block has exactly one planted bit. This allows us to view the entire \(k\)-PC matrix as a collection of PB problems and apply Bern-Rotations on each \((n/k)\times(n/k)\) matrix (\(\ell=n^{2}/k^{2}\)). There are two remaining things to consider: firstly, how to get from Gaussians back to Bernoullis and to the final output; and secondly, what criteria our design matrix \(A\in\mathbb{R}^{k^{2}\times k^{2}}\) has to satisfy. For the first step, as noted above, transforming \(\mathcal{N}(0,1)\) and \(\mathcal{N}(\mu,1)\) to two Bernoullis by thresholding at 0 will not lose too much information measured by \(d_{\mathrm{TV}}\) when \(\mu\) is small, and the transformed signals will be approximately \(\mathrm{Bern}(0.5)\) and \(\mathrm{Bern}(0.5+\frac{\mu}{\sqrt{2\pi}})\), since the normal CDF is continuous. To deal with the other part, we need each row of \(A\) to _map directly to the edge density parameter_ of the output. Specifically, for any (unknown) input PB instance, it gets mapped to an unknown row of \(A\), which then becomes the output \(\mathsf{PDS}\) mean.
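The core of the Bern-Rotations step, after the rejection-kernel Gaussianization, is the rotate-rescale-whiten computation. A sketch is below; the toy design matrix is an assumption, and the normalization follows the two-step process description above (Lemma 7's \(\lambda\) convention differs slightly).

```python
import numpy as np

rng = np.random.default_rng(3)

def bern_rotate(x, A):
    """Given x ~ N(v, I_l), return a sample of N(vA/sigma(A), I_m):
    rotate by A, rescale, then add whitening noise."""
    lam = np.linalg.norm(A, 2)**2                  # top squared singular value
    y = x @ A / np.sqrt(lam)
    cov_gap = np.eye(A.shape[1]) - A.T @ A / lam   # residual covariance
    return y + rng.multivariate_normal(
        np.zeros(A.shape[1]), cov_gap, check_valid="ignore")

l, m = 25, 16
A = rng.standard_normal((l, m)) / np.sqrt(l)       # toy design matrix
x = rng.standard_normal(l); x[3] += 2.0            # planted bit at index 3
print(bern_rotate(x, A).round(2))                  # mean ~ 2 * A[3] / sigma(A)
```

The whitening noise \(\mathcal{N}(0,I-A^{T}A/\lambda)\) is where signal can be lost, which is exactly why the next subsection asks for \(A\) to be close to an isometry.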
Our _design_ of \(A\) is thus formulated as: how to find a suitable \(A\) such that each row of \(A\) corresponds to a possible mean adjacency matrix of the target \(\mathsf{PDS}\).

### Design matrices

We first remark that the key factor in Bern-Rotations is the added noise \(\mathcal{N}(0,I-A^{T}A/\lambda)\), which will in fact be the only part of our reduction process that may introduce irreversible signal loss. Consequently, we want to construct the matrix \(A\) such that \(I-A^{T}A/\lambda\) is as small as possible: \(A\) _has to be close to an isometry._ Let us first assume that \(\sigma(A)=1\) for simplicity. As an example, suppose one wants to map from \(k\)-PC to the Gaussian version of PDS (i.e., \(\mathcal{N}(\gamma,1)^{\otimes k\times k}\) planted in \(\mathcal{N}(0,1)^{\otimes n\times n}\)) with a tight recovery boundary such that \(k^{2}\gamma^{2}\sim n\). As \(k\)-PC contains at most \(n\) planted bits yet the squared \(\ell_{2}\) norm of the target mean matrix is exactly \(k^{2}\gamma^{2}=\Omega(n)\), the sum of the squared \(\ell_{2}\) norms of the \(n\) column vectors \(A_{i}\) being mapped to should be at least \(\Omega(n)\), which (informally) implies that the design matrix \(A\) has to be an almost perfect isometry given \(\sigma(A)=1\).

Having independently generated random columns would allow applying random-matrix spectral bounds. For example, a matrix with i.i.d. entries from some fixed distribution was used by Brennan and Bresler (2020). They proved that this method achieves the desired spectral bound, but each column has a random number of planted bits, resulting in a binomial planted size rather than the desired fixed size (cf. Remark 5). Viewing the design matrix structure as the adjacency matrix of some graph, where i.i.d. matrices correspond to Erdos-Renyi graphs, a natural alternative is regular graphs. These satisfy our fixed-size constraint. Moreover, considering the tight recovery reduction again, one also needs all rows to have squared norms of \(O(1)\) while summing up to \(\Omega(n)\), making it an implicit regularization in our construction that all rows have \(\Theta(1)\) norm. This fact provides a crucial motivation for directed _regular graph_ models for generating matrices such that the row norms and column norms align, and the columns are roughly independent (i.e. perpendicular).

### Singular value from recentering

We will now focus on what happens in each sub-block of size \(m=n/k\) given by partitioning \(k\)-PC, and treat it as our main target.4 A line of works (Tikhomirov and Youssef (2019); Le et al. (2015)) has given high-probability bounds on the spectral norm \(\|A-\mathbb{E}(A)\|_{op}\) of the adjacency matrix \(A\) of a random graph \(G\) with given degree distributions (planted signal). Here we consider the case where \(A\) is the adjacency matrix of a directed \(d\)-regular graph (each node has out-degree and in-degree exactly \(d\)). In this case the concentration of the operator norm can be expressed via the second largest singular value of \(A\). In Tikhomirov and Youssef (2019), a (tight) high-probability upper bound on this quantity has been proven: when \(m^{\alpha}<d<m/2\), we have \(|s_{2}(A)|\leq C_{\alpha,m}\sqrt{d}\) with high probability. With this result, we can establish the following lemma, which will lead to the ultimate design matrix by taking the (translated) Kronecker product to make it \(m^{2}\times m^{2}\):

Footnote 4: With a slight abuse of notation, we note this is different from the target planted size in PDS.
**Lemma 8** (Random matrix with regular constraints): _Given a constant \(\alpha>0\), there exists a constant \(C_{\alpha}\) such that for an \(m\times m\) (random) matrix \(R=R_{m,1/r}\), where \(r<m^{1-\alpha}\) is an even divisor of \(m\), with entries sampled from the following procedure:_

1. _Sample \(G\) uniformly from all directed \(m/r\)-regular graphs of size \(m\)._
2. _\(R_{ij}=\frac{-1}{\sqrt{mr}}+1_{e_{ij}\in E_{G}}\cdot\sqrt{\frac{r}{m}}\) for \(j\neq i\) off the diagonal, and \(R_{ii}=\frac{-1}{\sqrt{mr}}\) on the diagonal._

_Then with probability \(1-o_{m}(1)\) this matrix satisfies \(\|R\|_{op}\leq C_{\alpha}\)._

This (centered) matrix has a nice property in that it is an approximate isometry, where each row has an \(\frac{r-1}{r}\) fraction of entries equal to \(-\gamma=-1/\sqrt{mr}\) and a \(1/r\) fraction equal to \((r-1)\gamma\), with norm 1. However, it is not yet in the form we target (recall that we want each column to map to the mean of the \(m\times m\) adjacency matrix of a graph). It is natural to view the target \(\mathsf{PDS}\) density as a translated rank-1 product of vectors (since it has one \(k\times k\) elevated submatrix with uniform signal). Therefore we will simply take the _Kronecker product_ to obtain an \(m^{2}\times m^{2}\) matrix, which creates as its \((i,j)\)th column (the vectorization of) \(R_{i}^{T}R_{j}\), where the \(R_{i}\) are the rows of the \(m\times m\) matrix.

However, the canonical rank-1 \(\mathsf{PDS}\) formulation is _not centered_, having zeroes everywhere outside of the planted submatrix and an elevated all-ones signal inside. To map to this instance, we first need to transform the centered signal \(R\to\frac{1}{\gamma}(R+\gamma)\), so that we get \(\frac{m}{r}\) ones in an all-zero vector for each row of \(R\), before taking the rank-1 product to get an \(\frac{m}{r}\times\frac{m}{r}\) submatrix of ones inside \(m\times m\) zeroes. This makes sure that the design matrix has exactly two different values. Unfortunately, doing so results in a product matrix that is guaranteed to have a large operator norm (since the output \(\mathsf{PDS}_{D}\) is easy), explained intuitively by the fact that our matrix is now not centered. To obtain a tighter spectral radius, it is natural for us to recenter the product matrix so that it has zero mean per column, corresponding exactly to \(\mathsf{PDS}^{*}\). This provides a justification, from a design matrix perspective, of why \(\mathsf{PDS}^{*}\) is probably harder than \(\mathsf{PDS}\): _re-centering the design matrix decreases the spectral norm, which results in a higher signal strength at the output._

**Lemma 9** (Construction of (fixed size) random \(K_{m}^{1/r}\)): _For given \(\alpha\), there exists an absolute constant \(C_{\alpha}>0\) such that for every \(m>r>2\), where \(r<m^{1-\alpha}\) divides \(m\), there exist \(m\) subsets \(A_{1},A_{2},\ldots,A_{m}\) of \([m]\) with \(|A_{i}|=\frac{m}{r}\), such that the \(m^{2}\times m^{2}\) matrix \(K_{(ij),(kl)}:i,j,k,l\in[m]\) defined as \(K_{(ij),(kl)}=\mu\sqrt{\frac{r}{m}}\cdot(1_{k\in A_{i}\text{ and }l\in A_{j}}\cdot\frac{r}{m}-\frac{1}{mr})\) has largest singular value at most \(1\). Specifically,_

\[K_{m}^{1/r}:=K=\mu\sqrt{\frac{r}{m}}\left[(R+\frac{1}{\sqrt{mr}}J)\otimes(R+\frac{1}{\sqrt{mr}}J)-\frac{1}{mr}J\otimes J\right],\]

_where \(J\) is the all-one matrix and \(R\) satisfies the criteria from the previous lemma (\(\mu=(C_{\alpha}+1)^{-2}\in\Theta_{m}(1)\)). With probability \(1-o_{m}(1)\) we can find a satisfying assignment in polynomial time._
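A rough numerical sketch of Lemma 9's construction is below. Sampling a uniform directed \(d\)-regular graph is nontrivial, so as a stand-in we use a union of \(d\) random permutation matrices (an assumption: this is only approximately regular and approximately uniform).

```python
import numpy as np

rng = np.random.default_rng(4)
m, r = 32, 4
d = m // r                          # out/in-degree of the digraph

# Stand-in for a uniform directed d-regular graph.
P = np.zeros((m, m))
for _ in range(d):
    P[np.arange(m), rng.permutation(m)] = 1.0

B = P * np.sqrt(r / m)              # the uncentred matrix R + J/sqrt(mr)
J = np.ones((m, m))
K = np.sqrt(r / m) * (np.kron(B, B) - np.kron(J, J) / (m * r))

# Lemma 9 additionally rescales by mu = (C_alpha + 1)^{-2} to force the
# bound sigma(K) <= 1; here we just report the unrescaled top singular value.
print("top singular value of K:", round(np.linalg.norm(K, 2), 3))
```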
## 3 Hardness of Detection in the Mean-corrected Null

We are now ready to state hardness for the degree-1 corrected null hypothesis testing problem \(\mathsf{PDS}^{*}\) by constructing an average-case mapping. We refer to Figure 2 (Theorem 27) for the full reduction.

**Theorem 10** (Lower bounds for efficient \(\mathsf{PDS}^{*}\) detection): _Consider hypothesis testing \(\mathsf{PDS}^{*}\) for \(H_{0}:G(n,p_{0})\) versus \(H_{1}:\mathsf{PDS}(n,k,p,q)\), where \(p_{0}=p-(\frac{n^{2}}{k^{2}}-1)\gamma=q+\gamma\). Let parameters \(p_{0}\in(0,1)\), \(\alpha\in[0,2)\), \(\beta\in(0,1)\) with \(\beta<\frac{1}{2}+\frac{2}{3}\alpha\). There exists a sequence \(\{(N_{n},K_{n},p_{n},q_{n})\}\) of parameters such that:_

* _The parameters are in the regime \(p-q\in\widetilde{\Theta}(N^{-\alpha})\), \(K\in\widetilde{\Theta}(N^{\beta})\). Formally,_ \[\lim_{n\to\infty}\frac{\log(p_{n}-q_{n})}{\log N_{n}}=-\alpha,\qquad\lim_{n\to\infty}\frac{\log K_{n}}{\log N_{n}}=\beta.\]
* _For any sequence of (randomized) polynomial-time tests \(\phi_{n}:\mathcal{G}_{N_{n}}\to\{0,1\}\), the asymptotic Type I+II error of \(\phi_{n}\) on \(\mathsf{PDS}^{*}(N_{n},K_{n},p_{n},q_{n})\) is at least 1, assuming the \(k\)-PC conjecture._

Furthermore, we note that there exists a matching upper bound for \(\mathsf{PDS}^{*}_{D}\) based on the empirical variance of degrees (see Proposition 26 for the precise result). Recall that, as discussed in Section 1.4, any recovery oracle (for \(H_{1}=\mathsf{PDS}^{*}_{H_{1}}\), which is the same as \(\mathsf{PDS}_{H_{1}}\)) can test between \(H_{0}:G(n,p_{0})\) and \(H_{1}:\mathsf{PDS}(n,k,p,q)\) on its supported parameters, implying a natural upper bound on the decision problem. Combining Theorem 10 with Lemma 3, we obtain our main (lower bound) result for the signal strength required for recovery.

**Corollary 11** (Recovery hardness for PDS): _Let parameters \(\alpha\in[0,2)\), \(\beta\in(0,1)\) with \(\alpha<\beta<\frac{1}{2}+\frac{2}{3}\alpha\). Then for any \(p_{0}\in(0,1)\) there exists a sequence \(\{(N_{n},K_{n},p_{n},q_{n})\}\) of parameters such that the following holds:_

* _The parameters are in the regime \(\gamma:=|p-q|\in\widetilde{\Theta}(N^{-\alpha})\), \(K\in\widetilde{\Theta}(N^{\beta})\)._
* _For any sequence of (randomized) polynomial-time algorithms \(\phi_{n}:\mathcal{G}_{N_{n}}\to\binom{[N_{n}]}{K_{n}}\), \(\phi_{n}\) cannot achieve asymptotic exact recovery on \(\mathsf{PDS}(N_{n},K_{n},p_{n},q_{n})\), assuming the \(k\)-PC conjecture._

We remark that the constraint \(\alpha<\beta\) comes from the fact that recovery is statistically impossible when \(\alpha\geq\beta\) (see Theorem 24). For completeness, we refer to the appendix (Theorem 31) for an extended discussion of the associated statistical boundaries. Moreover, we remark that, in light of our recovery-to-detection reduction framework, the detection-recovery gap can in fact be viewed as a detection (\(\mathsf{PDS}\)) versus detection (\(\mathsf{PDS}^{*}\)) gap.
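To illustrate the matching upper bound mentioned above, here is a toy comparison of the degree-variance statistic under the mean-corrected null and under the planted distribution. The parameters are chosen only for visible separation; the precise test and its thresholds are those of Proposition 26, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

def degree_variance(A):
    return A.sum(axis=1).var()

def er(n, dens):
    A = np.triu((rng.random((n, n)) < dens).astype(int), 1)
    return A + A.T

n, k, p, q = 600, 120, 0.75, 0.5
p0 = q + (p - q) * k**2 / n**2     # gamma = (p-q)k^2/n^2, so p0 = q + gamma

A0 = er(n, p0)                     # mean-corrected null G(n, p0)
A1 = er(n, q)                      # planted alternative PDS(n, k, p, q)
S = rng.choice(n, k, replace=False)
A1[np.ix_(S, S)] = er(k, p)

print("H0 degree variance:", round(degree_variance(A0), 1))  # ~ n*p0*(1-p0)
print("H1 degree variance:", round(degree_variance(A1), 1))  # inflated
```

Although \(p_{0}\) matches the average edge density of the planted distribution (defeating the sum test), the \(k\) planted vertices have elevated expected degrees, inflating the empirical variance of the degree sequence.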
## 4 Hardness of Refutation

### Detection hardness for ISBM

As before, given that a refutation blackbox only operates on \(H_{0}\), we want to find some "quiet" distribution \(\widetilde{H}_{1}\) such that it has the correct valuation but is hard to distinguish from a null instance. We propose the ISBM model (3) in this section as a qualifying planted distribution. Due to the rank-1 nature of its bias structure, it is easy to construct design matrices by just taking the per-column rank-1 product from Lemma 8, hence a hardness result can be proven, similarly to Theorem 27, via a reduction. As in the proof of the reduction to \(\mathsf{PDS}^{*}\), we can then generalize to the complete boundary for ISBM detection, leading to refutation hardness. This is an extension of Theorem 3.2 in Brennan and Bresler (2020), where their (deterministic rotation kernel) reduction only works under a number-theoretic constraint restricting the parameters. Our results extend to the full boundary line via the regular concentration lemma on random matrices.

**Theorem 12** (Hardness of detection in ISBM): _Consider hypothesis testing \(\mathrm{ISBM}_{D}\) (3), where \(k=n/r\) is the planted size. Let parameters \(p_{0}\in(0,1)\), \(\alpha\in[0,2)\), \(\beta\in(0,1)\) with \(\beta>\frac{1}{2}-\alpha\). There exists a sequence \(\{(N_{n},R_{n},P^{(n)}_{11},P^{(n)}_{12},P^{(n)}_{22})\}\) of parameters such that:_

* _The parameters are in the regime \(|P_{11}-P_{22}|\in\widetilde{\Theta}(N^{-\alpha})\), \(R\in\widetilde{\Theta}(N^{\beta})\)._
* _For any sequence of (randomized) polynomial-time tests \(\phi_{n}:\mathcal{G}_{N_{n}}\to\{0,1\}\), the asymptotic Type I+II error of \(\phi_{n}\) on the decision problems \(\mathrm{ISBM}_{D}(N_{n},R_{n},P^{(n)}_{11},P^{(n)}_{12},P^{(n)}_{22})\) is at least 1, assuming the \(k\)-PC conjecture._

### Refutation hardness for planted dense subgraph in \(G(n,p)\)

Equipped with the hardness results for ISBM, which has a large dense subgraph and thus can be used as a candidate \(\widetilde{H}_{1}\) for refutation, we obtain formal refutation hardness results, analogously to how we showed recovery hardness, via a reduction given a refutation (recovery) oracle:

**Theorem 13** (Hardness of refutation of PDS in the dense regime): _Consider the refutation problem for \(H_{0}:G(n,p_{0})\) and val function \(v(G)\) defined as the edge density of the largest \(k\)-subgraph. Let parameters \(p_{0}\in(0,1)\), \(\alpha\in[0,2)\), \(\beta\in(0,1)\) with \(\beta>\frac{1}{2}-\alpha\). Then for any sequence of parameters \(\{(N_{n},K_{n},p_{1}^{(n)})\}\) in the regime \(p_{1}-p_{0}\in\widetilde{\Theta}(N^{-\alpha})\), \(K\in\widetilde{\Theta}(N^{\beta})\), no sequence of (randomized) polynomial-time algorithms \(\phi_{n}\) can achieve refutation with asymptotic success probability strictly above \(0\)._

Finally, we note that a matching (computational) upper bound can be constructed via a semi-definite programming relaxation (see appendix).
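For intuition about such SDP certificates, here is a sketch of a standard densest-\(k\)-subgraph relaxation whose optimum upper-bounds \(\mathsf{val}(G)\), so a small SDP value refutes the presence of any dense \(k\)-subgraph; whether this matches the exact relaxation used in the appendix is an assumption.

```python
import cvxpy as cp
import numpy as np

def dks_sdp_bound(A, k):
    """SDP upper bound on the edge density of the densest k-subgraph:
    X = x x^T / 1 with x the 0/1 indicator of a size-k set is feasible."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0, X >= 0, cp.trace(X) == k, cp.sum(X) == k * k]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, X)) / 2), cons)
    prob.solve()
    return prob.value / (k * (k - 1) / 2)   # certified density upper bound

rng = np.random.default_rng(6)
n, k, p0 = 60, 12, 0.5
A = np.triu((rng.random((n, n)) < p0).astype(float), 1); A += A.T
print("certified density bound:", round(dks_sdp_bound(A, k), 3))
```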
Moreover, we also show that the statistical boundary for refutation lies exactly at that for recovery, by applying a reduction to (statistical) recovery.

**Theorem 14** (Statistical bounds for refutation): _Consider the refutation problem for \(G\sim G(n,p_{0})\) and val function \(v(G)\) defined as the edge density of the largest \(k\)-subgraph. Assuming that \(p_{0}\) is bounded away from 0 and 1, and \(k\in\widetilde{\Theta}(n^{\gamma})\) for some \(\gamma\in(0.5,1)\), then:_

* _When \(kD_{KL}(p\|p_{0})\in\widetilde{\omega}(1)\), the densest \(k\)-subgraph satisfies \(\mathsf{val}(G)\leq p\) with probability \(\to 1\)._
* _When \(kD_{KL}(p\|p_{0})\in\widetilde{o}(1)\), the densest \(k\)-subgraph satisfies \(\mathsf{val}(G)\geq p\) with probability \(\to 1\)._

**Remark 15**: _The problem of densest \(k\)-subgraph in \(G(n,\frac{1}{2})\) was very recently solved in Cheairi and Gamarnik (2022) with deep techniques from Bernoulli disorder. However, here we can derive a log-optimal result using statistical reductions from recovery boundaries._

## 5 Biclustering and Biased Sparse PCA

We point out a couple of other random models that have a detection hardness gap as an implication of the PDS hardness guarantees. These connections were first observed in Cai et al. (2017); Brennan et al. (2018), but under the conjectural tight hardness bound, and in Schramm and Wein (2022) for low-degree polynomials.

Bi-clustering. This model plants a \(k\times k\) (not necessarily principal) submatrix and can be formulated as the following Gaussian detection problem:

\[H_{0}:Z\sim\mathcal{N}(0,1)^{\otimes n\times n},\hskip 14.226378ptH_{1}:Z\sim\mathcal{N}(0,1)^{\otimes n\times n}+\lambda uv^{T}, \tag{4}\]

where \(u,v\sim\mathrm{Bern}(k/n)^{\otimes n}\) (or uniform over all subsets of size \(k\)) independently. The recovery problem is to localize the latent vectors \(u,v\) given an instance \(Z\sim\mathcal{N}(0,1)^{\otimes n\times n}+\lambda uv^{T}\), and the refutation problem is to refute submatrices with large mean.

Biased SPCA. Consider the _spiked covariance model_ where \(v\) is a \(k\)-sparse unit vector with non-zero entries equal to \(\pm\frac{1}{\sqrt{k}}\):

\[H_{0}:X_{1},X_{2},\ldots,X_{n}\sim\mathcal{N}(0,I_{d})^{\otimes n}\quad\text{and}\quad H_{1}:X_{1},X_{2},\ldots,X_{n}\sim\mathcal{N}\left(0,I_{d}+\theta vv^{\top}\right)^{\otimes n}\text{ where }\left|\|v\|_{0}^{+}-\frac{k}{2}\right|>\delta\cdot k, \tag{5}\]

where \(\|v\|_{0}^{+}\) denotes the number of positive entries of \(v\). The recovery task is to estimate \(\text{supp}(v)\) given observations \(X_{1},X_{2},\ldots,X_{n}\) sampled from \(H_{1}\). Specifically for this variant, where the sum test can be shown optimal for detection, our result implies a detection-recovery gap, which is lacking for its general unbiased form.

## 6 Open Problems

We point out two open problems related to our work:

1. Construct "quiet" \(H_{0}\) hypotheses without any dense subgraphs that are hard to distinguish from PDS, in order to resolve Conjecture 2. This would also imply a _detection-certification gap_ as well as Conjecture 2 itself.
2. Can one construct the inverse of the reduction of Remark 5, from a binomial version of PDS to the fixed-size PDS? This would show equivalence of the binomial and fixed versions.

#### Acknowledgments

This work was supported in part by NSF CAREER award CCF-1940205.
2309.16282
AgEncID: Aggregate Encryption Individual Decryption of Key for FPGA Bitstream IP Cores in Cloud
Cloud computing platforms are progressively adopting Field Programmable Gate Arrays to deploy specialized hardware accelerators for specific computational tasks. However, the security of FPGA-based bitstream for Intellectual Property, IP cores from unauthorized interception in cloud environments remains a prominent concern. Existing methodologies for protection of such bitstreams possess several limitations, such as requiring a large number of keys, tying bitstreams to specific FPGAs, and relying on trusted third parties. This paper proposes Aggregate Encryption and Individual Decryption, a cryptosystem based on key aggregation to enhance the security of FPGA-based bitstream for IP cores and to address the pitfalls of previous related works. In our proposed scheme, IP providers can encrypt their bitstreams with a single key for a set S of FPGA boards, with which the bitstreams can directly be decrypted on any of the FPGA boards in S. Aggregate encryption of the key is performed in a way which ensures that the key can solely be obtained onboard through individual decryption employing the board's private key, thus facilitating secure key provisioning. The proposed cryptosystem is evaluated mainly on Zynq FPGAs. The outcomes demonstrate that our cryptosystem not only outperforms existing techniques with respect to resource, time and energy significantly but also upholds robust security assurances.
Mukta Debnath, Krishnendu Guha, Debasri Saha, Susmita Sur-Kolay
2023-09-28T09:27:56Z
http://arxiv.org/abs/2309.16282v2
# AgEncID: Aggregate Encryption Individual Decryption of Key for FPGA Bitstream IP Cores in Cloud ###### Abstract Cloud computing platforms are progressively adopting Field Programmable Gate Arrays (FPGAs) to deploy specialized hardware accelerators for specific computational tasks. However, the security of FPGA-based bitstream for Intellectual Property (IP) cores from unauthorized interception in cloud environments remains a prominent concern. Existing methodologies for protection of such bitstreams possess several limitations, such as requiring a large number of keys, tying bitstreams to specific FPGAs, and relying on trusted third parties. This paper proposes _AgEncID_ (Aggregate Encryption and Individual Decryption), a cryptosystem based on key aggregation to enhance the security of FPGA-based bitstream for IP cores and to address the pitfalls of previous related works. In our proposed scheme, IP providers can encrypt their bitstreams with a single key for a set \(S\) of FPGA boards, with which the bitstreams can directly be decrypted on any of the FPGA boards in \(S\). Aggregate encryption of the key is performed in a way which ensures that the key can solely be obtained onboard through individual decryption employing the board's private key, thus facilitating secure key provisioning. The proposed cryptosystem is evaluated mainly on Zynq FPGAs. The outcomes demonstrate that our cryptosystem not only outperforms existing techniques with respect to resource, time and energy significantly but also upholds robust security assurances. Keywords:Cloud environment FPGA Bitstream protection Key aggregation. ## 1 Introduction Field-programmable gate arrays (FPGAs) are increasingly popular in cloud environments due to their reconfigurability. Major cloud providers like IBM, Oracle, Cisco, Microsoft, Amazon, and Google have integrated FPGA reconfigurability into user applications [4]. Reconfiguring FPGAs with different application cores at various times enables efficient task scheduling and execution, resulting in high throughput, low latency, and low power consumption. In public cloud environments, hardware IP cores are obtained from various third-party vendors and deployed on FPGAs connected to the cloud. Instead of users having to acquire these IP cores themselves, they simply transmit their data to the cloud, where it is processed by the application cores within the FPGAs and returned to the users. Safeguarding the confidentiality of these FPGA-based bitstreams for IP cores is a critical security concern in this shared cloud environment to prevent unauthorized access and tampering. ### Security Attacks on FPGA Bitstream in Cloud FPGA bitstreams are vulnerable to various types of attacks when moving through the cloud: * An attacker could copy bitstreams through a Man-in-the-Middle (MiM) attack and sell them at a lower price to different cloud service providers. This not only reduces the original provider's profits and market share but also damages their reputation. * An adversary might substitute genuine bitstreams with counterfeit ones through reverse engineering (RE) or insert malicious code such as hardware Trojan horses (HTH). This could result in FPGA malfunctions or undesired results [15]. * The attacker could use side-channel analysis (SCA) to extract confidential information from the bitstream, potentially revealing details about the FPGA's design or the underlying bitstream [22].
* If the system software is compromised, it may contain malicious logic or HTH, which could harm the bitstream's performance, durability, or lead to the theft of confidential design data [28]. ### Limitations in Existing FPGA Bitstream Protection Techniques FPGA bitstream protection is typically achieved through bitstream encryption to prevent unauthorized access, cloning, hardware Trojan insertions, reverse engineering, and tampering attempts. Modern FPGAs, like those from Xilinx, Intel, and Microsemi, support bitstream encryption using AES, as listed in Table 1, to defend against these types of passive attacks. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline FPGA vendor & \multicolumn{2}{l|}{Xilinx} & \multicolumn{2}{l|}{Altera} & Microsemi \\ \hline Devices & Spartan-7, Artix-7, Kintex-7, Virtex-7, Zynq-7000 & Kintex UltraScale, Virtex UltraScale, Zynq UltraScale & — & Stratix 10 & SmartFusion2 \\ \hline Encryption & AES-CBC 256 & AES-GCM 256 & AES-CTR 256 & AES-GCM 256 & AES-128/256 \\ \hline Key storage & BBRAM/eFUSE & BBRAM/eFUSE & BBRAM/eFUSE & BBRAM/eFUSE & FLASH, SRAM-PUF \\ \hline Side-channel protection & Yes & Yes & Yes & Yes & — \\ \hline \end{tabular} \end{table} Table 1: Bitstream protection adopted by FPGA vendors Relying on vendor-provided symmetric encryption (AES), several studies propose various cryptosystems for encrypting, protecting, and verifying FPGA bitstreams [14, 1, 27]. However, those proposed cryptosystems vary in their approaches to generating and handling cryptographic keys. Table 2 shows some notable prior works in this area, along with where our contribution fits in. Furthermore, various alternative methods [20, 18, 16, 7] have been suggested to address the security issues. We identify four key limitations of these techniques for securing bitstreams in cloud platforms: * _Individual Encryption._ IP providers use FPGA-specific encryption-decryption, requiring developers to encrypt each bitstream for every FPGA, leading to significant key-management overhead in time and energy. * _Tied to Specific FPGAs._ Each bitstream is typically associated with a specific FPGA board, limiting the cloud service providers' ability to dynamically allocate resources to meet varying customer demands. * _Third-Party Involvement._ Trusted third parties (TTPs), which play a role in provisioning cryptographic keys to FPGAs, may give rise to unintentional security vulnerabilities. * _Resource Overhead._ Existing solutions for key protection based on asymmetric cryptography involve intricate cryptographic operations on FPGAs or necessitate changes to FPGA structures, often resulting in significant resource demands in terms of hardware, time, and power consumption. ### Our Contributions To address the four limitations listed above, we present _AgEncID_ (Aggregate Encryption and Individual Decryption), a cryptosystem centered around key aggregation designed to safeguard FPGA bitstreams. _AgEncID_ offers flexible key management and secure provisioning of cryptographic keys. Below, we outline the key features of _AgEncID_: * _Single-Key Encryption._ To overcome the first limitation, _AgEncID_ allows IP developers to encrypt their bitstream with a single key, which can be used to decrypt the bitstream on a set of FPGA boards for execution. This significantly reduces the time and energy cost of the system.
Table 2: Prior works on FPGA bitstream protection. For each work, the table lists the bitstream-encryption technique (AES-256 in most schemes) and whether it protects against HTH and cloning; the key-encryption technique (e.g., ECDH-, RSA-, or ECC-based) and whether it protects against MiM, tampering, and SCA; cloud applicability; whether bitstreams are tied to registered FPGAs; the need for a TTP; whether changes to the FPGA fabric are required; and the supported IP-to-FPGA mapping. _AgEncID_ (ours) combines AES-256 bitstream encryption with the proposed key aggregation and requires neither a TTP nor changes to the FPGA fabric. * _Decoupling from Specific FPGAs._ IPs are not tied to specific FPGAs, enabling cloud service providers to maximize resource utilization and flexibility. This principle helps us avoid the second limitation. * _No External TTP._ To circumvent the third limitation, _AgEncID_ eliminates the need for a TTP--the bitstream protection key is known only to the bitstream owner. The cryptographic components for key establishment are performed on the FPGA vendor's side, which is usually considered trusted [13, 2]. * _Lower Overhead._ _AgEncID_ does not require changes to the FPGA fabric or a dedicated cryptographic processor on the FPGA, resulting in lower resource overhead. Significant reductions in time and power requirements are also achieved compared to existing works. Therefore, _AgEncID_ effectively tackles the fourth limitation as well. The rest of the paper is organized as follows. Section 2 presents the system and threat models for an FPGA-based cloud system. Section 3 presents our proposed \(AgEncID\), along with the correctness of its key aggregation. Section 4 discusses the application of _AgEncID_ in an FPGA-based cloud environment. Its security and performance analysis appear in Section 5. Experimental results are discussed in Section 6 with concluding remarks in Section 7. ## 2 FPGA-based Cloud System This section describes the system and threat models pertinent to an FPGA-based cloud system. Figure 1: FPGA-based cloud system model
Figure 1: FPGA-based cloud system model ### System Model In an FPGA-based cloud system, there are several key participants involved: FPGA vendors (FVs), IP providers (IPPs), the Cloud Service Provider (CSP), and the cloud users (CUs) as shown in Figure 1. The CSP is responsible for deploying FPGAs from one or more FPGA vendors and offers them as a cloud service to users [23]. To run applications on these cloud-connected FPGAs, the CSP obtains IP cores from various third-party IP providers [26]. These IP providers provide their designs in the form of HDL, netlists or bitstreams to the cloud. Cloud users engage with this system by submitting their tasks to the cloud, where application cores within bitstreams run on FPGAs, providing Application-as-a-Service functionality. Multiple users' workloads can share the same physical resources in the cloud, depending on the CSP's allocation and scheduling policies. Typically, commercial CSPs allocate an entire FPGA instance from a cloud-based pool to a single user [29], a model similar to Amazon EC2 F1 (www.aws.amazon.com/ec2/instance-types/f1). In this setup, the user gets dedicated allotment to the FPGA for a specified period according to their demand and agreement with the cloud provider. However, they do not have direct physical access to the FPGA hardware. ### Threat Model Given the various attacks at play (mentioned in Section 1.1), an individual with malicious intentions has the potential to introduce multiple threats to the integrity of the bitstream within the FPGA-based cloud system (Figure 1). We considers the following threat scenarios on bitstream. * _Malicious Cloud Service Provider._ CSPs are not always trustworthy. CSPs can access the RTL design of IP cores and engage in intellectual property (IP) theft by bitstream reverse engineering. * _Malicious Cloud User._ Any potential adversaries among the other cloud users who are not authorized for the bitstream usage can steal the bitstream and misuse it without any administrative privileges. * _Malicious IP Providers._ The IPPs have access to privileged information about the cloud system and may make efforts to intercept IP bitstreams from other providers when their IPs are running on the same group of FPGAs. * _Malicious External Agents._ An external agent may attempts to gain unauthorized access to a bitstream in transit within the cloud. ## 3 Aggregate Encryption Individual Decryption (_AgEncID_) We introduce _AgEncID_ (_Ag_regate _E_ncryption and _I_ndividual _D_ecryption), a cryptosystem based on key aggregation to enhance the security of FPGA-based bitstream IP cores. _AgEncID_ comprises two modules: bitstream encryption module and key aggregation module. The bitstream encryption module effectively leverages vendor-provided symmetric encryption, particularly AES, which is widely recognized and does not necessitate extensive elaboration. Our primary focus lies on the key aggregation module of _AgEncID_, where we introduce a key aggregation technique aimed at enhancing key management efficiency within cloud environments and ensuring the secure provisioning of keys within cloud FPGAs. In the subsequent section, we embark on a comprehensive exploration of this proposed key aggregation method implemented within the _AgEncID_ cryptosystem. ### Key Aggregation in _AgEncID_ The proposed key aggregation method of _AgEncID_ relies on the utilization of bilinear pairing on elliptic curves. A _bilinear pairing_ is a bilinear map defined over elliptic curve subgroups [3]. 
This concept is used in previous works on broadcast encryption (BE) [6] and key aggregate cryptosystems (KAC) [12]. Let \(G\) and \(G_{T}\) be two such cyclic subgroups of prime order \(p\), with \(G\) written additively and \(G_{T}\) multiplicatively, and let \(\widehat{e}:G\times G\to G_{T}\) be a map with the following properties: _Property 1._ (_bilinear_) For every \(g_{1},g_{2},g_{3}\in G\), we have \((i)\)\(\widehat{e}(g_{1}+g_{2},g_{3})=\widehat{e}(g_{1},g_{3})\cdot\widehat{e}(g_{2},g_{3})\); \((ii)\)\(\widehat{e}(g_{1},g_{2}+g_{3})=\widehat{e}(g_{1},g_{2})\cdot\widehat{e}(g_{1},g_{3})\). _Property 2._ (_efficiently computable_) The map \(\widehat{e}\) can be computed efficiently. _Property 3._ (_identity_) For every \(g\in G\), we have \(\widehat{e}(g,g)=1\). _Property 4._ (_non-degenerate_) \(\widehat{e}(g,h)=1\) for all \(g\in G\) if and only if \(h=\infty\). _Property 5._ (_alternation_) \(\widehat{e}(g_{1},g_{2})=\widehat{e}(g_{2},g_{1})^{-1}\) for every \(g_{1},g_{2}\in G\). The basic steps of the proposed key aggregation are inspired by KAC [12], which is based on a public-key cryptosystem. While KAC supports constant-size ciphertexts and an aggregate key for decryption by a set of users, the proposed key aggregation needs an aggregate key for encryption with individual decryption keys. Thus a new aggregate key is generated with corresponding encrypt and decrypt operations. The following operations are needed: * \(\mathbf{Setup(1^{\lambda},n)}\) establishes the public cryptosystem parameters \(param\) for the entities, using their respective entity IDs \(i\). Let \(\lambda\) and \(n\) be two positive integers, where \(n\) denotes the number of entities and \(1^{\lambda}\) the security-level parameter. First, a bilinear group \(G\) of prime order \(p\) with \(2^{\lambda}\leq p\leq 2^{\lambda+1}\), a generator \(g\in G\), and \(\alpha\in_{R}Z_{p}\) are chosen randomly. For \(i\in\{1,2,\ldots,n,n+2,\ldots,2n\}\), \(g_{i}=\alpha^{i}g\) are computed. The output is the system parameter set \(param=\{g,g_{1},\ldots,g_{n},g_{n+2},\ldots,g_{2n}\}\). * \(\mathbf{KeyGen(n)}\) produces \(PK=\{g,g_{1},\ldots,g_{n},g_{n+2},\ldots,g_{2n},v\}\) as the master public key, where \(\gamma\in Z_{p}\) is chosen randomly such that the master secret key is \(msk=\gamma\) and \(v=\gamma g\). The \(n\) private keys \(d_{1},\ldots,d_{n}\) for the entities with IDs \(i\in\{1,2,\ldots,n\}\) are given by \(d_{i}=\gamma g_{i}=\alpha^{i}v\), where \(g_{i}\in G\); \(\alpha\) can be safely deleted after \(Setup\). * **Extract(S)** is used to generate a constant-size aggregate key for encryption. For a set of entities with unique IDs \(j\in S\), where \(S\subseteq\{1,2,\ldots,n\}\), the aggregate key \(K_{S}\) is computed as: \[K_{S}=\sum\limits_{j\in S}g_{n+1-j}\] * **Encrypt(PK,S,K_S,m)** is for encryption of a message \(m\) (here, the message is the cryptographic key to be protected). For a plaintext message \(m\in G_{T}\) and a set of entities with IDs \(j\in S\), \(t\in_{R}Z_{p}\) is chosen randomly. The ciphertext is computed for the set using its corresponding aggregate key \(K_{S}\). The ciphertext \(C=\{c_{1},c_{2},c_{3}\}\) comprises three parts: \(c_{1}=tg\), \(c_{2}=t(v+K_{S})\), and \(c_{3}=m\cdot\widehat{e}(g_{n},tg_{1})\). * **Decrypt(S,i,d_i,C=(c_1,c_2,c_3))** is for decrypting the ciphertext for an entity \(i\in S\) using its unique decryption key \(d_{i}\).
If \(i\notin S\), the output is \(null\); otherwise, the decrypted message \(\widehat{m}\) is obtained as: \[\widehat{m}=c_{3}\cdot\frac{\widehat{e}(d_{i}+b_{i,S},c_{1})}{\widehat{e}(g_{i},c_{2})}\] where \(b_{i,S}=\sum\limits_{(j\in S)\wedge(j\neq i)}g_{n+1-j+i}\). ### Correctness of Key Aggregation in _AgEncID_ The correctness of the proposed key aggregation approach is founded upon the assurance that decrypting the encrypted message will invariably yield the original message. The following two lemmas, grounded in the properties of bilinear pairings, play a crucial role. Lemma 1: _In the bilinear group \(G\), with elements \(g_{1}\) and \(g_{2}\), and \(a,b\in Z\), the following equality holds: \(\widehat{e}(ag_{1},bg_{2})=\widehat{e}(g_{1},g_{2})^{ab}\)._ Proof: Based on the bilinear _Property 1_ as stated in Section 3.1, we can write the following: \[\widehat{e}(ag_{1},bg_{2}) =\widehat{e}\left(g_{1}+(a-1)\cdot g_{1},bg_{2}\right)\] \[=\widehat{e}\left(g_{1},bg_{2}\right)\cdot\widehat{e}\left((a-1)\cdot g_{1},bg_{2}\right)\] \[\ldots \text{(iterating up to $a$ steps)}\] \[=\widehat{e}\left(g_{1},bg_{2}\right)^{a} \tag{1}\] Iterating in a similar manner with \(b\) steps on Eqn. 1, we have the following: \[\widehat{e}\left(g_{1},bg_{2}\right)^{a}=\widehat{e}\left(g_{1},g_{2}\right)^{ab}\] Therefore, we can conclude that \[\widehat{e}(ag_{1},bg_{2})=\widehat{e}\left(g_{1},g_{2}\right)^{ab}\] Lemma 2: _For every \(g_{1},g_{2},g_{3}\in G\), the following relationship holds:_ \[\widehat{e}(g_{1}-g_{2},g_{3})=\frac{\widehat{e}(g_{1},g_{3})}{\widehat{e}(g_{2},g_{3})}\] Proof: By _Property 1_ (bilinear) in Section 3.1, we can express the following: \[\widehat{e}(g_{1}-g_{2},g_{3}) =\widehat{e}(g_{1}+(-g_{2}),g_{3})\] \[=\widehat{e}(g_{1},g_{3})\cdot\widehat{e}(-g_{2},g_{3})\] \[=\widehat{e}(g_{1},g_{3})\cdot\widehat{e}(-1\cdot g_{2},g_{3}) \tag{2}\] By applying Lemma 1 with \(a=-1\) and \(b=1\), we can rephrase Eqn. 2 as: \[\widehat{e}(g_{1},g_{3})\cdot\widehat{e}(-1\cdot g_{2},g_{3})=\widehat{e}(g_{1},g_{3})\cdot\widehat{e}(g_{2},g_{3})^{-1} \tag{3}\] Therefore, based on equations 2 and 3, we can deduce that: \[\widehat{e}(g_{1}-g_{2},g_{3})=\frac{\widehat{e}(g_{1},g_{3})}{\widehat{e}(g_{2},g_{3})}\] (\(\blacksquare\)) Theorem 3.1: _(Correctness of Key Aggregation in \(AgEncID\)) The decryption of an encrypted message yields precisely the original message \(m\)._ Proof: Consider \(\widehat{m}\) as the decrypted message corresponding to the original message \(m\). According to the operations of the proposed key aggregation module mentioned earlier, \[\widehat{m}=c_{3}\cdot\frac{\widehat{e}\left(\left(d_{i}+\sum\limits_{(j\in S\wedge j\neq i)}g_{n+1-j+i}\right),c_{1}\right)}{\widehat{e}(g_{i},c_{2})} \tag{4}\] By substituting values from the key aggregation operations, we can express Eqn.
4 as: \[\widehat{m}=c_{3}\cdot\frac{\widehat{e}\left(\left(\gamma g_{i}+\sum\limits_{j\in S}g_{n+1-j+i}-g_{n+1}\right),\ tg\right)}{\widehat{e}\left(g_{i},t\left(v+\sum\limits_{j\in S}g_{n+1-j}\right)\right)}\] \[=c_{3}\cdot\frac{\widehat{e}(\gamma g_{i},tg)\,\widehat{e}\left(\sum\limits_{j\in S}g_{n+1-j+i}-g_{n+1},tg\right)}{\widehat{e}(g_{i},t\gamma g)\,\widehat{e}\left(g_{i},t\sum\limits_{j\in S}g_{n+1-j}\right)}\qquad\text{(by Property 1)}\] \[=c_{3}\cdot\frac{\widehat{e}\left(\sum\limits_{j\in S}g_{n+1-j+i},tg\right)}{\widehat{e}\left(tg,\sum\limits_{j\in S}g_{n+1-j+i}\right)\widehat{e}(g_{n+1},tg)}\qquad\text{(using Lemmas 1 and 2)}\] \[=c_{3}\cdot\frac{1}{\widehat{e}(g_{n+1},tg)}\qquad\text{(by Lemma 2)}\] \[=m\cdot\frac{\widehat{e}(g_{n},tg_{1})}{\widehat{e}(g_{n+1},tg)}\qquad\text{(by Property 1)} \tag{5}\] By applying the Diffie-Hellman Exponent assumption [5] (\(\widehat{e}(g_{1},g_{n})=\widehat{e}(g,g_{n+1})\)) to Eqn. 5, \(\widehat{m}=m\cdot\frac{\widehat{e}(g_{n},tg_{1})}{\widehat{e}(g_{n},tg_{1})}=m\). This establishes the correctness of the encryption and decryption algorithms of the proposed key aggregation module of \(AgEncID\). (\(\blacksquare\)) ### A Working Example Figure 2 shows a general illustration of how the _AgEncID_ cryptosystem works in terms of key aggregation. _AgEncID_ follows a public-key cryptosystem. It creates a fixed-size encryption key for a message that needs to be encrypted for various entities, each having its own unique ID, while ensuring that each entity has its own separate decryption key. Given a plaintext message, \(msg_{1}\), to be encrypted for a set of \(n\) entities with IDs \(i=1,2,\ldots,n\), it is possible to generate a constant-size aggregate encryption key \(K_{S}\) for any arbitrary subset \(S\) of the IDs, \(S\subseteq\{1,2,\ldots,n\}\). The ciphertext can be decrypted by an entity with ID \(j\) such that \(j\in S\) using its private individual decryption key \(d_{j}\). Figure 2 illustrates the \(AgEncID\) scheme with two sets of entities represented by their unique IDs in \(S_{1}=\{1,3\}\) and \(S_{2}=\{2,4\}\). The entity administrator generates the master public key \(PK\) and aggregate keys \(Ks_{1,3}\) (for \(S_{1}\)) and \(Ks_{2,4}\) (for \(S_{2}\)). Figure 2: An overview of the key aggregation method of \(AgEncID\) These keys are then employed to encrypt the plaintext message \(msg_{1}\) for both \(S_{1}\) and \(S_{2}\). The entity administrator also generates the master secret key \(msk\), which is used to construct the individual decryption keys. The ciphertext \(C_{1}\) for the message \(msg_{1}\) can be decrypted by entities from \(S_{1}\) using their individual decryption keys \(d_{1}\) and \(d_{3}\), but fails to be decrypted by entities from \(S_{2}\) or any other entity set. ## 4 _AgEncID_ in FPGA Cloud Environment This section explains how _AgEncID_ works in an FPGA-cloud environment using Algorithm 1. While a cloud system can have multiple FPGA vendors (FVs), each providing a number of FPGA boards, and IP providers (IPPs), each providing one or more IP cores, here we focus on the communication among a CSP, an FV and an IPP using \(AgEncID\) and provide an example (Figure 3). To relate this to Figure 2, the FPGA boards correspond to the entities and the FPGA vendors to the entity administrators depicted there.
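Before detailing the cloud workflow, the following Python sketch makes the \(Setup\)/\(KeyGen\)/\(Extract\)/\(Encrypt\)/\(Decrypt\) operations of Section 3.1 concrete in a deliberately insecure toy model: elements of \(G\) are represented by their discrete logarithms modulo a small prime, so the pairing can be evaluated directly as an exponentiation in \(G_{T}\). It only checks the correctness result of Theorem 3.1 numerically; a real deployment uses a pairing library such as PBC, as described in Section 6. The toy primes, the cluster \(S\), and the plaintext are our own illustrative choices.

```python
# Insecure toy model of AgEncID's key aggregation: G is modeled as Z_p in an
# exponent (discrete-log) representation, G_T as the order-p subgroup of Z_q^*
# with generator gt, and the symmetric pairing is e(aP, bP) = gt^(a*b).
import random

p, q = 101, 607                       # toy primes with p | q - 1 (607 = 6*101 + 1)
gt = next(x for x in (pow(h, (q - 1) // p, q) for h in range(2, q)) if x != 1)

def pair(a, b):                       # bilinear map e: G x G -> G_T
    return pow(gt, (a * b) % p, q)

n = 4
alpha, gamma = random.randrange(1, p), random.randrange(1, p)
g = 1                                 # generator of G (exponent representation)
gi = {i: pow(alpha, i, p) for i in list(range(1, n + 1)) + list(range(n + 2, 2 * n + 1))}
v = (gamma * g) % p                   # part of PK; the master secret key is gamma
d = {i: (gamma * gi[i]) % p for i in range(1, n + 1)}      # private keys d_i

S = [1, 3]                            # cluster of entity IDs
K_S = sum(gi[n + 1 - j] for j in S) % p                    # Extract(S)

m = pow(gt, 42, q)                    # plaintext, an element of G_T
t = random.randrange(1, p)            # Encrypt(PK, S, K_S, m)
c1, c2 = (t * g) % p, (t * (v + K_S)) % p
c3 = (m * pair(gi[n], (t * gi[1]) % p)) % q

i = 3                                 # Decrypt(S, i, d_i, C) for an entity i in S
b_iS = sum(gi[n + 1 - j + i] for j in S if j != i) % p
m_hat = (c3 * pair((d[i] + b_iS) % p, c1) * pow(pair(gi[i], c2), -1, q)) % q
assert m_hat == m                     # decryption recovers m, matching Theorem 3.1
```

Note that \(g_{n+1}\) is intentionally absent from the parameter set, exactly as in \(Setup\); the cancellation in the decryption step works without it, which is what the toy model verifies.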
**Cluster Formation for FPGAs.** Cluster formation for FPGAs involves the CSP grouping FPGA board requirements based on FVs and board families. We consider three scenarios: _Scenario 1:_ A cluster with \(n\) boards from the same FV and family. _Scenario 2:_ A cluster with \(n\) boards from the same FV but with \(m\) different families, \(m\leq n\). _Scenario 3:_ A cluster with \(n\) boards from various FVs, potentially containing scenarios 1 and 2 within each FV. Figure 3: An example of _AgEncID_'s working principle with one FV and one IPP in an FPGA cloud environment Table 3 compares the number of cryptographic operations required by _AgEncID_ with _naive_ approaches that do not involve cluster formation. A _naive_ approach requires individual encryption and decryption for both symmetric bitstream encryption and asymmetric key encryption for each board. _AgEncID_ performs best in scenario 1. In scenario 2, the number of _AgEncID_ operations increases with more families (\(m\)), and in scenario 3, _AgEncID_ is less efficient, with the number of operations depending on the number of FVs and their board families in a cluster. Our primary focus is on discussing _AgEncID_ within scenarios 1 and 2. For example, Figure 3 shows how \(n=5\) boards are grouped into two clusters: \(Cluster_{1}\) for scenario 1 and \(Cluster_{2}\) for scenario 2 within one FV. **Initial Setup and Key Generation.** The asymmetric key generation for \(AgEncID\)'s key aggregation module is primarily conducted by the FVs. First, the FV creates a unique board ID for each of the boards it supplies, which is hashed to an index \(i\) such that \(i\in\{1,2,\ldots,n\}\), and generates the public parameters based on these board IDs. The FV generates a public key \(PK\), a master secret key \(msk\) for \(AgEncID\) and the private keys \(d_{i}\) for the \(n\) boards. Additionally, the FV generates the aggregate key \(AgK_{S_{j}}\) for each cluster. As shown in Figure 3, the FV receives a request for \(n\) boards divided into two clusters with board-ID sets \(S_{j}\): \(Cluster_{1}\) with \(S_{j}=\{1,3,4\}\) and \(Cluster_{2}\) with \(S_{j}=\{2,5\}\). While the _Setup()_ and _KeyGen()_ operations are performed once for all \(n\) boards, the FV must execute the _Extract()_ operation for each cluster separately. Subsequently, the FV transfers the public key \(PK\) of all \(n\) boards along with the corresponding \(AgK_{S_{j}}\) keys for each cluster to the CSP. In order to enhance security, the FV embeds the private keys \(d_{i}\) into a tamper-proof non-volatile memory segment of the boards. Finally, each FPGA is equipped with the \(AgEncID\) decryption core. **Encryption.** The IPP receives requests from the CSP to run IP on a group of boards. Initially, the IPP creates an encrypted bitstream for the design using its unique AES key specifically for that cluster of boards. Figure 3 shows the IPP encrypting the bitstream for \(Cluster_{j}\). Following that, the IPP encrypts the AES key itself using the aggregate key it received for that cluster, \(AgK_{S_{j}}\). The IPP sends both the encrypted bitstream \(EncBit\) and the encrypted AES key \(EncKey\) to the cloud resource pool. Importantly, this encrypted bitstream can only be decrypted on the FPGA boards within \(Cluster_{j}\).
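As a software illustration of the encryption flow just described, the sketch below models the IPP side in Python: the bitstream is encrypted under a one-time 256-bit AES key (AES-GCM here; the vendor toolchains use the AES variants of Table 1), and the key itself is wrapped with the cluster's aggregate key. The function `agencid_encrypt_key` is a hypothetical stand-in for the pairing-based \(Encrypt()\) of Section 3.1, and the `cryptography` package is an assumed dependency.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_bitstream_for_cluster(bitstream: bytes, agencid_encrypt_key):
    # One-time 256-bit AES key K_S for this cluster of boards (AES.KeyGen).
    k_s = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                       # 96-bit nonce, standard for GCM
    enc_bit = nonce + AESGCM(k_s).encrypt(nonce, bitstream, None)
    # Wrap K_S for the cluster: AgEncID.Encrypt(PK, S, AgK_S, K_S).
    enc_key = agencid_encrypt_key(k_s)
    return enc_bit, enc_key                      # both are uploaded to the cloud pool
```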
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{**Operation**} & \multicolumn{4}{c|}{**Scenario 1**} & \multicolumn{4}{c|}{**Scenario 2**} \\ \cline{2-9} & \multicolumn{2}{c|}{Naive Solution} & \multicolumn{2}{c|}{**AgEncID Solution**} & \multicolumn{2}{c|}{Naive Solution} & \multicolumn{2}{c|}{**AgEncID Solution**} \\ \cline{2-9} & Bitstream Encrypt & Key Encrypt & Bitstream Encrypt & Key Encrypt & Bitstream Encrypt & Key Encrypt & Bitstream Encrypt & Key Encrypt \\ \hline Key Generation & \(=n\) & \(=n\) & \(=\mathbf{1}\) & \(=\mathbf{1}\) & \(=n\) & \(=n\) & \(=\mathbf{1}\) & \(=\mathbf{1}\) \\ \hline Encryption & \(=n\) & \(=n\) & \(=\mathbf{1}\) & \(=\mathbf{1}\) & \(=n\) & \(=n\) & \(\geq\mathbf{m},\leq\mathbf{n}\) & \(=\mathbf{1}\) \\ \hline Decryption & \(=n\) & \(=n\) & \(=n\) & \(=n\) & \(=n\) & \(=n\) & \(=n\) & \(=n\) \\ \hline \end{tabular} \end{table} Table 3: Possible cluster scenarios for \(n\) FPGA boards of \(m\) families along with the number of operations needed by _AgEncID_ vs. _naive_ approaches
```
 1: procedure InitialSetupAndKeyGeneration(FPGA-BOARDS)     ▷ By FVs
 2:   for each FPGA do
 3:     Assign a unique ID to each of the n FPGA boards
 4:   end for
 5:   (param) ← AgEncID.Setup(n)      ▷ generates the AgEncID cryptosystem parameters
 6:   (PK, msk) ← AgEncID.KeyGen(n)   ▷ generates the keys for the AgEncID system
 7:   Let N denote the set of all IDs for the n FPGA boards
 8:   for each ID ∈ N do
 9:     Generate the board-specific private key d_i
10:     Embed d_i in a tamper-proof non-volatile memory segment
11:   end for
12:   Each FPGA is provisioned with an AgEncID decryption engine
13:   Let S denote the set of all IDs for a cluster of FPGA boards, S ⊆ N
14:   for each ID ∈ S do
15:     AgK_S ← AgEncID.Extract(S)    ▷ AgK_S is the aggregate key for encryption
16:   end for
17:   Register the n FPGAs with the CSP in clusters along with their PK and AgK_S's
18: end procedure
19:
20: procedure Encryption(Bitstream)   ▷ Performed by IP Providers
21:   An IP Provider is assigned a cluster S of board IDs by the CSP
22:   K_S ← AES.KeyGen                ▷ K_S is a one-time AES key for the S boards
23:   EncBit_S ← AES.Encrypt(Bitstream, K_S)       ▷ encrypted bitstream
24:   EncK_S = (k_1, k_2, k_3) ← AgEncID.Encrypt(PK, S, AgK_S, K_S)  ▷ encrypted key
25:   Upload the encrypted bitstream EncBit_S and ciphertext key triple EncK_S to the cloud
26: end procedure
27:
28: procedure BitstreamDecryption(Encrypted-Bitstream)
29:   K_S ← AgEncID.Decrypt(S, i, d_i, EncK_S = (k_1, k_2, k_3))     ▷ decrypted AES key
30:   bitstream ← AES.Decrypt(EncBit_S, K_S)       ▷ the original bitstream
31: end procedure
```
**Algorithm 1** Aggregate Encryption Key Individual Decryption Key -- \(AgEncID\) Cryptosystem **Decryption.** When a cloud user requests a task, the CSP handles the request and forwards the appropriate bitstream to an available FPGA board. In the decryption process, the board's private key is used for on-board decryption to obtain the AES key, which is then employed within the board's AES decryption core to decrypt the bitstream (depicted in Figure 3 for \(Cluster_{1}\)). Finally, the FPGA is set up with the decrypted bitstream, and the resulting output is delivered to the user.
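A matching sketch of the board-side BitstreamDecryption procedure of Algorithm 1, under the same assumptions: `agencid_decrypt_key` is a hypothetical stand-in for the on-board \(Decrypt()\) with the embedded private key \(d_{i}\), and on real devices the AES step runs in the vendor's on-chip decryption core rather than in software.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_bitstream_on_board(enc_bit: bytes, enc_key, agencid_decrypt_key):
    # Recover the cluster key K_S via AgEncID.Decrypt(S, i, d_i, EncK_S).
    k_s = agencid_decrypt_key(enc_key)
    nonce, ciphertext = enc_bit[:12], enc_bit[12:]
    return AESGCM(k_s).decrypt(nonce, ciphertext, None)   # the original bitstream
```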
## 5 Analysis of \(AgEncID\) Security and Performance In this section, we delve into the security and performance aspects of the _AgEncID_ framework. ### \(AgEncID\) Security The proposed \(AgEncID\) offers security against the threat model described in Section 2.2. The security of \(AgEncID\) relies on the security of the AES-256 cipher and the key aggregation scheme. AES-256 is a strong encryption algorithm that has been subjected to extensive cryptanalysis, and no significant weaknesses have been found. The AES-256 key is protected by \(AgEncID\) with an elliptic curve (EC)-based construction that is provably secure against chosen-plaintext attacks (CPAs) on the key [12], under the discrete logarithm problem (DLP) in EC and the bilinear Diffie-Hellman exponent (BDHE) problem [21]. The public and secret key parameters of the _AgEncID_ system are protected under the security assumptions of the DLP in EC and BDHE [6]. **Security of Bitstream.** The bitstream is encrypted with AES-256 by the IPP using its own key and sent to the CSP over secure channels such as SSL or TLS to prevent interception through _MiM_ attacks. The AES-256 encryption provided by major FPGA vendors is resistant to conventional attacks such as _cloning_, _RE_, unauthorized copying of the bitstream, and _tampering_. Decryption of the bitstream is exclusively possible on the FPGA chip, using the AES key known only to its owner. This AES key is further encrypted with _AgEncID_, ensuring that it can only be decrypted on-chip using the FPGA's embedded private key. Hence, neither the CSP nor a malicious external entity can access the unencrypted bitstream. Even if IPPs and CUs were to collude in an attempt to expose other authorized IPPs' or CUs' bitstreams, their efforts would be futile. Accessing the AES key is only feasible on the FPGA chip itself, and the storage and operations on these FPGAs are considered highly secure. The vendors claim that their FPGAs can defend against attacks that occur while the FPGA is operating, such as _SCA_ and the leakage of protected information across internal boundaries. **Security of Key.** In the _AgEncID_ framework, the provisioning of the AES key to the FPGA is highly secure, thwarting unauthorized interception. The CPA-secure nature of the _AgEncID_ scheme makes it exceptionally resilient, even if the encrypted key is intercepted. This key is stored securely alongside the bitstream in the cloud, albeit vulnerable to potential _MiM_ attacks during transit. Decryption relies on the device-specific private key inside the secure FPGA storage. This key is inaccessible to everyone, including the CSP and IPPs. Additionally, the key provisioning process does not involve TTPs, which reduces the risk of _MiM_ attacks. AES key decryption occurs on the FPGA through _AgEncID_, with the decrypted key briefly stored in Battery-Backed RAM (BBRAM), which relies on continuous power for data retention. Tampering attempts lead to power cuts, erasing the BBRAM and the FPGA configuration, thereby preventing side-channel attacks (SCA) and tampering [24]. Additionally, Xilinx (AMD) ensures that the key cannot be read or reprogrammed externally once programmed, limiting access solely to the internal bitstream decryption engine (www.docs.xilinx.com/v/u/en-US/xapp1239-fpga-bitstream-encryption). Cloud-FPGA communication employs secure protocols, granting the CSP logical access for processing without hardware access. IPs are provided to users as an Application-as-a-Service, eliminating external network connections and mitigating side-channel attack risks.
While this work does not address high-level adversaries targeting FPGA fabrics, it establishes robust security. _AgEncID_ guarantees that the AES key remains inaccessible to any unauthorized individuals, including CUs and external agents. ### AgEncID Performance and Efficiency The bitstream encryption module has the same computational complexity as naive AES-256. We discuss the algorithmic efficiency of the key aggregation method of _AgEncID_ below. **Low-overhead Key Generation.** The \(AgEncID.setup()\) operation has a cost of \(\mathcal{O}(n)\) for generating the cryptosystem parameters of size \(\mathcal{O}(n)\). The size of the public key, which depends on these parameters, scales linearly with the total number of FPGA boards. This is generally manageable in organizations with ample shared storage. Each board is assigned a private key consisting of just one group element, resulting in small, constant-size private keys. A single AES key is used to encrypt bitstreams for a cluster of \(|S|\leq n\) boards. Extracting the aggregate key for encrypting AES keys for \(|S|\leq n\) boards requires \(\mathcal{O}(|S|)\) group additions, and the aggregate key is of fixed size. Consequently, _AgEncID_'s key generation is both time- and space-efficient. **Efficient Encryption and Decryption.** Encryption requires no pairing operations: the pre-computed value \(\hat{e}(g_{n},tg_{1})\) can be processed quickly at a constant rate. The resulting ciphertext is compact in size. Decryption, on the other hand, involves \(\mathcal{O}(|S|)\) additions in the group and two pairing operations. Recent high-speed software implementations (www.crypto.stanford.edu/pbc, www.github.com/herumi/mcl, etc.) enable fast bilinear pairing computations, even on devices with limited computational resources. Furthermore, both the encryption and decryption routines in AgEncID require a fixed number of elliptic-curve operations. **Flexible Resource Management by CSP.** The CSPs should have flexibility in provisioning bitstreams and keys to FPGAs. The \(AgEncID\) encryption scheme allows IPPs to encrypt their bitstreams and keys for a set of FPGAs as requested by the CSP. Thus CSPs can manage the FPGAs and assign different FPGAs to users at each run in the cloud according to resource availability. ### Discussion The CSP may wish to add more FPGAs to the cloud to satisfy users' demands. In such a scenario, the CSP would contact the FV to register a new board with the cloud. The FPGA vendor then manages the \(AgEncID\) parameters for the new board. Dynamic board registration on the FPGA vendor's side is managed through the unique board IDs \(i\) belonging to a set \(S\). The FV maintains a flag to monitor the availability of a previously assigned \(i\) in case a board is de-registered from the set \(S\). Available board IDs \(i\) are assigned to new boards dynamically during board registration. In such a scenario, the public key \(PK\) does not need to be updated. For a board registration under a new ID, the FV expands the public key \(PK\) by simply adding two more elements to the \(param\) of \(PK\). The secret parameter \(\alpha\) is randomly generated during setup; its generation is event-driven, triggered whenever a new board is registered. Thus the CSP need not contact the IP developer to use their IP on the newly added FPGAs unless a new board ID is added to \(S\).
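The complexity claims above can be summarized in a small helper; the function below is our own illustrative tally of the operation counts from Section 5.2 and Table 3 (scenario 1), not code from the implementation.

```python
def agencid_costs(n_boards: int, cluster_size: int) -> dict:
    # Asymptotic operation counts for AgEncID (Sec. 5.2, Table 3, scenario 1).
    s = cluster_size
    return {
        "param_group_elements": 2 * n_boards,   # {g, g_1..g_n, g_{n+2}..g_{2n}}
        "private_key_elements_per_board": 1,    # d_i: one group element
        "extract_group_additions": s,           # aggregate key K_S
        "encrypt_pairings": 0,                  # e(g_n, t*g_1) is pre-computable
        "decrypt_group_additions": s - 1,       # b_{i,S}
        "decrypt_pairings": 2,
        "aes_keys_agencid": 1,                  # single key per cluster
        "aes_keys_ieid": s,                     # naive: one key per board
    }

print(agencid_costs(n_boards=20, cluster_size=20))
```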
## 6 Experiments and Evaluation ### Experimental Setup Experiments are performed on a 64-bit Linux machine with an Intel Core i5 processor clocked at 3.2 GHz and 16 GB RAM. For our evaluation, we utilized five Xilinx boards from various families, as indicated in Table 4. The Vivado Design Suite is used to generate bitstreams from HDL designs and perform simulations. A pairing-friendly elliptic-curve construction is needed for the bilinear group mapping. We have used Type-A curves bundled with Pairing-Based Cryptography, PBC version 0.5.14 (www.crypto.stanford.edu/pbc). Four properties of the curve used for symmetric bilinear pairing are: (_i_) Type-A supersingular curve \(y^{2}=x^{3}+x\); (_ii_) a 160-bit Solinas prime number \(p\), which offers 1024-bit discrete-logarithm security; (_iii_) the embedding degree \(k\) of the pairing is 2; (_iv_) elements of the groups \(G\) and \(G_{T}\) take 512 and 1024 bits, respectively. ### Implementation and tools used Our scheme relies on two cryptographic modules: the standard _AES_ cores offered by FPGA vendors and the key aggregation cores of _AgEncID_. In this section, we describe how we implemented and utilized these modules within the _AgEncID_ framework, including the tools we employed. **Key Generation and Encryption.** To generate the 256-bit AES encryption key for encrypting a bitstream, standard tools and techniques can be employed. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **No.** & **Board** & **Family** & **Device** & **Processor** & **Memory** \\ \hline B1 & ZC702 & F1 (Zynq-7000) & XC7Z020-CLG484-1 & Dual-core ARM Cortex-A9 & 1 GB DDR3 \\ \hline B2 & PYNQ-Z2 & F1 (Zynq-7000) & XC7Z020-1CLG400C & Dual-core ARM Cortex-A9 MPCore & 256 KB on-chip, 8 GB SD card \\ \hline B3 & Zybo Z7 & F1 (Zynq-7000) & XC7Z020-1CLG400C & Dual-core ARM Cortex-A9 & 1 GB DDR3L \\ \hline B4 & ZCU104 & F2 (Zynq UltraScale+) & — & Quad-core ARM Cortex-A53, dual-core Cortex-R5 & DDR4 2 GB \\ \hline B5 & KC705 & F3 (Kintex-7) & XC7K325T-2FFG900C & MicroBlaze, 32-bit RISC & 1 GB DDR3 SODIMM \\ \hline \end{tabular} \end{table} Table 4: Properties of FPGA boards used in the experiments In our case, we used Vivado to generate the vendor-specific AES key. Bitstream encryption was carried out using the default AES-256 encryption feature available in Xilinx Vivado. To encrypt the 256-bit AES key, we utilized \(AgEncID\)'s key aggregation encryption module. This module is implemented in software using the GNU multiple precision arithmetic library (GMP) (www.gmplib.org, version 6.2.1) and the PBC library (www.crypto.stanford.edu/pbc). The PBC library is used for the underlying elliptic-curve group and pairing operations. PBC provides the necessary APIs to compute pairings over the BN family of elliptic curves. The \(\mathbf{Encrypt}(\mathbf{PK},\mathbf{S},\mathbf{K_{S}},\mathbf{m})\) operation of the key aggregation module in _AgEncID_ is implemented through three primary operations: point addition, point doubling and pairing, with CPU latencies of \(3.01\times 10^{-3}\) ms, \(6.2\times 10^{-3}\) ms and 19.4 ms respectively. **Decryption.** Similar to the encryption phase, the decryption process in _AgEncID_ also consists of two parts: key decryption and bitstream decryption.
For key decryption, the \(\mathbf{Decrypt}(\mathbf{S},\mathbf{i},\mathbf{d_{i}},\mathbf{C}=(\mathbf{c_{1}},\mathbf{c_{2}},\mathbf{c_{3}}))\) operation of \(AgEncID\)'s key aggregation module involves elliptic curve pairing operations. For this decryption operation we utilize an embedded processor core in the Processing System (PS) section (e.g., B1-B4 in Table 4) of modern SoCs. This decryption module leverages the same libraries (GMP and PBC) used for \(AgEncID\)'s \(Encrypt()\) operation, and is deployed in the PS region of the FPGAs (as shown in Figure 4). On ARM Cortex-A9 processors from the Zynq-7000 family, this software module takes 401 ms to decrypt an AES-256 key. For SoCs lacking an embedded processor core in the PS, we have developed a hardware core for \(AgEncID\)'s \(Decrypt()\) operation, written in Verilog. This core can be deployed in the Programmable Logic (PL) of the FPGA. It implements a pairing-friendly elliptic curve providing a 256-bit security level [3], including point addition and scalar multiplication operations. The bilinear pairing core used is inspired by the Duursma-Lee algorithm [19] for pairing operations. For bitstream decryption, we utilize the on-chip AES decryption module provided by vendors. This module has standard resource requirements--the Xilinx AES decryption core (www.xilinx.com/htmldocs/ip_docs/pru_files/aes.html). Figure 4: Flowchart for \(AgEncID\)'s \(Decrypt()\) operation in the key-aggregation module ### Results and Evaluation We evaluate the effectiveness of the _AgEncID_ cryptosystem in three key aspects: (_i_) execution time, (_ii_) energy consumption in terms of CPU usage, and (_iii_) FPGA resource overhead. To gauge its performance, we compared it with the one-to-one mechanism used in existing approaches, referred to as _IEID_ (Individual Encryption Individual Decryption). Our evaluation of bitstream encryption was conducted using a set of experiments on bitstreams generated for the _ISCAS_ (www.sportlab.usc.edu/~msabrishami/benchmarks.html) benchmark circuits. Key encryption, on the other hand, is performed on a 256-bit AES key. We conduct the following experiments: * _Experiment 1._ We conducted bitstream encryption for a cluster of 20 boards, representing scenario 1, using _AgEncID_ and _IEID_. We used the \(C17\) benchmark design and selected the 20 boards from the F1 family (specifically, boards B1, B2, and B3), as listed in Table 4. * _Experiment 2._ Bitstream encryption was carried out on \(C17\) for scenario 2. A cluster of 10 boards is chosen from three different families F1, F2, and F3 as listed in Table 4, i.e., \(n=10\) and \(m=3\). We take 3 boards each from F1 and F2 and 4 boards from F3. We again compared the _AgEncID_ approach with the _IEID_ approach. * _Experiment 3._ We performed bitstream encryption for a cluster of 5 boards, depicting scenario 1, using both the _AgEncID_ and _IEID_ approaches. We evaluated three benchmark designs of varying sizes: \(C432\), \(C499\), and \(C880\). * _Experiment 4._ We performed 256-bit AES key encryption using the key aggregation module of AgEncID for 20 boards, covering scenarios 1 and 2. Additionally, we performed 256-bit AES key encryption by altering the key aggregation module of _AgEncID_ according to the _IEID_ framework. **Execution Time.** In Figure 5, we depict the CPU execution time for _AgEncID_'s AES bitstream encryption.
In Figure 5 (a), we observe that _AgEncID_ consistently requires the same amount of time in _Experiment 1_, in contrast to the _IEID_ approach, where the time increases linearly as the number of boards increases. Figure 5: Execution time of _AgEncID_ and _IEID_ for bitstream encryption Figure 5 (b) shows that _AgEncID_'s time requirement varied for AES encryption across the different board families (_Experiment 2_). Despite the differing time requirements, the _AgEncID_ approach maintains its superiority over the _IEID_ approach. In _Experiment 2_, _AgEncID_'s performance is similar to _IEID_ for bitstream encryption when \(n=m\). Figure 5 (c) depicts a significant \(5\times\) time difference between _IEID_ and _AgEncID_ in _Experiment 3_, which represents a substantial cost when handling large bitstreams. These results are derived from the Vivado CPU timing report. For both scenarios 1 and 2, _AgEncID_ maintains a constant time requirement in _Experiment 4_, unlike _IEID_, which requires individual key encryption for each board. Figure 7 (a) illustrates that symmetric key generation is performed in constant time by _AgEncID_ for any number of boards, in contrast to the _IEID_ approach. Figure 7 (b) provides timing results for _Experiment 4_, indicating the constant CPU time needed to execute _AgEncID_'s key encryption algorithm. **Energy Consumption.** Figure 6 illustrates the comparison between the _AgEncID_ and _IEID_ approaches in terms of energy consumption for _Experiment 1_, analogous to the comparison of their respective execution times. From Figure 6 (a), it is evident that _AgEncID_ consistently demonstrates superior energy efficiency over _IEID_, regardless of the number of boards in the cluster. Furthermore, Figure 6 (c) highlights the energy requirements of _AgEncID_ and _IEID_ in _Experiment 3_, showcasing the substantial energy savings achieved by _AgEncID_. These energy figures are obtained from the Vivado CPU power consumption report. The energy-consumption results for _Experiment 2_ and _Experiment 4_ are similar to those for execution time and are depicted in Figure 6 (b) and Figure 7 (c) respectively. As with execution time, key encryption demonstrates significantly lower resource utilization than bitstream encryption in terms of energy consumption. **FPGA Resource Overhead.** _AgEncID_'s key aggregation algorithm primarily operates in external software, reducing FPGA resource demands. Figure 6: Energy consumption by _AgEncID_ and _IEID_ for bitstream encryption Encryption tasks are conducted externally to the FPGA. The FPGA only loads the symmetric AES key when necessary. A few kilobytes (KB) are required for the board-specific private key and _AgEncID_'s key decryption module running in the processor core of the PS. Depending on device availability and licensing, we have deployed _AgEncID_'s key decryption module on the PL of board \(B1\) (Table 4). Table 5 reports the resource utilization, latency, and power consumption of the _AgEncID_ hardware decryption module. Directly comparing the FPGA resource overhead of _AgEncID_ with existing techniques is challenging due to differences in evaluation environments and FPGA features. However, in terms of parameters such as LUTs, registers, and slices, \(AgEncID\)'s resource overhead typically falls in the range of 2% to 17%, whereas related works have resource overheads ranging from 2% to 51% [10, 8, 2].
## 7 Conclusion This paper presents the _AgEncID_ (_Agg_regate _E_ncryption and _I_ndividual _D_ecryption) cryptosystem for FPGA bitstream protection through secure key provisioning. _AgEncID_ employs a new key aggregation technique and a streamlined encryption-decryption process to significantly reduce the resource overhead (storage, time, and energy) of both key and bitstream encryption, compared to existing methods. In the future, we plan to implement cutting-edge cryptographic algorithms to address threats arising from advanced on-board FPGA attacks and large-scale quantum computing attacks. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline **Module** & \multicolumn{3}{c|}{**Resource Utilization**} & **Storage (KB)** & \multicolumn{2}{c|}{**Performance**} & **Power (Watt)** \\ \cline{2-4} \cline{6-7} & **LUTs** & **Registers** & **Slices** & & **\# Clock Cycles** & **Latency (ns)** & \\ \hline ECC & 1181 (2.21\%) & 1403 (1.32\%) & 503 & 7695.402 & 2415 & 6.667 & 0.114 \\ \hline Pairing & 7608 (14.35\%) & 13401 (12.59\%) & 1668 & 8119.027 & 57456 & 20 & 0.192 \\ \hline \hline \end{tabular} \end{table} Table 5: Resource utilization, storage, performance and power consumption report for _AgEncID_'s \(Decrypt()\) operation of key aggregation, on _ZC702_ Figure 7: _AgEncID_ vs _IEID_ in terms of key generation and aggregation
2309.05598
Solving Partial Differential Equations with Monte Carlo / Random Walk on an Analog-Digital Hybrid Computer
Current digital computers are about to hit basic physical boundaries with respect to integration density, clock frequencies, and particularly energy consumption. This requires the application of new computing paradigms, such as quantum and analog computing, in the near future. Although neither quantum nor analog computers are general-purpose computers, they will play an important role as co-processors to offload certain classes of compute-intensive tasks from classic digital computers, thereby reducing not only run time but also, and foremost, power consumption. In this work, we describe a random walk approach to the solution of certain types of partial differential equations which is well suited for combinations of digital and analog computers (hybrid computers). The experiments were performed on an Analog Paradigm Model-1 analog computer attached to a digital computer by means of a hybrid interface. At the end we give some estimates of speedups and power consumption obtainable by using future analog computers on chip.
Dirk Killat, Sven Köppel, Bernd Ulmann, Lucas Wetzel
2023-09-11T16:24:53Z
http://arxiv.org/abs/2309.05598v1
**Solving Partial Differential Equations with Monte Carlo / Random Walk on an Analog-Digital Hybrid Computer** ## Abstract Current digital computers are about to hit basic physical boundaries with respect to integration density, clock frequencies, and particularly energy consumption. This requires the application of new computing paradigms, such as quantum and analog computing, in the near future. Although neither quantum nor analog computers are general-purpose computers, they will play an important role as co-processors to offload certain classes of compute-intensive tasks from classic digital computers, thereby reducing not only run time but also, and foremost, power consumption. In this work, we describe a random walk approach to the solution of certain types of partial differential equations which is well suited for combinations of digital and analog computers (hybrid computers). The experiments were performed on an Analog Paradigm Model-1 analog computer attached to a digital computer by means of a hybrid interface. At the end we give some estimates of speedups and power consumption obtainable by using future analog computers on chip. ## 1 Introduction The prospect that - extrapolating at the current growth rates - the energy required to support the global computational demands will exceed the available resources within the next few decades [1, 2] highlights the need for more energy-efficient approaches to computation. Non-traditional computing architectures (also referred to as _unconventional_ or _exotic_ computing) are about to close the gap between computational needs and the performance delivered by existing digital architectures. Amongst them, there are for instance natural, neuromorphic or quantum computing approaches [3, 4, 5, 6, 7, 8]. In particular for data-heavy applications such as AI, novel materials and _In-Memory Computing_ are being worked on [9, 10, 11]. A different approach is offered by analog and mixed analog-digital computers, promising candidates to deliver high computational output at low energy demand for certain fields of application [12, 13, 14, 15]. In this respect, the most important properties of analog computers are the fully parallel computation and the high energy efficiency of such machines. This comes at a cost, as the precision of operations is basically limited to a signal-to-noise ratio of about \(60\,\mathrm{dB}\), which also contributes to the high energy efficiency due to Landauer's principle [16]. In order to perform a computation, analog architectures transfer the task at hand into an analogous problem that can be implemented within the structure of the analog computer, e.g., an electrical circuit. The results of the analog computation, i.e., the system's continuous state variables, are then measured, yielding the desired results, which can be stored and post-processed on a digital computer. Such a hybrid architecture (the combination of a digital computer with an analog computer) therefore exploits the advantages of both concepts - analog and digital processing - to perform computations fast and efficiently. Classic analog computers feature a variety of computing elements, including integrators with time as the free variable. Using modern technologies these ideas can be extended considerably, yielding techniques such as in-memory computing.
Given the continuous-value nature of analog architectures, they are ideally suited for simulating or tracking fast time-evolutions of dynamical systems, such as those arising in artificial intelligence (_AI_), and in particular the broad class of partial differential equations (_PDEs_), which are of central interest for describing problems in science and industry. In [17, 18] we have previously proposed the application of analog computers for fluid dynamics and molecular dynamics, both readily described by PDEs. In this work, we concentrate on stochastic differential equations (_SDEs_) and the Feynman-Kac approach [19, 20]. We present a proof of principle that parabolic PDEs, such as the heat equation, can be solved on hybrid computers using a Monte Carlo/Random Walk (_MC/RW_) approach. This has been implemented on a modern modular analog computer, the _Analog Paradigm Model-1_ computer [21, 22], which is programmed in a classic fashion using patch cables to connect computing elements. Using the power consumption and time to solution of this setup as a reference, we evaluate how efficiently such computations can be carried out after optimization. We also briefly consider future developments which will lead to reconfigurable analog computers on chip in CMOS technology, tightly integrated with a digital computer. This will considerably reduce the overall power consumption and the overheads associated with the communication between the analog and digital parts of the system. The results are compared to the energy consumption of modern digital computers and are discussed in terms of their implications for the next generation of microelectronic computation devices. ### Basics of analog and hybrid computers Classic stored-program digital computers are machines capable of executing algorithms in a step-by-step fashion, with individual instructions (and, in the case of a von Neumann machine, also data) stored in some main memory. As powerful as this approach is, it forces the machine into a mainly sequential mode of operation. The following example shows the basic characteristics of this approach quite clearly. The expression \(x=a(b+c)\) is to be solved for given values of \(a\), \(b\), and \(c\), which are stored in memory. To compute the result \(x\), three load operations are required to load the values into processor registers. This would not be necessary in the case of a machine implementing instructions capable of working directly on values stored in memory. Nevertheless, the memory accesses would still be there, although not as explicit instructions but disguised within the addition and multiplication operations. Then the value of \(x\) can be computed by executing an addition and a multiplication. Storing \(x\) back into memory would complete this little program. All in all, this problem would require six individual instructions. These will be executed in a somewhat overlapping fashion by most modern digital computers, but real parallelism would be hard to achieve. An analog computer is based on a completely different computational paradigm to a digital computer, as it does not use an algorithm in the classic sense and even has no memory at all. Instead, an analog computer consists of a plethora of computing elements, each capable of performing a basic mathematical operation such as summation, multiplication, or integration with time as the free variable.
An analog computer program specifies how these computing elements are to be connected to each other in order to form an analogue of the problem under consideration. It is thus a directed graph with computing elements as its nodes and connections as its edges [22]. Variables in such a setup are typically represented by (continuous) voltages or currents instead of bit sequences, vastly simplifying the connections between computing elements. The above example of computing \(x=a(b+c)\) would be solved on an analog computer as shown in Figure 1. This program requires two computing elements, one summer and one multiplier, as well as five edges connecting these devices with their respective input values, etc. This approach has a number of advantages over classic digital computers. First of all, all computing elements work in full parallelism with no need for memory accesses at all, no need for synchronisation, etc. In addition to this, analog computers are highly energy efficient as long as limited precision is acceptable in the results obtained. The actual symbols for analog computing elements are different from those shown in this figure and will be explained below when required. The analog computer used in this study was an _Analog Paradigm Model-1_ computer manufactured by _anabrid GmbH_. This is a recent modular analog computer and features integrators, summers, multipliers, comparators, and a hybrid computer interface which allows it to be coupled to a digital computer for parametrisation, control, and data acquisition. The computer uses physical voltages in the interval \([-10,10]\) V to represent values which are mapped onto a logical number representation over the domain \([-1,+1]\) with a precision of about \(10^{-4}\). Any computation which yields a number outside of this domain results in an _overload_. Given this domain, relative as well as absolute errors with respect to the maximum representable number are the same.

## 2 Random Walks for Solving PDEs

PDEs are amongst the most important mathematical frameworks in science and engineering. Despite their descriptive power, almost all non-trivial problems are not analytically solvable but require simplifications and approximations instead. Today, numerical methods dominate the solution strategies. They can be classified by a variety of properties. One distinguishes, for instance, between grid-based methods such as finite difference/volume/element methods (_FD/FV/FEM_) and grid-free methods such as spectral methods [23] or stochastic approaches. Another distinction is the applicability to different PDE problem classes. One central property is the sign of the characteristics, indicating an elliptic, parabolic or hyperbolic problem. Whereas hyperbolic problems typically describe causal phenomena in physics undergoing some time-evolution, elliptic and parabolic systems typically describe stationary processes with no intrinsic information propagation direction. Therefore, a solution to an elliptic system is often the solution to an optimization problem. This work will focus on elliptic and parabolic systems. Therefore, we revisit the Feynman-Kac method, which establishes a mapping of a partial differential equation onto the expectation value of an associated SDE. Accordingly, many realizations of the stochastic differential equation need to be computed to obtain the expected value.
This task of evolving the stochastic process in time can be implemented on an analog computer, while the computation of the expected value is then delegated to an attached digital computer. Using electronic noise sources also makes it possible to avoid complex pseudo-random number algorithms. The Monte Carlo/Random Walk method can solve a specific set of 2nd order PDEs and can be implemented on digital-analog hybrid setups. It is a grid-free method which can handle complex domain geometries and is able to provide a solution at any point without the requirement of solving over the whole domain. For a detailed summary on MC methods see [24, 25, 26]. For a general introduction into SDEs, see [27], whereas [28] provides an outline on using stochastic processes for boundary value problems (_BVP_).

Figure 1: Analog computer setup (circuit) for computing \(x=a(b+c)\)

Spectral methods basically translate the PDE to be solved into a linear algebraic problem, thus allowing the whole system to be solved using standard techniques (assuming that sufficient system resources are available). Another approach is that of meshless methods, an example of which is shown in the following [29]. A particularly useful example is the Feynman-Kac method. It translates the PDE so that it can be solved with a stochastic process. The main idea is to trace back the solution within a spatial domain from the boundary by carrying out random walks starting at a certain initial position, eventually hitting some boundary coordinate.

### Feynman-Kac

At the heart of this technique is the formula \[\partial_{t}u+\omega\partial_{x}u+\alpha\partial_{x}^{2}u-\sigma u+f=0. \tag{1}\] This system describes a subclass of parabolic problems which has a number of interesting special cases, like the heat equation, the Black-Scholes model, the Schrödinger equation, and the Fokker-Planck, Hamilton-Jacobi and Ornstein-Uhlenbeck equations [30], [31, pp. 108 ff.]. Here, the unknown \(u\) and the parameters \(\omega,\alpha,\sigma,f\) are all fields, e. g., \(\sigma=\sigma(t,x)\), in one spatial and one temporal dimension. Here we will focus on the dimension-agnostic form for a stationary elliptic problem, i. e. \(\partial_{t}u=0\), so that \[\nabla\cdot(\alpha\nabla u)+\vec{\omega}\cdot\nabla u-\sigma u+f=0\,, \tag{2}\] with the nabla operator \(\nabla_{i}=\partial_{i}\) and the Laplacian \(\Delta=\nabla^{2}\). The concept of solving the Laplace boundary value problem \(\Delta u=0\) by sampling the domain with a Brownian motion (Wiener process) was pioneered by Kakutani [32]. This concept can be readily extended to (2). The main idea is to define a stochastic differential equation for the given PDE via an Itô diffusion \[dX_{t}=\mu(X,t)dt+\sigma(X,t)dW_{t}, \tag{3}\] where \(\mu(X,t)\) and \(\sigma(X,t)\) are functions and \(W_{t}\) a Wiener process. The exit time for such a process is defined as: \[\tau:=\inf\left\{t\geq 0|X(t)\notin\Omega\right\}. \tag{4}\] The algorithmic approach uses this exit time for an estimation of the field's value within the domain, \[u(\vec{x})=e^{-\sigma\tau}u(\vec{x}^{\prime}),\quad\text{with}\;\vec{x}\in \Omega,\;\vec{x}^{\prime}\in\partial\,\Omega\,. \tag{5}\] A random walk is typically defined as a discrete stepwise process, while diffusion is understood as a continuous process. When a random process is tracked naively by checking the exit condition repeatedly at discrete time intervals, the exit time will always be overestimated. In the limit of time steps going to zero, this overestimate also approaches zero.
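Before turning to the analog implementation, it is instructive to see how small this algorithm is in software. The following Python sketch estimates the solution of \(\Delta u=0\) at a single point as the boundary-value average over exiting Brownian paths, i. e., Eq. (5) with \(\sigma=0\); the disc-shaped domain and its boundary data are illustrative assumptions, not the benchmark of the next section.

```python
import numpy as np

def walk_to_boundary(x0, inside, dt=1e-3, rng=None):
    """Track a discretised Brownian path until it leaves the domain.

    Checking the exit condition only every dt overestimates the exit
    time; the bias vanishes as dt -> 0, as noted above."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)
    while inside(x):
        x += np.sqrt(dt) * rng.standard_normal(x.shape)
    return x  # (approximate) exit point on the boundary

def solve_at_point(x0, inside, boundary_value, n_walks=800):
    """Feynman-Kac estimate u(x0) = E[u(X_tau)] for Laplace's equation."""
    rng = np.random.default_rng(0)
    exits = [walk_to_boundary(x0, inside, rng=rng) for _ in range(n_walks)]
    return np.mean([boundary_value(x) for x in exits])

# Toy domain: unit disc with u = 1 on the right half of its boundary and
# u = 0 on the left half; by symmetry, u at the centre should be about 0.5.
inside = lambda x: x @ x < 1.0
bval = lambda x: 1.0 if x[0] > 0.0 else 0.0
print(solve_at_point([0.0, 0.0], inside, bval))
```

The statistical error of such an estimate decreases only as \(1/\sqrt{n}\) with the number of walks \(n\), which is precisely why cheap, massively parallel sources of random walks are attractive.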
### Test problem: Laplacian

As a benchmark problem we consider finding solutions of a simple PDE using an analog computer coupled with a digital computer and compare these with a purely digital approach. The Laplace equation is an ideal test candidate: Solutions for symmetric geometries can be found analytically, more complex ones by means of Fourier series or approaches based on Green's functions. Furthermore, many numerical methods exist for approximate approaches. The extension to the heat equation is given in a straightforward fashion by extending the differential operator \(\Delta\rightarrow(\partial_{t}^{2}+\Delta)\). Non-homogeneous source terms enter on the right-hand side as in \(\Delta u=s\) and are only present on the boundary in the following scenario (Eq. (7)). The benchmark scenario sketched in the following is described by the two-dimensional spatial domain \[\Theta_{\pm} =\left\{\,\vec{x}\,|\,\sqrt{(x_{0}\pm 0.35)^{2}+(y\mp 0.35)^{2}}<0.25 \,\right\}\] \[\Xi =[-1,+1]^{2}\] \[\Omega =\Xi\setminus(\Theta_{+}\,\cup\,\Theta_{-}) \tag{6}\] over the real numbers. That is, a square domain with two excluded circular regions. Thus, there are three distinct boundaries: the square boundary of the outer simulation domain and those of the two circles. The boundary values for \(\vec{x}\in\partial\,\Omega\) are defined as \[u(\vec{x},t)=\begin{cases}0&\text{if}\;\vec{x}\in\partial\,\Xi\\ -1&\text{if}\;\vec{x}\in\partial\,\Theta_{+}\\ +1&\text{if}\;\vec{x}\in\partial\,\Theta_{-}\end{cases} \tag{7}\] A near-to-exact solution of this setup is depicted in Figure 5.

## 3 Implementation on a hybrid computer

The Feynman-Kac approach to solving PDEs is ideally suited for analog and hybrid computers and can be directly parallelised, given that there are enough independent noise sources available. The basic idea is to implement the actual random walk on an analog computer with one noise source per dimension of the problem, while the attached digital computer in a hybrid setup will do the necessary statistics over the individual random walks.

### The analog circuit

Figure 2 shows the setup for the two-dimensional PDE using the Feynman-Kac technique. It consists of two basically identical circuits, one for each of the two dimensions, each fed by an independent noise source. The noise signals used in this study were obtained by purely analog means, i. e., based on the noise of a PN-junction in a semiconductor with some signal and spectrum shaping applied. The white noise generators used were of the type Wandel & Goltermann RG-1 with a \(100\,\mathrm{kHz}\) cutoff frequency. These noise signals are fed to a circuit consisting of two integrators. An integrator is represented by a triangular shape with a rectangle on its input side. Each integrator performs one integration with time as its free variable and performs an implicit change of sign, which is due to its actual electronic implementation but of no relevance here. The first integrator, which has a (negative) feedback to itself, generates a correction signal to remove any residual DC component of the input noise signal (cf. [22, p. 80]), while the integrator following it yields the position of the random walk for one dimension. The resulting \(x\)- and \(y\)-components of the two-dimensional random walk are then fed to a circuit implementing the necessary boundary detection. This requires a number of summers, multipliers, and comparators [22]. As soon as a particular random walk reaches a boundary, a HALT-signal is generated.
This will halt the analog computation and signal the attached digital computer to read out the \(x\) and \(y\) values. Based on these, the boundary value at this point is determined and taken into account to compute the expected value and thus the solution of the problem. In this example, the rectangular simulation domain contains two circles which are held at a constant temperature during simulation time. Figure 4 shows the actual setup of the analog computer.

### The digital program

The analog computer is tightly coupled to a digital computer by means of a hybrid controller (_HC_). The digital computer executes the program as shown in Algorithm 1. First, the hybrid controller is configured to stop the running analog computation when an external halt signal, generated by the boundary detection circuit, is applied. The central for-loop iterates over all points within the domain which are of interest. This encloses an inner loop that performs a number of individual runs. The loop body sets the initial conditions of the \(x\)- and \(y\)-integrators accordingly (these comprise the initial position of a particular random walk). When the random walk reaches a boundary, the analog computer is halted, and the digital computer reads the corresponding \(x\)- and \(y\)-values, determines the actual boundary condition at this point, and updates the expected value for this element in the domain. This code can be implemented either fully on a microcontroller (MCU) or in a distributed fashion, where the MCU only handles data acquisition and a desktop computer attached via USB does the post-processing, i. e., field value reconstruction and subsequent plotting.

Figure 2: Block diagram/electronic circuit implementing a Monte Carlo/Random Walk solver for the heat equation. The diagram shows three large blocks: the stateful path integrator, the stateless boundary detection circuit, and the hybrid controller.

### Alternative boundary detection

Performing the actual random walk based on high-quality noise signals for each of the dimensions involved is simple, while the purely analog detection of boundaries becomes quite a chore even for relatively simple shapes, as can be seen in Figure 2 - most of the computing elements used in this setup are devoted to the secondary task of boundary detection. A more generalised approach to this task could employ a function generator yielding two outputs: \(f(x,y)\), representing the value at a certain point of a boundary, and a characteristic function \(\chi(x,y)\), a flag which will be set when \((x,y)\) is no longer inside the active region. The basic structure of such a function generator is shown in Figure 3. It is of the classic table-lookup type. Two analog-to-digital converters (ADCs) convert the continuous input signals \(x\) and \(y\) into suitable partial addresses for a (small) high-speed memory with a word length of \(n\) bit. This memory feeds a digital-to-analog converter (DAC) with \(n-1\) of its output bits, while the \(n\)-th bit is used for the characteristic function \(\chi(x,y)\). A function generator like this could then be used as a generalised boundary detection circuit, yielding a characteristic function \(\chi(x,y)\) to halt the analog computation, notify the attached digital computer, and provide the boundary value \(f(x,y)\). In addition to simplifying the overall setup, this would have the additional advantage that boundaries could be basically arbitrarily complex.
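In software, the contents of such a lookup memory can be written down directly. As a concrete illustration, the following Python sketch implements a characteristic function \(\chi(x,y)\) and a boundary value function \(f(x,y)\) for the benchmark domain of Eqs. (6) and (7); the names and the grid resolution are our own choices for illustration.

```python
import numpy as np

R, C = 0.25, 0.35  # circle radius and centre offset, cf. Eq. (6)

def chi(x, y):
    """Characteristic function: True once (x, y) has left the domain Omega."""
    outside_square = max(abs(x), abs(y)) >= 1.0
    in_theta_plus = (x + C)**2 + (y - C)**2 < R**2
    in_theta_minus = (x - C)**2 + (y + C)**2 < R**2
    return outside_square or in_theta_plus or in_theta_minus

def f(x, y):
    """Boundary value at the boundary that has been reached, cf. Eq. (7)."""
    if (x + C)**2 + (y - C)**2 <= R**2:
        return -1.0   # boundary of Theta_+
    if (x - C)**2 + (y + C)**2 <= R**2:
        return +1.0   # boundary of Theta_-
    return 0.0        # outer square boundary

# Tabulating chi and f on a 256 x 256 grid produces exactly the kind of
# data the table-lookup function generator described above would hold.
grid = np.linspace(-1.0, 1.0, 256)
table = [[(chi(x, y), f(x, y)) for x in grid] for y in grid]
```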
Implementing a certain boundary would only involve writing suitable values to the lookup memory instead of designing a tailored analog circuit for this purpose. If a \(256\times 256\) grid with seven-bit boundary values is sufficient, this would require two 8-bit ADCs, 64 kB of memory, and a seven-bit DAC. A function generator of this complexity can be easily implemented in CMOS technology and would allow for more generalised boundary shapes.

## 4 Results

The benchmark problem described in section 2.2 is solved on a reasonably dense grid of \(N_{x}\times N_{y}=200\times 200\) points over the domain \([-1,1]^{2}\). \(N_{t}=800\) individual random walks are executed per point to obtain precise expected values. Thus, \(M:=N_{t}\times N_{x}\times N_{y}\approx\) 33 million runs (realizations) are carried out. For the benchmark, the _Analog Paradigm Model-1_ is tested against an _Intel\({}^{\text{\textregistered}}\) Whiskey Lake_ "ultra-low power mobile" processor (_Core i7-8565U_) as a representative of a typical desktop-grade processor.

### Runtime and energy results

The average time per run for a single realization of the random walk is \(T_{A}^{1}\approx 5.4\,\text{ms}\) on the _Analog Paradigm Model-1_ analog computer. This time does not take into account communication overheads for data acquisition such as the USB latency, which however is irrelevant given the unidirectional data flow between the analog and the digital computer. The serial approach (one random walk at a time) results in a total run time of about \(T_{A}^{M}=49\,\text{h}\) (wall clock time). The power requirement of the analog circuit is \(P^{A}\approx 3\,\text{W}\), where 20 computing elements are assumed with \(150\,\text{mW}\) average power consumption [17]. To further simplify things, digital data acquisition etc. is not taken into account. This results in approximately \(E_{M}^{A}\approx 147\,\text{Wh}\) of energy consumption for the analog computer.

Figure 3: Block diagram demonstrating the principle of operation of a table-lookup function generator.

Figure 4: Photo of the experimental setup. The two noise generators sit on top of the rack-mounted modular analog computer _Analog Paradigm Model-1_. The hybrid controller is in the top left slot and can be recognised by the attached USB cable. The modules with knobs are integrators; the knobs set the time scaling constant of the integrators. Not shown are oscilloscopes for debugging and output as well as the standard laptop used for data post-processing.

### Interpretation and Discussion

Run time and energy requirements of the digital and analog approaches are in the same ballpark. It should be noted that such benchmarks have, by nature, a large uncertainty, given the complexity and large number of configuration options of the systems under consideration. For instance, a highly optimized serial code for the digital processor could easily achieve one order of magnitude better performance. Possible variations are discussed in section 5. It should also be noted that problems of this size can also be solved efficiently and quickly with other solution methods. For instance, the PDE can be recast into a system of linear equations by means of finite differences over a discretised solution domain. This results in a sparse matrix of size \(10^{4}\times 10^{4}\) with a density of \(10^{-4}\). This system can be solved on the digital benchmark computer in a one-step approach within \(T_{D2}^{M}=200\,\)ms using a suitable numerical scheme.
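For reference, such a one-step solve requires only a few lines with standard sparse linear algebra tooling. The following Python/SciPy sketch assembles the 5-point finite-difference Laplacian on the \(200\times 200\) grid; the treatment of the interior circle boundaries is elided, and the direct solver is merely one possible choice.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 200                         # grid points per dimension, as above
h = 2.0 / (N - 1)               # grid spacing on [-1, 1]

# Standard 5-point Laplacian with homogeneous Dirichlet conditions on the
# outer square: N^2 = 40 000 unknowns with ~5 non-zeros per row.
I = sp.identity(N)
D = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N))
A = (sp.kron(I, D) + sp.kron(D, I)) / h**2

b = np.zeros(N * N)
# ... here the rows inside the two circles would be replaced by identity
# rows and b set to the boundary values of Eq. (7) ...

u = spla.spsolve(A.tocsc(), b)  # one-step sparse direct solve
```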
This solution time is three orders of magnitude smaller than that of the digital random walk, \(T_{D2}^{M}\ll T_{D}^{M}\). However, the scaling of this matrix method is worse and, in particular, parallelisation cannot easily be achieved. Typical matrix solvers scale like \(T_{D2}^{M}=\mathcal{O}((N_{x}N_{y})^{2.37-3})\), whereas the naive serial runtime of the random walk approach is \(T_{D}^{M}=\mathcal{O}(M^{2})\).

## 5 Towards Integrated Circuitry

In the previous section, we have shown that the discrete analog computer _Model-1_ shows performance comparable to a modern desktop processor for the given test problem. However, this basically compares 1970s-level discrete analog electronics with 2020s-level digital processor technology. In this section, we will show the route towards contemporary and future analog computer implementations and capabilities. As a next step we already have a new analog computer, the _Model-2_, running in a laboratory setup, which features \(10\) times the bandwidth of a _Model-1_ system, allowing for run times of \(T_{1}^{A2}\approx 500\,\mu\mathrm{s}\) per random walk with a similar power budget. This system, already being fully reconfigurable by the attached digital computer, has a much higher packing density (about ten times denser than a comparable _Model-1_). This system is roughly halfway towards a highly integrated general-purpose analog computer on chip, called _Model-3_. Based on [17], the _Model-3_ should exhibit \(10^{2.5\pm 0.5}\) times the bandwidth of the _Model-1_. This would result in a run time of \(T_{1}^{A3}\approx 17\,\mu\mathrm{s}\) per individual random walk and thus \(T_{M}^{A3}\approx 9\,\)min for the 33 million realizations, without using parallel random walks. The power ratio between such an integrated circuit implementation and a _Model-1_ can be estimated as \(P^{A3}/P^{A}=10^{-3}\), yielding \(P^{A3}\approx 3\,\)mW, an energy consumption of \(E_{1}^{A3}\approx 51\,\)nJ per random walk, and a total energy consumption of \(E_{M}^{A3}\approx 2\,\)J.

Figure 5: Color-encoded plot of the solution obtained with a finite differences approach on the \(200\times 200\) grid executed on a digital computer. The full simulation domain including the two circular cut-outs in the upper left and lower right quadrant is shown.

Figure 6: Color-encoded plot showing the result of a 49 h analog Monte Carlo/Random Walk approach with 800 runs per starting point \((x,y)\). The result obtained is well matched to that shown in Figure 5. Note, however, the sandy fine-grained pattern caused by the stochastic approach.

Figure 7: Color-encoded plot of the absolute error between the analog solution (Figure 6) and the more exact digital solution (Figure 5). In this figure, the color code is different and shows maximum errors in the range of \(\pm 15\,\)%.

### Parallelization

The main advantage of the proposed PDE solution method is the elimination of communication between neighbouring points in the solution domain. This makes parallelization extremely easy, as \(n\) distinct analog random walk implementations yield a speedup of \(n\). Compared with a _Model-1_ with its small number of computing elements, the more advanced _Model-2_ already offers some degree of possible speedup due to parallelization of individual random walks. The proposed chip, the _Model-3_, will further increase this capacity. Depending on whether the boundary detection is carried out in software or hardware, we expect to be able to run between 20 and 100 random walks in parallel on a single microchip (assuming a 65 nm process and roughly 10 mm\({}^{2}\) of die area).
This would yield \(10^{3.4\pm 0.4}\) parallel random walks using 50 chips, occupying roughly the physical volume of a _Model-2_ system (about \(2\,300\,\)cm\({}^{3}\)). A supercomputer configuration consisting of \(10^{5}\) chips would allow \(10^{6.6\pm 0.35}\) parallel random walks. To put this into perspective, the Top500 supercomputer list [33] is currently led by the _Frontier_ system with \(8,699,904\approx 10^{7}\) cores and an overall power consumption of 22 MW. In contrast to this, the 100,000 analog chips would only consume a few tens of kW and solve problems of this class about \(T_{1}^{D}/T_{1}^{A3}\approx 150\) times as fast. This ratio does not even take into account slowdowns due to the digital communication overhead, which is dominant in digital supercomputers of that size.

## 6 Summary and Outlook

Analog computers will be an integral part of tomorrow's computer systems to speed up the solution of certain types of problems while simultaneously reducing the overall energy consumption for such computations. This requires new approaches to computing due to the non-algorithmic nature of analog computers, an example of which has been demonstrated in this paper by solving partial differential equations using random walks. This approach is superior to algorithmic approaches when the problem size gets very large, in particular when only low-precision solutions are required. The study can be extended in several ways: First, the analog-digital hybrid methods can be refined by implementing findings of the last decades in the community, such as guided random walks (for instance, Hamiltonian Monte Carlo) or ideas from quantum random walk approaches. Second, it is very interesting to apply the method to a broader class of PDEs or to analogous problems such as the probabilistic solution of very large systems of linear equations. Third, software support may be improved to allow a broad use of the presented methods. This will require software libraries that tightly integrate the analog part into the digital domain. Fourth, the hardware can be improved considerably by integration, as presented in the theoretical estimates from _Model-1_ via _Model-2_ to _Model-3_. In future work, we want to underpin the theoretical findings with practical measurements on the digital-analog computer architectures discussed above, once they have been built.

#### Acknowledgements

We thank Nick Baberukxi for the setup, execution and analysis of the _Model-1_ experiments. We thank Maikel Hajiabadi for the finite difference/LGS runs and analysis as well as literature contributions. We thank Michael Steck for contributions to more efficient random walk microcontroller code. The authors would like to thank Dr. Chris Giles for fruitful discussions and his meticulous proofreading.
2309.04686
Detailed balance in mixed quantum-classical mapping approaches
The violation of detailed balance poses a serious problem for the majority of current quasiclassical methods for simulating nonadiabatic dynamics. In order to analyze the severity of the problem, we predict the long-time limits of the electronic populations according to various quasiclassical mapping approaches, by applying arguments from classical ergodic theory. Our analysis confirms that regions of the mapping space that correspond to negative populations, which most mapping approaches introduce in order to go beyond the Ehrenfest approximation, pose the most serious issue for reproducing the correct thermalization behaviour. This is because inverted potentials, which arise from negative electronic populations entering into the nuclear force, can result in trajectories unphysically accelerating off to infinity. The recently developed mapping approach to surface hopping (MASH) provides a simple way of avoiding inverted potentials, while retaining an accurate description of the dynamics. We prove that MASH, unlike any other quasiclassical approach, is guaranteed to describe the exact thermalization behaviour of all quantum–classical systems, confirming it as one of the most promising methods for simulating nonadiabatic dynamics in real condensed-phase systems.
Graziano Amati, Jonathan R. Mannouch, Jeremy O. Richardson
2023-09-09T05:29:48Z
http://arxiv.org/abs/2309.04686v3
# Detailed balance in mixed quantum-classical mapping approaches ###### Abstract The violation of detailed balance poses a serious problem for the majority of current quasiclassical methods for simulating nonadiabatic dynamics. In order to analyze the severity of the problem, we predict the long-time limits of the electronic populations according to various quasiclassical mapping approaches, by applying arguments from classical ergodic theory. Our analysis confirms that regions of the mapping space that correspond to negative populations, which most mapping approaches introduce in order to go beyond the Ehrenfest approximation, pose the most serious issue for reproducing the correct thermalization behaviour. This is because inverted potentials, which arise from negative electronic populations entering into the nuclear force, can result in trajectories unphysically accelerating off to infinity. The recently developed mapping approach to surface hopping (MASH) provides a simple way of avoiding inverted potentials, while retaining an accurate description of the dynamics. We prove that MASH, unlike any other quasiclassical approach, is guaranteed to describe the exact thermalization behaviour of all quantum-classical systems, confirming it as one of the most promising methods for simulating nonadiabatic dynamics in real condensed-phase systems. + Footnote †: These authors contributed equally; Present Address: Max Planck Institute for the Structure and Dynamics of Matter, Hamburg, Germany ## I Introduction Quantum nonadiabatic effects play a crucial role in the description of many relevant physical processes, including radiationless decay,[1] energy and charge transfer[2] and coherence control in quantum gates.[3] Unfortunately the computational cost of brute-force quantum calculations scales exponentially with the number of system degrees of freedom, making it impossible to simulate the vast majority of realistic systems in this way. An increased interest in the development of quasiclassical methods has therefore arisen, aimed at mapping nonadiabatic dynamics onto effective quasiclassical models that can be simulated as efficiently as classical systems. However, improvements in efficiency are often achieved at the expense of accuracy. Although rigorous quantum-classical dynamics can be defined in terms of coupled trajectories,[4; 5] we wish to reduce the numerical complexity by evolving trajectories independently. These quasiclassical methods are often accurate at short times, but can suffer from significant long-time errors.[6] In particular, if detailed balance is violated, the final populations will not reflect the correct thermal distribution. Ehrenfest dynamics is the prototypical quasiclassical nonadiabatic theory that propagates the nuclear dynamics on a mean-field potential energy surface corresponding to a dynamical superposition of electronic states.[7; 8] The advantage of this approach is its simplicity, although the accuracy is known to be deficient in many systems.[9; 10; 11; 12; 13; 14; 15] This is in part due to the fact that the method drastically violates detailed balance in thermal equilibrium.[6; 16] In order to improve upon the accuracy of Ehrenfest dynamics, a wide array of mean-field mapping approaches have been developed.
The main idea is to construct a quasiclassical continuous phase space for the electrons, so that the electronic and nuclear degrees of freedom can be treated on an equal footing. The most commonly-used mapping for quasiclassical approaches is the Meyer-Miller-Stock-Thoss (MMST) formalism, where the electronic subsystem is mapped onto the single-excitation subspace of a set of harmonic oscillators.[9; 17; 18; 19; 20; 21; 22; 23] Further improvements in the accuracy of quasiclassical approaches can be achieved by treating the identity operator exactly within the mapping, by representing it by the number one.[24; 11; 25] A similar improvement is obtained more naturally by departing from the MMST formalism and instead describing the electronic degrees of freedom in terms of a set of spherical spin coordinates, known as spin mapping.[12; 26] While all these developments often lead to improved dynamics over Ehrenfest theory, they all clearly still violate detailed balance in general. One important difference between mapping-based approaches and Ehrenfest dynamics is that mapping-space regions corresponding to negative electronic populations are present in the former. Although mapping approaches are typically more accurate than Ehrenfest simulations, these problematic regions can pose a serious issue for the dynamics, as the resulting inverted potentials may give rise to an unphysical nuclear force that can cause trajectories to accelerate off to infinity.[27] In addition to these problems, there is a conceptual difficulty with using mapping approaches in strongly asymmetric systems in which one initializes the simulation in an excited state and expects the dynamics to thermalize to the ground adiabatic state. Here, the short-time dynamics must be described by a multi-state nonadiabatic system, whereas the long-time dynamics is an effective one-state problem. A method that attempts to tackle this issue head-on is the _ellipsoid-mapping_ approach,[28] which replaces the spherical spin-mapping phase space with an anisotropic ellipsoid geometry. The shape and orientation of the ellipsoid are dynamically adjusted so as to best represent the effective structure of the local electronic Hamiltonian. This is achieved in such a way that the dynamics rigorously obey detailed balance by construction. The approach is however only applicable for computing thermal correlation functions and is not able to treat the thermalization of systems initialized out of equilibrium, as we wish to do in this paper. Another approach designed to go beyond mean-field methods is the symmetric quasiclassical (SQC) approach.[29] Although the dynamics of SQC are identical to those of the mean-field methods introduced previously, the electronic populations are instead measured by windowing the electronic phase space in such a way that the obtained values are guaranteed to lie in the physical range between zero and one. An advantage of the method compared to other quasiclassical theories is that SQC is known to obey detailed balance in the limit of zero electron-nuclear coupling.[30] However, although it is often an improvement over the original MMST methods, SQC may still lead to inaccurate long-time dynamics in the more general coupled regime,[31] and is typically of a similar level of accuracy to the spin-mapping methods.[12; 26] This is because although SQC avoids measuring negative populations, its trajectories still suffer from their contribution to the nuclear force.
Fewest switches surface hopping (FSSH)[32] ensures that the nuclear force is always physical by propagating the nuclei on a single adiabatic surface at any given time. In order to describe nonadiabatic transitions, the active surface is changed stochastically with a probability that mimics the dynamics of the underlying electronic wavefunction. After a transition, the nuclear momenta are rescaled in the direction of the nonadiabatic coupling vector, consistent with earlier semiclassical scattering theories.[33; 34] Remarkably, FSSH has been shown to approximately fulfill detailed balance, even in cases when the electronic and nuclear degrees of freedom are strongly coupled (although rigorously so only in the limit of strong nonadiabatic coupling).[35] Although much is known about the approximations underlying surface hopping (see in particular the work of Subotnik and Kapral),[36; 37] the main problem of FSSH is that the approach still lacks a complete formal derivation. This means that there is still some disagreement over the correct way of performing certain aspects of the method that could not be derived from first principles, such as the treatment of frustrated hops. Additionally, the stochastic nature of FSSH means that the electronic wavefunction and the current active surface can become inconsistent during the dynamics, leading to the so-called 'inconsistency error' that is known to significantly degrade the accuracy of FSSH observables.[38] It seems that the ultimate quasiclassical approach for accurately describing both the short- and long-time dynamics in nonadiabatic systems is one that combines the best features of SQC and FSSH. The mapping approach to surface hopping (MASH) is a newly developed quasiclassical method that achieves just that.[38] MASH consistently windows both the observables and the nuclear force, eliminating the possibility of obtaining negative populations in either. This results in MASH having deterministic dynamics, for which the active surface and the electronic degrees of freedom remain consistent throughout the entire time evolution. The approach is also completely derivable from first principles, leading to a unique prescription for frustrated hops that ensures that the exact short-time dynamics is correctly reproduced. Finally, MASH has been tested on a range of commonly-used condensed-phase model systems, where it appears to describe the correct long-time thermalization behaviour. In this paper, we utilize classical ergodic theory to predict the long-time thermalization behaviour of a wide range of quasiclassical approaches, including mean-field mapping methods, SQC and MASH. By comparing our predictions to those of the expected quantum-classical outcome, we provide a simple and rigorous procedure for benchmarking the long-time dynamics. We apply our theory to spin-boson models in challenging parameter regimes and also to an anharmonic model, which together illustrate why negative populations pose a serious problem for reproducing the correct thermalization behaviour for the majority of quasiclassical approaches. Despite this, we show that MASH is guaranteed to exactly reproduce the correct thermal populations in the long-time limit for all quantum-classical systems, confirming its potential for being one of the best methods for accurately simulating nonadiabatic dynamics in real condensed-phase systems. 
## II Theory In the present work, we focus on the dynamics of a quantum subsystem consisting of \(N\) states, \(|k\rangle\), coupled to a classical environment, in this case a heat bath. The general form of the Hamiltonian is \[\hat{H}(x,p)=\frac{p^{2}}{2m}+U(x)+\hat{V}(x), \tag{1}\] where \((x,p)\) are the multidimensional classical phase-space variables of the environment. \(U(x)\) and \(\hat{V}(x)\) are state-independent and state-dependent potentials respectively; the latter is a traceless \(N\!\times\!N\) matrix in the diabatic basis, \(\{|k\rangle\}_{k=1}^{N}\). Let us remark that it is valid to treat the bath classically provided that the energy associated with the thermal fluctuations of the environmental modes is large compared to their zero-point energies, although not necessarily large compared to the energy scales of the quantum subsystem. In the framework of atomistic systems in the condensed phase, the subsystem and the environment typically refer to electronic and nuclear degrees of freedom respectively, although this particular situation is not strictly required in the following. The aim of quasiclassical approaches is to accurately predict the evolution of quantum correlation functions, with a computational cost comparable to classical simulations. Approximate quasiclassical dynamics associated with Eq. (1) can be derived by first taking the partial Wigner transform of the quantum Hamiltonian and then taking the classical limit of the bath.[39; 40] The quantum subsystem is mapped onto a continuous phase-space representation, via Cartesian mapping variables, \(X,P\in\mathbb{R}^{N}\).[41; 9] This leads to a representation of the full system in terms of the phase-space points, \(\Gamma=\{X,P,x,p\}\). The Hamiltonian [Eq. (1)] is then mapped onto a phase-space function, \(\mathcal{H}(\Gamma)=p^{2}/2m+U(x)+\mathcal{V}(x,X,P)\), from which the dynamics are obtained. Quasiclassical approaches derived in this way result in equations of motion of the form \[\dot{X}_{k} =\sum_{k^{\prime}}\bra{k}\hat{V}(x)|k^{\prime}\rangle\,P_{k^{ \prime}}, \tag{2a}\] \[\dot{P}_{k} =-\sum_{k^{\prime}}\bra{k}\hat{V}(x)|k^{\prime}\rangle\,X_{k^{ \prime}},\] (2b) \[\dot{x}_{j} =\frac{p_{j}}{m_{j}},\] (2c) \[\dot{p}_{j} =-\frac{\partial U(x)}{\partial x_{j}}+\mathcal{F}_{j}(x,X,P). \tag{2d}\] Here we have assumed that the matrix elements \(\bra{k}\hat{V}(x)|k^{\prime}\rangle\) are real. The first two lines are equivalent to the Schrodinger equation for the real and imaginary parts of the electronic wavefunction. The last two lines are Newton's equations of motion for the bath modes. The expressions for the state-dependent potential, \(\mathcal{V}(x,X,P)\) and nuclear force, \(\mathcal{F}_{j}(x,X,P)\), depend on the specific method considered. Apart from the total energy, \(\mathcal{H}(\Gamma)\), these equations of motion additionally conserve the norm of the Cartesian mapping variables, \(r=\frac{1}{2}\sum_{k=1}^{N}(X_{k}^{2}+P_{k}^{2})\). From these dynamics, quantum correlation functions, \(\operatorname{Tr}[\hat{\rho}_{0}\hat{A}\hat{B}(t)]\), are then approximated by \[\mathcal{C}_{AB}(t)=\int\,\mathrm{d}\Gamma\,\rho_{0}(\Gamma)A(\Gamma)B( \Gamma_{t}), \tag{3}\] where the observable operators are mapped onto phase-space functions [i.e., \(A(x,p)\mapsto A(\Gamma)\)], the specifics of which are method dependent. 
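As a concrete illustration of how Eqs. (2) are used in practice, the following Python sketch propagates a single trajectory of a two-state system with one nuclear degree of freedom. The diabatic model, the harmonic state-independent potential and the mean-field form of the force (introduced in the next subsection) are assumptions made purely for illustration, with \(\hbar=1\) and \(m=1\).

```python
import numpy as np

def V(x):
    """Assumed 2x2 real diabatic potential matrix (a toy avoided crossing)."""
    return np.array([[x, 0.5], [0.5, -x]])

def dV(x):
    """Derivative of V with respect to the nuclear coordinate x."""
    return np.array([[1.0, 0.0], [0.0, -1.0]])

def propagate(X, P, x, p, n_steps=1000, dt=1e-3):
    """Simple Euler integration of Eqs. (2) with a mean-field force."""
    for _ in range(n_steps):
        Vm, dVm = V(x), dV(x)
        F = -0.5 * (X @ dVm @ X + P @ dVm @ P)  # state-dependent force
        dX, dP = Vm @ P, -Vm @ X                # Eqs. (2a) and (2b)
        x, p = x + dt * p, p + dt * (-x + F)    # Eqs. (2c) and (2d), U = x^2/2
        X, P = X + dt * dX, P + dt * dP
        # The norm r = (|X|^2 + |P|^2)/2 is conserved up to integrator error.
    return X, P, x, p

# Initialize the mapping variables with r = 1 in state 1, at x = -1, p = 0.
print(propagate(np.array([np.sqrt(2.0), 0.0]), np.zeros(2), -1.0, 0.0))
```

A correlation function such as Eq. (3) is then estimated by averaging \(B(\Gamma_{t})\) over many such trajectories, with the initial conditions \(\Gamma\) sampled from \(\rho_{0}(\Gamma)A(\Gamma)\).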
Note that in some approaches, the quasiclassical representations of the initial and time-evolved operators can differ, even if quantum mechanically the operators are identical (due to different projections of electronic operators[11; 22] or due to the inclusion of weighting factors into correlation functions[38]). A well known limitation of the majority of quasiclassical methods is their inability to obey detailed balance and recover the correct long-time populations.[6; 16; 28; 42] This issue results in dynamical correlation functions relaxing to incorrect long-time limits. As we will see, the way in which the correlation function operators and the state-dependent force are represented by phase-space functions can lead to significant differences in the thermalization properties of different methods. ### Mean-field approaches For mean-field approaches, the state-dependent potential is approximated by its expectation value with respect to the Cartesian mapping variables. This gives rise to dynamics that correspond to the nuclei being propagated on a weighted average of the potential energy surfaces \[\mathcal{V}(x,X,P) =\frac{1}{2}\sum_{kk^{\prime}}\bra{k}\hat{V}(x)|k^{\prime}\rangle \,(X_{k}X_{k^{\prime}}+P_{k}P_{k^{\prime}}), \tag{4a}\] \[\mathcal{F}_{j}(x,X,P) =-\frac{1}{2}\sum_{kk^{\prime}}\frac{\partial\bra{k}\hat{V}(x)|k^ {\prime}\rangle}{\partial x_{j}}(X_{k}X_{k^{\prime}}+P_{k}P_{k^{\prime}}). \tag{4b}\] Let us remark that Eqs. (2) are simply Hamilton's equations of motion given the mean-field form of the state-dependent potential [Eq. (4a)]. The prototypical mean-field approach is Ehrenfest dynamics, for which both the nuclear force and the electronic observables are given by expectation values associated with the normalized time-evolved electronic wavefunction (here expressed in terms of mapping variables). Improved mean-field methods have been obtained by mapping the electronic subsystem onto a fictitious system containing continuous degrees of freedom, \((X,P)\), such that the quantum and nuclear subsystems can be treated on an equal footing. Whereas in Ehrenfest theory, \(X\) and \(P\) simply correspond to the real and imaginary parts of the electronic wavefunction, in the mapping approaches, they span a space in which representations of the wavefunction are constructed. While the mean-field expression for the state-dependent nuclear force remains the same [Eq. (4b)], the phase-space representations of the observable operators generally depend on the mapping space used. In the case of the Meyer-Miller-Stock-Thoss (MMST) mapping,[17; 18] electronic states are mapped onto the single-excitation subspace of a set of \(N\) harmonic oscillators. This leads to (at least) two possible representations of the electronic operators in terms of phase-space functions.[11; 22] In the Wigner approach, the phase-space functions are obtained from the Wigner transform of the associated operators in the mapped harmonic-oscillator system, analogous to the representation of the potential energy operator used in Eq. (4a). In the singly-excited oscillator (SEO) approach,[18] projection operators onto the physically relevant single-excitation subspace are added, which results in an additional factor of \(\phi(r)=16\mathrm{e}^{-2r}\) appearing from the Wigner transform. At least one factor of \(\phi(r)\) is required for the integral over mapping variables to converge. 
These exponential factors are normally incorporated into the definition of the initial density, \(\rho_{0}(\Gamma)\), as they define the distribution from which the Cartesian mapping variables are initially sampled. We will call this contribution \(\rho_{0,s}\). Both of these approaches lead to the representation of the identity operator, \(\mathcal{I}_{\mathrm{s}}\), being a function of the mapping variable norm, \(r\). It was however previously observed that representing the identity operator with the number one (which can be thought of as the exact mapping for this operator) can lead to more accurate results.[11; 24; 25] We refer to MMST methods that represent the identity operator in this way as 'unity approaches'. Another variant of MMST mapping utilizes so-called focused initial conditions when the dynamics are initialized in an electronic population,[21] by initially sampling the mapping variables from the regions that satisfy \(A(\Gamma)=1\). These thus have delta functions for their initial \(r\)-distributions. All Ehrenfest trajectories have \(r=1\) due to the absence of electronic zero-point energy, whereas in focused MMST methods, which utilize the energy levels of a harmonic oscillator, the populated state contributes \(\frac{3}{2}\) and the \(N-1\) unpopulated states contribute \(\frac{1}{2}\) each, leading to \(r=\frac{3}{2}+\frac{1}{2}(N-1)=\frac{1}{2}(N+2)\). For spin mapping,[12; 26] the electronic states are described by a set of spin variables, which results in the following advantageous features over MMST mapping. First, the associated mapping space is isomorphic with the original quantum subspace, such that the use of projection operators is no longer required and the representation of observable operators is therefore unique. Second, the correspondence between the identity and the number one arises naturally from the underlying theory and does not need to be imposed, as in the case of the unity approaches. The algorithm is practically identical to the MMST-focused method except that the hypersphere radius is \(r=\sqrt{N+1}\). Spin mapping in its original formulation is a linearized quasiclassical method.[12; 26] Spin-PLDM is a partially linearized extension[13; 43] developed within the framework of the partially linearized density matrix (PLDM) approach.[44; 45] The equations of motion are similar to those of the fully-linearized mapping methods except that two sets of mapping variables are employed to describe the forward and backward propagation separately. Table 1 summarizes the different ways in which observable operators are represented by the methods studied in this work. We remark that double-SEO is also known as the linearized semiclassical initial value representation (LSC-IVR),[19; 20] while single-SEO is often referred to as the Poisson bracket mapping equation (PBME).[22; 46] Most cases are easily obtained by rewriting the expressions from the original papers[11; 12; 26] in terms of \(r\). The more obscure spin-PLDM expression for \(\rho_{0,s}\) given in Table 1 was first derived in Ref. [47] by integrating out all other spin degrees of freedom apart from the centroid. Note that this particular expression is only valid for two-state systems. One of the major problems of most mean-field approaches is the presence of negative populations, which can lead to both unphysical values for the long-time limit of correlation functions and unphysical dynamics corresponding to propagating the nuclei on inverted potentials.
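A minimal numerical example makes the danger concrete. If the populations sum to one but one of them becomes negative, the population-weighted surface can acquire a negative curvature, so that the force accelerates the trajectory outwards instead of confining it; the two diabatic wells below are an assumed toy model, not taken from any of the systems studied later.

```python
V1 = lambda x: 0.5 * x**2        # shallow diabatic well
V2 = lambda x: 2.0 * x**2 + 1.0  # steeper diabatic well, higher in energy

def mean_field(x, n1, n2):
    """Mean-field potential weighted by the (mapping) populations n1, n2."""
    return n1 * V1(x) + n2 * V2(x)

print(mean_field(2.0, 0.5, 0.5))    #  5.5: physical populations, confining
print(mean_field(2.0, 1.5, -0.5))   # -1.5: the negative weight on the steep
                                    # state inverts the curvature, so the
                                    # trajectory can run off to infinity
```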
In the following, we introduce two approaches that are designed to alleviate the problem of negative populations, namely the symmetric quasiclassical theory (SQC) and the mapping approach to surface hopping (MASH). ### Symmetric quasiclassical windowing (SQC) The symmetric quasiclassical approach (SQC) attempts to improve upon the standard MMST mapping approaches by measuring the electronic populations with 'windows' that guarantee that population observables always physically lie between zero and one. These windows are usually defined in terms of the action-angle variables (\(n_{k}\geq 0\) and \(q_{k}\) respectively), which are related to the Cartesian mapping variables introduced in Sec. II as follows:[48] \[X_{k} = \sqrt{2(n_{k}+\gamma)}\cos q_{k}, \tag{5a}\] \[P_{k} = -\sqrt{2(n_{k}+\gamma)}\sin q_{k}. \tag{5b}\] The most successful choice for the windowing functions are the so-called 'triangular windows', which for state \(k\) take the form[49] \[W_{k}(n) = (2-\gamma-n_{k})^{2-N}h(n_{k}+\gamma-1)h(2-\gamma-n_{k}) \tag{6}\] \[\times \prod_{k^{\prime}\neq k}h(2-2\gamma-n_{k}-n_{k^{\prime}}),\] where \(h(\cdot)\) denotes the Heaviside step function, the zero-point energy parameter is set to \(\gamma=1/3\), and the angle variables, \(q_{k}\), are uniformly sampled independently from the interval \([0,2\pi)\). The success of this windowing scheme is due to the fact that the windows associated with different electronic states touch, allowing the approach to correctly describe population transfer even between weakly-coupled states.[50] However, because the windowing functions given by Eq. (6) do not fill the entire mapping space, the sum of the populations is not a constant of motion. Therefore, all SQC observables must be renormalized to obtain a total population of one whenever they are measured. While this scheme guarantees that the population observables always return physical values, the approach still suffers from the effects of inverted potentials, because the dynamics evolve via the same Hamiltonian as for the mean-field methods. This means that while it has been shown that SQC correctly thermalizes in the limit of weak system-bath coupling,[30] it is expected that this is in general not true in other parameter regimes, particularly for strongly-coupled and anharmonic systems. One suggestion to alleviate the problem of inverted potentials is to introduce a state-dependent zero-point energy parameter, \(\gamma_{k}\), into the definition of the state-dependent nuclear force [Eq. (4b)], and adjust their values independently for each trajectory so that the contribution to the force from each electronic population is correctly reproduced.[51] On the one hand, this strategy can reduce the likelihood of trajectories unphysically accelerating off to infinity (at least at short times). On the other hand, this means that the values of \(\gamma_{k}\) used for measuring the observables and performing the dynamics are inconsistent. ### Mapping approach to surface hopping (MASH) The mapping approach to surface hopping (MASH) [38] is a new nonadiabatic trajectory approach that offers the best of both worlds between FSSH and quasiclassical mapping dynamics. MASH can be thought of as going beyond SQC by windowing both the observables and the nuclear force, thus solving the problem of inverted potentials completely. Windowing forces in this way introduces hops similar to those of FSSH, although the inconsistency error of FSSH is not present in MASH. 
These appealing features make MASH one of the most promising approaches for performing accurate nonadiabatic dynamics within realistic ab initio simulations of molecules at a relatively low computational cost. MASH was originally derived for systems involving two electronic states using the spin-mapping variables in the adiabatic basis, given by \[rS_{x}^{\mathrm{ad}} =X_{+}X_{-}+P_{+}P_{-}, \tag{7a}\] \[rS_{y}^{\mathrm{ad}} =X_{+}P_{-}-X_{-}P_{+},\] (7b) \[rS_{z}^{\mathrm{ad}} =\tfrac{1}{2}(X_{+}^{2}+P_{+}^{2}-X_{-}^{2}-P_{-}^{2}), \tag{7c}\] where \(\pm\) refer to the upper and lower adiabatic states. Like FSSH, MASH works within the kinematic picture, where the nuclear momentum retains the intuitive meaning of mass times velocity, unlike the canonical momentum associated with the Hamiltonian in the adiabatic representation.[52] Thus the MASH expression for the energy takes the same form as in Eq. (1), except that the state-dependent potential is now given in the adiabatic basis. In terms of these mapping variables, the MASH representation of the state-dependent potential and the nuclear force operator are given by [38] \[\mathcal{V}(x,S^{\mathrm{ad}}) =V_{z}^{\mathrm{ad}}(x)\;\mathrm{sgn}(S_{z}^{\mathrm{ad}}), \tag{8a}\] \[\mathcal{F}_{j}(x,S^{\mathrm{ad}}) =-\frac{\partial V_{z}^{\mathrm{ad}}(x)}{\partial x_{j}}\;\mathrm{ sgn}(S_{z}^{\mathrm{ad}})\] \[\quad+4V_{z}^{\mathrm{ad}}(x)d_{j}(x)S_{x}^{\mathrm{ad}}\delta(S_ {z}^{\mathrm{ad}}), \tag{8b}\] where \(\mathrm{sgn}(\cdot)\) returns the sign of its argument, the adiabatic potential energy surfaces are \(V_{\pm}(x)=U(x)\pm V_{z}^{\mathrm{ad}}(x)\), \(d_{j}(x)\) is the nonadiabatic coupling vector [53] and \(\delta(\cdot)\) is the Dirac delta function. From Eq. (8a) it can be seen that the adiabatic windows used in MASH correspond to the spin-hemispheres. These windows, like the triangular windows of SQC, touch at the equator, but unlike SQC windows, they fill the whole mapping space, which is necessary in MASH to guarantee that the force is well defined throughout the entire time-propagation. The last term on the right-hand side of Eq. (8b) ensures energy conservation by applying an impulse at the equator. 
As MASH is guaranteed to exactly reproduce the short-time dynamics of the quantum-classical Liouville equation,[4] this term therefore constitutes a unique and correct prescription for the momentum rescaling and frustrated hops, which is a feature that is lacking in almost all previous surface-hopping approaches. In practice the algorithm is simply a deterministic version of FSSH with momentum rescalings at each attempted hop.

\begin{table}
\begin{tabular}{l c c c c c}
Method & \(r^{N-1}\rho_{0,\mathrm{s}}\) & \(\mathcal{I}_{\mathrm{s}}\) & \(\beta\to 0\) & \(\beta\to\infty\) & \(\alpha\to 0\) \\ \hline
MASH & \(\mathcal{W}_{AB}(\mathbf{S}^{\mathrm{ad}})\) & 1 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
SQC & \(h(2-r)\) & \(h([rS_{z}^{\mathrm{ad}}]-2+r)\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) \\
Spin mapping & \(\delta(r-\sqrt{N+1})\) & 1 & \(\checkmark\) & \(\times\sqrt{3}\) & \(\times\) \\
Spin-PLDM & \(\frac{4}{3}rh(\sqrt{3}-r)\) & 1 & \(\checkmark\) & \(\times\frac{4\sqrt{3}}{3}\) & \(\times\) \\
Ehrenfest & \(\delta(r-1)\) & 1 & \(\times\frac{1}{3}\) & \(\checkmark\) & \(\times\) \\
MMST-focused & \(\delta(r-\frac{1}{2}(N+2))\) & 1 & \(\times\frac{4}{3}\) & \(\times 2\) & \(\times\) \\
Single-Wigner & \(\frac{1}{2}r^{N-1}\phi(r)\) & \(r-1\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\
Double-SEO & \(\frac{1}{2}r^{N-1}\phi^{2}(r)\) & \(r-\frac{1}{2}\) & \(\checkmark\) & \(\checkmark\) & \(\times\) \\
Single-SEO & \(\frac{1}{2}r^{N-1}\phi(r)\) & \(r-\frac{1}{2}\) & \(\times\frac{3}{2}\) & \(\times 2\) & \(\times\) \\
Double-unity & \(\frac{1}{2}r^{N-1}\phi^{2}(r)\) & 1 & \(\checkmark\) & \(\times 4\) & \(\times\) \\
Single-unity & \(\frac{1}{2}r^{N-1}\phi(r)\) & 1 & \(\checkmark\) & \(\times 2\) & \(\times\) \\
\end{tabular}
\end{table}

Table 1: A comparison between the different quasiclassical methods studied in this work. For each method, the initial mapping-variable distribution, \(\rho_{0,\mathrm{s}}\), and the identity operator representation, \(\mathcal{I}_{\mathrm{s}}\), are given. Here, \(\phi(r)=16\mathrm{e}^{-2r}\). Derivations of the radial distributions and representations of the identity for different mean-field approaches can be found in Refs. [11; 12]. Additionally, the accuracy of the thermalization behaviour of each method in the high-temperature (\(\beta\to 0\)), low-temperature (\(\beta\to\infty\)) and weak electron-nuclear coupling (\(\alpha\to 0\)) limits is indicated, valid for any two-state system. We mark with ticks all entries that agree with the correct quantum–classical result, along with any erroneous multiplicative factors for the \(\beta\to 0\) and \(\beta\to\infty\) limits. For example, Ehrenfest predicts the long-time adiabatic population difference a factor of 3 too small when \(\beta\to 0\) but is correct when \(\beta\to\infty\). Further details on the representation of the operators in different methods are given in Sec. IV.

Finally, the initial mapping-variable distribution, \(\rho_{0,\mathrm{s}}=\mathcal{W}_{AB}(\mathcal{S}^{\mathrm{ad}})\), is determined so that the MASH windows exactly reproduce the Rabi oscillations of a bare electronic system.[38] Because of its use of the kinematic picture, the equations of motion associated with MASH are not obviously generated by a Hamiltonian.
This means that \(\mathcal{H}(\Gamma)\) with the MASH representation of the potential [Eq. (8a)] is technically a conserved energy rather than the generator of the quasiclassical dynamics. This raises the question of whether applying classical ergodic theory to predict the long-time limit of the MASH correlation functions is justified. Despite this, the method does possess many of the features associated with Hamiltonian dynamics, such as the conservation of phase-space volume, suggesting that the use of classical ergodic theory may still be applicable. In Sec. III.2, we will show numerically that the long-time limits of MASH correlation functions do agree with the predictions of classical ergodic theory, which justifies our use of it for MASH. ### Ergodic Theory Quasiclassical approaches that are both deterministic and based on independent trajectories can be analyzed within the framework of classical ergodic theory. In this section, we expand this idea to derive a general expression for the long-time limits of correlation functions, Eq. (12). As previously discussed, the equations of motion [Eq. (2)] conserve the norm of the Cartesian mapping variables, \(r=\frac{1}{2}\sum_{k=1}^{N}(X_{k}^{2}+P_{k}^{2})\). We cannot therefore simply apply classical ergodic theory directly to our problem. In fact, according to the so-called _ergodic hierarchy_,[54] a necessary requirement for the dynamics to exhibit mixing behavior on a given manifold of the phase space is the condition of ergodicity, i.e., only the Hamiltonian, \(\mathcal{H}(\Gamma)\), is allowed to be conserved on that manifold. Thus, in order to isolate the additional conserved variable, \(r\), we define a set of hyperspherical coordinates for the mapping variables, \((X,P)\mapsto(r,\Omega)\), where \(\Omega\) denotes the solid angle. By introducing \(\tilde{\Gamma}=(\Omega,x,p)\), we can rewrite Eq. (3) as \[\mathcal{C}_{AB}(t) =\int_{0}^{\infty}\mathrm{d}r\,r^{N-1}\mathcal{C}_{AB}^{(r)}(t), \tag{9a}\] \[\mathcal{C}_{AB}^{(r)}(t) =\int\mathrm{d}\tilde{\Gamma}\,\rho_{0}(r,\tilde{\Gamma})A(r, \tilde{\Gamma})B(r,\tilde{\Gamma}_{t}). \tag{9b}\] To evaluate the long-time limit of Eq. (9), we define \(\rho_{t,0}^{(r)}(\tilde{\Gamma}^{\prime}|\tilde{\Gamma})\) as the conditional probability of occupying the state \(\tilde{\Gamma}^{\prime}\) at time \(t\), given that the dynamics propagate on the hypersurface of constant \(r\) and that the system was initialized in state \(\tilde{\Gamma}\). Equation (9b) is then expanded as \[\mathcal{C}_{AB}^{(r)}(t) =\int\mathrm{d}\tilde{\Gamma}\,\rho_{0}(r,\tilde{\Gamma})A(r, \tilde{\Gamma})\int\mathrm{d}\tilde{\Gamma}^{\prime}\,\rho_{t,0}^{(r)}(\tilde {\Gamma}^{\prime}|\tilde{\Gamma})B(r,\tilde{\Gamma}^{\prime}). \tag{10}\] We will assume that the dynamics fulfill the _strong mixing condition_[54, 55] \[\lim_{t\to\infty}\rho_{t,0}^{(r)}(\tilde{\Gamma}^{\prime}|\tilde{\Gamma})= \rho_{\mathrm{eq}}(r,\tilde{\Gamma}^{\prime})=\frac{\mathrm{e}^{-\beta \mathcal{H}(r,\tilde{\Gamma}^{\prime})}}{\mathcal{Z}(r)}, \tag{11}\] where \(\mathcal{Z}(r)=\int\mathrm{d}\tilde{\Gamma}\,\mathrm{e}^{-\beta\mathcal{H}(r,\tilde{\Gamma})}\). The equilibrium canonical distribution in Eq. (11) is the long-time distribution expected for a classical ergodic system with finite temporal correlations.[56] The inverse temperature, \(\beta\), in Eq. (11) is obtained using the assumption of equipartition of energy in thermal equilibrium. 
Equation (11) implies that as \(t\to\infty\), the probability of reaching any point \(\tilde{\Gamma}^{\prime}\) on a given manifold at constant \(r\) does not depend on the initial conditions. This is expected to be valid in most models that include a large number of bath degrees of freedom that couple to the relevant subsystem. With Eqs. (10) and (11), we obtain the following expression for the long-time limit of quasiclassical correlation functions: \[\mathcal{C}_{AB}(t\to\infty) =\int_{0}^{\infty}\mathrm{d}r\,r^{N-1}\langle A\rangle_{0}^{(r)}\langle B\rangle_{\mathrm{eq}}^{(r)}, \tag{12a}\] \[\langle A\rangle_{0}^{(r)} =\int\mathrm{d}\tilde{\Gamma}\,\rho_{0}(r,\tilde{\Gamma})A(r,\tilde{\Gamma}), \tag{12b}\] \[\langle B\rangle_{\mathrm{eq}}^{(r)} =\frac{1}{\mathcal{Z}(r)}\int\mathrm{d}\tilde{\Gamma}\,\mathrm{e}^{-\beta\mathcal{H}(r,\tilde{\Gamma})}B(r,\tilde{\Gamma}). \tag{12c}\] The correct long-time limit expected for the quantum-classical correlation function as \(\hbar\to 0\) is however given by \[\mathcal{C}_{AB}^{\mathrm{QC}}(t\to\infty) =\langle A\rangle_{0}^{\mathrm{QC}}\,\langle B\rangle_{\mathrm{eq}}^{\mathrm{QC}}\,, \tag{13a}\] \[\langle A\rangle_{0}^{\mathrm{QC}} =\int\mathrm{d}x\mathrm{d}p\;\mathrm{tr}_{\mathrm{s}}[\hat{\rho}_{0}(x,p)\hat{A}(x,p)], \tag{13b}\] \[\langle B\rangle_{\mathrm{eq}}^{\mathrm{QC}} =\frac{1}{Z_{\mathrm{QC}}}\int\mathrm{d}x\mathrm{d}p\;\mathrm{tr}_{\mathrm{s}}[\mathrm{e}^{-\beta\hat{H}(x,p)}\hat{B}(x,p)], \tag{13c}\] where \(\mathrm{tr}_{\mathrm{s}}[\cdot]\) denotes the partial trace with respect to the subsystem and \(Z_{\mathrm{QC}}=\int\mathrm{d}x\mathrm{d}p\;\mathrm{tr}_{\mathrm{s}}[\mathrm{e}^{-\beta\hat{H}(x,p)}]\).[57] This provides the benchmark result against which we will test the various quasiclassical predictions. While higher-order contributions in \(\hbar\) to the quantum-classical Boltzmann operator do in principle exist,[58] we do not consider them here, as almost all quasiclassical approaches cannot even reproduce the dominant zeroth-order term correctly in all cases. Additionally, we show in Appendix A that Eq. (13) is independent of the Hamiltonian representation, as long as the \(\hbar\to 0\) limit is taken. The main difficulty for quasiclassical approaches in reproducing this long-time limit is found in the term given by Eq. (13c). This is because the majority of mappings implemented in quasiclassical approaches can only at most reproduce the correct trace relations for a product of two operators, \(\int\mathrm{d}x\mathrm{d}p\;\mathrm{tr}_{\mathrm{s}}[\hat{H}(x,p)\hat{B}(x,p)]=\int\mathrm{d}\Gamma\,\mathcal{H}(\Gamma)B(\Gamma)\), and will therefore not be able to correctly describe all the terms arising from the Taylor expansion of the Boltzmann operator in Eq. (13c). In order to better understand the relative accuracy of different quasiclassical approaches in reproducing the correct long-time limit of correlation functions, we apply this analysis to the specific case of two-state systems coupled to a heat bath.

## III Application to two-level systems

The arguments derived so far hold for a subsystem consisting of an arbitrary number of quantum levels. For the sake of simplicity, here and in the following we apply our analysis to two-level quantum systems, although the majority of our arguments apply equally well to multi-state problems. In this case, the state-dependent potential in Eq.
(1) can be written as \[\begin{split}\hat{V}(x)&=\Delta(x)\hat{\sigma}_{x}+\kappa(x)\hat{\sigma}_{z},\\ &\equiv V_{z}^{\mathrm{ad}}(x)\hat{\sigma}_{z}^{\mathrm{ad}}(x),\end{split} \tag{14}\] where \(\Delta(x)\) and \(\kappa(x)\) denote diabatic Hamiltonian parameters and \(\hat{\sigma}_{i}\), for \(i=x,y,z\), are the Pauli operators in the diabatic basis. These, together with the \(2\times 2\) identity, \(\hat{\mathcal{I}}_{\mathrm{s}}\), form a complete set of Hermitian operators for the two-level system. The \(\hat{\sigma}_{i}^{\mathrm{ad}}(x)\) operators are the Pauli matrices in the adiabatic basis, related to the diabatic \(\hat{\sigma}_{i}\) operators by a linear transformation.[38] Finally, \(V_{z}^{\mathrm{ad}}(x)=\sqrt{\Delta(x)^{2}+\kappa(x)^{2}}\) denotes half the energy gap in the adiabatic basis. We test our theoretical predictions on two nonadiabatic models in which the state-dependent potential depends on a one-dimensional reaction coordinate, \(x_{\mathrm{c}}\). This convenient choice implies that the nuclear phase-space integrals in Eq. (12) become one-dimensional, simplifying the interpretation of our results. An \(f\)-dimensional secondary bath, with coordinates \(x_{1},\dots,x_{f}\), is introduced to provide friction on the reaction coordinate, such that the dynamics thermalize in the long-time limit. The secondary bath interacts with the subsystem only via the reaction coordinate and can therefore be easily integrated out of the long-time limit expressions.[28] The two contributions to the potential in Eq. (1) are then expressed as \[U(x) =\frac{1}{2}\sum_{j=1}^{f}\omega_{j}^{2}\left(x_{j}+\frac{c_{j}x_{\mathrm{c}}}{\omega_{j}^{2}}\right)^{2}+U_{\mathrm{RC}}(x_{\mathrm{c}}), \tag{15a}\] \[\hat{V}(x) =\hat{V}_{\mathrm{RC}}(x_{\mathrm{c}}), \tag{15b}\] where \(U_{\mathrm{RC}}(x_{\mathrm{c}})\) and \(\hat{V}_{\mathrm{RC}}(x_{\mathrm{c}})\) depend on the specific model considered and \(m=1\) for all degrees of freedom, which corresponds to working in mass-weighted coordinates. We also choose a purely Ohmic spectral density for the secondary bath \[J(\omega)=\eta\omega. \tag{16}\] The use of an Ohmic spectral density means that the dynamical simulations can be easily performed by evolving the reaction coordinate using Langevin equations of motion, which implicitly describe the effects of the secondary bath (see the sketch below).[59] In the following, we focus on the long-time dynamics of \(\hat{B}=\hat{\sigma}_{z}^{\mathrm{ad}}(x_{\mathrm{c}})\), which corresponds to the population difference between the two adiabatic states. Additionally, we consider a factorized initial condition, where the electronic subsystem is initialized in \(\hat{A}=\frac{1}{2}\hat{\mathcal{I}}_{\mathrm{s}}\) and the phase-space variables associated with the reaction coordinate are sampled from an initial Gaussian distribution \[\rho_{\mathrm{b}}(x_{\mathrm{c}},p_{\mathrm{c}})=\frac{\beta\Omega}{2\pi}\mathrm{e}^{-\beta(\frac{1}{2}p_{\mathrm{c}}^{2}+\frac{1}{2}\Omega^{2}x_{\mathrm{c}}^{2})}, \tag{17}\] where \(\Omega\) defines its physical width. We call this correlation function \(\mathcal{C}_{\mathcal{I}z}(t)\). Note that other correlation functions with observables orthogonal to \(\hat{\sigma}_{z}^{\mathrm{ad}}(x_{\mathrm{c}})\) [e.g., the coherences \(\hat{\sigma}_{x}^{\mathrm{ad}}(x_{\mathrm{c}})\) and \(\hat{\sigma}_{y}^{\mathrm{ad}}(x_{\mathrm{c}})\)] have, to zeroth order in \(\hbar\), a zero expectation value in the long-time limit by symmetry.
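As a minimal illustration of such a Langevin propagation, the sketch below performs one integration step for the reaction coordinate. The BAOAB splitting is our own illustrative choice of integrator (the text does not specify one), and `force` stands for whatever state-dependent nuclear force the chosen quasiclassical method supplies.

```python
import numpy as np

def baoab_step(x, p, force, dt, eta, beta, rng):
    """One BAOAB Langevin step for the (mass-weighted, m = 1) reaction
    coordinate; the Ohmic secondary bath enters only through the friction
    eta and the matched thermal noise at inverse temperature beta."""
    p += 0.5 * dt * force(x)                              # B: half kick
    x += 0.5 * dt * p                                     # A: half drift
    c = np.exp(-eta * dt)                                 # O: Ornstein-Uhlenbeck
    p = c * p + np.sqrt((1.0 - c**2) / beta) * rng.standard_normal()
    x += 0.5 * dt * p                                     # A: half drift
    p += 0.5 * dt * force(x)                              # B: half kick
    return x, p
```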
The symmetry argument above implies that if the long-time limit of \(\mathcal{C}_{\mathcal{I}z}(t)\) is captured correctly, then so too will all other correlation functions be, as long as they are constructed from the appropriate linear combinations of the adiabatic observables.

### Spin-boson model

The spin-boson model[60] is commonly used to describe charge transfer in the condensed phase. Despite its apparent simplicity, it provides a stringent test for quasiclassical methods, especially in strongly asymmetric cases due to the problem of negative populations. In the reaction-coordinate picture, the potentials associated with the spin-boson model are[61] \[U_{\mathrm{RC}}(x_{\mathrm{c}}) =\tfrac{1}{2}\Omega^{2}x_{\mathrm{c}}^{2}, \tag{18a}\] \[\hat{V}_{\mathrm{RC}}(x_{\mathrm{c}}) =\Delta\hat{\sigma}_{x}+(\varepsilon+\alpha x_{\mathrm{c}})\hat{\sigma}_{z}, \tag{18b}\] where \(\Omega\) is the frequency of the reaction coordinate (which we also choose to coincide with the width of the initial Gaussian distribution [Eq. (17)]), \(\Delta\) is the coupling between the two diabatic states and \(\alpha\) is the system-bath coupling strength. We employ reduced units where mass and \(\hbar\) are \(1\). The long-time limits of the adiabatic population difference are shown in Fig. 1 as a function of the energy bias, \(\varepsilon\), and are calculated with several different quasiclassical approaches. The other parameters are fixed to \(\beta=0.3\), \(\Delta=1\), \(\alpha=1\), \(\Omega=1\), so that the condition for classical nuclei, \(\beta\Omega<1\), is satisfied.[62; 63] This parameter choice is also justified by calculations discussed in Ref. [28], where quasiclassical dynamics in the same parameter regime were found to agree well with numerically exact results with all degrees of freedom treated quantum mechanically.[64; 65] The validity of our theoretical formula, Eq. (12), for predicting the long-time limit of quasiclassical correlation functions has already been demonstrated by us for this model in another recent work,[15] where we found that its predictions matched well with dynamical simulations. The general robustness of quasiclassical approaches for thermalizing correctly in any parameter regime of this model can be ascertained by considering the two parameter limits of \(\varepsilon\to 0\) and \(\varepsilon\to\infty\). Of these extremes, one would expect the \(\varepsilon\to 0\) limit to be the least challenging for quasiclassical approaches, as for our chosen parameter set of the model, the thermal energy is large compared to all other energy scales, such that the quasiclassical description of both electronic and nuclear degrees of freedom should be valid. It is therefore slightly surprising that a significant number of the approaches we tested fail in this regime, including the commonly used Ehrenfest approach. The \(\varepsilon\to\infty\) limit is even more challenging, as this leads to unphysical negative populations associated with the excited state in many quasiclassical approaches [i.e., \(\mathcal{C}_{\mathcal{I}z}<-1\)], as can be seen in Fig. 1. We do however note that the double SEO and single Wigner approaches, despite still having phase-space regions that correspond to negative populations, do thermalize to the correct physical value in the \(\varepsilon\to\infty\) limit. The best approaches with regards to their thermalization behaviour for this model are single Wigner, SQC and MASH, which all thermalize reasonably accurately in both the \(\varepsilon\to 0\) and \(\varepsilon\to\infty\) limits.
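For reference, the benchmark value that these methods are compared against can be evaluated by simple quadrature. The sketch below does this using the closed-form quantum-classical limit derived later in Sec. IV [Eq. (20)], with the spin-boson parameters of this section; the grid bounds and spacing are arbitrary numerical choices, and the bath average is taken over the reaction-coordinate potential only, since the secondary bath integrates out.

```python
import numpy as np

def benchmark_long_time(eps, delta=1.0, alpha=1.0, omega=1.0, beta=0.3):
    """Quantum-classical benchmark for the long-time adiabatic population
    difference of the spin-boson model, via quadrature of Eq. (20)."""
    x = np.linspace(-30.0, 30.0, 20001)              # reaction-coordinate grid
    w = np.exp(-beta * 0.5 * omega**2 * x**2)        # e^{-beta U_RC(x)}
    vz = np.sqrt(delta**2 + (eps + alpha * x)**2)    # V_z^ad(x), half the gap
    return -np.trapz(w * np.sinh(beta * vz), x) / np.trapz(w * np.cosh(beta * vz), x)

for eps in (0.0, 1.0, 5.0):
    print(eps, benchmark_long_time(eps))
```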
These approaches are consequently observed to thermalize relatively well for all parameter regimes of the model, as illustrated in Fig. 1. In particular, MASH is seen to exactly reproduce our benchmark result across the whole range.

Figure 1: The long-time limits of the adiabatic population difference as a function of the energy bias, \(\varepsilon\), for the spin–boson model introduced in Sec. III.1. The other parameters are fixed to \(\beta=0.3\), \(\Delta=1\), \(\alpha=1\), \(\Omega=1\). Inset: the difference between the benchmark quantum–classical result [from Eq. (13)] and the quasiclassical predictions for the same methods shown in the main panel.

### An anharmonic model

In order to fully test the effect of inverted potentials on the thermalization behaviour of mean-field approaches, an anharmonic model is needed for which at least one of the diabatic potentials can become unbounded. We choose the reaction-coordinate potentials appearing in Eq. (15) to be \[U_{\rm RC}(x_{\rm c}) =\tfrac{1}{2}\big{[}\tfrac{1}{2}\Omega^{2}x_{\rm c}^{2}+{\rm e}^{-\Omega(x_{\rm c}-\bar{x}_{\rm c})}\big{]}, \tag{19a}\] \[\hat{V}_{\rm RC}(x_{\rm c}) =\Delta\hat{\sigma}_{x}+\tfrac{\alpha}{2}\big{[}\tfrac{1}{2}\Omega^{2}x_{\rm c}^{2}-{\rm e}^{-\Omega(x_{\rm c}-\bar{x}_{\rm c})}\big{]}\hat{\sigma}_{z}, \tag{19b}\] where \(0\leq\alpha\leq 1\) determines the strength of the electron-nuclear coupling and we choose the other parameters to be \(\Delta=1\), \(\Omega=1\), \(\bar{x}_{\rm c}=5\) and \(\beta=0.3\). This model is constructed so that the electronic and nuclear subsystems are uncoupled at \(\alpha=0\) and the model becomes identical to a previously used electronic predissociation model[66; 67] at \(\alpha=1\). On increasing \(\alpha\), we see in Fig. 2 that the red diabatic well at lower energy becomes progressively less bounded. Additionally, the energy gap between the lower and upper adiabatic states increases, which more significantly drives the system into the lower energy adiabat in the long-time limit. If a trajectory has a large enough negative population in the purple diabat [Fig. 2], so that the contribution from its potential becomes inverted, then the total nuclear force can become unbounded, resulting in the trajectory accelerating off to infinity.

Figure 2: The diabatic potentials associated with the anharmonic model, \(\tfrac{1}{4}[1\pm\alpha]\Omega^{2}x_{\rm c}^{2}+\tfrac{1}{2}[1\mp\alpha]{\rm e}^{-\Omega(x_{\rm c}-\bar{x}_{\rm c})}\), for several values of the coupling parameter, \(\alpha\). The potentials have been vertically shifted such that the lowest energy minimum of the diabats is located at zero. In the case of \(\alpha=0\), the two diabatic potentials are identical.

This problem can be understood in terms of the mean-field approximation to the state-dependent potential [Eq. (4a)], the magnitude of which can become unphysically large through the multiplication of the Cartesian mapping variables with a norm, \(r\), greater than one. For this model, this results in the following definition of the effective electron-nuclear coupling associated with these methods: \(\alpha_{\mathrm{eff}}=\alpha r\). Given that the red diabatic potential shown in Fig. 2 becomes unbounded when \(\alpha\geq 1\), we see that this will first occur for the mean-field methods when \(\alpha r_{\mathrm{max}}\geq 1\), where \(r_{\mathrm{max}}\) is the maximum allowed value of the Cartesian mapping variable norm for that method.
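This criterion is simple enough to tabulate directly. The sketch below encodes it for the \(r_{\mathrm{max}}\) values discussed in the text (1 for Ehrenfest, 2 for SQC, unbounded for MMST approaches); the dictionary and function names are our own illustrative bookkeeping.

```python
import numpy as np

# The mean-field potential of the anharmonic model becomes unbounded for a
# trajectory once alpha_eff = alpha * r >= 1, so a method is safe only if
# alpha * r_max < 1.
R_MAX = {"Ehrenfest": 1.0, "SQC": 2.0, "MMST": np.inf}

def max_stable_alpha(method):
    """Largest coupling alpha below which no allowed trajectory can feel an
    inverted, unbounded diabatic potential."""
    r_max = R_MAX[method]
    return 0.0 if np.isinf(r_max) else 1.0 / r_max

for name in R_MAX:
    print(f"{name}: stable for alpha < {max_stable_alpha(name)}")
```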
We therefore note that the inverted potential problem is particularly problematic for MMST approaches, for which \(r_{\mathrm{max}}=\infty\), so that the nuclear force can become unbounded for any non-zero value of \(\alpha\). Results for the long-time limits of the adiabatic population difference for this model are shown in Fig. 3 for different quasiclassical methods. The lines in the figure correspond to the theoretical predictions from ergodic theory, while the stars are numerical results from dynamical simulations. Results for each method are only provided for the values of \(\alpha\) for which the nuclear force cannot become unbounded because of the problem of inverted potentials. We first note that the long-time limits from the dynamical simulations agree well with our simple formula [Eq. (12)], further confirming the validity of applying classical ergodic theory to predict the long-time limit of quasiclassical correlation functions. Importantly, the agreement confirms that the same formula can also be used to accurately predict the long-time limit of MASH correlation functions, even though MASH does not formally have Hamiltonian-generated dynamics, as discussed previously in Sec. II.3. Of all the quasiclassical methods we have tested, MASH is the only one in complete agreement with the quantum-classical benchmark. Its success is largely due to its windowing scheme being applied consistently to both the observables and the nuclear force, such that the method does not suffer from the problem of negative populations in either. Although the SQC approach does not deviate too strongly from the exact long-time limit in the parameter regime \(0\leq\alpha<0.5\) (in part due to the windowing of its observables), the problem of unbounded inverted potentials contributing to the nuclear force (which, unlike in MASH, is not windowed) means that SQC becomes unstable in the \(0.5\leq\alpha\leq 1\) regime (because \(r_{\mathrm{max}}=2\) for SQC, as shown in Appendix B). It would of course be possible to simply discount unbounded trajectories, but this would introduce an ad hoc modification to the method, which may affect some of its formal properties. In addition to MASH, Ehrenfest also does not suffer from the problem of inverted potentials (because \(r=1\)) and can therefore also be applied in all parameter regimes of the model. However, we see from Fig. 3 that the Ehrenfest long-time populations are highly inaccurate, as was also observed in Fig. 1. In Sec. IV, we further analyze our long-time limit formula to better understand the deficiencies in the thermalization behaviour of certain methods, as well as the excellent thermalization properties of MASH.

## IV Analysis

For the two-state models considered in this paper, the benchmark quantum-classical predictions for the long-time limits of the adiabatic populations are obtained from Eq. (13) using \(\hat{A}=\frac{1}{2}\hat{\mathcal{I}}_{\mathrm{s}}\) and \(\hat{B}=\hat{\sigma}_{z}^{\mathrm{ad}}(x)\). Inserting the expressions for the Hamiltonian [Eqs. (1) and (14)] and explicitly performing both the quantum traces and the integrals over the nuclear momenta, we find that \[\mathcal{C}_{\mathcal{I}z}^{\mathrm{QC}}(t\to\infty)=-\frac{\left\langle\sinh\left[\beta V_{z}^{\mathrm{ad}}(x)\right]\right\rangle_{\mathrm{b}}}{\left\langle\cosh\left[\beta V_{z}^{\mathrm{ad}}(x)\right]\right\rangle_{\mathrm{b}}}.
\tag{20}\] The phase-space average associated with the state-independent nuclear potential is defined as \[\langle f(x)\rangle_{\mathrm{b}}=\frac{\int\mathrm{d}x\,\mathrm{e}^{-\beta U(x)}f(x)}{\int\mathrm{d}x\,\mathrm{e}^{-\beta U(x)}}, \tag{21}\] and we have assumed that \(\hat{\rho}_{0}(x,p)\) is normalized, \(\int\mathrm{d}x\mathrm{d}p\,\mathrm{tr}_{\mathrm{s}}[\hat{\rho}_{0}(x,p)]=1\). This expression is most easily derived within the kinematic representation. We show in Appendix A that the same result can be obtained using the Hamiltonian in the adiabatic representation, in the \(\hbar\to 0\) limit. In the following, we calculate the equivalent long-time limit expressions for several quasiclassical methods. In order to perform the electronic phase-space integrals necessary to compare our predictions with Eq. (20), we choose to work with the spin-mapping variables in the adiabatic basis [Eq. (7)].

Figure 3: The long-time limits of the adiabatic population difference for the anharmonic model introduced in Sec. III.2, as a function of the coupling parameter, \(\alpha\). The lines in the figure correspond to our theoretical predictions given by Eq. (12), while the stars are results from dynamical simulations. The simulations were performed by sampling the initial nuclear phase-space variables from Eq. (17) and then propagating the trajectories for \(t=400\) with \(\eta=2\Omega\) in Eq. (16). Results are only shown in regions where trajectories cannot become unbounded due to inverted potentials.

### Mean-Field Approaches

For mean-field approaches, the long-time limit involves the following operator representations: \(A(r,\tilde{\Gamma})=\frac{1}{2}\mathcal{I}_{\mathrm{s}}(r)\), \(B(r,\tilde{\Gamma})=rS_{z}^{\mathrm{ad}}\) and \(\rho_{0}(r,\tilde{\Gamma})=\rho_{0,\mathrm{s}}(r)\rho_{\mathrm{b}}(x,p)\). Inserting these into Eq. (12) and performing some of the phase-space integrals, we find \[\mathcal{C}_{\mathcal{I}z}^{\mathrm{MF}}(t\to\infty)=-\int_{0}^{\infty}\mathrm{d}r\,r^{2}\rho_{0,\mathrm{s}}(r)\,\mathcal{I}_{\mathrm{s}}(r)\] \[\times\left\langle\frac{\beta rV_{z}^{\mathrm{ad}}(x)\cosh[\beta rV_{z}^{\mathrm{ad}}(x)]-\sinh[\beta rV_{z}^{\mathrm{ad}}(x)]}{[\beta rV_{z}^{\mathrm{ad}}(x)]^{2}\mathcal{Z}^{\mathrm{MF}}(r)}\right\rangle_{\mathrm{b}}, \tag{22a}\] \[\mathcal{Z}^{\mathrm{MF}}(r)=\left\langle\frac{\sinh\left[\beta rV_{z}^{\mathrm{ad}}(x)\right]}{\beta rV_{z}^{\mathrm{ad}}(x)}\right\rangle_{\mathrm{b}}. \tag{22b}\] Comparing Eq. (20) and Eq. (22), we see that the expression for the long-time limit of the mean-field correlation function has the wrong functional form, such that there is no universal expression for \(\rho_{0,\mathrm{s}}(r)\mathcal{I}_{\mathrm{s}}(r)\) that guarantees that the corresponding mean-field approach would always thermalize to the quantum-classical benchmark. For a given temperature, a system-specific expression for these functions can however be determined so that Eq. (20) and Eq. (22) agree. This idea has recently been used to design the "ellipsoid mapping", a new mean-field approach for computing thermal correlation functions that rigorously obeys detailed balance.[28] Such an approach however requires a static calculation to be carried out at thermal equilibrium for each system and temperature of interest, in order to calculate the correct associated expression for \(\rho_{0,\mathrm{s}}(r)\mathcal{I}_{\mathrm{s}}(r)\), before any dynamical simulations can be performed.
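To illustrate the mismatch in functional form, the following sketch evaluates both Eq. (20) and Eq. (22) by quadrature for the spin-boson model of Sec. III.1, collapsing the radial integral in Eq. (22) onto a single norm (Ehrenfest corresponds to \(r=1\) with \(\mathcal{I}_{\mathrm{s}}=1\), as listed in Table 1). All numerical settings are arbitrary illustrative choices.

```python
import numpy as np

beta, delta, alpha, omega = 0.3, 1.0, 1.0, 1.0
x = np.linspace(-30.0, 30.0, 20001)                  # reaction-coordinate grid
wb = np.exp(-beta * 0.5 * omega**2 * x**2)           # e^{-beta U_RC(x)}
avg = lambda f: np.trapz(wb * f, x) / np.trapz(wb, x)

def qc_limit(eps):
    """Benchmark, Eq. (20)."""
    bv = beta * np.sqrt(delta**2 + (eps + alpha * x)**2)
    return -avg(np.sinh(bv)) / avg(np.cosh(bv))

def mf_limit(eps, r=1.0):
    """Eq. (22) with the radial integral collapsed onto a single norm r;
    r = 1 corresponds to Ehrenfest (Table 1)."""
    v = beta * r * np.sqrt(delta**2 + (eps + alpha * x)**2)
    z_mf = avg(np.sinh(v) / v)                       # Eq. (22b)
    return -avg((v * np.cosh(v) - np.sinh(v)) / v**2) / z_mf

for eps in (0.0, 2.0):
    print(eps, qc_limit(eps), mf_limit(eps))
```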
If we instead decide to only use universal functions for \(\rho_{0,\mathrm{s}}(r)\mathcal{I}_{\mathrm{s}}(r)\), then their form can at best be chosen to ensure correct thermalization in certain parameter limits. In the following, we consider the limiting cases of high temperature, low temperature and weak system-bath coupling. For the high-temperature limit (\(\beta\to 0\)), we show in Appendix C.1 that mean-field approaches are at best capable of reproducing the benchmark [Eq. (20)] up to first order in \(\beta\). To achieve this for any two-level system, the condition \[\frac{1}{3}\int_{0}^{\infty}\mathrm{d}r\,r^{3}\rho_{0,\mathrm{s}}(r)\mathcal{I}_{\mathrm{s}}(r)=1 \tag{23}\] must be satisfied. Mean-field methods that fulfill Eq. (23) are marked with a tick in the \(\beta\to 0\) column of Table 1. For methods that do not satisfy this condition, the value associated with the left-hand side of Eq. (23) is given, which, by comparing Eqs. (14a) and (14b), is seen to be the multiplicative error in the long-time population difference of the method when \(\beta\to 0\). Incidentally, these errors qualitatively explain the deviations of the long-time limit populations from the quantum-classical benchmark in the \(\varepsilon\to 0\) limit of Fig. 1. While the \(\varepsilon\to 0\) and the \(\beta\to 0\) limits of a spin-boson model do not always coincide, they do so approximately for the parameter regime that we study here. In the low-temperature limit (\(\beta\to\infty\)), mean-field approaches can correctly reproduce the long-time limit of correlation functions to at best zeroth order in \(\mathrm{e}^{-\beta V_{z}^{\mathrm{ad}}(x)}\). As shown in Appendix C.2, they must then satisfy \[\int_{0}^{\infty}\mathrm{d}r\,r^{2}\rho_{0,\mathrm{s}}(r)\mathcal{I}_{\mathrm{s}}(r)=1. \tag{24}\] Mean-field methods that satisfy Eq. (24) are marked with a tick in the \(\beta\to\infty\) column of Table 1. For methods that do not satisfy this condition, the value associated with the left-hand side of Eq. (24) is given, which is the multiplicative error in the long-time population difference of the method when \(\beta\to\infty\). For strongly asymmetric systems, relaxation in the long-time limit will predominantly occur into the lowest potential-energy well, like in the low-temperature limit. This means that our results for \(\beta\to\infty\) can also be used to explain the thermalization behaviour of mean-field methods in the spin-boson model for \(\varepsilon\to\infty\), given in Fig. 1. We note however that the asymptotic approach to the low-temperature limit, which corresponds to the term that is first-order in \(\mathrm{e}^{-\beta V_{z}^{\mathrm{ad}}(x)}\), is always wrong for mean-field approaches, as is also observed numerically from our results in Fig. 1. We show in Appendix C.3 (and also illustrate with crosses for all the mean-field methods in the \(\alpha\to 0\) column of Table 1) that it is not possible to design mean-field methods which consistently thermalize correctly in the weak-coupling limit. Despite this, spin mapping and spin-PLDM appear very accurate in the \(\alpha\to 0\) limit of the anharmonic model [see Fig. 3]. This is because the weak-coupling limit for this particular parameter set of the model also coincides with the \(\beta\to 0\) limit. These methods would not have been as accurate in this limit had we instead studied a lower-temperature regime.
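For the delta-function radial distributions in Table 1, the conditions of Eqs. (23) and (24) reduce to closed-form expressions, which makes the tabulated error factors easy to reproduce. Below is a minimal sketch, assuming \(N=2\) and \(\mathcal{I}_{\mathrm{s}}(r)=1\), for which the left-hand sides of Eqs. (23) and (24) reduce to \(r_{0}^{2}/3\) and \(r_{0}\) respectively when \(r^{N-1}\rho_{0,\mathrm{s}}(r)=\delta(r-r_{0})\).

```python
import numpy as np

# Delta locations r0 read off from the first column of Table 1 (N = 2).
methods = {"Ehrenfest": 1.0,             # delta(r - 1)
           "Spin mapping": np.sqrt(3),   # delta(r - sqrt(N+1))
           "MMST-focused": 2.0}          # delta(r - (N+2)/2)

for name, r0 in methods.items():
    beta0_factor = r0**2 / 3.0    # Eq. (23): should equal 1 when beta -> 0
    betainf_factor = r0           # Eq. (24): should equal 1 when beta -> inf
    print(f"{name}: beta->0 factor {beta0_factor:.3f}, "
          f"beta->inf factor {betainf_factor:.3f}")
```

Running this reproduces the multiplicative errors listed in Table 1, e.g. the factor 1/3 for Ehrenfest at high temperature and \(\sqrt{3}\) for spin mapping at low temperature.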
Let us remark that if the electron-nuclear coupling is identically zero, all methods discussed are able to correctly capture the Rabi oscillations of the isolated electronic subsystem. In this case ergodic theory does not apply, given that the dynamics do not thermalize to equilibrium at long times. Note that this condition is different from the \(\alpha\to 0\) limit discussed in Table 1 and Appendix C.3, where the coupling is assumed to be infinitesimal but nonzero.

### Symmetric quasiclassical windowing (SQC)

For SQC, the potential still retains the same representation as for the mean-field methods [i.e., Eq. (4a)], but the observable operators are now represented by the triangular windows, given by Eq. (14) in terms of the spin-mapping variables. Inserting these operator representations into Eq. (12) and performing some of the phase-space integrals gives \[\mathcal{C}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty) =\frac{\bar{\mathcal{C}}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty)}{\mathcal{N}^{\text{SQC}}(t\to\infty)}, \tag{25a}\] \[\bar{\mathcal{C}}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty) =-\int_{1}^{2}\mathrm{d}r\left(r-1\right)\times\left\langle\frac{\sinh[\beta V_{z}^{\text{ad}}(x)]\sinh[\beta(r-1)V_{z}^{\text{ad}}(x)]}{\beta rV_{z}^{\text{ad}}(x)\mathcal{Z}^{\text{MF}}(r)}\right\rangle_{\text{b}}, \tag{25b}\] \[\mathcal{N}^{\text{SQC}}(t\to\infty) =\int_{1}^{2}\mathrm{d}r\left(r-1\right)\times\left\langle\frac{\cosh[\beta V_{z}^{\text{ad}}(x)]\sinh[\beta(r-1)V_{z}^{\text{ad}}(x)]}{\beta rV_{z}^{\text{ad}}(x)\mathcal{Z}^{\text{MF}}(r)}\right\rangle_{\text{b}}, \tag{25c}\] where \(\mathcal{N}^{\text{SQC}}(t)\) is the normalization factor, which is required to ensure that the electronic populations sum to one. Note that we have applied the ergodic theory outlined in Sec. II.4 separately to each term in the ratio of Eq. (25a). A comparison with Eq. (20) clearly shows that Eq. (25) is not exact, again having the wrong functional form compared to the benchmark quantum-classical long-time limit. However, the advantage of SQC over the other mean-field approaches is that the long-time limit of its correlation function also reproduces the benchmark in the weak-coupling limit,[30] in addition to the high- and low-temperature limits (as indicated in Table 1). More details on the thermalization of SQC in these limits are given in Appendix C. The fact that SQC can simultaneously describe the thermalization correctly in all of these limits explains why it is reasonably accurate for all values of \(\varepsilon\) for the spin-boson model in Fig. 1 and accurate in the \(\alpha\to 0\) limit of the anharmonic model in Fig. 3. However for larger values of \(\alpha\), there is a clear deviation even before the unbounded inverted potentials appear at \(\alpha\geq 0.5\).

### Mapping approach to surface hopping (MASH)

In MASH, both the potential and observable operators are consistently described through the windowing scheme, \(\hat{\sigma}^{\text{ad}}_{z}(x)\mapsto\text{sgn}(S^{\text{ad}}_{z})\).
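The practical difference between the two observable representations can be summarized in a few lines. The following schematic sketch contrasts the MASH sign windows with the linear mean-field estimator \(B=rS_{z}^{\mathrm{ad}}\) from Table 1; it assumes the trajectory spin variables are already available from a simulation.

```python
import numpy as np

def population_difference(Sz, r=1.0, scheme="mash"):
    """Adiabatic population-difference estimator for an ensemble of
    trajectory spin variables: MASH uses sign windows, mean-field methods
    use the linear representation B = r * Sz (Table 1 / Sec. IV)."""
    Sz = np.asarray(Sz)
    if scheme == "mash":
        return np.mean(np.sign(Sz))   # each sample lies in {-1, +1}
    return np.mean(r * Sz)            # can leave [-1, 1] when r > 1,
                                      # i.e. negative populations
```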
Following a strategy similar to the one outlined above for the other methods, we find for the long-time population difference of MASH that \[\mathcal{C}^{\text{MASH}}_{\mathcal{I}z}(t\to\infty) =\int_{-1}^{1}\mathrm{d}S^{\text{ad}}_{z}\ \text{sgn}\,(S^{\text{ad}}_{z})\rho^{\text{MASH}}_{\text{eq}}(S^{\text{ad}}_{z}), \tag{26a}\] \[\rho^{\text{MASH}}_{\text{eq}}(S^{\text{ad}}_{z}) =\frac{\left\langle\mathrm{e}^{-\beta V_{z}^{\text{ad}}(x)\,\text{sgn}(S^{\text{ad}}_{z})}\right\rangle_{\text{b}}}{2\left\langle\cosh\left[\beta V_{z}^{\text{ad}}(x)\right]\right\rangle_{\text{b}}}, \tag{26b}\] where we have additionally used that the initial distribution for the spin-mapping variables for this correlation function is \(r\rho_{0,\text{s}}=\mathcal{W}_{\text{PP}}(\mathbf{S}^{\text{ad}})=2|S^{\text{ad}}_{z}|\).[38] The marginal equilibrium distribution function for MASH, \(\rho^{\text{MASH}}_{\text{eq}}(S^{\text{ad}}_{z})\), corresponds to the long-time limit of the conditional probability, given by Eq. (11), with the bath degrees of freedom integrated out. We first note that our expression for the marginal equilibrium distribution function for MASH agrees with the long-time distribution of \(S^{\text{ad}}_{z}\), histogrammed from an ensemble of dynamical MASH trajectories. This is shown in Fig. 4 for the same spin-boson model that was introduced in Sec. III.1, for several values of \(\varepsilon\).

Figure 4: The marginal equilibrium distribution function for the mapping variable \(S^{\text{ad}}_{z}\), calculated for the spin–boson model defined in Sec. III.1, for several values of the energy bias \(\varepsilon\). The colored lines are obtained from histogramming MASH trajectories initialized from Eq. (17) and propagated for \(t=500\) with the optimal damping parameter, \(\eta=2\Omega\) [Eq. (16)]. Additionally, the black dashed lines correspond to the expected results from classical ergodic theory, given by Eq. (26b).

This marginal distribution constitutes an exact discrete-valued representation of the quantum-classical Boltzmann factors, which demonstrates how MASH effectively quantizes the electronic states, enabling it to thermalize to the benchmark quantum-classical distribution in all cases. The ultimate proof of the long-time accuracy of the MASH dynamics is the fact that Eq. (20) and Eq. (26a) are seen to be identical once the integrals over the spin-mapping variables have been performed analytically. To the best of our knowledge, MASH is the only nonequilibrium quasiclassical approach that is guaranteed to thermalize correctly in all nonadiabatic systems in the \(\hbar\to 0\) limit. This result may come as a surprise as it is known that MASH does not rigorously obey detailed balance, due to the non-conservation of its weighting factor by the dynamics.[38] This raises the question of how MASH can still thermalize correctly. To describe the issue quantitatively, a measure of the microscopic reversibility error (MRE) was defined in Ref. [38]. Based on the same ergodicity arguments used here, we show in Appendix D that in the long-time limit the MRE vanishes. This shows that the violation of detailed balance by MASH can only lead to errors at intermediate times, with no adverse effects on its long-time relaxation behaviour.

## V Conclusions

In this paper, we presented an analysis of the long-time limits of quasiclassical approaches for simulating nonadiabatic dynamics.
We exploited the Hamiltonian structure of the dynamics to make use of results from classical ergodic theory, rigorously accounting for the conservation of the norm of the mapping variables. This allowed us to assess and compare the accuracy of different methods in capturing the correct detailed balance at long times. Our theoretical framework can be applied to any quasiclassical method which is incompressible (i.e., conserves phase-space volume), non-integrable (i.e., the dynamics cannot be solved analytically), and deterministic.[68] Using our theory, we tested the thermalization behaviour of a wide array of quasiclassical approaches on both the harmonic and anharmonic models. Our analysis revealed that most of the commonly used quasiclassical approaches violate detailed balance and thus do not recover the correct quantum-classical thermal distribution. Errors are particularly large for the anharmonic model, where the problem of inverted potentials meant that trajectories from the majority of quasiclassical approaches were unphysically accelerated off to infinity. Among all of the methods considered, only MASH is guaranteed to predict the exact quantum-classical correlation functions in the long-time limit, to zeroth-order in \(\hbar\). This method is therefore capable of solving the long-standing issue of detailed balance (at least in the long-time limit), a problem which has plagued the field of quasiclassical dynamics to date. In addition, because MASH does not suffer from the problem of inverted potentials, it is ideally suited for performing ab initio nonadiabatic simulations of molecules, where the strongly anharmonic adiabatic potentials would pose a significant challenge to the other quasiclassical approaches. The main limitation of the original version of MASH, introduced in Ref. [38], is that it was derived only for two-state problems. Recently, a multi-state generalization of MASH has been proposed by Runeson and Manolopoulos.[69] In this version, they modified the observable operators to be more similar to the mean-field approaches, which made it easier for them to generalize the underlying theory to multi-state problems. Like the original MASH approach, it is guaranteed to thermalize correctly in the long-time limit. However, it is not guaranteed to exactly reproduce the short-time dynamics of the quantum-classical Liouville equation, which was the means by which the original MASH approach obtained unique and rigorous prescriptions for the momentum rescalings, treatment of frustrated hops and decoherence corrections. In addition, the measurement of observables is not performed using the same windowing procedure as for the nuclear force, meaning that the two can become inconsistent and the observables may measure negative populations. While the reported numerical results look promising,[69] given that the rigorous nature and internal consistency of the original MASH approach was its main advantage over FSSH, we still think that there is more work to be done in order to obtain the ultimate multi-state generalization of MASH. There may however still be a place for mean-field mapping methods. In contrast to typical molecules in the gas phase, many solid-state systems are characterized by a high degree of electronic coherence. Due to the dense bands of electronic states, the treatment of the nonadiabatic dynamics using mean-field approaches seems the most suitable.
From our analysis, we conclude that single-Wigner and SQC would be the best mean-field approaches to use in this case. Given that the electronic structure of solids is fairly harmonic, we also expect that the problem of inverted potentials would not be as severe. It is interesting to compare the conclusions from our analysis to those obtained in previous work. In Refs. [24; 11; 25], it was observed that the unity methods led to improved accuracy compared to other MMST approaches. Similar improvements were observed in Refs. [12; 26; 43; 13; 70] for spin-mapping methods, which also treat the identity operator exactly within the mapping. However, most previous studies were performed on harmonic models with reasonably small energy biases. From our analysis in this paper, we observe that for more extreme systems, the quasiclassical approaches that treat the identity operator exactly do not always perform the best, suggesting that other criteria may also be important to consider. For example, a recent analysis performed on the ab initio excited-state dynamics in ethylene found that the best approaches in this case were those that had the least phase-space volume associated with negative populations.[71] The long-time accuracy of quasiclassical methods can often be significantly improved by utilizing these techniques within the Nakajima-Zwanzig generalized quantum master equation (GQME) formalism.[72; 73; 74] This approach describes the non-unitary dissipative dynamics of the subsystem in terms of a non-Markovian equation of motion. Memory effects in the GQME are captured by kernels that decay on timescales that are in general much shorter than the typical electronic relaxation times. This means that it is often advantageous to calculate the kernels using quasiclassical methods, instead of obtaining the full dynamics directly. However, it has recently been shown that using the GQME is not guaranteed to fix detailed balance[15] if the input quasiclassical method is not sufficiently accurate at short times. Given the remarkable long-time accuracy exhibited by MASH, it could be insightful to couple this method to the GQME for determining the importance of non-Markovian dissipative effects in the dynamics, as well as to reduce the cost of quasiclassical simulations through computing short-lived memory kernels. Recently there has been increased interest in understanding the effect of strong light-matter coupling on the properties of matter. For quantum light, the coupled electron-photon dynamics can be described using quasiclassical approaches, where the photon phase-space variables are initially sampled from a Wigner distribution.[75; 76; 77] While the short-time dynamics is observed to be fairly accurate, the long-time dynamics is known to suffer as a result of unphysical zero-point energy leakage from these quantum modes.[78] For classical light, the presence of an external driving field breaks the thermalization to the Boltzmann distribution. The dynamics converge at long times to nonequilibrium stationary states which can be described in certain parameter regimes by the Floquet-Gibbs distribution.[79; 80] In both cases, our analysis based on classical ergodic theory could be extended in order to analyze how well quasiclassical approaches describe the correct light-modified long-time properties.

###### Acknowledgements.

This project has received funding from the European Union's Horizon 2020 under MSCA Grant No. 801459, FP-RESOMUS.
We also thank Johan Runeson and Joseph Lawrence for helpful discussions.

## Author Declarations

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Graziano Amati**: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Visualization (equal); Writing - original draft (equal); Writing - review & editing (equal). **Jonathan R. Mannouch**: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Visualization (equal); Writing - original draft (equal); Writing - review & editing (equal). **Jeremy O. Richardson**: Conceptualization (equal); Formal analysis (supporting); Supervision (lead); Writing - review & editing (equal).

## Appendix A Hamiltonian representations

The total Hamiltonian can be expressed in various representations. In the following, we demonstrate that our results are independent of these choices. For simplicity, we perform the analysis in this section on two-state Hamiltonians, although our conclusions are entirely valid for multi-state problems. We start with the Hamiltonian written in the diabatic basis [Eq. (1)]. If the classical nuclear limit is first taken before the electronic operators are transformed to the adiabatic basis, we end up in the kinematic representation[52] [Eq. (14)] \[\hat{H}_{\text{kin}}(x,p)=\frac{p^{2}}{2m}+U(x)+V^{\text{ad}}_{z}(x)\hat{\sigma}^{\text{ad}}_{z}(x). \tag{17}\] To illustrate that our quantum-classical benchmark [Eq. (13)] is identical in both the diabatic and kinematic representations, we compute the quantum-classical partition function using the diabatic representation \[\begin{split} Z_{\text{QC}}&=\int\text{d}x\text{d}p\operatorname{tr}_{\text{s}}\bigl{[}\operatorname{e}^{-\beta\hat{H}(x,p)}\bigr{]}\\ &=\int\text{d}x\text{d}p\operatorname{e}^{-\beta p^{2}/2m-\beta U(x)}\operatorname{tr}_{\text{s}}\Bigl{[}\operatorname{e}^{-\beta\hat{V}(x)}\Bigr{]}\\ &=\int\text{d}x\text{d}p\operatorname{e}^{-\beta p^{2}/2m-\beta U(x)}\left(\operatorname{e}^{-\beta V^{\text{ad}}_{z}(x)}+\operatorname{e}^{\beta V^{\text{ad}}_{z}(x)}\right),\end{split} \tag{18}\] where we have used the fact that the trace of the exponential of an operator is the sum of the exponentials of its eigenvalues. This result is clearly identical to that obtained from the kinematic representation, as the only difference is that the potential energy is already diagonal in the latter. If the order of operations is interchanged so that the transformation to the adiabatic basis is done before the classical nuclear limit is taken, we end up in the adiabatic representation \[\begin{split}\hat{H}_{\text{ad}}(x,p_{\text{ad}})=\frac{1}{2m}&\left(p_{\text{ad}}+\hbar d(x)\hat{\sigma}^{\text{ad}}_{y}(x)\right)^{2}\\ &+U(x)+V^{\text{ad}}_{z}(x)\hat{\sigma}^{\text{ad}}_{z}(x),\end{split} \tag{19}\] where \(d(x)=\bigl{\langle}\psi_{+}(x)\bigr{|}\,\frac{\partial}{\partial x}\,\bigl{|}\,\psi_{-}(x)\bigr{\rangle}\) is the nonadiabatic coupling vector and \(p_{\text{ad}}\) is the canonical momentum associated with the adiabatic representation. Factors of \(\hbar\), which originate from the nuclear momentum operator, are explicitly retained for the following discussion.
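The equality of the two traces in Eq. (18) is easy to confirm numerically. The sketch below evaluates \(\operatorname{tr}_{\text{s}}[\operatorname{e}^{-\beta\hat{V}(x)}]\) both directly in the diabatic basis and via the adiabatic eigenvalues \(\pm V^{\text{ad}}_{z}(x)\), using the spin-boson potential of Sec. III.1 at an arbitrary nuclear position; the parameter values are illustrative only.

```python
import numpy as np
from scipy.linalg import expm

beta, delta, eps, alpha = 0.3, 1.0, 0.5, 1.0

def trace_boltzmann(x):
    """tr_s[exp(-beta V(x))] evaluated two ways: directly in the diabatic
    basis and via the adiabatic eigenvalues +/- V_z^ad(x) [Eq. (18)]."""
    V = np.array([[eps + alpha * x, delta],
                  [delta, -(eps + alpha * x)]])   # kappa*sigma_z + delta*sigma_x
    direct = np.trace(expm(-beta * V))
    vz = np.sqrt(delta**2 + (eps + alpha * x)**2)
    return direct, np.exp(-beta * vz) + np.exp(beta * vz)

print(trace_boltzmann(0.7))   # the two values should agree
```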
Computing the quantum-classical partition function using the adiabatic representation, we find \[\begin{split} Z^{\text{ad}}_{\text{QC}}&=\int\text{d}x\text{d}p_{\text{ad}}\,\operatorname{tr}_{\text{s}}\Bigl{[}\operatorname{e}^{-\beta\hat{H}_{\text{ad}}(x,p_{\text{ad}})}\Bigr{]},\\ &=\int\text{d}x\text{d}p_{\text{ad}}\operatorname{e}^{-\beta(p_{\text{ad}}^{2}+\hbar^{2}d(x)^{2})/2m-\beta U(x)}\\ &\quad\times\operatorname{tr}_{\text{s}}\Bigl{[}\operatorname{e}^{-\beta\left(\hbar d(x)p_{\text{ad}}\hat{\sigma}^{\text{ad}}_{y}(x)/m+V^{\text{ad}}_{z}(x)\hat{\sigma}^{\text{ad}}_{z}(x)\right)}\Bigr{]},\\ &\simeq\int\text{d}x\text{d}p_{\text{ad}}\operatorname{e}^{-\beta p^{2}_{\text{ad}}/2m-\beta U(x)}\operatorname{tr}_{\text{s}}\Bigl{[}\operatorname{e}^{-\beta V^{\text{ad}}_{z}(x)\hat{\sigma}^{\text{ad}}_{z}(x)}\Bigr{]}\end{split} \tag{20}\] where in the final line we take the \(\hbar\to 0\) limit. It is then seen that Eqs. (18) and (20) are identical in this limit.

## Appendix B Operators for SQC in spherical coordinates

In this appendix, we rewrite the two-state SQC representation of the adiabatic population operators in spherical coordinates. Using this result, an analytic expression for the long-time limit of the adiabatic populations can be obtained for this method [Eq. (25)]. Unlike most other mapping approaches, the exact form of SQC depends on the representation chosen. We will utilize the adiabatic representation, which has previously been employed in ab initio simulations,[81] because this is expected to lead to the best possible predictions available from the approach. It is then also easier to compare with MASH, which is always run in the adiabatic representation. The SQC approach is usually expressed in terms of action-angle variables. These can be transformed to spherical coordinates by combining Eqs. (5) and (7), which leads to \[rS_{x}^{\mathrm{ad}}=r\sin\theta\cos\phi=2\sqrt{(n_{+}+\gamma)(n_{-}+\gamma)}\cos(q_{+}-q_{-}), \tag{12a}\] \[rS_{y}^{\mathrm{ad}}=r\sin\theta\sin\phi=2\sqrt{(n_{+}+\gamma)(n_{-}+\gamma)}\sin(q_{+}-q_{-}), \tag{12b}\] \[rS_{z}^{\mathrm{ad}}=r\cos\theta=n_{+}-n_{-}, \tag{12c}\] where \(q_{+}\) and \(q_{-}\) are the angle variables for the upper and lower adiabatic states and are initially sampled uniformly in the range \([0,2\pi)\). Additionally, \(n_{+}\) and \(n_{-}\) are the action variables for the two adiabatic states, sampled from the appropriate triangular window corresponding to the initial electronic population [Eq. (6)].[50] The expression for the windows in spherical coordinates can be determined from the inverse transformation of Eq. (12), which corresponds to \[n_{+} =\frac{r}{2}(1+\cos\theta)-\gamma, \tag{13a}\] \[n_{-} =\frac{r}{2}(1-\cos\theta)-\gamma, \tag{13b}\] \[q_{+}-q_{-} =\phi. \tag{13c}\] Note that \(q_{+}+q_{-}\) is an unimportant cyclic variable conjugate to the conserved radius. The new angle variable \(\phi\) is initially sampled uniformly in the range \([0,2\pi)\). Finally, inserting Eq. (13) into Eq. (6), we find that the triangular windows in spherical coordinates are \[\mathcal{W}_{+}(r,\theta) =h\left(\frac{r}{2}(1+\cos\theta)-1\right)h(2-r), \tag{14a}\] \[\mathcal{W}_{-}(r,\theta) =h\left(\frac{r}{2}(1-\cos\theta)-1\right)h(2-r). \tag{14b}\] These windows are illustrated in Fig. 5. Note that for \(r=2\), the SQC windows fill the \(\cos\theta\) axis and correspond to the hemispheres of the spin-sphere, which are identical to the windows used in MASH.
This connection is unsurprising, as it has already been noted that the weighting factors used in the MASH correlation functions correspond to those for the \(r=2\) MMST sphere.[38] Finally, in order to find the SQC representations for the \(r^{N-1}\rho_{0,s}(r)\) distribution and the \(\hat{\mathcal{I}}_{s}\) and \(\hat{\sigma}_{z}^{\mathrm{ad}}(x)\) operators, linear combinations of Eq. (14) can be taken, leading to \[r^{N-1}\rho_{0,s}(r) =h(2-r), \tag{15a}\] \[\mathcal{I}_{s}(r) =h(|rS_{z}^{\mathrm{ad}}|-2+r), \tag{15b}\] \[\sigma_{z}^{\mathrm{ad}}(x) =\mathrm{sgn}(S_{z}^{\mathrm{ad}})h\big{(}|rS_{z}^{\mathrm{ad}}|-2+r\big{)}. \tag{15c}\]

Figure 5: The triangular windows used in SQC expressed in spherical polar coordinates. The blue and orange regions correspond to \(W_{+}(r,\theta)\) and \(W_{-}(r,\theta)\) respectively, which take the form of spherical caps.

## Appendix C Perturbative expansions of the long-time limit

In this appendix, we obtain perturbative expressions for the long-time limit of various quasiclassical correlation functions. These expressions allow us to assess how accurately different methods thermalize in different parameter regimes. Note that it is unnecessary to carry out the following analysis for MASH, because, as proven in Sec. IV.3, the long-time limits of its correlation functions are exact and hence all orders of these perturbative expansions are guaranteed to be correctly reproduced.

### The \(\beta\to 0\) limit

The \(\beta\to 0\) limit allows us to assess how accurately quasiclassical approaches reproduce the long-time limit of correlation functions at high temperature. Because we are already assuming in this paper that we are in the high-temperature limit with respect to the nuclear degrees of freedom (so that a classical treatment of the bath is valid), we clarify that by the high-temperature limit here we mean relative to the electronic energy scales. Using the expressions Eqs. (20), (22) and (25), we obtain the following results for the first-order term in the corresponding high-temperature Taylor expansions \[\mathcal{C}^{\text{QC}}_{\mathcal{I}z}(t\to\infty) \sim-\beta\langle V^{\text{ad}}_{z}(x)\rangle_{\text{b}}, \tag{14a}\] \[\mathcal{C}^{\text{MF}}_{\mathcal{I}z}(t\to\infty) \sim-\frac{\beta}{3}\langle V^{\text{ad}}_{z}(x)\rangle_{\text{b}}\int\,\mathrm{d}r\,r^{3}\rho_{0,s}(r)\,\mathcal{I}_{\text{s}}(r), \tag{14b}\] \[\mathcal{C}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty) \sim-\beta\langle V^{\text{ad}}_{z}(x)\rangle_{\text{b}}. \tag{14c}\] We thus see that SQC is able to correctly thermalize in the high-temperature limit, although we note that this is not the case for all mean-field methods. Comparing Eqs. (14a) and (14b) leads to the condition expressed in Eq. (23) that must be satisfied for mean-field methods to describe this limit correctly. While the \(\varepsilon\to 0\) and the \(\beta\to 0\) limits of a spin-boson model do not always coincide, they approximately do for the parameter regime that we study here. This means that these high-temperature limit formulas can be used to understand the thermalization behaviour of quasiclassical approaches in the \(\varepsilon\to 0\) limit of the spin-boson model discussed in Sec. III.1.

### The \(\beta\to\infty\) limit

Here we investigate the low-temperature (\(\beta\to\infty\)) limit with respect to the electronic energy scales. We still assume that the temperature is high with respect to the nuclear degrees of freedom, such that it is valid to treat them classically.
The zeroth-order terms of these expansions are \[\mathcal{C}^{\text{QC}}_{\mathcal{I}z}(t\to\infty) \sim-1, \tag{15a}\] \[\mathcal{C}^{\text{MF}}_{\mathcal{I}z}(t\to\infty) \sim-\int\,\mathrm{d}r\,r^{2}\rho_{0,\text{s}}(r)\,\mathcal{I}_{\text{s}}(r), \tag{15b}\] \[\mathcal{C}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty) \sim-1. \tag{15c}\] We note that SQC also describes the thermalization in the low-temperature regime correctly, while mean-field methods do not unless they satisfy the condition Eq. (24). In the long-time limit, the system will predominantly relax into the lowest potential-energy well in both the \(\beta\to\infty\) and the \(\varepsilon\to\infty\) limits of the spin-boson model. This means that our long-time limit formulas for \(\beta\to\infty\) can also be used to understand the thermalization behaviour of quasiclassical approaches in the \(\varepsilon\to\infty\) limit of the spin-boson model discussed in Section III.1. We additionally remark that quasiclassical approaches that reproduce the correct \(\beta\to\infty\) limit are not in general guaranteed to reproduce the right asymptotic approach (apart from MASH), as observed in our numerical results [see in particular Ehrenfest and double SEO in Fig. 1].

### The \(\alpha\to 0\) limit

The weak-coupling limit (\(\alpha\to 0\)) leads to a significant simplification of our long-time limit formulas, because in this limit \(V^{\text{ad}}_{z}(x)=V^{(0)}_{z}\) becomes independent of the nuclear positions and such terms can be taken outside the phase-space averages, \(\left\langle\cdots\right\rangle_{\text{b}}\). This means that identical terms in the numerator and denominator of a fraction that previously belonged in different phase-space integrals now cancel. Performing these simplifications leads to the zeroth-order terms \[\mathcal{C}^{\text{QC}}_{\mathcal{I}z}(t\to\infty) \sim-\tanh\left(\beta V^{(0)}_{z}\right), \tag{16a}\] \[\mathcal{C}^{\text{MF}}_{\mathcal{I}z}(t\to\infty) \sim-\int\mathrm{d}r\,r^{2}\rho_{0,\text{s}}(r)\,\mathcal{I}_{\text{s}}(r)\qquad\times\Bigg{[}\coth(\beta V^{(0)}_{z})-\frac{1}{\beta rV^{(0)}_{z}}\Bigg{]}, \tag{16b}\] \[\mathcal{C}^{\text{SQC}}_{\mathcal{I}z}(t\to\infty) \sim-\tanh\left(\beta V^{(0)}_{z}\right). \tag{16c}\] We find that SQC is also exact in the weak-coupling limit, as was also shown in Refs. [82; 30]. This time, the mean-field condition depends on \(\beta V^{(0)}_{z}\) and so cannot be satisfied in general unless the method is explicitly dependent on this system-dependent variable (similar to Ref. [28]). These results are summarized in the corresponding column of Table 1.

## Appendix D Microscopic-reversibility error in MASH

The _microscopic-reversibility error_ (MRE) was introduced as a way of estimating the error in MASH arising from the violation of time-reversal symmetry.[38] This arises due to the presence of a weighting factor in the definition of the MASH correlation functions that is not conserved by the dynamics.
For the specific case of the adiabatic population difference studied in this work, the MRE is defined by \[\left\langle\left[|S^{\text{ad}}_{z}(t)|-|S^{\text{ad}}_{z}|\right]\text{sgn}(S^{\text{ad}}_{z}(t))\right\rangle_{0}, \tag{17}\] where \(\frac{1}{2}\) and \(\text{sgn}(S^{\text{ad}}_{z})\) are the MASH representations of the \(\frac{1}{2}\mathcal{I}_{\text{s}}\) and \(\hat{\sigma}^{\text{ad}}_{z}(x)\) operators respectively, \(2|S^{\text{ad}}_{z}|\) is the corresponding MASH weighting factor for this correlation function, and the ensemble average \(\left\langle\cdots\right\rangle_{0}\) is defined in Eq. (12b). Our formula for evaluating the long-time limit of quasiclassical correlation functions [Eq. (12)] can be applied here to evaluate the long-time limit of the MRE. Using \[\int_{-1}^{1}\mathrm{d}S^{\text{ad}}_{z}\left\langle\left[S^{\text{ad}}_{z}-\text{sgn}(S^{\text{ad}}_{z})\right]\rho^{\text{MASH}}_{\text{eq}}(S^{\text{ad}}_{z})\right\rangle_{\text{b}}=0 \tag{18}\] we find that the MRE vanishes in the long-time limit. To obtain this result, we have also used \(\langle|S^{\text{ad}}_{z}|\rangle_{0}=1\) and the definitions of the ensemble average \(\left\langle\cdots\right\rangle_{\text{b}}\) and the MASH thermal density given in Eqs. (21) and (26b) respectively. This result therefore explains why MASH is able to exactly reproduce the long-time limit of correlation functions despite not rigorously obeying detailed balance.
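As a closing numerical illustration, the sketch below evaluates the MASH marginal equilibrium distribution [Eq. (26b)] by quadrature and confirms that Eq. (26a) reproduces the benchmark of Eq. (20). The spin-boson parameters are an arbitrary illustrative choice, and the bath average is taken over the reaction-coordinate potential only, since the secondary bath integrates out.

```python
import numpy as np

beta, delta, eps, alpha, omega = 0.3, 1.0, 0.5, 1.0, 1.0
x = np.linspace(-30.0, 30.0, 20001)
wb = np.exp(-beta * 0.5 * omega**2 * x**2)           # e^{-beta U_RC(x)}
avg = lambda f: np.trapz(wb * f, x) / np.trapz(wb, x)

vz = np.sqrt(delta**2 + (eps + alpha * x)**2)        # V_z^ad(x)
norm = 2.0 * avg(np.cosh(beta * vz))

# Eq. (26b): the marginal is piecewise constant in sgn(Sz)
rho_plus = avg(np.exp(-beta * vz)) / norm            # Sz > 0
rho_minus = avg(np.exp(+beta * vz)) / norm           # Sz < 0

mash = rho_plus - rho_minus                          # Eq. (26a)
bench = -avg(np.sinh(beta * vz)) / avg(np.cosh(beta * vz))   # Eq. (20)
print(mash, bench)   # identical up to floating-point error
```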
2303.17879
CoSMo: a Framework to Instantiate Conditioned Process Simulation Models
Process simulation is gaining attention for its ability to assess potential performance improvements and risks associated with business process changes. The existing literature presents various techniques, generally grounded in process models discovered from event log data or built upon deep learning algorithms. These techniques have specific strengths and limitations. Traditional data-driven approaches offer increased interpretability, while deep learning-based ones excel at generalizing changes across large event logs. However, the practical application of deep learning faces challenges related to managing stochasticity and integrating information for what-if analysis. This paper introduces a novel recurrent neural architecture tailored to discover COnditioned process Simulation MOdels (CoSMo) based on user-based constraints or any other nature of a-priori knowledge. This architecture facilitates the simulation of event logs that adhere to specific constraints by incorporating declarative-based rules into the learning phase as an attempt to fill the gap of incorporating information into deep learning models to perform what-if analysis. Experimental validation illustrates CoSMo's efficacy in simulating event logs while adhering to predefined declarative conditions, emphasizing both control-flow and data-flow perspectives.
Rafael S. Oyamada, Gabriel M. Tavares, Sylvio Barbon Junior, Paolo Ceravolo
2023-03-31T08:26:18Z
http://arxiv.org/abs/2303.17879v4
# CoSMo: a Framework for Implementing Conditioned Process Simulation Models

###### Abstract

Process simulation is an analysis tool in process mining that allows users to measure the impact of changes, prevent losses, and update the process without risks or costs. In the literature, several process simulation techniques are available and they are usually built upon process models discovered from a given event log or learned via deep learning. Each group of approaches has its own strengths and limitations. The former is usually restricted to the control-flow but it is more interpretable, whereas the latter is not interpretable by nature but has a greater generalization capability on large event logs. Despite the great performance achieved by deep learning approaches, they are still not suitable to be applied to real scenarios and generate value for users. This issue is mainly due to the fact that their stochasticity is hard to control. To address this problem, we propose the CoSMo framework for implementing process simulation models fully based on deep learning. This framework enables simulating event logs that satisfy a constraint by conditioning the learning phase of a deep neural network. Through experiments, the simulation is validated from both control-flow and data-flow perspectives, demonstrating the proposed framework's capability of simulating cases while satisfying imposed conditions.

Keywords: Process Mining · Business Process Simulation · Deep Learning · What-if Analysis

## 1 Introduction

Process mining refers to a set of tools used for analyzing recorded data collected over time from information systems. The overall goal is to obtain insights that allow users to improve their business processes or support them in decision-making. Among the existing techniques devoted to this, process simulation models have gained renewed attention in recent research papers [12; 6; 11]. Simulating processes allows researchers and practitioners to improve and support their processes in various ways, such as validating processes before implementation or diagnosing ongoing processes [1]. In general, the current solutions stochastically simulate traces based on assumptions from the probability distributions obtained or learned from the event logs. Most of the current simulation models available in the literature are either implemented upon process models discovered from event logs [23; 5; 6] or learned from event logs via deep learning techniques [8; 11]. The former group of methods usually requires first discovering a process model, extracting information from it, and then generating simulated traces [18]. This step might introduce limitations and lead to suboptimal solutions since selecting the suitable process discovery algorithm according to the event log characteristics can be challenging [28]. Traditionally, these methods are also restricted to the control-flow and temporal behavior, which means they are not able to take into consideration extra event attributes such as resources and costs. In this sense, deep learning approaches present more flexibility when modeling a solution since they are able to include as many event attributes as available [14; 15; 20]. Moreover, Camargo et al. [10] have demonstrated that simulation models based on deep learning outperform simulation models based on process models for larger event logs and perform similarly for smaller event logs.
Despite these promising results, deep learning models still suffer from a lack of interpretability, which can be a major limitation for decision-makers. However, we believe that exploring innovative deep learning design ideas in process mining can help to mitigate this issue and advance the state-of-the-art of process simulation. Recently, Camargo et al. [11] have proposed a hybrid solution based on process mining and deep learning techniques to overcome the limitations of both groups of methods. Although hybrid solutions tend to be more powerful since they leverage the strengths of different techniques, approaches fully based on deep learning have not been widely and properly explored so far regarding the process simulation problem. To the best of our knowledge, the only contribution in the literature is the DeepGenerator [10], which has been proposed for the generation of event logs from scratch. However, simply generating data via deep learning might not be so beneficial due to the stochasticity introduced by the learned models. Process simulation models should be able to answer questions in order to compare possible changes with respect to key performance indicators [1], for example, by teaching models how to satisfy conditions imposed by users. Although stochasticity is an intrinsic property of many simulation methods, it is possible to constrain the output of a deep neural network by providing additional information during training. By incorporating auxiliary data into the training process, it is possible to limit the randomness of the network's output by steering it in desired directions guided by conditions. This can effectively reduce the stochastic nature of the output and increase the control over the model's behavior, which can be valuable when performing what-if analysis. Thus, considering the recent successful deep learning applications in process mining and the lack of alternative solutions derived from it, in this paper we propose CoSMo: a framework for developing COnditioned Simulation MOdels. CoSMo is capable of learning how to simulate processes while satisfying constraints. An example of a constraint that can be taught to condition the simulation model is resource usage. We study in this work how to simulate processes that do (or do not) use a specific resource, which essentially consists of teaching neural networks how to perform simulation under this condition of resource availability. Moreover, we take into consideration the existing deep neural architecture designs in the literature of process mining in order to develop our proposal. This way, we demonstrate through experiments that CoSMo is capable of learning how to satisfy conditions regardless of the underlying architecture. More specifically, we instantiate the DeepGenerator and a simpler architecture with fewer parameters to serve as baselines. Results show that our framework instantiations are capable of generating reasonable event logs from scratch while satisfying the imposed conditions and of performing what-if analysis by simulating desired scenarios on ongoing cases. The paper is organized as follows. Section 2 introduces the basic concepts for the understanding of this work along with a discussion of the related works. Subsequently, in Section 3 we discuss the limitations of existing process simulation models, and in Section 4 we introduce our proposed solution. Section 5 describes the employed experimental setup and the experimental evaluation. 
Finally, we conclude our work and discuss future directions in Section 6. ## 2 Background and Related Works An event log consists of a set of cases (a.k.a. process executions) [2]. Each case is composed of an ordered sequence of events, where each event refers to the execution of a system activity and is characterized by a set of attributes. A sequence of events related to a given case is called a trace. The most common attributes that can be found in an event are the activity label and the timestamp denoting, for instance, when the activity started. Moreover, an event might present other attributes, such as a resource needed to execute the activity or the cost of this execution. Process simulation models aim at abstracting details from the event logs in order to simulate reality [1]. They are employed as a tool by the process mining community for several applications, such as conformance checking [24], event log generation [5, 10], purpose-oriented event log generation [6], what-if analysis [12], and predictive process monitoring [26, 14, 8]. The existing simulation solutions in the literature can be mainly divided into two groups: simulation models built upon a discovered process model or models learned from event logs via deep learning. For the sake of simplicity, in this work we shorten the term process simulation based on process models to _PSPM_ approaches and process simulation based on deep learning to _PSDL_ approaches. In general, the _PSPM_ approaches discover a process model and extract statistical characteristics from a given event log [23]. Thus, the simulation is usually performed by replaying a process model (e.g. a Colored Petri Net [21] or BPMN [6]) in a stochastic fashion to make simulations more realistic [1]. Moreover, the obtained statistics are usually managed according to each scenario's specifications and user requirements, although guidelines and automated solutions have been proposed [18, 7]. This group of simulation models is the most popular in the current literature and includes several proposals such as PLG2 [5] and SIMOD [9]. These methods are usually designed to simulate different control-flow patterns by manipulating user-based requirements, such as the number of gates and the amount of noise. A weakness of this group of methods is the restriction to capturing only the control-flow and temporal behavior, whereas the strengths consist of higher interpretability since they rely on white-box representations (i.e., the discovered process model). On the other hand, _PSDL_ solutions are very recent in the process mining literature. These methods can be seen as extensions of the wide range of applications related to the predictive process monitoring field. Predictive process monitoring consists of a set of process mining techniques that aim mostly at solving the problems of next activity, remaining time, and outcome prediction [22]. Thus, the simulation might be thought of as the problem of iteratively predicting the next activities or the whole remaining trace (a.k.a. suffix) of an ongoing execution [26]. Leveraging the recent achievements in this field, the first simulation model fully based on deep learning, named DeepGenerator, was proposed by Camargo et al. [8]. This process simulator differs from the traditional _PSPM_ approaches since it learns directly from an event log instead of relying on discovering a process model and extracting additional information. 
Later, the same authors demonstrated that this group of simulation models is capable of outperforming _PSPM_ approaches for larger event logs while performing similarly for smaller event logs in the event log generation task [10]. Furthermore, deep learning provides more flexibility for multi-dimensional modeling, i.e., including extra event attributes such as resources and costs. In order to mitigate or overcome the mentioned limitations of both groups of simulation models, hybrid approaches have been proposed recently. For example, Pourbafrani and van der Aalst [21] combined process mining and system dynamics techniques, which, in a nutshell, leverages the details captured by process mining with higher-level information extracted by system dynamics. On the other hand, Camargo et al. [11] proposed the DeepSimulator, which employs a process model for simulating the control-flow and a deep learning model for estimating the remaining time of activities. ## 3 Motivation We stress the motivation of our work by considering the current limitations of existing process simulation solutions. The _PSPM_ approaches might be influenced by the underlying process model [28], and they are restricted to control-flow aspects and temporal behaviors. This means they are affected by the bias introduced by the discovery algorithms. Indeed, there is no consensus on how to select the optimal discovery algorithm according to the given event log characteristics, which may lead to suboptimal performances [28]. A common characteristic among process simulation models is the stochasticity inserted during simulation. For example, simulations by replaying process models are supported by statistical information extracted from the event logs, such as branch probabilities and activity duration time distributions. On the other hand, _PSDL_ approaches consist of learning the underlying data distributions and drawing event attributes from the probability distributions returned by the learned model. Although this stochastic approach makes simulations more realistic [1, 8], fully depending on randomness limits the users' flexibility to control the simulated behaviors based on desired conditions. To the best of our knowledge, the first deep learning application in process mining was proposed in 2017 by Evermann et al. [13] to introduce a solution for the problem of next activity prediction. Since then, many variations have been proposed and extended for other tasks [22], but very little effort has been dedicated to process simulation exclusively. Conditioning the simulation of a process is relevant since it provides more flexibility to users by allowing them to restrict the simulation according to the desired scenario. The idea of this approach is to be more adaptable by learning how processes might behave under possible changes or desired constraints. The most related solution to ours in the literature is the Purple framework, recently introduced by [6]. The authors proposed a purpose-guided solution capable of simulating entire event logs by following a given purpose. For instance, Purple is capable of generating synthetic event logs specifically designed to evaluate and possibly benchmark process discovery algorithms. However, it is still harmed by the greatest limitation of _PSPM_ approaches since it focuses mainly on the control-flow aspects. In order to clarify the problems of the control-flow restriction and of fully relying on the stochastic nature of simulation models, consider the following example. 
Consider an activity \(A\) which might be followed by one out of two possible activities: activity \(B\), if \(A\) is executed by the resource \(R_{1}\); or activity \(C\), if \(A\) is executed by the resource \(R_{2}\). A stochastic process simulation model will randomly associate a resource to \(A\) based on the probability distributions learned from the given event log. However, the intuitive idea of CoSMo is to provide the user with an alternative for restricting the simulations based on the condition that only \(R_{1}\) can be employed at that moment, i.e., \(A\) should always be followed by \(B\). More examples of naive and intuitive conditions in the process mining context might include the usage of given resources (e.g. whether a resource is available or not for the process execution), time-based constraints (e.g. restricting the allowed time for a process execution), or process outcomes (e.g. conditioning the simulation of a process that fails or succeeds w.r.t. a key process indicator). Now, consider the abstraction of a neural network as \(p(y|x)=\hat{y}\), with some abuse of notation, where \(\hat{y}\) is a probability distribution returned by the neural network \(p\) [3]. Hence, a naive neural network is naturally a conditional probability function that aims at estimating \(y\) given a random variable \(x\). In probability theory, we are allowed to jointly measure the probability of the intersection of multiple random variables. For instance, functions that learn the underlying relation of sequential data (e.g. recurrent networks; see [26] for a formal definition in the context of process mining) are interested in knowing the probability distribution \(p(x_{t}|x_{t-1},...,x_{1})\), where \(t\) indicates, for instance, the position or time of the random variable \(x\). Thus, the conditional learning of neural nets in the context of this work can be performed by introducing customized conditions \(c\) in the form of auxiliary information, such that \(p(y|x,c)=\hat{y}\). Therefore, in this work, we propose a conditioned process simulation model. The overall idea consists of providing the model with some extra information during the training phase so that at the testing phase (simulation) the outputs might be restricted or manipulated according to the provided extra information. Such extra information can be provided in the form of class labels [19], for instance. Moreover, conditional learning is very popular nowadays in the multi-modal learning context, for example, the generation of text conditioned by images [16] or by short text prompts [4]. ## 4 CoSMo: A Framework for Conditioned Simulation Models In this section, we introduce CoSMo, our proposed framework for implementing process simulation models based on conditioned deep neural nets. We propose a methodology that leverages a basic deep learning technique to mitigate the stochastic nature of process simulation models. Although stochasticity will always be present in simulations, following this conditioned design allows users and practitioners to have more control over the outputs. Figure 1 summarizes the pipeline for instantiating CoSMo. The red blocks refer to data preprocessing steps, the yellow block refers to the training of a conditioned network, and the green block refers to the final conditioned simulation model. Given an event log, there are three preprocessing steps, one of which is optional. The conditioning step concerns the labeling of cases based on a constraint. 
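As a minimal illustration of this labeling step (our own sketch with illustrative column and function names, not the exact code of the accompanying repository), each case can be labeled by whether it ever uses a given resource, anticipating the resource-usage condition adopted later in the experiments:

```python
import pandas as pd

def label_cases_by_resource(log: pd.DataFrame, resource: str) -> pd.Series:
    """Binary condition per case: 1 if `resource` appears in any event
    of the case, 0 otherwise. Assumes columns `case_id` and `resource`."""
    return (log.groupby("case_id")["resource"]
               .apply(lambda r: int((r == resource).any())))

# As in the experiments, the condition can target the second most
# frequently used resource of the log:
# target = log["resource"].value_counts().index[1]
# condition_labels = label_cases_by_resource(log, target)
```
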
Nevertheless, in this step users are free to design any condition according to their interests. The obtained conditions, in the form of labels, serve as auxiliary information when training a deep neural net.

Figure 1: Implementation pipeline for instantiating a process simulator based on the CoSMo framework.

Moreover, alternative preprocessing procedures are included as an optional step. Finally, the _n-gram_ method is applied to transform the preprocessed event log into a dataset of conditioned prefixes. Given the dataset of conditioned prefixes, we can train a conditioned network. A generic design of conditioned networks for process simulation is proposed in Figure 2. We take into consideration two important aspects to propose this abstraction. First, we consider several design ideas of conditional learning from the deep learning community, as briefly discussed in Section 3. Second, we also consider all the mentioned related works in predictive process monitoring since they follow a similar neural architecture design in general. Usually, the overall architectures employ a block for learning a representation of input features (a.k.a. encoding), e.g. RNNs to represent temporal dependencies [26] or CNNs to extract new features [29], followed by a linear (a.k.a. dense or fully connected) layer. Therefore, the overall idea of designing generic conditioned networks is first to learn feature representations of the input prefix and concatenate the outputs with the provided condition label. The next blocks are hence responsible for performing non-linear transformations in order to learn how to solve downstream tasks, for instance, next activity prediction or remaining time estimation. Due to the _n-gram_ nature of the data, CoSMo allows users to perform (i) simulations from scratch, i.e., given a zero-like prefix and a desired condition as input, and (ii) simulations from ongoing cases, i.e., simulating the remaining events from a real start point. The former allows users, for example, to simulate synthetic event logs with desired characteristics, e.g., a set of traces executed under the specified condition. The latter is intended for performing what-if analysis: for instance, how a current process execution will behave if a specific resource is not available. The simulation of an ongoing case leverages the provided information to predict the next events, unlike the simulation from scratch, which starts from a zero-like prefix and takes into consideration only the desired condition to be satisfied. ## 5 Experiments In this section, we aim at demonstrating how process simulation models based on deep learning can learn to satisfy conditions imposed by the user. Thus, we first describe our experimental setup to evaluate the CoSMo pipeline, and we conclude by discussing the performance evaluation.

Figure 2: Generic design of conditioned networks for training process simulation models.

### CoSMo Pipeline **Datasets**. We employ almost all datasets benchmarked1 by Weytjens and Weerdt [30], which are detailed in Table 1 (see footnote 2). We disregard the _BPI15_ and _BPI12_ event logs since the former has never been considered (to the best of our knowledge) by papers related to predictive process monitoring or process simulation based on deep learning, and the latter has numerical resources, which falls outside the scope of this work, as we target categorical resources. For training, we include the following event attributes: _activity_, _resource_, and _remaining time_3. 
Since we aim at evaluating several datasets from a generic perspective, we include these attributes because they represent the maximum common set across the employed datasets. Footnote 1: [https://github.com/hansweytjens/predictive-process-monitoring-benchmarks/](https://github.com/hansweytjens/predictive-process-monitoring-benchmarks/) Footnote 2: Note that the statistics might slightly differ from the original event logs due to the preprocessing steps proposed by the authors. Moreover, we shorten the _BPI20_ logs: Permit Log (PL), Prepaid Travel Cost (PTC), and Request For Payment (RFP). Footnote 3: The benchmarked versions of the event logs contain the remaining time information. See [30]. **Preprocessing**. First, we encode traces to be processed by our framework. Similarly to [8], we generate prefixes using the _n-gram_ approach to handle multi-dimensional inputs and mark the end of sequences by including a special token "_<eos>_" for categorical attributes or a _zero value_ for numerical attributes. Further, we apply right-padding to the prefixes by also adding a special token "_<pad>_". **Conditioning**. The encoding procedure is completed by introducing the resource usage condition. In order to provide the neural net with such a condition, we first label each case with a binary class indicating whether it uses a specific resource at any point of its execution. For each event log, we select the second most used resource to be employed as the condition. The reason for choosing the second most frequent one is that for some datasets (e.g. BPI20 - RequestForPayment) the most frequent resource is present in all cases, which would result in one label for all cases.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **Event Log** & **\#traces** & **\#evts** & **\#acts** & **\#res** & **\#vars** & \begin{tabular}{c} **Avg act** \\ **per trace** \\ \end{tabular} & \begin{tabular}{c} **Avg trace** \\ **length** \\ \end{tabular} \\ \hline BPI13\_Closed & 652 & 4025 & 6 & 540 & 283 & 2.64\(\pm\)0.72 & 6.17\(\pm\)4.66 \\ BPI13\_Incidents & 5796 & 88587 & 4 & 1432 & 1963 & 2.74\(\pm\)0.5 & 15.28\(\pm\)14.52 \\ BPI17 & 31497 & 1210807 & 26 & 149 & 16441 & 15.43\(\pm\)2.4 & 38.44\(\pm\)17.96 \\ BPI19 & 148218 & 843195 & 40 & 471 & 6434 & 5.15\(\pm\)1.16 & 5.69\(\pm\)5.02 \\ BPI20\_PL & 6831 & 82190 & 50 & 2 & 1547 & 10.66\(\pm\)3.29 & 12.03\(\pm\)5.44 \\ BPI20\_PTC & 1781 & 15233 & 29 & 2 & 194 & 8.36\(\pm\)1.98 & 8.55\(\pm\)2.26 \\ BPI20\_RFP & 5692 & 29887 & 17 & 2 & 73 & 5.13\(\pm\)1.0 & 5.25\(\pm\)1.29 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics extracted from each event log: number of traces, number of events, number of activities, number of resources, number of variants, the average number of activities per trace, and the average trace length.

Thus, a conditioned prefix takes the form of a tuple \(cp=(prefix,cond)\), where \(prefix\in\mathcal{R}^{l,d}\), with \(l\) being the sequence length and \(d\) the number of event attributes, and \(cond\) is a scalar. **CoSMo instantiations**. We instantiate the DeepGenerator architecture (see [8]) using our proposed framework, i.e. encoding traces using conditions. We designed the architecture following the descriptions provided in the paper and included an extra concatenation operation (see Figure 2) before feeding the last linear layers. Moreover, we also include a smaller baseline with fewer learnable parameters for comparison. 
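To make such an instantiation concrete, the sketch below (our own illustration with arbitrary dimensions and hypothetical names; the exact experimental code lives in the repository referenced below) implements the generic design of Figure 2 and anticipates the baseline architecture described next: embeddings for the categorical attributes, an LSTM encoder over the conditioned prefix, concatenation of the condition label, an MLP block, and one linear head per event attribute.

```python
import torch
import torch.nn as nn

class ConditionedSimulator(nn.Module):
    """Minimal conditioned network: models p(next event | prefix, condition)."""

    def __init__(self, n_acts: int, n_res: int, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.act_emb = nn.Embedding(n_acts, emb)
        self.res_emb = nn.Embedding(n_res, emb)
        # two embeddings plus one numerical feature (remaining time)
        self.lstm = nn.LSTM(2 * emb + 1, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU())
        self.act_head = nn.Linear(hidden, n_acts)  # next activity logits
        self.res_head = nn.Linear(hidden, n_res)   # next resource logits
        self.rt_head = nn.Linear(hidden, 1)        # remaining time estimate

    def forward(self, acts, res, rtime, cond):
        # acts, res, rtime: (batch, n_gram); cond: (batch,) binary labels
        x = torch.cat([self.act_emb(acts), self.res_emb(res),
                       rtime.unsqueeze(-1)], dim=-1)
        h, _ = self.lstm(x)                 # encode the prefix
        h = h[:, -1]                        # last hidden state
        h = torch.cat([h, cond.float().unsqueeze(-1)], dim=-1)  # conditioning
        h = self.mlp(h)
        return self.act_head(h), self.res_head(h), self.rt_head(h)
```

At simulation time, the categorical logits can be turned into probabilities with a softmax and sampled with `torch.multinomial`, feeding each drawn event back into the prefix until "_<eos>_" is produced; this mirrors the multinomial sampling strategy adopted in the testing phase described below.
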
The baseline architecture (Figure 3) contains an embedding layer for categorical features, an LSTM block for encoding the input data, a concatenation operation for conditioning the encoded data, followed by an MLP block (stack of linear layers), and individual linear layers to output each event attribute. The final number of learnable parameters varies for each dataset since the size of the set of activities also varies, and they are summarized in Table 2. As we are concerned specifically with the conditional learning of process simulation models, we attempt to simplify the experimental setup and focus on the methodology by demonstrating how well models are capable of learning how to satisfy user-based conditions. Therefore, we fixed hyperparameters related to the design of both architectures (e.g. number of layers) to reduce the hyperparameter search space for tuning. All details regarding the settings for architecture design and also for reproducibility are available in our repository4. Footnote 4: [https://github.com/raseidi/cosmo](https://github.com/raseidi/cosmo) **Training phase**. We used the Bayesian optimization from WandB5 with a 4-dimensional hyperparameter search space to tune both architectures. Fixed hyperparameters include the number of epochs (50), the _n-gram_ size (\(n=5\)), and the remaining architecture hyperparameters (e.g. number of layers). Subsequently, the optimization method ran for 10 iterations. The evaluated range and set of hyperparameters are described in Table 2. Footnote 5: [https://docs.wandb.ai/guides/sweeps](https://docs.wandb.ai/guides/sweeps) The architectures share the same loss functions: cross entropy for activity and resource predictions and mean squared error for remaining time prediction. The training is performed in a multi-task fashion, where all loss functions are minimized together. Moreover, we use the He initialization to initialize the neural network parameters and employ a scheduler to decay the learning rate at epochs 25 and 35 by a factor of 0.1. We implemented everything in Python using PyTorch6. Footnote 6: [https://pytorch.org/](https://pytorch.org/) **Testing phase (Simulation)**. We employ multinomial sampling to draw the next categorical event attributes from the probability distributions returned by the neural net. This sampling method has shown better results since it presents more diversity in the trace simulation despite the injected randomness [8]. The simulation is performed in two different ways: (i) we simulate traces from scratch, starting from a zero-like array; and (ii) we simulate the remaining trace by starting from different positions given an ongoing case. For instance, considering a case of length \(n\), we can simulate remaining traces starting from any position \(i\), where \(0<i<n\). However, to save computational resources and accelerate the experiments, we iterate \(i\) with a step of 2. Regarding simulations from scratch, we simulate \(n\) traces, where \(n\) is equal to the testing set size. Since we consider a binary condition in this work, half of these traces are simulated under one condition and the other half under the other condition. The remaining trace simulation is performed for each case from the testing set. **Evaluation**. We organize our evaluation into three steps. First, we consider _event-level_ metrics to validate the predictive performance of simulations regarding the next event attribute predictions. 
Thus, we employ the accuracy for categorical attributes and the mean absolute error for the remaining time. Second, we consider _trace-level_ metrics to evaluate the quality of the simulated event logs. We employ the Earth Mover's Distance (_EMD_) to measure the similarity between the distributions of real and simulated remaining time predictions; the Control-Flow Log Similarity (_CFLS_), which considers the optimal similarity measures between paired traces; and the fitness of the simulated log w.r.t. the process model discovered from the original log7. The first two metrics are also employed by Camargo et al. [10] to measure the quality w.r.t. the data-flow and control-flow, whereas the latter metric has never been considered as a strategy to evaluate simulated logs. Footnote 7: This is performed by discovering a model from the original log (using the inductive miner [17]) and measuring the simulated log fitness via token replay. We do not use the alignment-based fitness algorithm due to computational resource limitations. Finally, we simulate a what-if scenario and measure the percentage of traces that were correctly simulated by satisfying the imposed constraint. As previously mentioned, we establish in this work the resource usage condition.

\begin{table} \begin{tabular}{l l} \hline \hline **Hyperparam** & **Values** \\ Batch size & \{64, 256, 512\} \\ Learning rate & [1e-6, 1e-3] \\ Optimizer & \{Adam, SGD\} \\ Weight decay & \{0.0, 1e-2, 1e-3\} \\ \hline **Architecture** & **Avg. number** \\ & **of parameters** \\ Baseline & 1.71E+06 \\ DeepGenerator & 2.39E+06 \\ \hline \hline \end{tabular} \end{table} Table 2: Description of hyperparameter values considered for tuning and the average number of learnable parameters for each architecture.

Although simple, the main goal of this work is to demonstrate how to learn conditioned process simulation models and how processes might be simulated by satisfying user-based conditions. Therefore, in the scope of this work, the what-if analysis consists of simulating how the processes behave by allowing or not the usage of a given resource. Although the overall idea of our work is slightly similar to the Purple framework [6], their solution focuses on the control-flow simulation only. Since we are not able to instantiate their proposal by guiding the generation based on resource usage, we do not employ it as a baseline in this work. ### Performance evaluation We organize this section into three steps. First, we discuss the event-level metrics employed to measure the predictive performances of the architectures instantiated by our CoSMo framework. Second, we present the trace-level metrics that measure the quality of logs simulated by each architecture. Finally, we introduce our what-if scenario and evaluate how well the proposed framework performs by simulating traces under imposed conditions. **Event-level metrics**. Table 3 shows the performances achieved by each architecture. The employed metrics are, respectively, the accuracy for the next activity and resource predictions and the mean absolute error for the remaining time prediction. We can notice there is no significant difference between the employed architectures for most processes. 
This shows that the proposed baseline architecture performs as well as the DeepGenerator using about 30% fewer parameters.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Event Log** & **Architecture** & **Acc-ACT** & **Acc-RES** & **MAE-RT** \\ \hline \multirow{2}{*}{BPI13\_Closed} & Baseline & 0.6275 & 0.1838 & 0.0002 \\ & DG & 0.6455 & 0.2087 & 0.0003 \\ \hline \multirow{2}{*}{BPI13\_Incidents} & Baseline & 0.7956 & 0.6485 & 0.0001 \\ & DG & 0.7823 & 0.6041 & 0.000 \\ \hline \multirow{2}{*}{BPI17} & Baseline & 0.9026 & 0.7474 & 0.000 \\ & DG & 0.8969 & 0.7469 & 0.000 \\ \hline \multirow{2}{*}{BPI19} & Baseline & 0.823 & 0.5047 & 0.0001 \\ & DG & 0.8244 & 0.5078 & 0.0001 \\ \hline \multirow{2}{*}{BPI20\_PL} & Baseline & 0.8227 & 0.9716 & 0.000 \\ & DG & 0.8121 & 0.9663 & 0.000 \\ \hline \multirow{2}{*}{BPI20\_PTC} & Baseline & 0.8666 & 0.9976 & 0.000 \\ & DG & 0.6943 & 0.944 & 0.000 \\ \hline \multirow{2}{*}{BPI20\_RFP} & Baseline & 0.9277 & 0.9996 & 0.000 \\ & DG & 0.913 & 0.9949 & 0.000 \\ \hline \hline \end{tabular} \end{table} Table 3: Event-level evaluation metrics achieved by each architecture for each event log. _Acc-ACT_ stands for the accuracy of the next activity prediction, _Acc-RES_ for the accuracy of the next resource prediction, and _MAE-RT_ for the mean absolute error of the remaining time prediction (in days).

An exception occurs for the dataset _BPI20 - Prepaid Travel Cost_, where the DeepGenerator performs poorly. A reason for that might be that the Bayesian optimization method was not able to find the best hyperparameters in the defined number of iterations. On the other hand, we see lower predictive performances for _BPI13 - Closed_ and _BPI13 - Incidents_. Crossing these results with the information from Table 1, we see that there is a certain correlation between the predictive performances and the average number of activities per trace. For the mentioned datasets, although we have traces as long as in other event logs, there is very low variation of unique activities in the traces. **Trace-level metrics**. Figure 4 illustrates the performances of each architecture for each dataset. The lower the _EMD_ the better, whereas the higher the _CFLS_ and fitness the better. Notice that these metrics are measured using the logs simulated from scratch. Both architectures perform similarly again, except for _BPI19_ and _BPI20 - Request For Payment_ regarding the _EMD_ score. However, the performance can still be considered good since in both cases the score is close to zero. This result matches and complements the event-level metric regarding the remaining time prediction. Furthermore, the variation measured across the optimally paired traces regarding the _CFLS_ score is also similar for both architectures. For _BPI13 - Closed_ and _BPI20 - Prepaid Travel Cost_ both architectures present higher variations, whereas for the remaining datasets the architectures present lower variations. In some cases, the baseline architecture presented slightly better _CFLS_ scores, whereas the DeepGenerator achieved slightly better process model fitness scores in most cases.

Figure 4: Trace-level evaluation metrics from the employed architectures in this work. Lower values for EMD are better, whereas higher values are better for the remaining metrics.

**What-if analysis**. We now describe how our framework can be employed to perform what-if analysis and demonstrate the effectiveness of our proposal. 
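Before discussing the results, the following minimal sketch (our own illustration; function and variable names are hypothetical) shows how the satisfaction score reported below can be computed from a set of simulated cases:

```python
def satisfaction_rate(simulated_cases: dict, resource: str,
                      expect_usage: bool) -> float:
    """Fraction of simulated cases whose resource usage matches the
    imposed condition. `simulated_cases` maps a case id to its list of
    (activity, resource) events."""
    ok = sum(
        any(r == resource for _, r in events) == expect_usage
        for events in simulated_cases.values()
    )
    return ok / len(simulated_cases)
```
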
As mentioned in the previous sections, we simulate traces that do or do not use specific resources. Thus, we simulate traces from scratch and from different positions of an ongoing case. The simulations from scratch start from a zero-like array, whereas the ongoing simulations take into consideration the information available so far. Figure 5 illustrates the percentage of traces correctly simulated under each condition. In this figure, a case position equal to 0 means simulation from scratch.

Figure 5: The percentage of traces that were correctly simulated by satisfying each imposed condition. This score is measured by starting the simulation from different positions in the cases. A case position of zero means simulation of the entire trace from scratch.

_BPI13 - Closed_ simulates traces that satisfy the conditions almost perfectly. Despite the low predictive performances, both this dataset and _BPI13 - Incidents_ perform considerably well in this analysis. Although their characteristics (low number of activities and low average number of activities per trace) affect the learning phase, both architectures are still able to learn how to satisfy the imposed conditions. Overall, all models learned more effectively how to simulate traces that do not make use of the specified resources (i.e. the red line). Furthermore, _BPI13 - Incidents_, _BPI17_, _BPI20 - Permit Log_, and _BPI20 - Request For Payment_ show the expected behavior of improving performance as more information on the ongoing trace is provided. In this case, we only see an exception for the DeepGenerator on the latter event log, which performed poorly considering the condition of ensuring resource usage. The drastic drop in performance of the baseline architecture on _BPI20 - Request For Payment_ is due to the fact that this log has few long cases. In this example, there are only two ongoing cases being simulated from position 12, which means that a single one of them failing to satisfy the condition drops the performance by 50%. Similar behavior can be seen for the same event log simulated by the DeepGenerator. _BPI19_ was able to learn the simulation of traces without using the given resource, but, on the other hand, the performances achieved by both architectures were arbitrary when complying with the condition. Considering the usage of the resource, the baseline architecture performed reasonably well for simulations from scratch and from the beginning of traces, but both architectures performed quite poorly for all the other cases. ## 6 Conclusion and Future Work In this work, we introduced the CoSMo framework, which can adapt existing neural network architectures in order to make them learn how to simulate traces that satisfy different conditions. We introduced a very simple and naive condition to serve as an example and demonstrate how models can learn to satisfy this condition when performing simulations. Two instantiations of our proposal were considered using different neural architectures, where one is our proposed baseline and the other refers to the DeepGenerator [10]. Subsequently, we first validated the quality of the simulated data through metrics specific to event- and trace-level evaluation. Finally, we demonstrated the effectiveness of the conditioned simulation models for learning to simulate traces by satisfying an imposed condition. 
We believe this research introduces to the process mining community a new modeling approach that mitigates the complete stochasticity of currently existing simulation models by guiding the simulation with constraints. In future directions, we intend to investigate alternative conditions that might be more valuable for real scenarios and stakeholders. Furthermore, the current binary nature of conditions is also a limitation, so future research will also investigate how to perform the simulation based on multiple conditions. The current approach considers a "global" condition w.r.t. a case instance, i.e. it provides one single label for the entire case. However, a more valuable application could rely on "local" conditions, which might change throughout the process cycle time. Finally, although we opted to focus on the methodology of our proposal, several approaches from the predictive process monitoring community might be leveraged to enhance the final process simulation model. Such techniques include feature engineering based on process mining algorithms [8], robustness enhancement [27, 25], representation learning [20], and hybrid solutions [11].
2309.12782
Generalized Second Law for Non-minimally Coupled Matter Theories
We prove the generalized second law (GSL) for higher curvature gravity theories when the matter sector is non-minimally coupled. The validity of our proof is in the regime of linearized fluctuations about equilibrium black holes, which is the same regime as considered in the previous proofs by Wall and Sarkar. These proofs were provided in different gravity theories - for instance, Lovelock theory and higher curvature gravity - but the matter sector was always taken to be minimally coupled. In this article, we describe how to generalize the proof of linearized semi-classical GSL when the matter sector comes with non-minimal couplings. The proof proceeds by suitably evaluating the matter path integral in the stress tensor expectation value by treating the higher derivative couplings in an effective field theory setting. We use the recently established result of the linearized second law for such theories.
Prateksh Dhivakar, Krishna Jalan
2023-09-22T10:48:32Z
http://arxiv.org/abs/2309.12782v1
# Generalized Second Law for Non-minimally Coupled Matter Theories ###### Abstract We prove the generalized second law (GSL) for higher curvature gravity theories when the matter sector is non-minimally coupled. The validity of our proof is in the regime of linearized fluctuations about equilibrium black holes, which is the same regime as considered in the previous proofs by Wall and Sarkar. These proofs were provided in different gravity theories - for instance, Lovelock theory and higher curvature gravity - but the matter sector was always taken to be minimally coupled. In this article, we describe how to generalize the proof of linearized semi-classical GSL when the matter sector comes with non-minimal couplings. The proof proceeds by suitably evaluating the matter path integral in the stress tensor expectation value by treating the higher derivative couplings in an effective field theory setting. We use the recently established result of the linearized second law for such theories. _Keywords_: generalized second law (GSL), non-minimal, effective field theory (EFT) ## 1 Introduction General Relativity (GR) is considered incomplete due to the presence of singularities [1], which are expected to be resolved by a quantum theory of gravity. At energies significantly below the Planck scale, any UV complete theory of gravity simplifies to Einstein's general relativity with suppressed higher derivative terms [2, 3]. Consequently, we can treat gravity as an effective field theory (EFT) and concentrate on the primary quantum corrections at these energy levels [4]. Black holes, as solutions of Einstein's theory, provide a useful context for studying the quantum nature of gravity. In this framework, black holes exhibit thermodynamic properties, such as well-defined notions of energy and temperature [5, 6, 7]. The black hole's entropy is linked to its horizon's area [8, 9], and the area theorem ensures that this entropy never decreases, akin to the second law of thermodynamics. Due to the presence of higher derivative terms, there is a question of whether black hole thermodynamics holds beyond general relativity. In the presence of higher derivative interactions, the area law typically fails [10, 11, 12, 13]. Iyer and Wald [14, 15] introduced a major generalization for diffeomorphism invariant theories of gravity, where black hole entropy is derived from the integral of the Noether charge associated with the horizon's generating Killing isometry. While the Wald entropy satisfies the first law of black hole mechanics, it suffers from JKM ambiguities for dynamical horizons [16, 17]. As a result, except for exceptional cases like \(f(\mathcal{R})\) theories, where \(\mathcal{R}\) is the Ricci scalar and \(f\) is a polynomial, the Wald entropy does not generally adhere to the second law of black hole mechanics, which dictates that entropy should increase at each stage of evolution. To prove the second law of black hole mechanics, one common approach is to consider an equilibrium black hole with a Killing horizon being slightly perturbed by external matter. As the black hole eventually returns to equilibrium, an entropy functional should be constructed to increase throughout this dynamical process. This can be achieved by extending the Wald entropy to dynamic situations while addressing the associated JKM ambiguities to ensure compliance with the second law. This approach has been used in previous studies [18, 19, 20, 21, 22, 23, 24]. 
In [22, 23, 24], building on [20], an ultra-local version of the second law of black hole thermodynamics was established for arbitrary diffeomorphism invariant theories of gravity. This was accomplished by introducing a local "entropy current" with a non-negative divergence on the black hole's dynamical horizon. The time component of the entropy current was a combination of the Wald entropy and uniquely resolved JKM ambiguities. The spatial components of the current facilitated the redistribution of entropy across different spatial sections of the horizon. In previous works [18, 20, 22, 23], it was assumed that the matter sector of the theory was minimally coupled and satisfied the null energy condition (NEC), which contributed to entropy production and the establishment of the second law. The minimal coupling prescription that we work with is the following: given any Lorentz invariant expression involving the matter fields in flat Minkowski space \(\eta_{ab}\), the minimally coupled expression is given by taking the flat metric \(\eta_{ab}\) to the curved space metric \(g_{ab}\) and the partial derivative \(\partial_{m}\) to the covariant derivative \(D_{m}\). It is well known that if one considers a matter sector non-minimally coupled to gravity, it can violate the NEC [25, 26, 27, 28]. The analysis of [24] lifts this crucial assumption, establishing a linearized second law for arbitrary matter couplings to gravity. The current work aims to extend the analysis of [24] by treating matter fields quantum mechanically, thereby establishing a Generalized Second Law (GSL). The GSL states that the sum of the entropy of the matter fields outside the horizon, \(S_{\rm out}\), and the entropy of the black hole, \(S_{H}\), always increases in a dynamical evolution. Thus, the proof of the GSL proceeds by showing that the generalized entropy \(S_{\rm gen}=S_{H}+S_{\rm out}\) always increases. In the semi-classical setting that we are interested in, one expands the metric in powers of the Planck mass, or equivalently, in an \(\hbar\) expansion [18, 29, 30]: \[g_{ab}=g_{ab}^{(0)}+g_{ab}^{(\frac{1}{2})}+g_{ab}^{(1)}+\mathcal{O}(\hbar^{\frac{3}{2}})\,. \tag{1}\] The zeroth-order term denotes the background metric, and the half-order term denotes the fluctuations due to quantized gravitons. The first-order term arises from the gravitational field due to quantized matter fields. Since we are considering linearized fluctuations, we will ignore all terms higher than \(\mathcal{O}(\hbar)\). As detailed in [30], we are interested in a regime where the metric fluctuations are solely driven by the quantum fluctuations of matter fields. This is the most interesting case because if we consider classical fluctuations, they far outweigh the quantum fluctuations. Unlike the previous proofs of GSL in different kinds of gravity theories, we consider the matter sector to be non-minimally coupled. We were able to establish a statement of the linearized semi-classical GSL for non-minimally coupled quantum fields in an EFT sense (cf. Section 3.2). In this EFT treatment, we assume a clear separation of scales between the minimal and non-minimal sectors, which results in the splitting of the minimal and non-minimal parts of the stress-energy tensor. We find that the non-minimal sector, treated perturbatively over the minimal sector, should only contribute to the horizon entropy, whereas the minimal sector is the dominant contribution to \(S_{\rm out}\) at the linear order. 
With this simplification, we can make a statement of GSL for the non-minimally coupled matter by suitably building on the previous works of Sarkar and Wall [18, 20] and using the boost-weight analysis and entropy current structure of [23, 24]. We should stress that this analysis does not incorporate the graviton contributions. We would like to mention that the GSL was considered for a particular example of non-minimal coupling in [31], and it has also been considered for specific examples of pure gravity theories in [32]. Recently, in [33], the authors prove a statement of linearized GSL for higher curvature gravity (including graviton contributions), generalizing the works in [34, 35] (CPLW, CPW). Following up on the work of Leutheusser and Liu [36, 37], Witten in [38] first showed that there is a change in the nature of the von Neumann algebra associated to local observables in the black hole exterior in AdS spacetimes with holographic boundary duals. This change in the nature of the local observable algebra allows for associating a notion of density matrix and a (renormalized) von Neumann entropy with the black hole state, which is well-defined up to a state-independent additive constant. Later, in [34, 35], this idea was generalized to black holes in asymptotically flat spacetimes, such that the entropy of any semi-classical state of the transformed algebra agrees with the generalized entropy, up to an additive constant that is independent of the choice of state. In [33], the authors claim a generalization of the results of [34, 35] to an arbitrary diffeomorphism invariant theory of gravity. However, the monotonicity of the generalized entropy, which is essential to the proof of GSL, was a statement in the boundary theory in [35], and it also does not generalize to out-of-equilibrium horizon cuts, unlike the case in [30], where Wall proves that for any horizon cut, the statement of GSL is given by \(\frac{\mathrm{d}S_{\mathrm{gen}}}{\mathrm{d}v}\geq 0\,\). The main novelty of our proof of GSL is that one does not need the input from the boundary theory, and we also have a statement for arbitrary out-of-equilibrium cuts. We conclude this section by briefly summarizing the structure of the paper. In Section 2, we briefly summarize the proof of the classical second law (CSL), introducing the idea of horizon-adapted coordinates, the boost-weight analysis, and the entropy current structure. These form some key ingredients in our proof of the GSL as well. Readers familiar with these ideas can directly jump to Section 3, which is the key section of the paper. In Section 3, we first summarize previous attempts at proving the GSL in different gravity theories. This sets the stage to explain our setup and how our proof compares to the previous literature. In this section, we also describe our EFT treatment of the non-minimal sector and give a proof of the linearized semi-classical GSL in the context of a non-minimally coupled matter sector. Finally, in Section 4, we discuss the relevance of our work, mentioning some issues to be addressed and pointing out some open questions for future work. ## 2 Brief review of the classical second law In this section, we will recap the basic setup of the proof of the linearized second law [20, 22, 23, 24]. We begin by explaining our coordinate choice for the black hole spacetime. Then, we detail the notion of stationarity in our spacetime, allowing us to define the fluctuating quantities about equilibrium solutions. 
These fluctuating quantities can be classified according to how they transform under the symmetries of the stationary spacetime. One can then argue for the linearized second law from the structure of these fluctuating quantities. We will state the main facts we will use, and we refer the reader to [23] for further details. ### Horizon adapted coordinates The choice of coordinates for our black hole spacetime is given by \(\{v,\ r,\ x^{A}\}\), where \(A=1,\ldots,d-2\), such that the horizon is located at \(r=0\). The coordinate \(v\) is chosen along the null generators of the horizon \(\partial_{v}\). On each constant \(v\) slice, we shoot off a set of spatial tangents \(\partial_{A}\) in the \(d-2\) spatial directions. These spatial tangents \(\partial_{A}\) will span the coordinates \(x^{A}\). Thus, the horizon will be spanned by \(\{v,x^{A}\}\). The spacetime away from the horizon will be spanned by a set of null geodesics \(\partial_{r}\) emanating from the horizon, making appropriate angles with \(\partial_{A}\) and \(\partial_{v}\). This results in the gauge choice of [23, 24], namely \[\mathrm{d}s^{2}=g_{ab}\mathrm{d}x^{a}\mathrm{d}x^{b},\quad\text{where}\quad g_{rr}=g_{vA}\big{|}_{r=0}=g_{vv}\big{|}_{r=0}=g_{rA}=\partial_{r}g_{vv}\big{|}_{r=0}=0\,,\quad g_{rv}=1\,. \tag{2}\] With this, the near-horizon metric for a generic dynamical black hole can always be cast in the form given by [39] \[\mathrm{d}s^{2}=2\mathrm{d}v\,\mathrm{d}r-r^{2}X(r,v,x^{C})\mathrm{d}v^{2}+2r\omega_{A}(r,v,x^{C})\mathrm{d}v\,\mathrm{d}x^{A}+h_{AB}(r,v,x^{C})\mathrm{d}x^{A}\,\mathrm{d}x^{B}\,. \tag{3}\] We assume a form of the Zeroth law that enables us to express the metric in this form. A Zeroth law for arbitrary pure gravity diffeomorphism invariant theories was established in [40], following up on initial work in [41]. Here \(X,\omega_{A},h_{AB}\) are general functions of the coordinates. This gauge choice does not completely fix the choice of coordinates on constant \(v\) and \(r\) slices, as there are residual symmetries in the gauge of (3) of the form \[v\to\tilde{v}=vf_{1}(x^{i})+f_{2}(x^{i})\,,\ \text{with an appropriate redefinition of }r\,,\qquad x^{i}\to\tilde{x}^{i}=g^{i}(x^{j})\,. \tag{4}\] We work within a regime where we are perturbatively close to the equilibrium solution. For this, we must construct an equilibrium solution in our gauge (3). We will use the residual symmetries of (4) to facilitate this. Let us consider a simple scaling symmetry of the form \[(r,v)\mapsto\left(\lambda r,\frac{v}{\lambda}\right)\,. \tag{5}\] This is called a "boost transformation" and is a particular case of (4) in which \(f_{1}\) is a constant and \(f_{2}=0\). Since equilibrium black holes must have a Killing vector that becomes null on the horizon, we can use (5) to fix our stationary background metric. As (5) is generated by \[\xi=v\partial_{v}-r\partial_{r}\,, \tag{6}\] a stationary black hole with metric \(g_{ab}^{(\mathrm{eq})}\) is given by \[\mathrm{d}s^{2}=2\mathrm{d}v\mathrm{d}r-r^{2}X(rv,x^{C})\mathrm{d}v^{2}+2r\omega_{A}(rv,x^{C})\mathrm{d}v\mathrm{d}x^{A}+h_{AB}(rv,x^{C})\mathrm{d}x^{A}\mathrm{d}x^{B}, \tag{7}\] where the coefficients \(X,\omega_{A},h_{AB}\) are now functions of the product of the coordinates \(r,v\). See Appendix A for a demonstration of how the standard Schwarzschild and Kerr solutions can be brought to this gauge. The metric is invariant under the scaling (5). This is said to be the boost symmetry of the horizon, and (6) is a Killing vector of the background spacetime. 
The horizon \(r=0\) is generated by (6) and is a Killing horizon of the spacetime. \(\xi^{\mu}\) of (6) becomes null on the horizon \(r=0\). For our construction, it is enough to consider a Killing horizon. We will assume that the event horizon of the equilibrium black hole is a Killing horizon. We want to describe the dynamics only in a perturbative sense, up to the linearized order, in which case we decompose the metric \(g_{ab}\) in (3) using (7) as \[g_{ab}=g_{ab}^{(\mathrm{eq})}(rv,x^{C})+\delta g_{ab}(r,v,x^{C}), \tag{8}\] where \(g_{ab}^{(\rm eq)}\) is the equilibrium metric as in (7), and \(\delta g_{ab}\) is the fluctuation in the metric at the linearized order in \(\hbar\) or \(\epsilon\). Once we have the notion of the background plus fluctuation, we can quantify non-equilibrium structures based on their "boost weight". The boost weight is defined according to how covariant tensors/structures transform under the boost transformation (5): \[\mathcal{T}\to\bar{\mathcal{T}}=\lambda^{w}\mathcal{T}\,,\ \text{under}\ \left\{r\to\bar{r}=\lambda r,\,v\to\bar{v}=\frac{v}{\lambda}\right\}\quad\Longrightarrow\quad\text{boost weight of $\mathcal{T}$ is $w$}\,. \tag{9}\] We have a non-equilibrium quantity if \(w>0\) for a covariant tensor. This can be understood as follows. Suppose \(\mathcal{T}\) is of the form \[\mathcal{T}\sim(\partial_{r})^{k_{r}}(\partial_{v})^{k_{v}}\mathcal{A}\,, \tag{10}\] where \(\mathcal{A}\) is constructed out of \(X,\omega_{A},h_{AB}\) and \(\nabla_{A}\) (the covariant derivative compatible with the induced metric \(h_{AB}\)) only. When we evaluate \(\mathcal{A}\) on (8), it breaks up into an equilibrium contribution and a fluctuating contribution. The equilibrium contribution is \(\mathcal{A}(r,v,x^{A})\big{|}_{\rm eqbm}\sim\mathcal{A}(rv,x^{A})\). If we take \(k_{v}>k_{r}\), i.e., \(w>0\), then the equilibrium contribution of (10) has \(k_{v}-k_{r}\) factors of \(r\). This evaluates to zero on the horizon \(r=0\). Thus, only the fluctuating part of (10) remains: \[\mathcal{T}\big{|}_{r=0}\sim(\partial_{r})^{k_{r}}(\partial_{v})^{k_{v}}\mathcal{A}\sim\epsilon(\partial_{r})^{k_{r}}(\partial_{v})^{k_{v}}\delta\mathcal{A}\sim\mathcal{O}(\epsilon)\,. \tag{11}\] Now, if we consider a product of two such terms, it becomes \(\mathcal{O}(\epsilon^{2})\), and we neglect it at the linearized order we are working with: \[(\partial_{r}^{m_{1}}\partial_{v}^{m_{1}+k_{1}})\mathcal{A}_{1}\,(\partial_{r}^{m_{2}}\partial_{v}^{m_{2}+k_{2}})\mathcal{A}_{2}\sim\mathcal{O}(\epsilon^{2})\,. \tag{12}\] Thus, positive boost weight quantities are non-trivial only for non-equilibrium configurations. Likewise, we neglect terms that take the form of a product of two positive boost weight quantities at the linearized approximation we are working with. ### Review of the second law within the setup of the entropy current The entropy of a dynamical black hole in arbitrary diffeomorphism invariant theories is given as [18, 20, 22, 23] \[S_{\rm tot,v}=\frac{1}{4\hbar G}\int_{H_{v}}\mathrm{d}^{d-2}x\,\sqrt{h}\,(1+s_{\rm wald}^{\rm HD}+s_{\rm cor})\,, \tag{13}\] where \(h_{AB}\) is the induced metric on the space-like (\(d-2\))-dimensional cross-section of the horizon \(H_{v}\) (the subscript \(v\) indicates that we are looking at some constant \(v\) slice) and \(h\) denotes its determinant. 
The factor of \(1\) arises from the two-derivative Einstein-Hilbert term in the gravity Lagrangian and is the standard area term; \(s_{\rm wald}^{\rm HD}\) is the entropy density contribution arising only from the higher-derivative terms in the gravity action; and \(s_{\rm cor}\) is the possible correction to the entropy density due to non-equilibrium effects, which includes the JKM ambiguities. It is not contained in the Wald entropy density \(s_{\rm wald}=1+s_{\rm wald}^{\rm HD}\). Clearly, in the equilibrium limit, this contribution should vanish, i.e., \(s_{\rm cor}\big{|}_{\rm eqbm}\to 0\). Define \(\rho=s_{\rm wald}+s_{\rm cor}\). As we perturb the horizon, the entropy changes, and the rate of change is given by \[\frac{\mathrm{d}S_{\rm tot,v}}{\mathrm{d}v}=\frac{1}{4\hbar G}\int_{H_{v}}\mathrm{d}^{d-2}x\,\sqrt{h}\,\left[\frac{\mathrm{d}\rho}{\mathrm{d}v}+\theta\rho\right]\,, \tag{14}\] where \(\theta\) is the expansion parameter of the null generators and is given by \[\theta=\frac{1}{\sqrt{h}}\partial_{v}\sqrt{h}\,. \tag{15}\] The total change can then be obtained as \[\Delta S_{\rm tot}=\frac{1}{4\hbar G}\int_{H}\mathrm{d}v\,\mathrm{d}^{d-2}x\,\sqrt{h}\,\left(\frac{\mathrm{d}\rho}{\mathrm{d}v}+\theta\rho\right)\,. \tag{16}\] We now define the "generalized expansion parameter" \(\Theta\) as the rate at which the entropy density of an infinitesimal region of the horizon changes, i.e., \[\Theta:=\frac{\mathrm{d}\rho}{\mathrm{d}v}+\theta\rho\,, \tag{17}\] which for the Einstein-Hilbert term reduces to the usual expansion parameter \(\theta\) that goes into the Raychaudhuri equation [42]. It will be assumed that the dynamical horizon will eventually reach a stationary configuration such that \(\Theta(v\to\infty)=0\). Thus, the classical second law follows if we can show \(\Delta S_{\rm tot}>0\). A way to show this is to prove \(\Theta>0\), which is a stronger statement than \(\Delta S_{\rm tot}>0\). This follows if we prove \(\partial_{v}\Theta<0\) and then use \(\Theta(v\to\infty)=0\) to infer \(\Theta>0\) for every finite \(v\). Now \(\partial_{v}\Theta\) is given by \[\begin{split}\partial_{v}\Theta&=\partial_{v}^{2}(s_{\rm wald}^{\rm HD}+s_{\rm cor})+\partial_{v}\theta(1+s_{\rm wald}^{\rm HD}+s_{\rm cor})+\theta\partial_{v}\rho\\ &=\partial_{v}\theta+\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}(s_{\rm wald}^{\rm HD}+s_{\rm cor})\right)\right)+\mathcal{O}(\epsilon^{2})\\ &=-\mathcal{R}_{vv}+\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}(s_{\rm wald}^{\rm HD}+s_{\rm cor})\right)\right)+\mathcal{O}(\epsilon^{2})\,.\end{split} \tag{18}\] In the second equality, we have used the boost weight prescription of (12) to get rid of quadratic terms like \(\theta\partial_{v}\rho\), as well as other such terms that arise when rewriting the expression in the above form. In the final equality, we have used the linearized Raychaudhuri equation to write \(\partial_{v}\theta=-\mathcal{R}_{vv}+\mathcal{O}(\epsilon^{2})\). Since the Lagrangian is given by the Einstein-Hilbert (Ricci scalar) term in addition to higher derivative terms and a matter sector (non-minimally coupled matter plus a minimally coupled sector that satisfies the NEC), the equations of motion projected onto \(r=0\) (the horizon) in our gauge (3) are given by \[\mathcal{R}_{vv}+E_{vv}^{\rm HD}+T_{vv}^{\rm(non\mbox{-}min)}=T_{vv}^{\rm(min)}\,. \tag{19}\]
\tag{19}\] Here \(E_{vv}={\cal R}_{vv}+E_{vv}^{\rm HD}\) denotes the contribution of pure gravity terms, and \(T_{vv}\) denotes the contribution of matter terms coupled to gravity tensors. Substituting (19) in (18), we get \[\partial_{v}\Theta=E_{vv}^{\rm HD}+T_{vv}^{\rm(non\mbox{-}min)}+\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}(s_{\rm wald}^{\rm HD}+s_{\rm cor})\right)\right)-T_{vv}^{\rm(min)}+{\cal O}(\epsilon^{2})\,. \tag{20}\] From (20), it is clear that if one can show that the equations of motion projected onto the horizon \(r=0\) in our gauge (3) satisfy \[E_{vv}^{\rm HD}+T_{vv}^{\rm(non\mbox{-}min)}=-\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}(s_{\rm wald}^{\rm HD}+s_{\rm cor})\right)\right)+{\cal O}(\epsilon^{2})\,, \tag{21}\] then we have \[\partial_{v}\Theta=-T_{vv}^{\rm(min)}+{\cal O}(\epsilon^{2})<0\,, \tag{22}\] because of the null energy condition \(T_{vv}^{\rm(min)}>0\). It is important to note that this inequality sign, in the context of the linearized approximation, necessarily implies that \(T_{vv}^{\rm(min)}\) cannot be \({\cal O}(\epsilon)\). This is because one can switch the sign of \(\epsilon\) to violate the inequality. Thus, we have \(T_{vv}^{\rm(min)}\sim{\cal O}(\epsilon^{2})\) and we are essentially proving \(\partial_{v}\Theta\sim{\cal O}(\epsilon^{2})\). An intuitive way to understand this is that there are no terms at linear order that can make \(\partial_{v}\Theta>0\). Such terms might occur at quadratic order, and one must analyze them carefully, as in [43]. We will not have anything to say about the non-linear order in our paper. In the above proof, the structure of the equations of motion projected onto the null horizon in (21) played a crucial role. Consider a diffeomorphism invariant theory of gravity with a Lagrangian of the following form \[{\cal L}={\cal L}(g_{ab},{\cal R}_{abcd},D_{e}{\cal R}_{abcd},\dots,\Phi,D_{a}\Phi,D_{(a}D_{b)}\Phi,\dots,F_{ab},D_{c}F_{ab},\dots)\,, \tag{23}\] where the Lagrangian can have a complicated dependence on the metric \(g_{ab}\), the scalar \(\Phi\), and the \(U(1)\) gauge fields \(A_{a}\), with field strength tensor \(F_{ab}\). If one can show that the equations of motion projected onto the null horizon \(r=0\) in our gauge (3) take the form \[E_{vv}^{\rm HD}+T_{vv}^{\rm(non\mbox{-}min)}=\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}{\cal J}^{v}\right)+\nabla_{A}{\cal J}^{A}\right)+{\cal O}(\epsilon^{2})\,, \tag{24}\] then (21) holds, and thus a linearized second law holds for black holes in the theory we are considering. (24) has been established for Lagrangians of the form (23) in [23, 24]. Using constraints from the boost symmetry (5) of the horizon and diffeomorphism invariance of the Lagrangian, [23] showed that \(E_{vv}\) of (19) has the structure of (24). The input of diffeomorphism invariance can be relaxed, as this structure was also established for Chern-Simons theories of gravity, which are diffeomorphism invariant up to total derivatives only [44]. The analysis follows by carefully working out how different covariant tensors, built out of the Lagrangian, transform under the boost transformation (5). For \(T_{vv}^{\rm(non\mbox{-}min)}\), we can invoke the analysis of [24] to show that it also has the structure of (24). This proves the linearized classical second law. Here, \(\mathcal{J}^{v}\) and \(\mathcal{J}^{A}\) are considered as components of a \((d-1)\)-dimensional "vector" on the horizon known as the entropy current.
\(\mathcal{J}^{v}\) is given by \[\mathcal{J}^{v}=1+s_{\rm wald}^{\rm HD}+s_{\rm cor}\,, \tag{25}\] which is the equilibrium Wald entropy density plus the associated JKM ambiguities, which are now fixed as we are just rewriting the equations of motion. We put "vector" in quotes because, as the horizon is null, this is not a covariant vector of the full space-time. If we integrate \(\mathcal{J}^{v}\) over the cross-section of the horizon, we get the total entropy (13). \(\mathcal{J}^{A}\) is a non-equilibrium quantity, and it can be understood as quantifying the redistribution of entropy across the spatial cross-section of the constant \(v\) slices. This interpretation holds because the positivity of the generalized expansion parameter, \(\Theta>0\), implies \[\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}\mathcal{J}^{v}\right)+\nabla_{A}\mathcal{J}^{A}>0\,. \tag{26}\] We have the result that the divergence of an entropy current with components \(\mathcal{J}^{v}\) and \(\mathcal{J}^{A}\) is positive. This is a stronger, ultra-local version of the linearized second law when compared to the integrated version of the second law considered in [20], i.e., in (16). This is because the divergence of the entropy current not only quantifies an increase in entropy along the "time" direction but also quantifies a redistribution along the spatial directions in such a way that the total entropy always increases (because there is ultra-local entropy production). Clearly, the presence of \(\mathcal{J}^{A}\) would not contribute to (14) if we assume compact horizons, whence the total derivative term drops out of the spatial integral. Thus, (24) and (21) are equivalent. At this point, one might raise the objection that the construction is heavily reliant on the horizon adapted coordinates. However, one can show that the entropy current structure is covariant under affine reparametrizations of the null generators [45]. This completes our review of the linearized classical second law.

## 3 The Generalized Second Law

In this section, we will give a proof of the GSL when the matter is non-minimally coupled to gravity. We work in an EFT sense, which will be elaborated in Section 3.2. This also requires defining the expectation value of the stress-energy tensor, which involves the first-order quantum correction to the classical theory and will be treated similarly to [46]. We then give a proof of the linearized semi-classical GSL in Section 3.2.2.

### 3.1 Summary of GSL in other gravity theories

We briefly summarize [18, 20], which formed the primary motivation for our work. Sarkar and Wall (SW) [18] constructed a proof for the (integrated form of the) linearized classical second law and the linearized generalized second law for the semi-classical, minimally-coupled matter sector in the context of Lovelock theories. This was later extended to an arbitrary diffeomorphism invariant theory of gravity with minimally coupled matter by Wall [20]. The setup considered has been detailed in Section 2: a stationary black hole is perturbed slightly and it settles down to some equilibrium solution. This allows us to use the linearized approximation in the dynamics. The central idea was to argue that (13), with the JKM ambiguities fixed, results in a linearized version of the second law.
The semi-classical equation of motion for the metric is given by \[E_{ab}=8\pi G_{N}\left\langle T_{ab}^{\rm(full)}\right\rangle\,, \tag{27}\] where \(E_{ab}\) will be considered to be a c-number whereas the stress-energy tensor \(T_{ab}^{\rm(full)}\) is treated as a quantum operator. The precise meaning of \(\left\langle T_{ab}^{\rm(full)}\right\rangle\) will be described in Section 3.2.1, [30, 46]. The state with respect to which the expectation value is calculated is chosen to be the generic semi-classical vacuum state corresponding to the background equilibrium solution \(g_{ab}^{(0)}\). The semi-classical equation in (27) is justified because we will be working in the linearized weak-field approximation limit, where it is obtained from the full quantum operator equation by taking expectation values of the \(\mathcal{O}(\hbar)\) part of the metric [29]. In [18], \(T_{ab}^{\rm(full)}\) received contributions only from the minimally coupled matter sector, but for a generic non-minimal matter theory it would include the non-minimal terms as well. Then, if we substitute (27) instead of (19) in (18), we have for first-order changes \[\frac{\mathrm{d}\Theta}{\mathrm{d}v}\approx-8\pi G_{N}\left\langle T_{vv}^{\rm(full)}\right\rangle\,. \tag{28}\] Now, the claim is that in any theory of gravity, if (28) is obeyed, then this will imply the linearized semi-classical GSL [18, 20]. This can be seen as follows. We first integrate (28) once in the transverse directions \(x^{A}\) and twice along the null generator \(\partial_{v}\) to get \[\frac{\hbar}{2\pi}\left(S_{\rm tot}(\infty)-S_{\rm tot}(v^{\prime})\right)=\int\mathrm{d}^{d-2}x\sqrt{h}\int_{v>v^{\prime}}\mathrm{d}v\left(v-v^{\prime}\right)\left\langle T_{vv}^{\rm(full)}\right\rangle\,. \tag{29}\] [18, 20] showed that this would imply a semi-classical GSL for matter minimally coupled to gravity. In both works above, certain assumptions were made on the algebra of observables on the horizon. These assumptions were first detailed in Wall's earlier work on proving the GSL for Einstein's theory with fairly generic fields minimally coupled to gravity and for arbitrary horizon slices, [30]. All the above proofs required some (as in [18]) or all (as in [30]) of the four axioms that the algebra of observables restricted to the horizon must satisfy, namely, Determinism (the horizon algebra completely specifies all the information falling across the horizon such that, together with the information available at future null infinity, all the information outside the event horizon is completely determined), Ultralocality (the degrees of freedom on distinct horizon generators should be understood as independent systems), Local Lorentz Invariance (there exists an infinite dimensional group of symmetries, associated with the horizon algebra, which corresponds to the affine transformations of each horizon generator), and Stability (the generator of null translations should be non-negative). In [33], the authors arrive at (29) by crucially using the structure of \(E_{vv}\) in (21), proved for arbitrary diffeomorphism invariant theories of gravity in [23, 24] following up on the works of [18, 20, 47, 48]. For instance, (II.38) of [33] follows from equation (5.7) of [23]. They then proceed to generalize the constructions of [34, 35] for arbitrary diffeomorphism invariant theories of gravity by suitably generalizing the notions of canonical energy in the covariant phase space formalism of [14, 15]. However, as mentioned in the Introduction (cf.
Section 1), the proof of GSL does not generalize to out-of-equilibrium cuts, as the notion of the crossed product type II von Neumann algebra, and hence an associated entropy, does not exist for such arbitrary horizon cuts.

### 3.2 The setup and a proof

In this section, we will explain in detail the setup in which we prove the GSL. We start with the following action \[\begin{split}\mathcal{S}&=\mathcal{S}_{\rm g}[g_{ab},\mathcal{R}_{abcd},D_{m}\mathcal{R}_{abcd},D_{(m}D_{n)}\mathcal{R}_{abcd},\ldots]+\mathcal{S}_{\rm min}[g_{ab},\Phi,D_{m}\Phi]\\ &+\mathcal{S}_{\rm nm}[g_{ab},\mathcal{R}_{abcd},D_{m}\mathcal{R}_{abcd},D_{(m}D_{n)}\mathcal{R}_{abcd},\ldots,\Phi,D_{m}\Phi,D_{(m}D_{n)}\Phi,\ldots]\,,\end{split} \tag{30}\] where \(\mathcal{S}_{\rm g}\) is just the gravitational part of the action, which is an arbitrary function of \(\mathcal{R}_{abcd}\) and covariant derivatives acting on it, \(\mathcal{S}_{\rm min}\) represents the minimally coupled matter sector (containing terms only up to two derivatives in the field), and \(\mathcal{S}_{\rm nm}\) represents the non-minimal interaction of the gravity and matter fields, which we assume contains only terms with at least four derivatives. We want to show that the GSL holds to linear order in \(\hbar\) corrections within the EFT approximation of [43]. We will consider the Lagrangian to be a formal sum of different terms with an increasing number of derivatives in the fields. Each term comes with a suitable factor of some UV length scale \(\ell\); more precisely, a term with \(n+2\) derivatives will be multiplied by \(\ell^{n}\). The terms with two or fewer derivatives are understood as the usual Einstein-Hilbert term with a minimally coupled matter sector, together with a possible cosmological constant. The validity of EFT depends on the fact that the terms with an increasing number of derivatives become increasingly less significant. We can achieve this by restricting to spacetimes varying over some characteristic length/time scale \(L\) with \(\frac{\ell}{L}\ll 1\). The scale \(L\) could be considered a lower bound on the size of the final equilibrium black hole state and any perturbation scale away from equilibrium, [43]. We will further consider the semi-classical regime where we can control the physical effects as a perturbative expansion in \(\hbar\) and \(\frac{\ell}{L}\). We will aim to argue for the GSL to linear order in \(\hbar\) from (28). The non-trivial part of the proof is to develop a proposal that can evaluate the expectation value of the matter stress tensor when the matter is non-minimally coupled. One cannot directly use the four axioms of [30] to deal with the stress tensor expectation value because non-minimal matter contributes to the horizon entropy. Our proposal in Section 3.2.1 will deal with this issue in such a way that we can argue for the linearized GSL from (29). The higher order corrections to this equation in \(\hbar\) come from renormalization theory, but these are safely neglected in our analysis, which treats backreaction only at the leading order in \(\hbar\). This is along the same lines as the previous analyses of Wall in [20, 29], and Sarkar and Wall in [18]. Further, we stress that \(\hbar\sim\epsilon\), which follows from the fact that we treat the metric fluctuations as semi-classical, i.e., the size of the quantum backreaction is comparable to the dynamical fluctuations in the classical background.
This is, in fact, the most interesting regime where the validity of the GSL should be carefully checked. If we consider classical fluctuations, they far outweigh any quantum fluctuation. Thus, any violation of the GSL is apparent only when the size of the classical fluctuations is comparable to the quantum fluctuations [30]. Thus, we consider the variation of the metric fluctuations along the horizon in the semi-classical case to be comparable to that of the classical fluctuations.

#### 3.2.1 Stress tensor expectation value: a proposal

In this subsection, we will describe a prescription for computing the expectation value of the stress tensor for the complete non-minimal theory by treating the non-minimal part as a small perturbation over the minimal theory, where we know how to compute the expectation values of operators, in particular the stress tensor, up to the one-loop level.1 Footnote 1: The theory truncated at the one-loop level has all terms of the complete quantum theory to \(\mathcal{O}(\hbar)\), and it is in this sense the first-order quantum correction to the classical theory, [46]. For a discussion of the issues related to the semi-classical Einstein equation, see [49]. The basic object we need to analyze is the generating functional of the full theory given by \[\mathcal{Z}[J]=\int\mathscr{D}\Phi\,\exp\!\left[\frac{\iota}{\hbar}(\mathcal{S}_{\rm g}+\mathcal{S}_{\rm min}+\mathcal{S}_{\rm nm})+\iota\int\mathrm{d}^{D}x\,J(x)\Phi(x)\right], \tag{31}\] taken over the space of fields \(\Phi\) with a suitable measure. Since the \(\mathcal{S}_{\rm g}\) term does not depend on \(\Phi\), it will contribute as an overall multiplicative factor to the generating functional, which we denote by \(\mathcal{N}_{g}\). Further, we can analyze the non-minimal part of the action as a perturbation over the minimal theory because \(\mathcal{S}_{\rm nm}\) is at least \(\mathcal{O}((\frac{\ell}{L})^{2})\) suppressed compared to \(\mathcal{S}_{\rm min}\). Thus we have \[\mathcal{Z}[J]=\mathcal{N}_{g}\int\mathscr{D}\Phi\,\exp\!\left[\frac{\iota}{\hbar}\,\mathcal{S}_{\rm min}+\iota\int\mathrm{d}^{D}x\,J(x)\Phi(x)\right]\exp\!\left[\frac{\iota\,\ell^{2}}{\hbar\,L^{2}}\mathcal{S}_{\rm non-min}\right], \tag{32}\] where \[\mathcal{S}_{\rm nm}=\frac{\ell^{2}}{L^{2}}\mathcal{S}_{\rm non-min}\,.\] We now define the "minimal generating functional" as \[\mathcal{Z}_{\rm min}[J]=\int\mathscr{D}\Phi\,\exp\!\left[\frac{\iota}{\hbar}\,\mathcal{S}_{\rm min}+\iota\int\mathrm{d}^{D}x\,J(x)\Phi(x)\right], \tag{33}\] and for any operator in the matter sector, \(\hat{A}\), defined in the full non-minimal theory, we define \[\left\langle\hat{A}(\Phi,D_{m}\Phi,\ldots)\right\rangle\coloneqq\frac{\int\mathscr{D}\Phi\,e^{\frac{\iota}{\hbar}\,\mathcal{S}_{\rm min}}A(\Phi,D_{m}\Phi,\ldots)}{\int\mathscr{D}\Phi\,e^{\frac{\iota}{\hbar}\,\mathcal{S}_{\rm min}}}\,. \tag{34}\] This implies that (31) takes the form \[\mathcal{Z}[J]=\mathcal{Z}_{\rm min}[0]\,\left\langle\exp\!\left[\frac{\iota\,\ell^{2}}{\hbar\,L^{2}}\,\mathcal{S}_{\rm non-min}\right]\right\rangle\,. \tag{35}\] At this point, it is helpful to comment on how we treat the gravity tensors in the path integral. At the level of the matter path integral, they are quantities independent of the matter fields. Since we have not imposed any gravity equations of motion, we can directly evaluate the path integral over the matter fields by treating the gravity tensors as "background" quantities. After the path integral, the matter fields couple to gravity through the semi-classical equation (27).
Only at this point do the geometric tensors couple to the quantum matter fields non-trivially. The basic idea of the calculation is to mimic the interaction theory analysis for perturbative QFT. Ultimately, it boils down to calculating things with respect to the minimal theory in the sense that the weight factor in the path integral is defined only via the minimal part. Our EFT expansion is based on the parameter \[\lambda=\frac{\ell^{2}}{\hbar L^{2}}\ll 1\,. \tag{36}\] This parameter clarifies why we can safely neglect renormalization effects on the couplings. They tend to be suppressed because, at one loop, we have \(\lambda\hbar\ll\lambda\ll 1\). We then define a quantity which we call the _effective action_ for the quantum matter fields as \[\mathcal{W}_{\rm min}=-\iota\hbar\,\log\mathcal{Z}_{\rm min}[0]\equiv-\iota\hbar\,\log\mathcal{Z}_{\rm min}\,, \tag{37}\] and \[\mathcal{W}=-\iota\hbar\log\mathcal{Z}[0]=-\iota\hbar\log\left[\mathcal{Z}_{\rm min}\left\langle\exp[\iota\lambda\,\mathcal{S}_{\rm non-min}]\right\rangle\right]\,, \tag{38}\] such that \[\frac{2}{\sqrt{-g}}\frac{\delta\mathcal{W}_{\rm min}}{\delta g_{ab}}\coloneqq\left\langle T^{ab}_{\rm min}\right\rangle=-\frac{2\iota\,\hbar}{\sqrt{-g}\mathcal{Z}_{\rm min}}\frac{\delta\mathcal{Z}_{\rm min}}{\delta g_{ab}}\quad\textrm{and}\quad\frac{2}{\sqrt{-g}}\frac{\delta\mathcal{W}}{\delta g_{ab}}\coloneqq\left\langle T^{ab}\right\rangle\,. \tag{39}\] So the expectation value of the stress tensor is defined to be \[\left\langle T^{ab}\right\rangle=\left\langle T^{ab}_{\rm min}\right\rangle-\frac{2\iota}{\sqrt{-g}}\frac{\hbar}{\left\langle\exp[\iota\lambda\,\mathcal{S}_{\rm non-min}]\right\rangle}\frac{\delta}{\delta g_{ab}}\left\langle\exp[\iota\lambda\,\mathcal{S}_{\rm non-min}]\right\rangle\,. \tag{40}\] We emphasize that [46] gives us a prescription for defining the expectation value of the minimal sector only. The above expectation value for a generic non-minimal coupling is our prescription based on EFT, where we crucially required that \(\mathcal{S}_{\rm non-min}\) be suppressed compared to \(\mathcal{S}_{\rm min}\). To evaluate the last factor, we first note that the expectation value is defined with respect to the minimal theory in (34). We can simplify the path integral via a saddle-point approximation. This amounts to setting the field \(\Phi\) in \(\mathcal{S}_{\text{non-min}}\) to the field \(\Phi^{\text{sol}}_{\text{min}}\), which satisfies the classical minimal equation of motion, i.e., \[D_{a}D^{a}\Phi^{\text{sol}}_{\text{min}}+m^{2}\Phi^{\text{sol}}_{\text{min}}=0\,. \tag{41}\] This saddle-point approximation is valid because the minimal part of the action has only up to second-order derivatives in the field. Thus, we have \[\left\langle\exp[\iota\lambda\,\mathcal{S}_{\text{non-min}}]\right\rangle=\exp\bigl{[}\iota\lambda\,\mathcal{S}_{\text{non-min}}[\Phi=\Phi^{\text{sol}}_{\text{min}}]\bigr{]}\,. \tag{42}\] We stress that we were able to arrive at the above equation because \(\mathcal{S}_{\text{non-min}}\) is suppressed compared to \(\mathcal{S}_{\text{min}}\) in an EFT sense (36). Using (42) in (40), we get \[\left\langle T^{ab}\right\rangle=\left\langle T^{ab}_{\text{min}}\right\rangle+\frac{2\lambda\,\hbar}{\sqrt{-g}}\frac{\delta}{\delta g_{ab}}\mathcal{S}_{\text{non-min}}[\Phi=\Phi^{\text{sol}}_{\text{min}}]=\left\langle T^{ab}_{\text{min}}\right\rangle+\frac{\ell^{2}}{L^{2}}\,T^{ab}_{\text{non-min}}[\Phi=\Phi^{\text{sol}}_{\text{min}}]\,.
\tag{43}\] For our proof, we are interested only in the \(vv\)-component of the stress tensor. Using the fact that the terms are either proportional to \(g_{vv}\) or have positive boost weight, and with our gauge choice (7), it is easy to see that \[\left\langle T^{\text{(min)}}_{vv}\right\rangle\big{|}_{r=0}\sim\mathcal{O}(\hbar)\,. \tag{44}\] We are finally left with \[\left\langle T_{vv}\right\rangle\big{|}_{r=0}=\left\langle T^{\text{(min)}}_{vv}\right\rangle\big{|}_{r=0}+\frac{\ell^{2}}{L^{2}}\,T^{\text{(non-min)}}_{vv}[\Phi=\Phi^{\text{sol}}_{\text{min}}]\big{|}_{r=0}+\mathcal{O}(\hbar^{2})\,. \tag{45}\]

#### 3.2.2 Generalized second law: a proof

Using (45), the \(vv\) component of the semi-classical equation of motion in (27) becomes \[E_{vv}\big{|}_{r=0}=8\pi G_{N}\left[\left\langle T^{\text{(min)}}_{vv}\right\rangle+\frac{\ell^{2}}{L^{2}}T^{\text{(non-min)}}_{vv}(\Phi^{\text{sol}}_{\text{min}},D_{a}\Phi^{\text{sol}}_{\text{min}},\ldots)\right]_{r=0}\,. \tag{46}\] Here \(E_{vv}\) gets contributions only from \(\mathcal{S}_{\text{g}}\) of the action (30), while \(T^{\text{(non-min)}}_{vv}\) is the classical equation of motion obtained from \(\mathcal{S}_{\text{non-min}}\). We now invoke the \(E_{vv}\) structure at \(r=0\) in (24) [23, 24]. Thus, (46) becomes \[\partial_{v}\Theta_{\text{tot}}=8\pi G_{N}\,\left\langle T^{\text{(min)}}_{vv}\right\rangle\,, \tag{47}\] where \(\Theta_{\text{tot}}\) is given by \[\Theta_{\text{tot}}=\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}\,\left[\mathcal{J}^{v}+\frac{\ell^{2}}{L^{2}}\mathcal{J}^{v}_{\text{non-min}}\right]\right)+\frac{1}{\sqrt{h}}\partial_{A}\left(\sqrt{h}\,\left[\mathcal{J}^{A}+\frac{\ell^{2}}{L^{2}}\mathcal{J}^{A}_{\text{non-min}}\right]\right)\,. \tag{48}\] Here \(\mathcal{J}^{v}_{\rm non-min}\) and \(\mathcal{J}^{A}_{\rm non-min}\) are given by [24] \[-T^{(\rm non-min)}_{vv}\big{|}_{r=0}=\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}\,\mathcal{J}^{v}_{\rm non-min}\right)+\frac{1}{\sqrt{h}}\partial_{A}\left(\sqrt{h}\,\mathcal{J}^{A}_{\rm non-min}\right)\right)+\mathcal{O}(\hbar^{2})\,, \tag{49}\] and \(\mathcal{J}^{v}\) and \(\mathcal{J}^{A}\) are defined through [23] \[E_{vv}\big{|}_{r=0}=\partial_{v}\left(\frac{1}{\sqrt{h}}\partial_{v}\left(\sqrt{h}\,\mathcal{J}^{v}\right)+\frac{1}{\sqrt{h}}\partial_{A}\left(\sqrt{h}\,\mathcal{J}^{A}\right)\right)+\mathcal{O}(\hbar^{2})\,. \tag{50}\] Equation (47) implies a linearized GSL, following the analysis of [18, 30]. One argues as follows. In (47), we have split the minimal and non-minimal parts of the stress tensor. The part of the stress tensor that sits inside the expectation value is the minimal term, and the path integral is weighted with the minimal action. The matter fields of the full non-minimal theory might not always satisfy the Stability axiom of [30]. It is reasonable to assume that the algebra satisfies Determinism, Ultralocality, and Local Lorentz Invariance. We assume that, given such an algebra, we can consistently restrict the algebra to the minimal sector of the theory. This algebra is taken to satisfy the Stability axiom, as shown in [30]. Once we have this restricted "minimal" algebra of observables on the horizon, we can borrow the setup of Section 5 of [18] to argue for the GSL. Using the axioms of Local Lorentz Invariance and Stability, we invoke the Bisognano-Wichmann theorem [50], which states that the vacuum state restricted to the Rindler wedge is thermal with respect to the boost energy.
Then, we use Sewell's generalization [51] to the algebra of observables on the horizon when the vacuum state is restricted to the horizon cut above the bifurcation surface. The vacuum then remains thermal for any cut \(v>v^{\prime}\) by virtue of the local Lorentz symmetry (supertranslations in this case). One can then use the monotonicity property of the relative entropy to argue for a GSL [30]. We note that this analysis assumes a suitable renormalization scheme, which may be argued for based on Wall's original formalism [30] and the recent works in [33, 34, 35].

## 4 Discussion

Here, we summarize what we have been able to achieve so far. We considered arbitrary non-minimally coupled diffeomorphism invariant matter-gravity theories and established a linearized GSL for dynamical black holes in such theories. The non-trivial physics of non-minimally coupled theories is that they violate the Null Energy Condition. It is also unknown if they satisfy the Averaged Null Energy Condition (ANEC) [52]. Thus, one cannot impose the Stability axiom of [30] to help us in the proof. We got around this thorny issue by treating the higher derivative non-minimally coupled terms in an EFT sense. This allowed us to directly evaluate the matter path integral for the expectation value of the stress tensor. In this perturbative regime, it is clear that the non-minimal couplings only contribute to the horizon entropy via (47). With our formalism, it is not possible to treat the \(S_{\rm QFT}\) corrections to the generalized entropy from non-minimal terms since they are parametrically suppressed. (45) implies that \(S_{\rm out}\) receives contributions only from the minimal sector. Our proof relied on some basic assumptions about the algebra of observables, as in [18, 30]. It also suggests that the validity of the linearized GSL necessitates the perturbative EFT treatment of the higher derivative terms. We can then incorporate higher-order contributions. This is also in line with the recent observations of [43], where the second law at the quadratic order in perturbations was valid only in the EFT sense. At this point, it is helpful to comment on how our proof is related to or different from the proofs of [18, 20, 30, 33, 53]. [30], building on [53], worked in a regime where the matter fields across the horizon are rapidly fluctuating. However, the proof of [30] was restricted to Einstein's gravity, with matter fields minimally coupled to gravity. Our proof works in the regime of linearized fluctuations around stationary black holes, which is the same regime considered in [18, 20]. The proof outlined in [20], building on [18], was established for arbitrary pure gravity diffeomorphism invariant theories but with a minimally coupled matter sector only. We have been able to extend the proof of the GSL in [20] to arbitrary diffeomorphism invariant theories of gravity with matter fields non-minimally coupled to gravity, provided we work within an EFT, using the proof of the linearized second law worked out in [22, 23, 24]. Further, unlike the case of [33], we have made a stronger statement of the GSL in that we can show that the generalized entropy is increasing for every horizon cut, using the entropy current structure detailed in [22, 23, 24]. Our proof crucially uses the advantages of working in the horizon adapted coordinates (2) and the resulting entropy current structure for \(E_{vv}\) in (50).
This significantly simplifies the analysis of \(S_{\rm out}\), the entanglement entropy of the matter fields outside the horizon. The entropy current structure (50) indicates that in (46), the non-minimally coupled matter fields at one loop continue to contribute to the entropy of the horizon \(S_{H}\) as expected classically. This was possible because of the EFT regime we worked in for the above analysis. This completes our proof of the linearized GSL for a matter sector that can be non-minimally coupled. We now highlight some future directions. Our analysis does not consider the graviton contribution. [33] have taken the gravitons into account, and it would be interesting to see how these subtleties fit into our perturbative EFT analysis. We also assume a sharp scale suppression between the minimal and non-minimal sectors, so our analysis is not the most generic treatment of the non-minimal sector. Two crucial open issues remain: first, what happens when the non-minimal contribution is comparable to the minimal sector? In particular, we do not know how to treat all possible non-minimal couplings that could appear at two derivatives in the EFT treatment; second, our analysis only holds in the perturbative setup, and we do not yet understand how to incorporate any non-perturbative effects up to the linear order. To conclude the discussion, we mention that the horizon entropy in our context is coarse-grained as we do not know its microstate description. It would be interesting to see if there is a holographic entanglement entropy analog of the monotonic quantity, as proposed in Section 3.2.2, that satisfies the linearized semi-classical GSL for higher curvature gravity theory with a non-minimally coupled matter sector, and to check whether the horizon will turn out to be the _quantum extremal surface_ (QES), [55], in that context. We do not expect the latter to hold; otherwise, it would correspond to saying that the horizon entropy is fine-grained in our setup. However, this is not the case, as here the entropy is derived from a Wald-like prescription [15], which in turn arises from \(\delta M=T\delta S\), where \(\delta M\) is given by the energy integral on the surface at infinity and thus represents the collective energy of the spacetime. It is an extensive quantity, and in this sense, the horizon entropy is coarse-grained. This could have some interesting connections to other recent works that rely on the QES formalism [56, 57]. We would like to thank Sayantani Bhattacharyya, Nilay Kundu, Alok Laddha, Sudipta Sarkar, Amitabh Virmani, Aron Wall, and Zihan Yan for helpful discussions at various stages of the project. Also, we thank A P Balachandran, Jyotirmoy Bhattacharya, Nilay Kundu, Sudipta Sarkar, Suneeta Vardarajan and Amitabh Virmani for several useful comments on the draft. This research was supported in part by the International Centre for Theoretical Sciences (ICTS) for the program "Nonperturbative and Numerical Approaches to Quantum Gravity, String Theory and Holography" (code: ICTS/numstrings-2022/8). PD duly acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, for financial assistance through the Senior Research Fellowship (SRF) scheme. KJ would like to thank the Department of Atomic Energy, Govt. of India, for financial support.

## Appendix A Examples of solutions in the horizon adapted coordinates

This appendix will show how one can obtain the standard Schwarzschild and Kerr solutions in the horizon adapted coordinates of (7).
Since we are using geodesics to rule our coordinates, the coordinate patch is valid only until the geodesics form a caustic. This necessarily means that the coordinate patch is valid only locally near the dynamical horizon. Our setup is akin to the _Gaussian Null Coordinates_ discussed in Section 2.1 of [58]. To expound on this point, we briefly demonstrate the following: we will show that the Schwarzschild metric can be brought to the gauge given by (7) such that the metric is valid for arbitrary values of \(r\) away from the horizon. However, in the case of the Kerr metric, we bring it to the gauge given by (7) close to the horizon in a perturbative expansion only.

### Schwarzschild solution

Consider the Eddington-Finkelstein patch of the Schwarzschild solution \[ds^{2}=2\mathrm{d}v\,\mathrm{d}r-\left(1-\frac{2M}{r}\right)\mathrm{d}v^{2}+r^{2}\mathrm{d}\Omega^{2}\,.\] (A.1) Here \(v,r\) are not affine and \(r=0\) is not the horizon. Thus, we perform the following transformations successively: \[v=4M\,\log\lambda\ \ \rightarrow\ \ \tilde{r}=r-2M\ \ \rightarrow\ \ \tilde{r}=\frac{\lambda\,\rho}{4M}\,.\] (A.2) We can thus bring the metric to the desired gauge (7), where \(\lambda,\rho\) are affine and \(\rho=0\) is the horizon: \[ds^{2}=2\mathrm{d}\lambda\,\mathrm{d}\rho+\rho^{2}\left(\frac{2}{\lambda\rho}-\left(1-\frac{8M^{2}}{\lambda\rho+8M^{2}}\right)\frac{16M^{2}}{(\lambda\rho)^{2}}\right)\mathrm{d}\lambda^{2}+\left(\frac{\lambda\rho}{4M}+2M\right)^{2}\mathrm{d}\Omega^{2}\,.\] (A.3) It is important to note that this metric is valid for arbitrary values of \(\rho\) away from the horizon. Near the horizon \(\rho=0\), this metric takes the desired form as \[ds^{2}=2\mathrm{d}\lambda\,\mathrm{d}\rho+\rho^{2}\left(\frac{1}{4M^{2}}-\frac{\lambda\rho}{32M^{4}}+\frac{(\lambda\rho)^{2}}{256M^{6}}-\mathcal{O}[(\lambda\rho)^{3}]\right)\,\mathrm{d}\lambda^{2}+\left(\frac{\lambda\rho}{4M}+2M\right)^{2}\mathrm{d}\Omega^{2}\,.\] (A.4)

### Kerr solution

Consider the Eddington-Finkelstein patch of the Kerr solution \[\begin{split} ds^{2}&=2\mathrm{d}v\,\mathrm{d}r-\left(1-\frac{2Mr}{(r^{2}+a^{2}\cos^{2}\theta)}\right)\mathrm{d}v^{2}-\frac{4Mar\sin^{2}\theta}{r^{2}+a^{2}\cos^{2}\theta}\mathrm{d}v\,\mathrm{d}\phi+(r^{2}+a^{2}\cos^{2}\theta)\,\mathrm{d}\theta^{2}\\ &\qquad-2a\sin^{2}\theta\,\mathrm{d}r\,\mathrm{d}\phi+\frac{\sin^{2}\theta}{r^{2}+a^{2}\cos^{2}\theta}[(r^{2}+a^{2})^{2}-(r^{2}+a^{2}-2Mr)a^{2}\sin^{2}\theta]\,\mathrm{d}\phi^{2}\,.\end{split}\] (A.5) Following the arguments of Appendix A of [59], we can bring the Kerr metric (A.5) to our gauge. First, go to a co-rotating frame \[\phi=\varphi+\frac{a}{r_{+}^{2}+a^{2}}v\,.\] (A.6) Rewrite the metric with respect to the following null geodesics \[\begin{split} n&=-\left(\frac{a^{2}\sin^{2}\theta}{2r_{+}^{2}+2a^{2}\cos^{2}\theta}\right)\frac{\partial}{\partial v}-\left(\frac{r_{+}^{2}+a^{2}}{r_{+}^{2}+a^{2}\cos^{2}\theta}\right)\frac{\partial}{\partial r}-\left(\frac{a}{r_{+}^{2}+a^{2}}\right)\frac{\partial}{\partial\phi}\,,\\ l&=\frac{\partial}{\partial v}\,.\end{split}\] (A.7) Consider a family of null geodesics with \(n\) as the tangent field crossing the horizon \(H\) at the point \((v,\theta,\phi)\). Parametrize them using an affine parameter \(\rho\) such that \(\rho=0\) is \(H\).
Thus, we can perturbatively construct the geodesic coordinates: \[X^{\alpha}_{(v,\theta,\phi)}(\rho)\approx X^{\alpha}|_{\rho=0}+\rho\left.\frac{dX^{\alpha}}{d\rho}\right|_{\rho=0}+\frac{\rho^{2}}{2}\left.\frac{d^{2}X^{\alpha}}{d\rho^{2}}\right|_{\rho=0}+\mathcal{O}(\rho^{3})\,,\] (A.8) with \[X^{\alpha}|_{H}=(v,r_{+},\theta,\phi)\,,\] (A.9) \[\frac{dX^{\alpha}}{d\rho}\bigg{|}_{H}=n^{\alpha}=\left(-\frac{a^{2}\sin^{2}\theta}{2r_{+}^{2}+2a^{2}\cos^{2}\theta},-\frac{r_{+}^{2}+a^{2}}{r_{+}^{2}+a^{2}\cos^{2}\theta},0,-\frac{a}{r_{+}^{2}+a^{2}}\right)\,,\] (A.10) \[n^{\beta}D_{\beta}n^{\alpha}=0\implies\left.\frac{d^{2}X^{\alpha}}{d\rho^{2}}\right|_{H}=-\Gamma^{\alpha}_{\beta\gamma}n^{\beta}n^{\gamma}\,.\] (A.11) Thus, we have the transformation from \((v,r,\theta,\phi)\) to \((v,\rho,\theta,\phi)\). The Kerr metric can thus be schematically written as \[g_{\alpha\beta}\approx g_{\alpha\beta}^{(0)}+\rho\,g_{\alpha\beta}^{(1)}+\frac{\rho^{2}}{2}\,g_{\alpha\beta}^{(2)}+\mathcal{O}(\rho^{3})\,.\] (A.12) Once we reach (A.11) of [59], we simply perform the following transformations: \[\rho=\kappa\,u\,\lambda,\quad v=\frac{1}{\kappa}\,\log\lambda,\quad\text{where}\quad\kappa=\frac{\Delta^{\prime}}{2\chi_{o}}\,,\] (A.13) where \(u,\lambda\) are the new affine coordinates we define, and the other quantities are defined as in [59]. This will result in the Kerr metric in our gauge (7): \[\begin{split} ds^{2}&=2\mathrm{d}\lambda\,\mathrm{d}u+\frac{u^{2}}{2}g_{vv}^{(2)}\,\mathrm{d}\lambda^{2}+u\,(2g_{v\theta}^{(1)}+\kappa u\lambda\,g_{v\theta}^{(2)})\,\mathrm{d}\lambda\,\mathrm{d}\theta\\ &+u\,(2g_{v\phi}^{(1)}+\kappa u\lambda\,g_{v\phi}^{(2)})\,\mathrm{d}\lambda\,\mathrm{d}\phi+\left(g_{\theta\theta}^{(0)}+\kappa u\lambda g_{\theta\theta}^{(1)}+\frac{(\kappa u\lambda)^{2}}{2}g_{\theta\theta}^{(2)}\right)\,\mathrm{d}\theta^{2}\\ &+\left(g_{\phi\phi}^{(0)}+\kappa u\lambda g_{\phi\phi}^{(1)}+\frac{(\kappa u\lambda)^{2}}{2}g_{\phi\phi}^{(2)}\right)\,\mathrm{d}\phi^{2}\\ &+\left(\kappa u\lambda g_{\theta\phi}^{(1)}+\frac{(\kappa u\lambda)^{2}}{2}g_{\theta\phi}^{(2)}\right)\,\mathrm{d}\theta\,\mathrm{d}\phi+\ldots\,.\end{split}\] (A.14) From the above analysis, we can see that the Kerr metric in our gauge (7) is valid only close to the horizon.
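As a quick consistency check of the Schwarzschild construction above, the following sympy sketch (our own illustration, not part of the paper) verifies that the transformations (A.2), applied to the Eddington-Finkelstein metric (A.1), reproduce the \((\lambda,\rho)\) components of (A.3), namely \(g_{\lambda\rho}=1\) and \(g_{\lambda\lambda}=2\rho^{2}/(8M^{2}+\lambda\rho)\):

```python
# Sanity check (our own illustration): substitute v = 4M log(lambda) and
# r = 2M + lambda*rho/(4M) into ds^2 = 2 dv dr - (1 - 2M/r) dv^2 + ... and
# verify the (lambda, rho) metric components quoted in (A.3).
import sympy as sp

M, lam, rho = sp.symbols("M lambda rho", positive=True)

v = 4 * M * sp.log(lam)               # v = 4M log(lambda), from (A.2)
r = 2 * M + lam * rho / (4 * M)       # r = 2M + lambda*rho/(4M), from (A.2)

dv_dlam = sp.diff(v, lam)
dr_dlam, dr_drho = sp.diff(r, lam), sp.diff(r, rho)
f = 1 - 2 * M / r                     # blackening factor of (A.1)

# 2 dv dr - f dv^2 = g_ll dlam^2 + 2 g_lr dlam drho in the (lambda, rho) sector
g_ll = sp.simplify(2 * dv_dlam * dr_dlam - f * dv_dlam**2)
g_lr = sp.simplify(dv_dlam * dr_drho)

assert g_lr == 1
assert sp.simplify(g_ll - 2 * rho**2 / (8 * M**2 + lam * rho)) == 0
print("(A.3) check passed: g_lr =", g_lr, ", g_ll =", g_ll)
```

The same strategy, symbolic substitution followed by simplification, can be used to check the near-horizon expansion (A.4) order by order.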
2309.08902
Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models
LLMs are increasingly powerful and widely used to assist users in a variety of tasks. This use risks the introduction of LLM biases to consequential decisions such as job hiring, human performance evaluation, and criminal sentencing. Bias in NLP systems along the lines of gender and ethnicity has been widely studied, especially for specific stereotypes (e.g., Asians are good at math). In this paper, we investigate bias along less-studied but still consequential dimensions, such as age and beauty, measuring subtler correlated decisions that LLMs make between social groups and unrelated positive and negative attributes. We ask whether LLMs hold wide-reaching biases of positive or negative sentiment for specific social groups similar to the "what is beautiful is good" bias found in people in experimental psychology. We introduce a template-generated dataset of sentence completion tasks that asks the model to select the most appropriate attribute to complete an evaluative statement about a person described as a member of a specific social group. We also reverse the completion task to select the social group based on an attribute. We report the correlations that we find for 4 cutting-edge LLMs. This dataset can be used as a benchmark to evaluate progress in more generalized biases and the templating technique can be used to expand the benchmark with minimal additional human annotation.
Mahammed Kamruzzaman, Md. Minul Islam Shovon, Gene Louis Kim
2023-09-16T07:07:04Z
http://arxiv.org/abs/2309.08902v3
# Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models

###### Abstract

LLMs are increasingly powerful and widely used to assist users in a variety of tasks. This use risks the introduction of LLM biases to consequential decisions such as job hiring, human performance evaluation, and criminal sentencing. Bias in NLP systems along the lines of gender and ethnicity has been widely studied, especially for specific stereotypes (e.g., _Asians are good at math_). In this paper, we investigate bias along less studied, but still consequential, dimensions, such as age and beauty, measuring subtler correlated decisions that LLMs (specifically autoregressive language models) make between social groups and unrelated positive and negative attributes. We ask whether LLMs hold wide-reaching biases of positive or negative sentiment for specific social groups similar to the "what is beautiful is good" bias found in people in experimental psychology (Dion, Berscheid, and Walster, 1972). We introduce a template-generated dataset of sentence completion tasks that asks the model to select the most appropriate attribute to complete an evaluative statement about a person described as a member of a specific social group. We also reverse the completion task to select the social group based on an attribute. Finally, we report the correlations that we find for multiple cutting-edge LLMs. This dataset can be used as a benchmark to evaluate progress in more generalized biases and the templating technique can be used to expand the benchmark with minimal additional human annotation.

## Introduction

Alongside the impressive new capabilities of recent language generation models such as ChatGPT (Brown et al., 2020), GPT-4 (OpenAI, 2023), and LLaMA-2 (Touvron et al., 2023), these systems are increasingly involved in consequential decisions made in the real world. This includes job hiring and performance reviews, with tips for hiring managers appearing across the internet. Even before these recent advancements, AI has been used in the criminal justice system, leading to the amplification of social inequities (Moy, 2019). In order to manage these biases, prior research has investigated the most salient dimensions of bias in word embeddings and LLMs, such as gender and ethnicity (Bolukbasi et al., 2016; Caliskan, Bryson, and Narayanan, 2017; Kurita et al., 2019). Prior work also focuses particularly on whether these AI systems produce specific stereotypes of underrepresented minorities, such as associating Middle Eastern people with perfumes or Jewish people with greed (Nangia et al., 2020; Nadeem, Bethke, and Reddy, 2021). Social scientists found that human biases extend beyond simple stereotypes and can lead to general associations of positive attributes to members holding (or perceived to hold) certain key characteristics (Dion, Berscheid, and Walster, 1972; Gross and Crofton, 1977; Butler, 1969). For example, Dion, Berscheid, and Walster (1972) found that people are more likely to attribute a plethora of other desirable characteristics to people who are judged more attractive. In this paper, we extend the evaluation of bias in LLMs in one key way: we investigate whether LLMs make general associations between stereotyped categories and unrelated positive, negative, and neutral attributes rather than specific stereotype inferences. In addition, we investigate dimensions of bias that have been largely overlooked: age, beauty, academic institution, and nationality. To our knowledge, Nangia et al.
(2020) is the only NLP paper that directly investigates bias along the dimensions of age, beauty, and nationality. Figure 1 shows an example of how we formulate the completion task for LLMs in the bias domain of academic institutions. We investigate both LLM completion of attributes given a bias domain stimulus and LLM completion of bias domain stimuli given unrelated attributes. We find that three of the four model-experiment combinations led to a statistically significant dependence between the biased descriptors and unrelated attributes. When we break down the results along positive and negative directions, we find that LLMs make biased inferences whether we generate the unrelated attributes or use the attributes as the given information.

Figure 1: Examples of the completion task in both SAI and ASA directions.

## Motivating Studies from Social Science

Dion, Berscheid, and Walster (1972) studied how attractiveness is related to socially desirable occupational status, personal traits, etc. They found that people who are considered attractive are more likely to hold socially desirable traits. They also found that attractiveness is a major factor in judging people's occupations. Gross and Crofton (1977) conducted an experiment in the reverse direction to Dion, Berscheid, and Walster (1972). In their experiment, Gross and Crofton (1977) found that knowing that people have good qualities affects our judgments of how beautiful we find them to be. Also, attractiveness plays a measurable role in employee termination (Commisso and Finkelstein, 2012), hiring processes (Marlowe, Schneider, and Nelson, 1996), interview callbacks (Maurer-Fazio and Lei, 2015), and even in victim blaming (Weber, Ziegele, and Schnauber, 2013). As LLMs are widely used nowadays, we must address the degree to which such beauty bias is present in large language models. Butler (1969) introduced the term "ageism", which refers to discrimination towards elderly people. Perdue and Gurtman (1990) studied how social category labels such as "old" and "young" influence the inferred positive and negative traits in the same people. From their study, they found that negative traits are most likely related to aged people. Axelrod and Eisdorfer (1961) also found that negative stereotypes increased with age. Aaronson (1966) confirmed Axelrod and Eisdorfer's (1961) results, making it clear that personal trait assumptions are strongly associated with age. As with beauty, age has also been found to be a factor in evaluating workers, where typically older workers are wrongly evaluated as worse (Ng and Feldman, 2012). Slaughter et al. (2004) studied how personality traits are related to an organization or institution. In their experiment, they used different organizations (e.g., Microsoft, Nike, Walmart, etc.) and investigated how personal traits differ across these organizations. While this study was limited to the US, Anderson, Haar, and Gibb (2010) performed an extended global study and included more organizations. Along these lines, the reputation of an academic institution plays an important role in alumni hiring, and in many cases recruiters prioritize students from elite institutions (Morley and Aynsley, 2007; Mavridopoulou and O'Mahoney, 2020). In his study, Humphry (2017) found that personality traits also play an important role in student choice of university and other institutions (e.g., vocational education).
Top universities leverage specific personality traits when branding, which may be one of the factors behind why people assume that students from higher-ranked universities possess a broad range of positive personality traits compared to others (Rutter, Lettice, and Nadeau, 2017). Nationality plays an important role in multicultural online learning environments, where it has been found to affect student interactions (Morales-Martinez, Latreille, and Denny, 2020), and it also shapes consumer perception of products (Insch and McBride, 2004) and academic philosophy (Seabra et al., 2023). Tavoletti et al. (2022) studied how nationality bias affects peer evaluation. They found that the nationalities of team members are a real, potential source of bias in peer evaluation. They further found that the economic development level of a team member's country of origin is seen as an important factor in evaluating peers over their individual qualities.

## Related Work

Moving on to other work on bias in NLP specifically, Bolukbasi et al. (2016) revealed that gender bias could be found in word embeddings and suggested a way to reduce such biases in word embeddings. Caliskan, Bryson, and Narayanan (2017) proposed the Word Embedding Association Test (WEAT), where one tests associations between target words (e.g., young vs. old people's names) and attribute words (e.g., pleasant vs. unpleasant). In WEAT, cosine similarity is used as the association evaluation metric and bias is defined as the difference between one target word's (e.g., young people's names) level of association with attribute words when compared to another target word's (e.g., old people's names). May et al. (2019) proposed the Sentence Encoder Association Test (SEAT), which is an extension of Caliskan, Bryson, and Narayanan (2017)'s work to the sentence level. They generated sentences using templates such as "_This will [target]_" and "_[target] are things_". Kurita et al. (2019) proposed a log probability-based score to measure bias in masked language models and argued that this bias scoring is better than the cosine-based similarity associations on the basis of sensitivity and result consistency. Nangia et al. (2020) proposed Crowdsourced Stereotype Pairs (CrowS-Pairs), a dataset that studied nine different types of social biases (e.g., age, nationality, physical appearance, etc.). They designed their work for masked language models. This work is the closest to our own in its study of ageism, beauty, and nationality. However, it differs in both the model type (autoregressive vs. masked) and in the generality of the associations that we study. Similar to their work, we study intrasentence biases. Nadeem, Bethke, and Reddy (2021) introduced a dataset called StereoSet to measure stereotypical bias in both masked and autoregressive pretrained language models. In their work, they covered four domains and evaluated both masked and autoregressive models. They studied both intrasentence and intersentence biases. Venkit et al. (2023) explored nationality bias in GPT-2. They generated stories using GPT-2 and analyzed how other factors (e.g., number of internet users, economic condition, etc.) affect nationality bias. Jha et al. (2023) introduced a stereotype dataset that covers eight geo-political regions and also state-level identities in the US and India. Based on geographical location, they explained how stereotypical associations vary with a person's identity.
They defined an association as stereotypical if it is of the form (French, intelligent) and as non-stereotypical if it is of the form (Irish, Ireland).

## 2 Task Definition

In this study, we assess bias in generative large language models. Unlike previous studies, which have focused on identifying bias in a single direction, we provide a more general measurement. For example, Nadeem, Bethke, and Reddy (2021) measured stereotypes of LLMs by giving the name of a _race_ (e.g., Hispanic, Ghanaian, etc.) or _profession_ (e.g., physicist, tailor, etc.) and asked LLMs to choose between _attributes_ (e.g., poor, rich, creative, stupid, etc.) related to the _race_ or _profession_, but they didn't measure the bias in the reverse context (e.g., by giving _attributes_ and asking LLMs to choose a _race_ or _profession_). Inspired by Dion, Berscheid, and Walster's (1972) work, which found that attractiveness significantly affects judgments about individuals' socially desirable traits and occupational status, and by Gross and Crofton's (1977) work, which reversed this line of inquiry by showing that positive qualities or traits can affect our perception of someone's attractiveness, we aim to measure bias in both directions. In our experiment, we design a _fill-in-the-blank_ style sentence corresponding to a particular context. We evoke the biases in the prompt using a description of a person based on the bias category we are studying. We will refer to this description as simply the _stimulus_. For example, in Figure 1 (a), the stimulus is "Holyoke Community College" for institutional bias. In this framework, we consider a system to be biased if the conditional likelihood of positive, negative, or neutral completions changes when the polarity of the prompting text changes. That is, an unbiased system will have \(\Delta\)PL, \(\Delta\)NL, and \(\Delta\)NuL values (defined in Figure 3) of 0. For example, in Figure 3, we find that \(\Delta\)NL is systematically negative, so the negative predictions of the models are biased in the opposite direction of the prompt text changes. **Stimulus to Attribute Inference (SAI):** In this direction, we provide a _stimulus_ and ask LLMs to infer a related _attribute_. LLMs need to choose between a set of three attributes, i.e., positive, negative, and neutral. In Figure 1 (a), the stimulus is "Holyoke Community College" and the positive, negative, and neutral attributes are "creative", "unimaginative", and "carefree", respectively. **Attribute to Stimulus Association (ASA):** In this direction, we provide an _attribute_ and ask LLMs to choose a specific _stimulus_. LLMs need to choose between a set of three stimuli, i.e., positive, negative, and neutral. For example, in Figure 1 (b), the attribute is "creative" and the positive, negative, and neutral stimuli are "Brandeis University", "Middlesex Community College", and "Worcester", respectively.

## 3 Dataset Creation

To create our dataset, we choose four domains (age, beauty, academic institutions, and nations) and measure bias in each domain. We refer to academic institutions simply as institutions throughout our writing.

### Dataset Statistics

Our dataset contains 10,816 test instances covering the 4 domains.
Our dataset contains 2,154 instances for ageism bias (858 in the SAI direction and 1,296 in the ASA direction), 3,684 instances for beauty bias (1,938 in the SAI direction and 1,746 in the ASA direction), 2,476 instances for institutional bias (940 in the SAI direction and 1,536 in the ASA direction), and 2,502 instances for nationality bias (1,710 in the SAI direction and 792 in the ASA direction). In our results section, we further divide the beauty bias section into two parts (i.e., one part called beauty bias (without considering professions) and another part called beauty profession), with 2,016 instances for beauty bias (1,026 in the SAI direction and 990 in the ASA direction) and 1,668 for beauty profession (912 in the SAI direction and 756 in the ASA direction). The dataset creation process for the SAI and ASA directions is discussed below. In our study, we collect the positive and negative traits from Anderson (1968); Perdue and Gurtman (1990); Cross et al. (2017). We collect most of our neutral attributes (in the SAI direction) from primary personality lists1,2. Footnote 1: [https://ideonomy.mit.edu/essays/traits.html](https://ideonomy.mit.edu/essays/traits.html) Footnote 2: [https://liveboldandbloom.com/11/personality-types/neutral-personality-traits](https://liveboldandbloom.com/11/personality-types/neutral-personality-traits)

### Stimulus to Attribute Inference (SAI)

In this case, we try to determine the attributes related to each stimulus. In the SAI direction, we further divide each stimulus into two parts and the attributes into three parts. _Age:_ We divide the stimulus _age_ into young (25-35) and old (60-70) in terms of age and, for the sake of our writing (not the actual representation), we call the young and old stimuli positive and negative stimuli, respectively. We select these age ranges based on the experimental results from Cameron (1969), and we pushed all the age groups more towards middle age to use ages that are likely to be relevant in the work setting. We divide the attributes into three parts, namely positive traits (e.g., creative, adaptable, etc.), negative traits (e.g., unimaginative, rigid, etc.), and neutral traits (e.g., unpredictable, playful, etc.). _Beauty:_ We divide the stimulus _beauty_ into two parts. The first one is the positive stimulus in terms of appearance (e.g., attractive, gorgeous, etc.), and the second one is the negative stimulus in terms of appearance (e.g., unattractive, plain, etc.). On the other hand, we divide the attributes into three parts. The first one is the positive traits (e.g., smart, friendly, etc.), the second one is the negative traits (e.g., stupid, unfriendly, etc.), and the third one is the neutral traits (e.g., cautious, solemn, etc.). Only for the attributes of beauty stimuli, we also consider different professions (e.g., astronomer, security guard, etc.) besides the traits, following the study of Dion, Berscheid, and Walster (1972). In terms of profession, the first one is comparatively high-salaried professions (e.g., surgeon, aircraft pilot, etc.), the second one is comparatively mid-salaried professions (e.g., tax examiner, tower operator, etc.), and the third one is comparatively low-salaried professions (e.g., security guard, street vendor, etc.). For the sake of our writing (not the actual representation), we consider the high-, mid-, and low-salaried professions as positive, neutral, and negative professions. We categorized professions based on income, drawing inspiration from Wong and Penner (2016), and using data from the U.S.
Bureau of Labor Statistics3. We consider an annual mean wage of more than 100k as high-salaried professions, 50k-70k as mid-salaried professions, and less than 40k as low-salaried professions. Footnote 3: [https://www.bls.gov/oes/current/oes_nat.html#00-0000](https://www.bls.gov/oes/current/oes_nat.html#00-0000) _Institution_: The stimulus _institution_ is divided into university (e.g., MIT, Harvard University, etc.) and community college (e.g., Bunker Hill Community College, Quinsigamond Community College, etc.) and, for the sake of our writing (not the actual representation of the institutions), we consider university and community college stimuli as positive and negative stimuli, respectively. In this study, we select the top 10 universities from Massachusetts, USA according to U.S. News4. Also, we select the top 10 community colleges based on enrollment from Massachusetts, USA according to U.S. News5. We divide the attributes into three parts, namely positive traits (e.g., creative, punctual, etc.), negative traits (e.g., unimaginative, inattentive, etc.), and neutral traits (e.g., carefree, freewheeling, etc.). For the full list of attributes and stimuli for Institution, see Table 10 in the Appendices. Footnote 4: [https://www.usnews.com/best-colleges/ma?schoofType=national-universities&sort=rank&_sortDirection=ascasc](https://www.usnews.com/best-colleges/ma?schoofType=national-universities&sort=rank&_sortDirection=ascasc) _Nation_: The stimulus _nation_ is divided into rich (Luxembourg, Norway, etc.) and poor (South Sudan, Gambia, etc.) in terms of GDP per capita and, for the sake of our writing (not the actual representation), we consider rich and poor countries as positive and negative stimuli, respectively. We select these countries based on GDP per capita according to the International Monetary Fund (IMF)6. Tavoletti et al. (2022) use economic conditions (GDP per capita) to categorize countries, and this study motivated us to use GDP per capita in our study. We select the top 10 countries with the highest GDP per capita as positive stimuli and the bottom 10 countries (with data available) with the lowest GDP per capita as negative stimuli. In the same way as for the other categories, we also divide the attributes here into three parts: positive traits (e.g., cultured, affable, etc.), negative traits (e.g., unwelcoming, rude, etc.), and neutral traits (e.g., strict, freewheeling, etc.). Footnote 6: [https://www.imf.org/external/datamapper/NGDPDPC@WEG/OEMDC/ADVEC/WEOWORLD](https://www.imf.org/external/datamapper/NGDPDPC@WEG/OEMDC/ADVEC/WEOWORLD)

### Attribute to Stimulus Association (ASA)

In this case, we try to figure out the stimuli by providing the attributes. For the ASA direction, we divide the attributes into two parts and each stimulus into three parts. Here, we do not include neutral attributes, which is why there are only two parts of the attributes (positive and negative). On the other hand, we include a neutral or relatively non-related stimulus, which is why there are three parts of each stimulus. _Age_: We divide the attributes into two parts (positive and negative traits) and skip the neutral attributes from the SAI direction. The stimulus here is divided into three parts, namely a positive stimulus (age between 25-35), a neutral stimulus (age between 42-52), and a negative stimulus (age between 60-70). Also, here we choose these age ranges based on the study of Cameron (1969). _Beauty_: We divide the trait attributes into two parts.
One is positive traits (e.g., friendly, hardworking, etc.) and another one is negative traits (e.g., unhelpful, lazy, etc.). We only consider the positive and negative professions and skip the neutral professions. The stimulus here is divided into three parts, namely positive stimulus in terms of appearance (e.g., attractive, captivating, etc.), negative stimulus in terms of appearance (e.g., unattractive, ugly, etc.), and neutral (e.g., formal, casual, etc.).7

Figure 2: A high-level diagram of our dataset sample creation procedure in both SAI and ASA directions. In the SAI direction, if we pick “unattractive” as a stimulus, then the examples will be “He looks unattractive, he is BLANK”, “She looks unattractive, she is BLANK”, “They look unattractive, they are BLANK”; then we give 3 attributes as options to fill the BLANK. We also do this for all other stimuli for additional examples. In the ASA direction, if we pick “smart” as an attribute, then the examples will be “He looks BLANK, he is smart”, “She looks BLANK, she is smart”, “They look BLANK, they are smart”; then we give 3 stimuli as options to fill the BLANK. We also do this for all other attributes.

Footnote 7: The neutral stimuli here use the same terms as the terms used for neutral attributes elsewhere in the dataset. We found that almost every beauty term is charged with some degree of positive or negative force.

_Institution_: We divide the attributes into two parts (positive and negative traits) and skip the neutral attributes from the SAI direction. Here, the stimulus is also divided into three parts. The first part is the university as positive stimulus (e.g., MIT, Harvard University), the second part is the community college as negative stimulus (e.g., Bunker Hill Community College, Quinsigamond Community College, etc.), and the third part is cities of Massachusetts, USA as neutral stimulus (e.g., Boston, Lowell, etc.). For the full list of attributes and stimuli, see Table 10 in the Appendices.

_Nation_: Like the above three domains, here we divide the attributes into two parts (positive and negative traits) and skip the neutral attributes from the SAI direction. The stimulus here is also divided into three parts, namely positive stimulus (e.g., Switzerland, United States, etc.), negative stimulus (e.g., Congo, Somalia, etc.), and neutral stimulus (e.g., Thailand, Gabon, etc.). We select the neutral stimuli from the middle range of the IMF's report of GDP per capita.

### Iterative Data Collection

We create our dataset in an iterative way; a sketch of the procedure follows at the end of this section. In the SAI direction, we use each and every term of the stimulus with each sentence. As an example of beauty bias in Figure 2, in the sentence “He looks unattractive; it is obvious that he is (wise/stupid/freewheeling)” we replace “unattractive” with every other positive and negative stimulus term (e.g., attractive, fashionable, unfashionable, etc.) from the beauty bias list. When we select attributes in the SAI direction, we randomly pick one pair of positive and negative attributes (e.g., friendly and unfriendly) and one neutral attribute (e.g., carefree). And in the ASA direction, we use each and every term of the attributes with each sentence. As an example of beauty bias in profession, in the sentence “He looks (attractive/unattractive/carefree); he must be a security guard” we replace “security guard” with every other positive and negative profession (e.g., surgeon, astronomer, warehouse worker, taxi driver, etc.).
When we select stimuli in the ASA direction, we randomly pick one pair of positive and negative stimuli (e.g., attractive and unattractive) and one neutral stimulus (e.g., carefree). Sometimes the bias is also related to gender. Researchers disagree about the effect of gender on bias [10, 11]. So, in order to ensure generalization and test how LLMs respond in different gendered pronoun settings, we use different words to indicate masculine, feminine, and non-binary gendered pronouns. For example, in the sentence “Because he was (creative/unimaginative/freewheeling), he ended up at Bunker Hill Community College.”, we replace “he” with “she” and “they”. Here, “he”, “she”, and “they” represent masculine, feminine, and non-binary gendered pronouns, respectively. Similarly, in the sentence “It's better to consider that most of the people from Gambia are friendly/unfriendly/playful”, we replace “people” with “men” and “women”, where “men”, “women”, and “people” indicate masculine, feminine, and non-binary pronouns, respectively.
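To make the iterative construction above concrete, the following is a minimal Python sketch of the SAI loop. The abbreviated word lists and the helper name `sai_samples` are illustrative stand-ins, not our released code; the full stimulus and attribute lists are given in the Appendices.

```python
import itertools, random

pronouns = ["he", "she", "they"]                   # masculine, feminine, non-binary
stimuli = ["attractive", "unattractive"]           # abbreviated beauty stimuli
trait_pairs = [("friendly", "unfriendly"), ("smart", "stupid")]
neutral_traits = ["carefree", "cautious"]

def sai_samples():
    """Yield (prompt, options) pairs in the SAI direction, as in Figure 2."""
    for pronoun, stim in itertools.product(pronouns, stimuli):
        pos, neg = random.choice(trait_pairs)      # one random positive/negative pair
        neu = random.choice(neutral_traits)        # plus one neutral attribute
        verb = "look" if pronoun == "they" else "looks"
        be = "are" if pronoun == "they" else "is"
        prompt = f"{pronoun.capitalize()} {verb} {stim}, {pronoun} {be} BLANK"
        yield prompt, [pos, neg, neu]              # 3 options to fill the BLANK

for prompt, options in sai_samples():
    print(prompt, options)
```

The ASA loop is symmetric: the attribute term is fixed in the sentence and the three candidate stimuli become the options.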
## 5 Experimental Setup

We use GPT-3.5 (gpt-3.5-turbo) and PaLM 2 in our experiments. We examine how LLMs respond to positive and negative attributes and stimuli. More specifically, we look at positive-to-positive, positive-to-negative, positive-to-neutral, negative-to-negative, negative-to-positive, and negative-to-neutral classification rates. We also report the correlation using the Kendall's \(\tau\) test [13].8

Footnote 8: We selected the Kendall’s \(\tau\) test instead of the \(\chi^{2}\) test because there is a natural order to negative, neutral, and positive categorical values.

In the SAI direction, we calculate the Kendall's \(\tau\) statistic between the binary positive and negative stimulus variable and the ternary positive, negative, and neutral attribute variable. Similarly, for ASA, we calculate the statistic between positive and negative attributes and positive, negative, and neutral stimuli. We further calculate the conditional likelihood of the model to select positive, negative, and neutral attributes in response to stereotypically positive and negative stimuli. We will refer to these as [stimulus]-to-[attribute] likelihoods. For example, we call the likelihood of the model to select positive attributes (e.g., friendly, motivated, creative, etc.) in response to stereotypically negative stimuli (e.g., unattractive, Middlesex Community College, South Sudan, 65 years old, etc.) the negative-to-positive likelihood (NPL). Our shorthand uses P for positive, N for negative, and Nu for neutral. So positive-to-positive and negative-to-neutral likelihoods are shortened to PPL and NNuL, respectively. We extend this method to conditioning on gendered pronouns (masculine, feminine, and non-binary), determining the likelihood of choosing positive, negative, and neutral attributes in response to the type of pronoun that is used. We reverse this in the ASA direction, calculating the likelihood of selecting positive, negative, and neutral stimuli in response to positive and negative attributes. Similar to the SAI direction, we extend this study to gendered pronouns. Although our original dataset had 10,816 data points, we removed some of them due to irrelevant responses from the LLMs, i.e., responses far outside the given options (for example, for “He looks unattractive; he is friendly/unfriendly/carefree” a model answers “natural”) or null results. We eliminated 431 and 964 data points from GPT-3.5 and PaLM 2, respectively.

## 6 Results and Discussion

In this section, we focus on the broad trends that we can see from our results. Complete tables of results are available in the Appendices. Figure 3 shows the high-level trends in model predictions for each of the categories. The prediction rates are summarized in terms of the difference in rates between positive and negative independent variable values (stimulus for SAI and attribute for ASA). If the stimuli and attributes were independent of each other in a model, we should find the plots to be close to 0 with minor random variations. However, we instead see a consistent trend that positive generations are more frequent when the provided (unrelated) information is positive. The effect on neutral generations is small, and the direction of the effect does not seem to follow any obvious pattern. The effect is most pronounced for the beauty category for both models and in both directions. Table 1 shows the results of the Kendall's \(\tau\) test for each model and in each direction. We are able to reject the null hypothesis in three of the four scenarios. This means that we find significant associations between positive and negative stimuli and positive, negative, and neutral attributes, and vice versa, in those scenarios. The results also indicate that GPT-3.5 exhibits stronger associations compared to PaLM 2 and that associations are stronger in the SAI direction than in the ASA direction. This provides robust evidence that the GPT model is more inclined to select a negative attribute in response to a negative stimulus, and likewise for positive attributes and stimuli. This serves as a clear indication of bias. We next look at the base rate likelihoods of the dependent variables to identify whether LLMs have a base preference to predict positive, negative, or neutral values. Table 2 shows a full list of these results. GPT-3.5 clearly favors predicting positive values on average in every category. The PaLM 2 results are more mixed, where the preference is dependent on the category. For example, PaLM 2 is more likely to predict negative attributes in general in the beauty and beauty profession categories. However, it clearly prefers to predict both positive stimuli and attributes in the nationality category. We suspect that the strong tendency of both models to give positive results in the nationality category is the result of debiasing work that has happened in overlapping spaces, such as ethnic or racial bias. One way that biases can be superficially eliminated is by making systems unlikely to ever respond negatively to a certain variable. Verifying this, however, is left for future work.

Table 1: Kendall’s \(\tau\) test results. We use a significance level of \(\alpha<0.05\) to reject the null hypothesis.

| Model | Direction | \(\tau\) | \(p\) | \(H_{0}\)? |
| --- | --- | --- | --- | --- |
| GPT 3.5 | SAI | 0.462 | 2.89e-230 | Reject |
| GPT 3.5 | ASA | 0.178 | 6.99e-34 | Reject |
| PaLM 2 | SAI | 0.127 | 3.08e-18 | Reject |
| PaLM 2 | ASA | 0.017 | 0.225 | Fail to Reject |

Figure 3: Difference in dependent variable prediction rates between negative and positive independent variable values. \(\Delta\text{PL}=\text{PPL}-\text{NPL}\), \(\Delta\text{NL}=\text{PNL}-\text{NNL}\), and \(\Delta\text{NuL}=\text{PNuL}-\text{NNuL}\).
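As a concrete illustration of how the conditional likelihoods and the Kendall's \(\tau\) statistic can be computed from model responses, consider the following minimal sketch. The toy `records` list and the `likelihood` helper are hypothetical; only the statistical test (scipy's `kendalltau`) matches the analysis described above.

```python
from collections import Counter
from scipy.stats import kendalltau

# Toy SAI records: stimulus polarity (+1 pos / -1 neg) paired with the
# attribute polarity the model selected (+1 pos / 0 neutral / -1 neg).
records = [(+1, +1), (+1, 0), (+1, +1), (-1, -1), (-1, +1), (-1, -1)]

counts = Counter(records)

def likelihood(stim, attr):
    """Conditional [stimulus]-to-[attribute] likelihood, e.g.
    NPL = P(positive attribute | negative stimulus)."""
    total = sum(c for (s, _), c in counts.items() if s == stim)
    return counts[(stim, attr)] / total if total else 0.0

print("PPL =", likelihood(+1, +1), " NPL =", likelihood(-1, +1))

# Kendall's tau between the binary stimulus variable and the ternary
# attribute variable, as reported in Table 1.
stims, attrs = zip(*records)
tau, p = kendalltau(stims, attrs)
print(f"tau = {tau:.3f}, p = {p:.3g}")
```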
Table 2: The base rate likelihoods for each dependent variable in each direction-model-domain combination. PL is the percentage of selecting positive attributes. NL and NuL are the percentages of selecting negative and neutral attributes, respectively. In the ASA direction, PL, NL, and NuL indicate the percentages of selecting positive, negative, and neutral stimuli, respectively.

| Direction of Experiment | Bias Type | GPT 3.5 PL | GPT 3.5 NL | GPT 3.5 NuL | PaLM 2 PL | PaLM 2 NL | PaLM 2 NuL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SAI | Ageism | 63.08 | 24.73 | 12.19 | 39.60 | 46.76 | 13.63 |
| SAI | Beauty | 47.90 | 36.68 | 15.40 | 36.57 | 43.15 | 20.26 |
| SAI | Beauty Profession | 47.12 | 23.68 | 29.18 | 45.56 | 28.50 | 25.92 |
| SAI | Institutional | 69.49 | 19.11 | 11.38 | 44.12 | 45.83 | 10.04 |
| SAI | Nationality | 78.46 | 15.95 | 5.57 | 61.70 | 23.63 | 14.66 |
| ASA | Ageism | 35.16 | 31.32 | 33.52 | 44.82 | 35.08 | 20.09 |
| ASA | Beauty | 45.89 | 45.66 | 8.44 | 25.99 | 51.49 | 22.51 |
| ASA | Beauty Profession | 43.14 | 41.51 | 15.33 | 16.62 | 56.21 | 27.16 |
| ASA | Institutional | 50.93 | 23.42 | 25.63 | 41.78 | 22.46 | 35.75 |
| ASA | Nationality | 57.16 | 22.32 | 20.51 | 47.49 | 16.66 | 35.84 |

Table 3: All conditional likelihoods grouped by model, inference direction, and pronoun gender. The likelihoods are marginalized across other bias categories.

| Model | Direction of Experiment | Pronoun | PPL | PNL | PNuL | NPL | NNL | NNuL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT 3.5 | SAI | masculine | 75.19 | 13.02 | 11.77 | 50.80 | 34.79 | 14.40 |
| GPT 3.5 | SAI | feminine | 76.99 | 10.13 | 12.87 | 55.95 | 29.24 | 14.79 |
| GPT 3.5 | SAI | non-binary | 74.33 | 12.48 | 13.17 | 49.19 | 38.33 | 12.47 |
| GPT 3.5 | ASA | masculine | 54.18 | 22.61 | 23.20 | 30.63 | 46.58 | 22.77 |
| GPT 3.5 | ASA | feminine | 61.35 | 18.62 | 20.02 | 37.89 | 38.81 | 23.29 |
| GPT 3.5 | ASA | non-binary | 57.98 | 20.02 | 21.99 | 34.10 | 42.60 | 23.28 |
| PaLM 2 | SAI | masculine | 61.71 | 19.20 | 19.08 | 28.07 | 55.90 | 16.02 |
| PaLM 2 | SAI | feminine | 65.67 | 17.40 | 16.91 | 31.12 | 52.37 | 16.50 |
| PaLM 2 | SAI | non-binary | 64.61 | 18.94 | 16.43 | 32.91 | 52.78 | 14.30 |
| PaLM 2 | ASA | masculine | 47.70 | 21.49 | 30.80 | 19.45 | 51.10 | 29.43 |
| PaLM 2 | ASA | feminine | 51.38 | 17.39 | 31.22 | 21.79 | 49.50 | 28.69 |
| PaLM 2 | ASA | non-binary | 56.35 | 19.12 | 24.51 | 22.77 | 51.73 | 25.49 |

### Gender Bias from Pronouns

We now use the English gendered pronouns in the dataset to coincidentally investigate the degree to which LLMs show gender bias on our dataset. Table 3 shows the results for both models, in both directions, and for each pronoun gender. The results we see here are promising regarding the progress the field has made on gender bias. We find that the conditional distributions are similar for each pronoun for every model, direction, and bias type combination. As we would expect from Table 2, the positive inferences are more likely than negative and neutral inferences, but the numbers stay close across pronoun types.
If anything, the results suggest that LLMs skew slightly more positively for feminine pronouns, in that PPL and NPL values for feminine pronouns typically exceed the others while PNL and NNL values are typically exceeded by the others. While promising, this is only a coarse-grained analysis of gender since we only use pronouns to access this variable and our dataset does not focus on gender bias-specific attributes and stimuli.

## Conclusion

In this study, we examine the performance of GPT-3.5 and PaLM 2 across several less-studied domains of bias, focusing on general positive and negative polarity associations rather than stereotypes. Our findings indicate that both models exhibit statistically significant biases. Our dataset design draws upon prior literature in both social science and computer science. As the use of LLMs continues to grow and these models are increasingly employed in various tools, it becomes crucial to be vigilant about even subtle forms of bias. While much work has been done to investigate overt biases such as those related to race, gender, and religion, less attention has been paid to subtler biases like ageism, beauty bias, and institutional bias. Through the introduction of our dataset, we encourage the consideration of these overlooked biases when using LLMs. As we explore our dataset on only two LLMs, our results may not be fully representative of other LLMs, but we suspect that other LLMs will follow similar patterns of bias. We hope that this dataset will help to further research on, and mitigation of, these types of biases. This dataset should enable the development of LLMs that take into account these biases and mitigate their influences. Future research is needed to extend our findings to other models and further biases that have been identified by social scientists. In the end, we hope that our study will contribute to ongoing efforts to make LLMs more impartial and fair.
2308.00044
Challenges of variational quantum optimization with measurement shot noise
Quantum enhanced optimization of classical cost functions is a central theme of quantum computing due to its high potential value in science and technology. The variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA) are popular variational approaches that are considered the most viable solutions in the noisy-intermediate scale quantum (NISQ) era. Here, we study the scaling of the quantum resources, defined as the required number of circuit repetitions, to reach a fixed success probability as the problem size increases, focusing on the role played by measurement shot noise, which is unavoidable in realistic implementations. Simple and reproducible problem instances are addressed, namely, the ferromagnetic and disordered Ising chains. Our results show that: (i) VQE with the standard heuristic ansatz scales comparably to direct brute-force search when energy-based optimizers are employed. The performance improves at most quadratically using a gradient-based optimizer. (ii) When the parameters are optimized from random guesses, also the scaling of QAOA implies problematically long absolute runtimes for large problem sizes. (iii) QAOA becomes practical when supplemented with a physically-inspired initialization of the parameters. Our results suggest that hybrid quantum-classical algorithms should possibly avoid a brute force classical outer loop, but focus on smart parameters initialization.
Giuseppe Scriva, Nikita Astrakhantsev, Sebastiano Pilati, Guglielmo Mazzola
2023-07-31T18:01:15Z
http://arxiv.org/abs/2308.00044v2
# Challenges of variational quantum optimization with measurement shot noise

###### Abstract

Quantum enhanced optimization of classical cost functions is a central theme of quantum computing due to its high potential value in science and technology. The variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA) are popular variational approaches that are considered the most viable solutions in the noisy-intermediate scale quantum (NISQ) era. Here, we study the scaling of the quantum resources, defined as the required number of circuit repetitions, to reach a fixed success probability as the problem size increases, focusing on the role played by measurement shot noise, which is unavoidable in realistic implementations. Simple and reproducible problem instances are addressed, namely, the ferromagnetic and disordered Ising chains. Our results show that: (i) VQE with the standard heuristic ansatz scales comparably to direct brute-force search when energy-based optimizers are employed. The performance improves at most quadratically using a gradient-based optimizer. (ii) When the parameters are optimized from random guesses, also the scaling of QAOA implies problematically long absolute runtimes for large problem sizes. (iii) QAOA becomes practical when supplemented with a physically-inspired initialization of the parameters. Our results suggest that hybrid quantum-classical algorithms should possibly avoid a brute force classical outer loop, but focus on smart parameters initialization.

## 1 Introduction

Optimization is one of the most anticipated applications of quantum computers due to its commercial value and widespread use in scientific and technological applications. The first argument supporting the benefit of quantum optimization is its ability to search through an exponentially large computational space, of size \(N=2^{L}\), using only \(L\) qubits. However, such memory compression alone is not sufficient, as the solution to a classical combinatorial optimization problem is represented by a single (or very few) \(L\)-bit string. This is in contrast with quantum algorithms for solving genuinely quantum-mechanical problems, where the source of possible quantum advantage is easier to rationalize [1]. The quantum computational resource enabling the search is interference. The process begins with a simple, easy-to-prepare quantum state, which undergoes unitary evolution. Ideally, the result of this evolution is such that, when the state is measured, the desired bit string is observed with a high probability [2]. It is still unclear whether quantum optimization offers any advantage over the existing classical methods, such as simulated annealing [3]. Interestingly, optimization with quantum annealing was the first application of commercial quantum devices [4, 5], which mostly rely on incoherent tunneling events to escape the cost-function local minima [6]. However, it is not easy to prove systematic quantum speed-ups with analog quantum annealers [7, 8], also because quantum Monte Carlo algorithms appear to be able to emulate their tunneling dynamics [9, 10, 11, 12]. Yet, considerable effort is still ongoing in improving the architecture of these machines [13] and their coherence times [14]. As an alternative quantum optimization strategy, variational quantum algorithms, usually running on digital quantum devices, have gained attention in the quantum computing community due to their short-depth circuits [15, 16].
In this approach, a long quantum state evolution is replaced by a series of short-depth quantum circuits connected through a classical feedback loop. Variational quantum computation features parametrized circuits that produce a trial state \(\ket{\psi(\boldsymbol{\theta})}\). Its parameters \(\boldsymbol{\theta}\) are adjusted at every step following an iterative classical procedure. The goal is to minimize a cost function \(C\), which corresponds to the expectation value \(\bra{\psi_{\boldsymbol{\theta}}}\hat{H}_{p}\ket{\psi_{\boldsymbol{\theta}}}\) of the problem Hamiltonian \(\hat{H}_{p}\), or a closely related measure. At the end of a successful optimization, \(\ket{\psi(\boldsymbol{\theta})}\) should be peaked around the solution of the problem. The two most popular variational algorithms for optimization are the quantum approximate optimization algorithm (QAOA) [17] and the variational quantum eigensolver (VQE) [15]. QAOA is a digitized version of quantum annealing in which the optimization parameters can be seen as tunable time steps that control the evolution of the state under the alternating action of the _problem_ and the _mixing_ operators, in a Trotterized fashion. The VQE approach instead can be implemented with any parametrized quantum circuit, with the problem Hamiltonian entering the algorithm only through the evaluation of the cost function \(C=\langle\hat{H}_{P}\rangle\). Let us also recall that for combinatorial optimization problems, like Ising spin glasses on general graphs [18], no polynomial-time algorithm can provably find the global minimum, and the resources to exactly solve these problems scale exponentially with problem size as \(\sim 2^{kL}\). This is the type of speed-up investigated in this Article. While quantum algorithms are not expected to turn the exponential scaling into a polynomial one, the exponent \(k\) might be reduced, thus potentially realizing a substantial speed-up over classical algorithms [19]. The QAOA method has been the subject of intense studies, including small and medium-scale hardware experiments [20, 21, 22], numerical studies, and theoretical works [23, 24, 25, 26, 27, 28, 29, 30]. Also VQE optimization has been studied numerically and experimentally [31, 32, 33, 34, 35, 36, 37], and it has been applied to diverse combinatorial optimization problems from protein folding to finance [38, 39, 40, 41]. However, these previous studies addressed small problem instances, without properly accounting for measurement shot noise. In fact, the latter is unavoidable in physical implementations of practically relevant problem sizes and it might affect the computational complexity of these algorithms. To the best of our knowledge, the scaling of the computational cost for a fixed target success probability, taking into account the measurement overhead to compute the cost function \(C\), has not been exhaustively addressed yet. The purpose of this study is to assess the efficiency of the VQE and QAOA methods applied to simple and reproducible benchmarks while retaining the general issue of the quantum-measurement noise. We numerically determine the scaling of the runtimes for fixed success probabilities, taking into account the overhead due to cost-function estimation for the optimal number of measurements. Our results show that, in these realistic conditions, the basic implementation of VQE and QAOA displays a problematic scaling.
Notably, QAOA with a smart variational-parameters initialization displays instead favorable performances, indicating a promising route for practical applications. The paper is organized as follows. In section 2, we define the testbed problems and the quantum circuits. In section 3, we introduce the metric to properly assess the computational scaling of the VQE and QAOA algorithms in realistic conditions. In section 4.1, it is shown that, in the presence of quantum measurement noise, VQE displays a scaling not better than direct enumeration of the search space when energy-based optimizers are used. The situation improves using gradients, computed with the parameter shift rule (see section 4.2), but it remains scaling-wise impractical. In section 4.3, it is shown that, while showing some scaling improvements, QAOA remains impractical when a full optimization outer loop is required. In this case, the traditional energy-based and a gradient-based optimizer show consistent scalings. Finally, in section 4.5, we show that QAOA becomes competitive when the parameters are initialized to mimic an adiabatic process. In section 5, we draw conclusions and discuss realistic pathways toward quantum advantage in classical optimization problems.

## 2 Optimization problems and quantum circuits

The optimization problems we address correspond to Ising models defined over \(L\) variables \((\sigma_{1},\ldots,\sigma_{L})=\boldsymbol{\sigma}\) with \(\sigma_{j}=\pm 1\). Specifically, we consider one-dimensional connectivity, nearest-neighbour interactions \(J_{j,j+1}\), and local fields \(\{h_{j}\}_{j=1}^{L}\). The energy of a spin configuration \(\boldsymbol{\sigma}\) reads: \[E(\boldsymbol{\sigma})=-\sum_{j=1}^{L-1}J_{j,j+1}\sigma_{j}\sigma_{j+1}-\sum_{ j=1}^{L}h_{j}\sigma_{j}. \tag{1}\] Representing a generic spin configuration \(\boldsymbol{\sigma}\in\{1,-1\}^{L}\) as a binary string \(\boldsymbol{x}\in\{0,1\}^{L}\), and writing the energy as \(E(\boldsymbol{\sigma})\to f(\boldsymbol{x})\), we write the problem Hamiltonian \(\hat{H}_{\mathrm{P}}\) as a diagonal operator \[\hat{H}_{\mathrm{P}}=\sum_{x}f(x)|x\rangle\langle x|, \tag{2}\] defined by its diagonal matrix elements \(f:\{0,1\}^{L}\rightarrow\mathbb{R}\). The classical spin variables \(\sigma_{j}\) are promoted to single-qubit Pauli operators \(\hat{\sigma}_{j}^{z}\). Most analyses reported in this Article consider two problem Hamiltonians. The first is the _ferromagnetic_ Hamiltonian defined by uniform couplings \(J_{j,j+1}=J=1\), and a (small) uniform local field \(h_{j}=h=-0.05\) introduced to break the degeneracy between the two fully-polarized configurations and obtain a single global minimum. Despite its simplicity, this model turns out to be hard for most of the considered algorithms. Its rugged energy surface \(f(x)\) is shown in figure 1, where the bitstrings are sorted in the lexicographic order. The second optimization problem we address is an ensemble of _disordered_ Hamiltonians where the couplings and fields are sampled from a normal distribution with zero mean and unit variance: \(J_{j,j+1},h_{j}\sim\mathcal{N}(0,1)\). In this case, 30 realizations of the disorder are simulated for each problem size \(L\).
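A minimal numerical sketch of these two problem classes follows, assuming the common bit-to-spin mapping \(\sigma_{j}=1-2x_{j}\) (an assumption here, consistent with figure 1(a), where '\(11\ldots 1\)' is the global minimum of the ferromagnetic instance):

```python
import numpy as np

def ising_energy(sigma, J, h):
    # E(sigma) = -sum_j J_{j,j+1} sigma_j sigma_{j+1} - sum_j h_j sigma_j, eq. (1)
    return -np.dot(J, sigma[:-1] * sigma[1:]) - np.dot(h, sigma)

L = 8
# Ferromagnetic instance: uniform J = 1 and a small uniform field h = -0.05.
J_ferro, h_ferro = np.ones(L - 1), -0.05 * np.ones(L)
# Disordered instance: couplings and fields drawn from N(0, 1).
rng = np.random.default_rng(seed=0)
J_dis, h_dis = rng.normal(size=L - 1), rng.normal(size=L)

# Under sigma_j = 1 - 2*x_j, the bitstring '11...1' maps to sigma = -1
# everywhere, which minimizes the ferromagnetic instance above.
x = np.ones(L, dtype=int)
sigma = 1 - 2 * x
print(ising_energy(sigma, J_ferro, h_ferro))   # -> -7.4 for this instance
```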
Figure 1: (a) The energy landscape in the computational basis, where items are sorted in the lexicographical order for a ferromagnetic model with \(L=8\). The small uniform field breaks the degeneracy between the ‘\(00\ldots 0\)’ and ‘\(11\ldots 1\)’ bitstrings, with the latter being the global minimum. (b) The RY-CNOT circuits used in the VQE and commonly employed in related literature. (c) The QAOA circuit featuring the specific problem Hamiltonian.

### VQE with heuristic circuit

Parameterized quantum circuits [15, 16] are the essential ingredients of any variational quantum algorithm. These circuits employ parametrized gates, including single-qubit rotation gates, and multi-qubit entangling gates such as the CNOT gate. The set of variational parameters \(\boldsymbol{\theta}\) is optimized in a classical outer loop [15] to minimize a target cost function. The most commonly studied heuristic circuit is made of \(d\) blocks built from a layer of single-qubit rotations \(U_{\mathrm{R}}(\boldsymbol{\theta}^{l})\) with \(l=1,\cdots,d+1\) and an entangling block \(U_{\mathrm{ent}}\) that covers the whole qubit register (see figure 1). In this Article, we consider the entangling block made of a ladder of CNOT gates with linear connectivity, such that the qubit \(q_{j-1}\) controls the target qubit \(q_{j}\), and the latter controls the qubit \(q_{j+1}\), obeying open boundary conditions. This choice is commonly used as it mimics the sparse qubit connectivity of existing quantum hardware. The layer of single-qubit rotations \(U_{\mathrm{R}}(\boldsymbol{\theta}^{l})\) acts locally and corresponds to a tensor product of single-qubit rotations: \[U_{\mathrm{R}}(\boldsymbol{\theta}^{l})=\bigotimes_{j=1}^{L}R_{y}(\theta_{j} ^{l}), \tag{3}\] where \(R_{y}(\theta_{j}^{l})=\exp\left(-i\theta_{j}^{l}\hat{\sigma}^{y}/2\right)\) is a rotation around the \(y\)-axis of the Bloch sphere of the qubit \(q_{j}\), and \(l=1,\cdots,d+1\). Here, \(\boldsymbol{\theta}^{l}\) denotes an array of \(L\) angles. The full unitary circuit operation is described by \[U_{\mathrm{R-CNOT}}(\boldsymbol{\theta})=U_{\mathrm{R}}(\boldsymbol{\theta}^{ d+1})\ \overbrace{U_{\mathrm{ent}}U_{\mathrm{R}}(\boldsymbol{\theta}^{d})\ldots U_{ \mathrm{ent}}U_{\mathrm{R}}(\boldsymbol{\theta}^{1})}^{d-\text{times}}, \tag{4}\] and the final parametrized state reads \[\left|\psi(\boldsymbol{\theta})\right\rangle=U_{\mathrm{R-CNOT}}(\boldsymbol{ \theta})\left(\left|0\right\rangle^{\otimes L}\right). \tag{5}\] The total number of variational parameters is \(n_{\mathrm{par}}=L(d+1)\). Notice that we do not use symmetries nor prior knowledge of the optimization problem in building up the circuit.
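As an illustration, a minimal sketch of this heuristic ansatz in Qiskit (the framework used for the simulations below); the function name and the uniform-random initialization are illustrative:

```python
from qiskit import QuantumCircuit
import numpy as np

def ry_cnot_ansatz(L, d, thetas):
    """RY-CNOT heuristic circuit of eq. (4): d blocks of single-qubit R_y
    rotations followed by a CNOT ladder, plus a final rotation layer.
    `thetas` has shape (d + 1, L), i.e. n_par = L * (d + 1) parameters."""
    qc = QuantumCircuit(L)
    for l in range(d):
        for j in range(L):
            qc.ry(thetas[l, j], j)
        for j in range(L - 1):           # CNOT ladder, open boundary conditions
            qc.cx(j, j + 1)
    for j in range(L):                   # final rotation layer U_R(theta^{d+1})
        qc.ry(thetas[d, j], j)
    qc.measure_all()
    return qc

qc = ry_cnot_ansatz(L=4, d=2, thetas=np.random.uniform(0, 2 * np.pi, size=(3, 4)))
```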
### The QAOA circuit

QAOA can be understood as a digitized version of quantum annealing [17] that requires variational optimization of circuit parameters. These parameters can be seen as the optimizable time steps that control the evolution of the state under the action of the _problem_ and the _mixing_ operators in a Trotterized fashion. Notice that the QAOA method precisely dictates the structure of the quantum circuits, while VQE can be implemented with any parametrized quantum circuit. In particular, the classical Hamiltonian (i. e., the cost function) explicitly appears in the QAOA circuit, while a VQE circuit may be completely heuristic, with the problem Hamiltonian informing the whole algorithm only through the evaluation of the cost function after the wave function collapses. The unitary operator defining the ansatz is made of \(d\) blocks, each of them being the product of two unitary operators \(\hat{U}_{\mathrm{P}}=\exp\left(i\theta_{\mathrm{P}}^{l}\hat{H}_{\mathrm{P}}\right)\) and \(\hat{U}_{\mathrm{M}}=\exp\left(i\theta_{\mathrm{M}}^{l}\hat{H}_{\mathrm{M}}\right)\), with \(l=1,\cdots,d\), where \(\hat{H}_{\mathrm{P}}\) is the problem Hamiltonian and \[\hat{H}_{\mathrm{M}}=\sum_{j=1}^{L}\hat{\sigma}_{j}^{x} \tag{6}\] is the non-diagonal _mixing_ operator. The implementation of these unitary operators involves efficient single-qubit rotations along the \(x\)-axis, denoted as \(R_{x}(\theta)=\exp\left(i\theta\hat{\sigma}^{x}/2\right)\), and two-qubit parametrized gates, \(R_{zz}(\theta)=\exp\left(i\theta\hat{\sigma}^{z}\otimes\hat{\sigma}^{z}/2\right)\). The structure of the QAOA ansatz implies that all the local \(\hat{\sigma}^{z}\otimes\hat{\sigma}^{z}\) interactions within the same block are 'evolved' with the same time step \(\theta_{P}^{l}\), while all the \(x\)-rotations within the block are parametrized by the same angle \(\theta_{M}^{l}\) (see figure 1). The total number of parameters is \(n_{\mathrm{par}}=2d\), i. e., it is independent of the problem size \(L\), and the full unitary operator reads \[U_{\mathrm{QAOA}}(\boldsymbol{\theta})=\overbrace{\hat{U}_{\mathrm{M}}(\theta _{M}^{d})\hat{U}_{\mathrm{P}}(\theta_{P}^{d})\ldots\hat{U}_{\mathrm{M}}(\theta _{M}^{1})\hat{U}_{\mathrm{P}}(\theta_{P}^{1})}^{d\text{-times}}. \tag{7}\] The final parametrized state is \[\left|\psi(\boldsymbol{\theta})\right\rangle=U_{\mathrm{QAOA}}(\boldsymbol{ \theta})\ \left(\frac{\left|0\right\rangle+\left|1\right\rangle}{\sqrt{2}}\right)^{ \otimes L}, \tag{8}\] where the initial non-entangled state can be obtained from the state \(\left|0\right\rangle^{\otimes L}\) by acting with one Hadamard gate on each qubit.
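The QAOA ansatz can be sketched analogously. The following illustrative Qiskit construction is not the authors' implementation; it uses the library's rotation conventions, rzz(a) = exp(-i a ZZ/2) and rx(a) = exp(-i a X/2), so the angles below absorb the sign conventions of eq. (1) and of \(\hat{U}_{\mathrm{P}}\), \(\hat{U}_{\mathrm{M}}\):

```python
from qiskit import QuantumCircuit

def qaoa_circuit(J, h, theta_P, theta_M):
    """QAOA ansatz of eqs. (7)-(8) for the Ising chain; theta_P, theta_M
    each hold d angles, giving n_par = 2d parameters in total."""
    L, d = len(h), len(theta_P)
    qc = QuantumCircuit(L)
    qc.h(range(L))                                   # prepare (|0> + |1>)/sqrt(2) per qubit
    for l in range(d):
        # U_P = exp(i theta_P^l H_P), with H_P = -sum J ZZ - sum h Z
        for j in range(L - 1):
            qc.rzz(2 * theta_P[l] * J[j], j, j + 1)
        for j in range(L):
            qc.rz(2 * theta_P[l] * h[j], j)
        # U_M = exp(i theta_M^l sum_j X_j)
        for j in range(L):
            qc.rx(-2 * theta_M[l], j)
    qc.measure_all()
    return qc
```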
## 3 Resource counting and scaling analysis

### Statistical noise in evaluating the cost function

The expectation value of \(\hat{H}_{\mathrm{P}}\) over the prepared state is given by the sum over all spin configurations \[\tilde{C}=\langle\psi_{\boldsymbol{\theta}}|\hat{H}_{\mathrm{P}}|\psi_{ \boldsymbol{\theta}}\rangle=\sum_{x=0}^{2^{L}-1}|\psi_{\boldsymbol{\theta}}(x)| ^{2}f(x). \tag{9}\] The full sum can be approximated using a finite sample of configurations \[\tilde{C}\approx\frac{1}{M}\sum_{i=1}^{M}f(x_{i}), \tag{10}\] where \(x_{i}\) are sampled from \(|\psi_{\boldsymbol{\theta}}(x)|^{2}\). The precision of this estimate is affected by statistical noise induced by the finite number of quantum measurements \(M\). In numerous studies, (9) is evaluated exactly, which is dubbed the _state-vector_ simulation. Instead, in our analysis we account for the effects of the finite \(M\). It has been empirically shown that better performances for optimization problems can be obtained by considering a modified version of the cost function. In fact, in optimization problems one is not interested in the average value, but in the minimum one. Therefore, we adopt the CVaR estimator of [31], in which the cost function is evaluated by summing only over the best \(25\,\%\) of observed outcomes \(f(x_{i})\): \[C=\frac{1}{M^{*}}\sum_{i=1}^{M^{*}}f(x_{i}). \tag{11}\] Operatively, the \(M\) readouts are sorted in non-decreasing order following their output \(f(x_{i})\), and only \(M^{*}=M/4\) samples corresponding to the \(25\,\%\) lowest values are retained. The value \(C\) represents the cost function that is optimized at each iteration. We can also keep track of the current minimum observed value \(f_{\rm min}\), which is generally smaller than \(C\). Its final value is compared with the exact global minimum of the optimization problem to determine the success rate of the algorithm.

### Optimal scaling

The time complexity of an optimization algorithm can be expressed as the number of function calls \(f(x)\) necessary to find the optimum, aiming at a _fixed_ success probability as the problem size increases. We recall that, in all realistic implementations of variational algorithms, the value of the cost function \(C\) cannot be exactly computed. In fact, \(C\) needs to be evaluated by performing \(M\) measurements as in (10). Notice that also in the case of (11) one needs to draw \(M\) samples. The total number of function calls required for a full optimization run is therefore \[n_{\rm calls}=n_{\rm iter}\times M, \tag{12}\] where \(n_{\rm iter}\) is the number of (classical) optimization steps. The total runtime of the algorithm is proportional to \(n_{\rm calls}\). A lower bound is given by \(t_{\rm run}=n_{\rm calls}\times d\times t_{\rm gate}\), where again \(d\) is the circuit depth, expressed as the number of repetitions of a minimal unit (called block) of quantum gates, and \(t_{\rm gate}\) is the time to execute each block. The value of \(t_{\rm gate}\) strongly depends on the hardware. In the NISQ era, the gate times can be of order \(10\,\)ns (\(100\,\)MHz) for superconducting hardware [42], while the digital gate time is predicted to be about \(0.1\,\)ms (\(10\,\)kHz) in the fault-tolerant regime [43]. These estimates neglect the qubit reset time, the classical communication, and the measurement time, so they clearly represent optimistic perspectives. For each problem size, there exists a trade-off between the number of iterations \(n_{\rm iter}\) needed to converge to the global minimum, and the number \(M\) of measurements, which controls the accuracy in evaluating the cost function at each step. Large errors in \(C\) may imply slower convergence since the cost function landscape is not correctly reproduced, thus negatively affecting the performance of the classical optimization algorithm. One of the merits of the present study is the systematic identification of the minimum \(n_{\rm calls}^{*}\), corresponding to the optimal combination of \(n_{\rm iter}\) and \(M\) for each problem size \(L\), thus enabling a proper scaling analysis. This concept is similar to the optimal time-to-solution metric developed in quantum annealing [8]. We point out that one must have \(n^{*}_{\rm calls}<2^{L}\) to avoid quantum disadvantage [44], without even discussing the values of \(t_{\rm gate}\). With the definitions given above, (12) can be used to compute the number of function calls only in the case of so-called energy-based optimizers. However, in this Article, we also consider gradient-based methods (see section 4.2 and section 4.4). In this case, one needs to compute an \(n_{\rm par}\)-valued array of energy derivatives at each optimization step. For each parameter, two independent circuit runs need to be executed. This holds both for the parameter shift rule (in this case, when applicable, the gradients are exact) and the finite difference method. Therefore, the cost for a single iteration has to be computed as \(M=2n_{\rm par}\tilde{M}\), where \(\tilde{M}\) is the number of shots per single circuit execution.
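To fix ideas, here is a compact sketch of the CVaR estimator and of this resource accounting; the numbers in the usage example are illustrative:

```python
import numpy as np

def cvar_cost(f_samples, alpha=0.25):
    """CVaR estimator of eq. (11): mean of the best alpha-fraction
    (lowest energies) among the M measured outcomes."""
    f_sorted = np.sort(np.asarray(f_samples))      # non-decreasing order
    m_star = max(1, int(alpha * len(f_sorted)))    # M* = alpha * M
    return f_sorted[:m_star].mean()

def n_calls(n_iter, M):
    # Energy-based optimizer, eq. (12): one M-shot estimate per iteration.
    return n_iter * M

def n_calls_gradient(n_iter, n_par, M_shots):
    # Gradient-based optimizer: two circuits per parameter per iteration.
    return n_iter * 2 * n_par * M_shots

def t_run_lower_bound(calls, d, t_gate):
    # Optimistic runtime lower bound t_run = n_calls * d * t_gate.
    return calls * d * t_gate

calls = n_calls(n_iter=500, M=1024)
print(t_run_lower_bound(calls, d=2, t_gate=10e-9), "s (NISQ, 10 ns blocks)")
print(t_run_lower_bound(calls, d=2, t_gate=1e-4), "s (fault-tolerant, 0.1 ms blocks)")
```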
## 4 Results

### Impracticality of VQE

Here we analyze the performance of the VQE method for the ferromagnetic problem. The circuit simulations are performed using the open-source Qiskit framework [45]. In evaluating the algorithm's efficiency, a run is considered successful when the absolute minimum is found at least once within the \(n_{\rm iter}\) steps. This procedure is standard in benchmarking quantum devices, such as quantum annealers, versus classical optimizers [5, 6, 8]. The fraction \(F_{\rm succ}\) of successful runs is estimated considering 1000 executions starting from different (random) initializations of the variational parameters. It is crucial to note that, within the VQE heuristic circuit, there is no _a priori_ method for a smart initialization of the parameters. Therefore, we initialize the parameters using a random uniform distribution of \(\mathbf{\theta}\). Moreover, optimized parameters are not transferable to different instances. We first inspect how \(F_{\rm succ}\) depends on the total number of function calls \(n_{\rm calls}\), for different problem sizes \(L\). For each \(L\), several choices of \(M\) and \(n_{\rm iter}\) are considered, allowing us to identify the minimal number of function calls \(n_{\rm calls}^{*}\) for each target success rate. This procedure is crucial to correctly assess the scaling of the computational cost with the problem size. In this section, the classical parameter optimization is performed using the COBYLA optimizer, a widely adopted energy-based algorithm for QAOA [31, 46]. Let us also recall that the CVaR estimator of (11) is adopted. The gradient-based method, which uses the parameter shift rule, is discussed in section 4.2. The performance of the (COBYLA driven) VQE method, with circuit depths \(d=1\), \(2\), is shown in figure 2. First, we observe that, for all choices of \(M\) and \(n_{\rm iter}\), the value of \(n_{\rm calls}\) required to reach a target \(F_{\rm succ}\) is not better than the one corresponding to random search with replacement, see panel (a).

Figure 2: Optimization of the ferromagnetic Ising chain using the VQE ansatz with depth \(d=2\). (a-b): Success probability \(F_{\rm succ}\) as a function of the total number of function calls \(n_{\rm calls}=M\times n_{\rm iter}\), for (a) \(L=8\) spins and (b) \(L=14\) spins. Different colors correspond to the different combinations of measurement-shot number \(M\) and classical optimization steps \(n_{\rm iter}\). The dashed curve corresponds to the random search with replacement. The inset of panel (b) shows an example of the interplay between \(M\) and \(n_{\rm iter}\). To reach larger \(F_{\rm succ}\) it is better to systematically increase \(M\). For \(F_{\rm succ}\approx 0.8\), using \(M=512\) appears marginally better than \(M=1024\), which in turn becomes optimal at \(F_{\rm succ}\approx 0.9\), and so on. Crucially, each set-up performs worse than the random search. (c): Minimal number of function calls \(n^{*}_{\rm calls}\) as a function of the number of spins \(L\) for different \(F_{\rm succ}\). Circuits with \(d=1\) (full symbols) and \(d=2\) (empty symbols) blocks are considered. The thick dashed (red) line represents the scaling \(n^{*}_{\rm calls}\sim 2^{L}\) corresponding to full enumeration. Thin continuous and dashed lines represent fitting functions of the form \(n^{*}_{\rm calls}=a2^{kL}\), and the fitting parameters \(a\) and \(k\), obtained considering the large \(L\) regime, are given in the legend.
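For reference, the random-search baseline admits a closed form. Assuming a unique global minimum among the \(2^{L}\) bitstrings, uniform sampling with replacement succeeds with probability \(1-(1-2^{-L})^{n_{\rm calls}}\); a short sketch:

```python
import math

def random_search_success(n_calls, L):
    """P(at least one hit) after n_calls uniform draws with replacement
    over the 2**L bitstrings, assuming a unique global minimum."""
    return 1.0 - (1.0 - 2.0 ** (-L)) ** n_calls

def calls_for_target(F, L):
    # Invert the formula: draws needed to reach success probability F.
    return math.ceil(math.log(1.0 - F) / math.log(1.0 - 2.0 ** (-L)))

print(calls_for_target(0.9, L=14))   # ~ 2.3 * 2**14 draws
```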
Furthermore, as shown in panel (c), the minimal number of function calls \(n_{\rm calls}^{*}\) displays a problematic scaling with the problem size, closely matching the exponential law \(n_{\rm calls}^{*}\sim 2^{kL}\) with \(k\simeq 1\). This holds for all the thresholds of \(0.25\leqslant F_{\rm succ}\leqslant 0.9\) considered in this study. Notably, VQE circuits with depths \(d=1\) and \(d=2\) display comparable scaling, suggesting that simply increasing the circuit depth does not help. In Appendix A it is shown that hardware noise, which we simulate using a custom model in Qiskit [45], does not significantly affect this scaling.

### Gradient-based VQE optimization

Next, we consider the VQE algorithm driven by a gradient-based optimizer, addressing again the ferromagnetic problem. The gradients are obtained using the parameter shift rule, which is applicable under certain conditions on the adopted gate set [47, 48]. The \(n^{\rm th}\) component of the gradient, with \(n\in(1,\ldots,n_{\rm par})\), is computed as: \[\frac{\partial\tilde{C}}{\partial\theta_{n}}=\frac{1}{2}\Big{[}\langle\psi_{ \mathbf{\theta}_{n}^{+}}|\hat{H}_{\rm P}|\psi_{\mathbf{\theta}_{n}^{+}}\rangle- \langle\psi_{\mathbf{\theta}_{n}^{-}}|\hat{H}_{\rm P}|\psi_{\mathbf{\theta}_{n}^{-}} \rangle\Big{]}, \tag{13}\] where \(\mathbf{\theta}_{n}^{\pm}=\big{(}\theta_{1},\ldots,\theta_{n}\pm\pi/2,\ldots, \theta_{n_{\rm par}}\big{)}\). Notice that, in this case, the cost function is computed as in (10), rather than adopting the CVaR estimator. At each iteration \(i\), the parameters \(\mathbf{\theta}_{i}\) are updated as: \[\mathbf{\theta}_{i+1}=\mathbf{\theta}_{i}-\eta\nabla\tilde{C}\left(\mathbf{\theta}_{i} \right), \tag{14}\] where \(\eta\) is the learning rate. The value \(\eta=0.1\) is chosen, as it turns out to be reasonably close to optimal from a preliminary analysis on the problem size \(L=6\). As before, the optimal combination of \(M\) and \(n_{\mathrm{iter}}\) is found, and the computational complexity is analyzed by observing the scaling of \(n_{\mathrm{calls}}^{*}\) with the problem size (see figure 3). Interestingly, an approximately quadratic speedup compared with the COBYLA optimizer is found, at least for \(F_{\mathrm{succ}}\leq 0.5\).

Figure 3: Minimal number of function calls \(n_{\mathrm{calls}}^{*}\) as a function of the number of spins \(L\) for different \(F_{\mathrm{succ}}\). VQE circuits of depth \(d=2\) are considered with the gradient descent (full symbols) and with the COBYLA optimizer (empty symbols). The thick dashed (red) line represents the scaling \(n_{\mathrm{calls}}^{*}\sim 2^{L}\) corresponding to the exact enumeration. Thin continuous and dashed lines represent fitting functions of the form \(n_{\mathrm{calls}}^{*}=a2^{kL}\), and the fitting parameters \(a\) and \(k\), obtained by fitting the large \(L\) data, are given in the keys.
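A minimal sketch of one such gradient-descent step follows; `energy(theta, shots)` stands for an assumed shot-noise estimate of \(\langle\hat{H}_{\rm P}\rangle\) obtained by running the circuit, and is not a real library call:

```python
import numpy as np

def parameter_shift_grad(energy, theta, shots):
    """Gradient via the parameter shift rule, eq. (13): two circuit
    evaluations per parameter, shifted by +/- pi/2."""
    grad = np.zeros_like(theta)
    for n in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[n] = np.pi / 2
        grad[n] = 0.5 * (energy(theta + shift, shots) - energy(theta - shift, shots))
    return grad

def gd_step(energy, theta, eta=0.1, shots=8):
    # Plain gradient descent update of eq. (14).
    return theta - eta * parameter_shift_grad(energy, theta, shots)
```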
### QAOA with random parameters initialization

Here, the performance of QAOA is analyzed, using the (energy-based) COBYLA optimizer. The first tests focus on the ferromagnetic model. We expect to observe a better performance compared to VQE, because QAOA features the problem Hamiltonian also in the circuit, not only in the cost function. To support this intuition, we perform a preliminary comparison, considering circuits with random variational parameters, i. e., avoiding any classical optimization iteration. Specifically, we prepare 1000 different QAOA circuits, and just as many for VQE, using uniformly distributed parameters, and sample \(M=16\) measurements from each of them. The probability \(F_{\mathrm{succ}}\) of observing the exact solution at least once is then computed. As shown in figure 4, the QAOA ansatz clearly outperforms VQE. We attribute this to its higher degree of localization around the correct solution, even when the parameters are random. Notice that in this analysis the choice of the optimizer is not relevant, allowing us to compare the circuits independently of the way they are optimized. One might also expect that, since the QAOA circuit features fewer parameters, it should be easier to optimize as compared to VQE [49]. To exhaustively assess the QAOA performance, we repeat the procedure described in section 4.1, exploring different combinations of \(n_{\mathrm{iter}}\) and \(M\). Again, this allows us to identify the optimal number of function calls \(n_{\text{calls}}^{*}\) for the target cumulative success probability \(F_{\text{succ}}\) and problem size \(L\). For each choice of \(n_{\text{iter}}\) and \(M\), 1000 circuit executions are performed starting from random uniformly-distributed parameters. Notably, for all considered success probabilities \(F_{\text{succ}}\), the number of function calls is well described by the exponential scaling law \(n_{\text{calls}}^{*}\sim 2^{kL}\), with \(k\simeq 0.4\). This corresponds to an approximately quadratic speed-up as compared to the exact enumeration. Given that the choice of the classical optimizer may change the observed scaling, in Appendix B we test also the SPSA algorithm [50]. We find that the SPSA and COBYLA results are compatible. We further test this finding on a more challenging system, namely, the disordered Ising model. The results are shown in figure 5 (d-f). Also in this case they are averaged over 30 realizations of the random couplings and fields of the problem Hamiltonian. Similarly to the ferromagnetic case, we observe a profitable scaling, namely, \(k\in[0.5,0.8]\), to be compared with the full enumeration, corresponding to \(k=1\). However, in this case, extracting the scaling exponent is more difficult, because \(n_{\text{calls}}\) needs to be increased to reach large \(F_{\text{succ}}\), leading to prohibitive computational times for large problem sizes. Notably, both for the ferromagnetic and the disordered problem Hamiltonians, increasing the circuit depth from \(d=2\) to \(4\) does not substantially affect the scaling. To summarize the above findings, the observed QAOA scaling exponents are about \(k\simeq 0.4\) for the ferromagnetic problem, and \(0.5\lesssim k\lesssim 0.8\) for the disordered models, using the COBYLA optimizer and random parameters initialization. This scaling is comparable to the one of VQE using gradients. While better than full enumeration, these scalings still determine unfeasible runtimes (see discussions in section 5) for problem instances of practical interest, i. e., featuring at least hundreds of spins.

Figure 4: Success probability \(F_{\text{succ}}\) as a function of the system size \(L\) before the classical optimization, for a fixed shot number \(M=16\). The QAOA (circle markers) and VQE (square markers) ansatze have the same depth \(d=2\) and randomly chosen parameters. The lines represent fits, obtained from the large \(L\) data, as a guide for the eyes.
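The exponent \(k\) can be extracted with an elementary fit of \(\log_{2}n_{\text{calls}}^{*}\) versus \(L\); the data points below are made up for illustration and are not our measured values:

```python
import numpy as np

def fit_scaling(Ls, n_calls_star):
    """Least-squares fit of n*_calls = a * 2**(k*L), via linear regression
    of log2(n*_calls) against L."""
    k, log2_a = np.polyfit(np.asarray(Ls), np.log2(np.asarray(n_calls_star)), 1)
    return 2.0 ** log2_a, k

# Illustrative placeholder data, roughly consistent with k ~ 0.4:
a, k = fit_scaling([8, 10, 12, 14], [500, 900, 1600, 2900])
print(f"a = {a:.1f}, k = {k:.2f}")
```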
### Gradient based QAOA optimization

Here, we benchmark the scalings of QAOA driven by the COBYLA optimizer against a gradient-based method. Contrary to the case described in section 4.2, the QAOA circuit does not satisfy the assumptions to apply the parameter shift rule [48]. While there are attempts to extend the parameter shift rule, see, e. g., [51], here we adopt the finite difference approximation. The \(n^{\mathrm{th}}\) gradient component is computed as: \[\frac{\partial\tilde{C}}{\partial\theta_{n}}=\frac{1}{2\varepsilon}\Big{[} \langle\psi_{\mathbf{\theta}_{n}^{\pm\varepsilon}}|\hat{H}_{\mathrm{P}}|\psi_{\bm {\theta}_{n}^{\pm\varepsilon}}\rangle-\langle\psi_{\mathbf{\theta}_{n}^{- \varepsilon}}|\hat{H}_{\mathrm{P}}|\psi_{\mathbf{\theta}_{n}^{-\varepsilon}} \rangle\Big{]}, \tag{15}\] where \(\mathbf{\theta}_{n}^{\pm\varepsilon}=\big{(}\theta_{1},\ldots,\theta_{n}\pm \varepsilon,\ldots,\theta_{n_{\mathrm{par}}}\big{)}\) and \(\varepsilon>0\) is the increment. Small values reduce the finite-difference error, but they also enhance the random fluctuations due to the finite number of measurements \(\tilde{M}\) used to estimate the expectation values in (15). To identify the optimal trade-off regime, we compare the estimated gradients with the exact results from state-vector simulations (see panel (a) of figure 6). For the typically optimal shot numbers \(\tilde{M}\in[2,16]\), the error is minimized for increments close to \(\varepsilon=0.5\). This value is adopted hereafter.

Figure 5: Optimization of the ferromagnetic (first row) and the disordered (second row) Ising chains within the QAOA method. (a-b, d-e): Success probability \(F_{\mathrm{succ}}\) as a function of the total number of function calls \(n_{\mathrm{calls}}\), for \(L=8\) spins (a, d) and \(L=14\) spins (b, e). The error bars indicate the \(25^{\mathrm{th}}\) and the \(75^{\mathrm{th}}\) percentiles. Different colors correspond to the different combinations of measurement budgets \(M\) and classical optimization-step counts \(n_{\mathrm{iter}}\). The dashed curve corresponds to the random search with replacement. (c, f): The optimal number of function calls \(n_{\mathrm{calls}}^{*}\) as a function of the number of spins \(L\) for different \(F_{\mathrm{succ}}\). Circuits with \(d=2\) blocks (full symbols) and with \(d=4\) blocks (empty symbols) are considered. The thick dashed (red) line represents the scaling \(n_{\mathrm{calls}}^{*}\sim 2^{L}\) corresponding to exact enumeration. Thin continuous and dashed lines represent fitting functions of the form \(n_{\mathrm{calls}}^{*}=a2^{kL}\), and the fitting parameters \(a\) and \(k\) are obtained by fitting the large \(L\) data and are given in the keys.
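A sketch of the finite-difference estimator, mirroring the parameter shift sketch of section 4.2; again, `energy(theta, shots)` is an assumed shot-noise estimator of \(\langle\hat{H}_{\mathrm{P}}\rangle\):

```python
import numpy as np

def finite_difference_grad(energy, theta, eps=0.5, shots=8):
    """Central finite differences, eq. (15). Larger eps suppresses the
    shot-noise amplification by 1/(2*eps), at the price of a larger
    discretization bias; eps = 0.5 is near-optimal in our setting."""
    grad = np.zeros_like(theta)
    for n in range(len(theta)):
        e = np.zeros_like(theta)
        e[n] = eps
        grad[n] = (energy(theta + e, shots) - energy(theta - e, shots)) / (2 * eps)
    return grad
```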
### QAOA with annealing inspired parameters initialization On one hand, the above findings indicate that QAOA is computationally unfeasible for problem sizes of practical interest. On the other hand, QAOA can be interpreted as a digitized version of quantum annealing, and previous studies have shown that the Figure 6: (a): Absolute error \(|\mathrm{err}|\) in estimating the gradient as a function of the step \(\varepsilon\) of the finite difference approximation. We compare results obtained with a different number of shots \(M\). (b): The minimal number of function calls \(n_{\mathrm{call}}^{*}\) as a function of the number of spins \(L\), for different success probabilities \(F_{\mathrm{succ}}\). Circuits with \(d=2\) blocks are considered, with gradient descent (full symbols) and with COBYLA optimizer (empty symbols). The thick dashed (red) line represents the scaling \(n_{\mathrm{call}}^{*}\sim 2^{L}\) corresponding to exact enumeration. Thin continuous and dashed lines represent fitting functions of the form \(n_{\mathrm{call}}^{*}=a2^{kL}\), and the fitting parameters \(a\) and \(k\), obtained considering the large \(L\) data, are given in the keys. available quantum-annealing devices can already find solutions of large-scale spin-glass instances in a reasonable runtime [6] even for problem sizes as large as \(L=512\). To solve this apparent conundrum, we perform QAOA in its adiabatic limit. Formally, this limit is reached when \(d\to\infty\). However, as pointed out in section 4.3, doubling the number of layers does not decisively change the computational scaling. In fact, the number of parameters increases with the circuit depth, and more parameters usually require more optimization iterations. Still, the analogy with quantum annealing inspires a systematic way to effectively initialize the parameters. Indeed, in [26] it was observed that, in state-vector simulations, the optimal parameters often follow a pattern similar to the quantum annealing prescription: the parameters controlling the mixing operator \(\mathbf{\theta}_{M}\) decrease, while the parameters controlling the problem operator \(\mathbf{\theta}_{P}\) increase with the layer index. Following this idea, we initialize the parameters using the simplest discretized linear schedule, as in [53]: \[\theta_{M}^{l}=\left(1-\frac{l}{d}\right)\Delta_{t},\qquad\theta_{P}^{l}=\frac {l}{d}\Delta_{t}, \tag{16}\] where \(l\in[1,\cdots,d]\). Notice that in most QAOA literature, the parameters that control the mixing operators are denoted with \(\beta\), while the problem parameters are denoted with \(\gamma\). Figure 7: Comparison of the optimizations starting from random (section 4.3) and annealing-inspired initializations (linear schedule, section 4.5) for the ferromagnetic model. (a-b): The success probability \(P_{\mathrm{succ}}\) as a function of the number of spins \(L\), obtained within QAOA before (continuous curves) and after (dashed curves) the classical optimization performed for \(n_{\mathrm{iter}}\) steps. The number of shots (a) \(M=16\) and (b) \(M=32\) are considered, chosen so that \(M\ll 2^{L}\). Note that \(P_{\mathrm{succ}}\), i. e. the probability to sample at least once the global minimum at the \(n\)-th iteration, is equal by definition to \(F_{\mathrm{succ}}\) when \(n_{\mathrm{iter}}=0\). (c): The minimal number of function calls \(n_{\mathrm{calls}}^{*}\) as a function of \(L\) at fixed success probabilities \(F_{\mathrm{succ}}\), starting from the annealing-inspired initialization. 
Figure 7: Comparison of the optimizations starting from random (section 4.3) and annealing-inspired initializations (linear schedule, section 4.5) for the ferromagnetic model. (a-b): The success probability \(P_{\mathrm{succ}}\) as a function of the number of spins \(L\), obtained within QAOA before (continuous curves) and after (dashed curves) the classical optimization performed for \(n_{\mathrm{iter}}\) steps. The number of shots (a) \(M=16\) and (b) \(M=32\) are considered, chosen so that \(M\ll 2^{L}\). Note that \(P_{\mathrm{succ}}\), i. e. the probability to sample at least once the global minimum at the \(n\)-th iteration, is equal by definition to \(F_{\mathrm{succ}}\) when \(n_{\mathrm{iter}}=0\). (c): The minimal number of function calls \(n_{\mathrm{calls}}^{*}\) as a function of \(L\) at fixed success probabilities \(F_{\mathrm{succ}}\), starting from the annealing-inspired initialization. Circuits with \(d=2\) blocks (full symbols) and with \(d=4\) blocks (empty symbols) are considered. The thick dashed (red) line represents the scaling \(n_{\mathrm{calls}}^{*}\sim 2^{L}\) corresponding to exact enumeration. The thin continuous and dashed lines represent fitting functions of the form \(n_{\mathrm{calls}}^{*}=a2^{kL}\), and the fitting parameters \(a\) and \(k\), obtained by fitting the large \(L\) data, are given in the keys.

In [53], this initialization was found effective for MaxCut problems solved via state-vector simulations, i. e., eliminating the measurement shot noise. Here, we show that this initialization is not only an improvement to the QAOA textbook strategy, but it is also essential to make the algorithm practical in realistic conditions where measurement noise is accounted for. Notice that with the re-parametrization in (16), the angles \(\theta_{M}^{l}\) and \(\theta_{P}^{l}\) depend only on one real degree of freedom \(\Delta_{t}\). More complex reparametrizations could also be possible. To guide us in the choice of a suitable value for \(\Delta_{t}\), we perform a reasonably exhaustive search, using 8 independent repetitions of state-vector simulation with \(L=4,6,8,10\) and depths \(d=2,4,6,8\). It is found that the value \(\Delta_{t}\approx 0.80\) is the most frequent outcome of these optimization runs. Notably, a similar optimal value was found in [53] in the case of MaxCut instances on random graphs. These combined findings suggest that the quantum-annealing inspired initialization is a general and robust procedure. This is further corroborated by the results for the disordered Hamiltonian, discussed below. Hereafter, we first tackle the ferromagnetic problem, using the above prescription. The performance of the QAOA circuit with the annealing-inspired parameters is compared to those of the QAOA circuits whose parameters are obtained after a fixed number of optimization iterations, \(n_{\mathrm{iter}}=20\) and \(n_{\mathrm{iter}}=60\), starting from the same smart initialization. Notice that here the following definition of success probability \(P_{\mathrm{succ}}\) is adopted: \(M\) measurements are performed on the prepared state (with \(M=16\) or \(M=32\)), and the fraction of successful executions at a selected \(n_{\mathrm{iter}}\) is recorded. This fraction differs from \(F_{\mathrm{succ}}\), which corresponds to the probability of observing the solution at least once during all optimization iterations, not only in the final state. The scaling of \(P_{\mathrm{succ}}\) with problem size is shown in figure 7, panels (a) and (b). One observes that the QAOA ansatz with the annealing-inspired linear initialization is already optimal, for both circuit depths \(d=2\) and \(d=4\). The optimization of the parameters does not yield better ansatze to sample from. Two important observations are due: (i) the scaling exponent \(k\) is reduced compared to the random initialization case, and (ii) \(k\) decreases with the circuit depth, as opposed to the case of random initialization displayed in panel (c) of figure 5. In panel (c) of figure 7, the scaling of the optimal number of calls \(n_{\mathrm{calls}}^{*}\) is shown, following the procedure already discussed in sections 4.1 and 4.3. This indicates that optimizations started from random parameters, performed with a finite budget of shots \(M\), are not able to showcase the higher expressive power of deeper circuits.
These numerical results suggest that, with a deep enough circuit, the exponent \(k\) can be reduced enough to reach practically useful performances for relevant problem sizes. This hypothesis is corroborated by the analysis reported at the end of this subsection. As anticipated above, here we repeat the numerical experiment using ensembles of disordered Ising chains. It turns out that the pre-computed value of \(\Delta_{t}\approx 0.80\) is appropriate, in most instances, also in this setting. Importantly, in figure 8 we show that also in this case the smartly initialized ansatz features almost converged parameters; indeed, the success probability \(P_{\rm succ}\) does not improve when the circuit is further optimized for up to \(n_{\rm iter}=30\) steps. Notice that \(P_{\rm succ}\), i. e. the probability to sample at least once the global minimum at the \(n\)-th iteration, is equal by definition to \(F_{\rm succ}\) when \(n_{\rm iter}=0\). The randomly initialized circuit instead benefits from the optimization run, although it never reaches the success probability of the linearly initialized ansatz. This result clearly shows that it is much better to use a clever parameter initialization without optimization than to randomly initialize the parameters \(\mathbf{\theta}\) and perform the optimization.

Figure 8: Success probability \(P_{\rm succ}\) as a function of the number of spins \(L\), starting from random and from annealing-inspired initializations for the disordered Hamiltonian. We compare the success probability \(P_{\rm succ}\) obtained via QAOA before (continuous curves) and after (dashed curves) the classical optimization performed for \(n_{\rm iter}\) steps.

Finally, we try to numerically demonstrate that, with the annealing-inspired initialization, sufficiently deep QAOA circuits can reach appealing performances, even without performing classical parameter optimization. To this end, we determine the success probability \(F_{\rm succ}\) as a function of the problem size \(L\), for several circuit depths \(d\) at fixed shot numbers \(M\) (see figure 9). It is found that the performance systematically and rapidly increases with \(d\), reaching \(F_{\rm succ}\simeq 1\) even for the largest considered size \(L\), for sufficiently deep circuits. This evidence matches the intuition that QAOA reduces to quantum annealing when \(d\) is increased and the circuit parameters follow the pattern in (16) (although different schedules are possible) [29]. Notice that here, the number of shots is \(M<2^{L}\) for the sizes considered.

## 5 Discussion

We critically analyze two popular quantum algorithms for optimization, addressing controllable and reproducible testbed models, i. e., the ferromagnetic and the disordered Ising chains. The two optimization algorithms, namely, VQE and QAOA, are variational methods and rely on an outer classical parameter optimization to (hopefully) identify a suitable circuit that allows sampling of the exact solution with high probability. On one hand, our results indicate that, in the practical regime where the number of measurements \(M\) per optimization step is much smaller than the Hilbert-space dimension \(2^{L}\), basic optimization strategies fail to identify suitable circuit parameters. On the other hand, appealing performances are achieved by deep QAOA circuits when a smart parameter initialization is adopted, as further discussed below.
To reach the above conclusions, we track the total number of measurements \(n_{\text{calls}}^{*}\) needed to reach a fixed target success probability \(F_{\text{succ}}\) in the presence of measurement shot noise, and we analyze its scaling with the problem size \(L\). As expected, we find exponential scalings of the form \(n_{\text{calls}}^{*}\propto 2^{kL}\), and we determine the exponents \(k\) considering different setups, including energy-based versus gradient-based classical optimizers in both VQE and QAOA, different circuit depths \(d\), as well as random and annealing-inspired parameter initializations in QAOA.

The first result of this Article is that VQE shows a very poor scaling with the problem size \(L\). When an energy-based optimizer is adopted, the scaling is no better than direct enumeration of the whole computational space, which corresponds to \(k=1\). Introducing additional noise due to simulated hardware errors does not significantly affect the scaling compared to the error-free case. Notice that our results are not in contrast with the existing literature on the use of VQE for classical cost functions [38, 31, 32, 33, 34], since these studies report results in the regimes where \(M\approx 2^{L}\) or \(n_{\text{calls}}>2^{L}\). We also find that a gradient-based optimization, which we implement in VQE via the parameter-shift rule, is useful, yielding up to a quadratic speedup compared to the energy-based optimizer COBYLA.

Figure 9: Success probability \(F_{\text{succ}}\) as a function of the number of spins \(L\) without classical optimization. Using the smart linear initialization, \(F_{\text{succ}}\) grows when the circuit depth is increased, both for (a) the ferromagnetic model and (b) the disordered problems. In the latter, we consider 30 instances of disorder and the error bars indicate the \(25^{\text{th}}\) and the \(75^{\text{th}}\) percentiles.

Notable findings are obtained also for QAOA. The presence of the problem Hamiltonian in the circuit leads to a functional optimization: the energy landscape \(f(x)\) is effectively queried in superposition, and by adjusting the QAOA parameters one can create an interference pattern that enables the global minima to be sampled more frequently than by uniform random choice. However, the question we want to answer here is practical: how many total circuit executions does QAOA need as the problem size increases, while maintaining a fixed success probability? In contrast to most of the QAOA literature, we take into consideration that the cost function needs to be stochastically evaluated, and that in realistic conditions one can afford only \(M\ll 2^{L}\) samples. We first adopt a textbook version of QAOA, where we optimize the parameters from scratch, i. e., starting from random initial values. While, as expected, the total computational complexity is exponential, the exponent \(k\) is sizeably reduced compared to full state enumeration. Notice that the performance degradation with the system size at fixed quantum resources \(n_{\mathrm{calls}}\) is not due here to hardware noise [21], but to the intrinsic quantum measurement shot noise, a crucial ingredient which is often overlooked and usually leads to over-optimistic expectations for quantum algorithms [54, 44]. Notice also that our numerical findings are compatible with [55], which discusses the query complexity of variational algorithms, but only in the vicinity of the global minimum.
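The exponents \(k\) quoted in the following are obtained from fits of the form \(n_{\text{calls}}^{*}=a2^{kL}\), i.e., a linear regression in \(\log_{2}\) space. A minimal sketch of such a fit (our own illustration; the data values below are placeholders, not the paper's):

```python
import numpy as np

def fit_scaling(L, n_calls):
    """Fit n_calls = a * 2**(k*L) via linear regression of log2(n_calls) on L."""
    k, log2_a = np.polyfit(np.asarray(L, dtype=float),
                           np.log2(np.asarray(n_calls, dtype=float)), 1)
    return k, 2.0 ** log2_a

# Placeholder data, for illustration only:
k, a = fit_scaling([8, 10, 12, 14], [120, 260, 610, 1400])
```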
With the energy-based optimizer COBYLA, the QAOA scaling exponents turn out to be \(k\simeq 0.4\) for the ferromagnetic problem, and in the range \(0.5<k<0.8\) for the disordered models; the circuit depth does not significantly affect the scaling. As opposed to the VQE case, adopting a gradient-based optimization does not sizeably change \(k\). Furthermore, a third optimizer, the SPSA algorithm, provides compatible results. These scaling exponents can be used to estimate the hypothetical runtimes required to execute the QAOA algorithm on physical quantum devices for realistic problem sizes. Assuming the best-observed scenario of the ferromagnetic case, \(n_{\mathrm{calls}}=1\times 2^{0.31\cdot L}\), some indicative bounds can be provided. For example, considering the circuit depth \(d=2\) and a gate execution time \(t_{\mathrm{gate}}=10\,\mathrm{ns}\) for the NISQ era (the best-case scenario here), one obtains runtimes of about tens of seconds for a hypothetical problem size \(L=100\), and a time far beyond the age of the universe already for \(L=500\). These estimates need to be contrasted with the tens of milliseconds of total CPU time of simulated annealing [56], or the minutes for exact algorithms [5], at \(L=500\). To achieve a runtime of order 10 ms (resp. minutes) for \(L=500\), QAOA should achieve a scaling exponent of about \(k=0.04\) (resp. 0.07). We conclude that even the best-case scenario observed for the ferromagnetic model is insufficient to provide a practical advantage relative to classical methods, or at least feasible absolute runtimes. Our numerical experiments are consistent with a very recent hardware assessment of QAOA versus quantum annealing, which shows that a \(d=2\) QAOA circuit, while better than random sampling, delivers worse performance than annealing [22]. However, it should be pointed out that, in that large-scale experiment, the performance metric cannot be defined in terms of success probability, since QAOA never provides the exact solution, nor approximate solutions qualitatively comparable with those of simulated or quantum annealing. This is again consistent with our picture. Moreover, our findings are not in contrast with other previous numerical [23] or experimental [20] QAOA studies, which are either presented in the \(n_{\mathrm{calls}}>2^{L}\) regime or use a bootstrapping method to initialize the parameters.
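As a back-of-the-envelope check of the runtime estimates above, the arithmetic can be reproduced in a few lines; the time per circuit execution is a rough assumption here (a single quantity of order \(t_{\mathrm{gate}}=10\,\mathrm{ns}\)), so only the orders of magnitude are meaningful:

```python
AGE_OF_UNIVERSE_S = 4.35e17  # seconds, approximate

def qaoa_runtime_seconds(L, k=0.31, a=1.0, t_call=1e-8):
    """Total runtime if n_calls = a * 2**(k*L) and each call takes t_call seconds."""
    return a * 2.0 ** (k * L) * t_call

print(qaoa_runtime_seconds(100))                      # ~2e1 s: tens of seconds
print(qaoa_runtime_seconds(500) / AGE_OF_UNIVERSE_S)  # ~1e21: far beyond the age of the universe
```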
Since, fortunately, the scaling of the smartly initialized QAOA algorithm improves with the circuit depth, one cannot associate a single scaling exponent to it; it is nevertheless quite plausible that sufficiently deep circuits can reach feasible computational times for practically relevant problem sizes. It is worth remembering that the proposal to use a smart initialization of the QAOA parameters is not new [26, 53, 29]. However, our findings show that this choice should not only be considered as a good practice to marginally enhance the algorithm efficiency: it is the only route to make the algorithm practical in the presence of shot noise. Indeed, if one performs (noise-free) state-vector emulations of the optimization run, good parameters can be recovered anyway, irrespective of the initialization [26, 53, 29]. Overall, we suggest that future implementations of QAOA should at least rethink the use of the outer optimization loop, focusing in particular on smart parameter initializations. While in this manuscript we adopt an annealing-inspired initialization, more flexible solutions, suitable for shallower circuits, are possible. In general, the angle array can be re-parametrized as \(\mathbf{\theta}\rightarrow\mathbf{\theta}(\mathbf{\alpha})\), using a smaller number of optimizable parameters \(\mathbf{\alpha}\). This might allow performing fewer optimization steps, similar to the Fourier re-parametrization of [26]. The research concerning pre-optimization is very active. For instance, the QAOA smart initialization has been studied for the Sherrington-Kirkpatrick model [58] and the MaxCut problem [28, 59, 60]. Moreover, the experimental results achieved in the studies cited above have been analytically confirmed in [61]. Note that the issue of shot noise is well known in VQE for genuine many-body quantum Hamiltonians, for example, in chemistry [54]. However, the fact that it also manifests so severely in the case of a classical cost function, which can be measured in a single basis, is important. We expect our findings to apply in general to variational quantum algorithms strongly relying on a classical optimization loop, but not to other alternatives for quantum-enhanced optimization on digital hardware, including quantum-powered sampling [62, 63], branch-and-bound algorithms [64], and quantum walks [65], to name a few proposals. On a methodological note, these results demonstrate the importance of simple and controllable models to analyze the scaling properties of quantum algorithms in realistic settings.

We acknowledge useful discussions with G. Carleo regarding the relation between QAOA and quantum annealing. We thank A. Miessen and C. Cozza for helping us set up the Qiskit simulation on the cluster. Computation for the work described in this paper was supported by the Science Cluster at the University of Zurich. G. S. acknowledges the hospitality of the Institute for Computational Science at the University of Zurich and useful discussions with E. Costa. G. M. acknowledges financial support from the Swiss National Science Foundation. N. A. is funded by the Swiss National Science Foundation, grant number: PP00P2_176877. S. P. acknowledges PRACE for awarding access to the Fenix Infrastructure resources at Cineca, which are partially funded by the European Union's Horizon 2020 research and innovation program through the ICEI project under Grant Agreement No. 800858. This work was also partially supported by the PNRR MUR project PE0000023-NQSTI.
## Appendix A VQE with hardware errors

In this appendix, we inspect the possible role of hardware errors. For this, a custom model of hardware noise is introduced, using the open-source Qiskit API [45]. A realistic model is obtained, e. g., by considering the thermal relaxation due to the qubit environment. Each qubit is then parameterized by a thermal relaxation time constant \(T_{1}=50\,\upmu\)s and a dephasing time constant \(T_{2}=70\,\upmu\)s. The performance comparison against the error-free VQE circuits is shown in figure A1. Ferromagnetic chains are considered, using the CVaR estimator. It turns out that the scaling of \(n^{*}_{\text{calls}}\) with \(L\) is not significantly affected by this simulated hardware noise.

## Appendix B QAOA with SPSA

In this appendix, we analyze the computational scaling using an energy-based optimizer alternative to COBYLA, namely, the SPSA algorithm [50]. This is used to optimize QAOA circuits of depth in the range \(2\leqslant d\leqslant 4\). The testbed we consider here is the ferromagnetic Ising chain, corresponding to setting \(J_{j,j+1}=+1\) in (1). In the SPSA algorithm, at each optimization step, a uniformly distributed random shift of the \(n_{\text{par}}\) parameters with constrained length is applied: \(\mathbf{\theta}\to\mathbf{\theta}+\Delta\mathbf{\theta}\), with \(\|\Delta\mathbf{\theta}\|\leqslant W\). We consider three values of this maximum norm, namely: \(W=0.01\), \(0.03\) and \(0.06\). The shift vector is generated as a random vector on an \(n_{\text{par}}\)-dimensional unit sphere, normalized to length \(W\). We accept the new parameters if the cost function decreases. As the cost function, we use the CVaR with either \(25\,\%\) or \(100\,\%\) of the best-energy samples. To obtain the optimal \(n^{*}_{\text{calls}}\), we use a procedure similar to the one used in section 3. We consider optimizations with \(M\) samples generated at each SPSA step, and optimize until \(F_{\text{succ}}\) reaches the target value. We then compute \(n^{*}_{\text{calls}}=\min\,\left(M\times n_{\text{iter}}\right).\) The initial parameters are uniform random values in the range \(\theta_{n}\in\left(-1.0,1.0\right)\), and the results for \(n^{*}_{\text{calls}}\) are obtained by averaging over 1000 simulations with random starting points. The results are presented in figure B1. Notably, we observe that the scaling sizeably worsens as \(d\) increases from 2 to 4. This could be attributed to a more complex optimization landscape, which requires a higher \(M\) or \(n_{\text{calls}}\) to approach the global minimum. At the same time, the ansatz with \(d=2\) reaches the \(k\simeq 0.5\) scaling; it therefore features a quadratic speed-up compared with the scaling of the COBYLA-driven VQE optimization, as also found for the QAOA algorithm driven either by COBYLA or by the gradient-based optimizer.
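For concreteness, a minimal sketch of one accept/reject step of the random-direction scheme described above (our own illustrative implementation; `cost` stands for the stochastic CVaR estimate obtained from \(M\) shots):

```python
import numpy as np

def random_direction_step(theta, cost, W=0.03, rng=None):
    """Propose theta + delta with ||delta|| = W; keep it if the cost decreases."""
    rng = rng or np.random.default_rng()
    delta = rng.normal(size=theta.shape)
    delta *= W / np.linalg.norm(delta)  # uniform direction on the sphere of radius W
    proposal = theta + delta
    return proposal if cost(proposal) < cost(theta) else theta
```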
2309.12769
Maximum-order complexity and $2$-adic complexity
The $2$-adic complexity has been well-analyzed in the periodic case. However, we are not aware of any theoretical results on the $N$th $2$-adic complexity of any promising candidate for a pseudorandom sequence of finite length $N$ or results on a part of the period of length $N$ of a periodic sequence, respectively. Here we introduce the first method for this aperiodic case. More precisely, we study the relation between $N$th maximum-order complexity and $N$th $2$-adic complexity of binary sequences and prove a lower bound on the $N$th $2$-adic complexity in terms of the $N$th maximum-order complexity. Then any known lower bound on the $N$th maximum-order complexity implies a lower bound on the $N$th $2$-adic complexity of the same order of magnitude. In the periodic case, one can prove a slightly better result. The latter bound is sharp which is illustrated by the maximum-order complexity of $\ell$-sequences. The idea of the proof helps us to characterize the maximum-order complexity of periodic sequences in terms of the unique rational number defined by the sequence. We also show that a periodic sequence of maximal maximum-order complexity must be also of maximal $2$-adic complexity.
Zhiru Chen, Zhixiong Chen, Jakob Obrovsky, Arne Winterhof
2023-09-22T10:19:54Z
http://arxiv.org/abs/2309.12769v1
# Maximum-order complexity and 2-adic complexity

###### Abstract

The 2-adic complexity has been well-analyzed in the periodic case. However, we are not aware of any theoretical results on the \(N\)th 2-adic complexity of any promising candidate for a pseudorandom sequence of finite length \(N\) or results on a part of the period of length \(N\) of a periodic sequence, respectively. Here we introduce the first method for this aperiodic case. More precisely, we study the relation between \(N\)th maximum-order complexity and \(N\)th 2-adic complexity of binary sequences and prove a lower bound on the \(N\)th 2-adic complexity in terms of the \(N\)th maximum-order complexity. Then any known lower bound on the \(N\)th maximum-order complexity implies a lower bound on the \(N\)th 2-adic complexity of the same order of magnitude. In the periodic case, one can prove a slightly better result. The latter bound is sharp, which is illustrated by the maximum-order complexity of \(\ell\)-sequences. The idea of the proof helps us to characterize the maximum-order complexity of periodic sequences in terms of the unique rational number defined by the sequence. We also show that a periodic sequence of maximal maximum-order complexity must be also of maximal 2-adic complexity.

Dedicated to the memory of Kai-Uwe Schmidt (1978-2023)

**Keywords**. Pseudorandom sequences, maximum-order complexity, 2-adic complexity, \(\ell\)-sequences

2020 MSC: Primary: 94A55; Secondary: 11T71, 94A05, 94A60

## 1 Introduction

Pseudorandom sequences are generated by deterministic algorithms and are not random at all. However, both from an academic point of view and from a cryptographic point of view they should have as many desirable features of randomness as possible, that is, they should not be distinguishable from a 'truly' random sequence. These desirable features and thus the pseudorandomness of _binary_ sequences can be analyzed via several measures of pseudorandomness such as the maximum-order complexity and the 2-adic complexity, see for example the recent survey [35]. We recall the definitions of the \(N\)th maximum-order complexity and the \(N\)th 2-adic complexity. For a positive integer \(N\), the \(N\)_th maximum-order complexity_ (or \(N\)_th nonlinear complexity_) \(M(\mathcal{S},N)\) of a binary sequence \(\mathcal{S}=(s_{n})_{n\geq 0}\) over the two-element field \(\mathbb{F}_{2}=\{0,1\}\) is defined as the smallest positive integer \(m\) such that there is a polynomial \(f(X_{1},\ldots,X_{m})\in\mathbb{F}_{2}[X_{1},\ldots,X_{m}]\) with

\[s_{i+m}=f(s_{i},s_{i+1},\ldots,s_{i+m-1})\quad\text{ for }0\leq i\leq N-m-1,\]

see [1, 10, 11, 12, 14, 21, 22, 23, 25, 26, 31, 36]. We set \(M(\mathcal{S},N)=0\) if \(s_{0}=s_{1}=\ldots=s_{N-2}=s_{N-1}\), that is, \(s_{i}=s_{0}\) for \(0\leq i\leq N-1\), and \(M(\mathcal{S},N)=N-1\) if \(s_{0}=s_{1}=\ldots=s_{N-2}\neq s_{N-1}\), that is, \(s_{N-1}=s_{0}+1\). The sequence \((M(\mathcal{S},N))_{N\geq 1}\) is referred to as the _maximum-order complexity profile_ (or _nonlinear complexity profile_) of \(\mathcal{S}\). If \(\mathcal{S}\) is \(T\)-periodic, that is, \(s_{n+T}=s_{n}\) for \(n\geq 0\), we have \(M(\mathcal{S},N)=M(\mathcal{S},2T-1)\) for \(N\geq 2T\). This number is called the _maximum-order complexity_ (or _nonlinear complexity_) of \(\mathcal{S}\) and it is denoted by \(M(\mathcal{S})\). In other words, the maximum-order complexity is the length of a shortest (possibly nonlinear) feedback shift register that generates the sequence. It is well-known that \(M(\mathcal{S})\leq T-1\).
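For concreteness, the \(N\)th maximum-order complexity can be computed directly from the definition: over \(\mathbb{F}_{2}\) every map on \(m\) bits is a polynomial, so one only has to find the smallest \(m\) such that each length-\(m\) window of the sequence determines its successor. The following naive Python sketch is our own illustration, not part of the paper:

```python
def max_order_complexity(s, N):
    """Nth maximum-order complexity of the binary sequence s[0], ..., s[N-1]."""
    s = list(s[:N])
    if all(b == s[0] for b in s):
        return 0  # constant sequence
    for m in range(1, N):
        succ = {}  # window of length m -> observed successor
        if all(succ.setdefault(tuple(s[i:i + m]), s[i + m]) == s[i + m]
               for i in range(N - m)):
            return m
    return N - 1

# max_order_complexity([0, 0, 0, 1], 4) returns 3 = N - 1,
# in accordance with the convention above.
```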
In particular, restricting only to the homogeneous polynomials \(f(X_{1},\ldots,X_{m})\) of degree one leads to the notion of the \(N\)_th linear complexity_\(L(\mathcal{S},N)\), the _linear complexity_\(L(\mathcal{S})=L(\mathcal{S},2T)\leq T\) and the _linear complexity profile_\((L(\mathcal{S},N))_{N\geq 1}\) of \(\mathcal{S}\), respectively. See the list of surveys [16, 20, 34]. The \(N\)th maximum-order complexity as well as the \(N\)th linear complexity are measures for the unpredictability of a sequence and thus its suitability in cryptography. The 2-adic complexity introduced by Goresky and Klapper [5, 13] is closely related to the length of a shortest feedback with carry shift register (FCSR) which generates the sequence. The theory of 2-adic complexity has been very well developed for the periodic case. More precisely, any \(T\)-periodic binary sequence \(\mathcal{S}=(s_{n})_{n\geq 0}\) uniquely corresponds to the rational number \[\sum_{n=0}^{\infty}s_{n}2^{n}=-\frac{\sum_{n=0}^{T-1}s_{n}2^{n}}{2^{T}-1}=- \frac{A}{q}, \tag{1}\] where \(0\leq A\leq q\), \(\gcd(A,q)=1\) and \[q=\frac{2^{T}-1}{\gcd\left(2^{T}-1,\sum_{n=0}^{T-1}s_{n}2^{n}\right)},\] which is called the (minimal) _connection integer_ of \(\mathcal{S}\)[5]. Then the \(2\)_-adic complexity_ of \(\mathcal{S}\), denoted by \(\Phi_{2}(\mathcal{S})\), is the binary logarithm \(\log_{2}(q)\) of \(q\). In the aperiodic case, the \(N\)_th \(2\)-adic complexity_, denoted by \(\Phi_{2}(\mathcal{S},N)\), is the binary logarithm of \[\min\left\{\max\{|f|,|q|\}:f,q\in\mathbb{Z},q\text{ odd },q\sum_{n=0}^{N-1}s_{n }2^{n}\equiv f\pmod{2^{N}}\right\},\] see [5, p.328] or [35]. It is trivial that \(\Phi_{2}(\mathcal{S},N)\leq\Phi_{2}(\mathcal{S},N+1)\) for \(N\geq 1\). The average behavior of the \(2\)-adic complexity and some asymptotic behavior of the \(N\)th \(2\)-adic complexity (more generally of the \(d\)-adic complexity of \(d\)-ary sequences, \(d\geq 2\)) are considered in Chapter 18.2 and Chapter 18.5 of [5], respectively. However, it seems that there are no results known on the relation between the \(N\)th \(2\)-adic complexity and other complexity measures. Moreover, in contrast to the periodic case no results are known on the \(N\)th \(2\)-adic complexity of any attractive candidate for a pseudorandom sequence, that is a sequence with some (proved) desirable features and no known undesirable feature of pseudorandomness. Here we introduce the first theoretic method to study the aperiodic case, more precisely, we transfer some known results on the \(N\)th maximum-order complexity to the \(N\)th \(2\)-adic complexity. This leads to the main contribution of this work, that is, we will prove in Sections 2 and 3 the following inequalities \[M(\mathcal{S},N)\leq\lceil\Phi_{2}(\mathcal{S},N)\rceil+1,\quad N\geq 1,\] in the non-periodic case and \[M(\mathcal{S})\leq\lceil\Phi_{2}(\mathcal{S})\rceil \tag{2}\] if \(\mathcal{S}\) is periodic, where \(\lceil x\rceil\) is the smallest integer \(\geq x\). The first inequality also implies a relation between the correlation measure of order \(k\) and the \(2\)-adic complexity, see Corollary 1. Below we will also use \(\lfloor x\rfloor\) for the largest integer \(\leq x\). We apply these bounds to several sequences including the Thue-Morse sequence along squares and the Legendre sequence. 
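Definition (1) translates directly into a computation of the connection integer \(q\), and hence of \(\Phi_{2}(\mathcal{S})=\log_{2}(q)\), for a \(T\)-periodic sequence; a short sketch (again our own illustration):

```python
from math import gcd, log2

def two_adic_complexity_periodic(period):
    """Phi_2(S) = log2(q) for a T-periodic binary sequence given by one period."""
    T = len(period)
    S2 = sum(s << n for n, s in enumerate(period))  # sum_{n<T} s_n 2^n
    q = (2 ** T - 1) // gcd(2 ** T - 1, S2)
    return q, log2(q)

# The 5-periodic sequence (0,1,0,0,1) used as an example in Section 3
# encodes 18; since gcd(31, 18) = 1, this returns q = 31, Phi_2 = log2(31).
```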
In addition, in the periodic case the idea of the proof of Eq.(2) leads to a characterization of the maximum-order complexity in terms of the rational number \(-A/q\) defined by (1), which is stated in Subsection 4.1. As a consequence, we prove in Subsection 4.2 the maximality of the maximum-order complexity of binary \(\ell\)-sequences, which are those sequences for which the connection integer \(q\) is a power of an odd prime such that \(2\) is a primitive root modulo \(q\), see [5, Chapter 13]. The result indicates that the bound in Eq.(2) is sharp. In Section 4.3 we show that any periodic sequence with maximal maximum-order complexity has also maximal \(2\)-adic complexity. We will use the notation \(f(N)=\mathcal{O}(g(N))\) if \(|f(N)|\leq cg(N)\) for some constant \(c>0\) and \(f(N)=o(g(N))\) if \(\lim\limits_{N\to\infty}\frac{f(N)}{g(N)}=0\). Sometimes we also use \(f(N)\ll g(N)\) and \(g(N)\gg f(N)\) instead of \(f(N)=\mathcal{O}(g(N))\).

## 2 \(N\)th Maximum-order complexity and \(N\)th \(2\)-adic complexity

In this section we prove a relation between the \(N\)th maximum-order complexity \(M(\mathcal{S},N)\) and the \(N\)th \(2\)-adic complexity \(\Phi_{2}(\mathcal{S},N)\) and apply it to several prominent sequences.

**Theorem 1**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence. Then we have_

\[M(\mathcal{S},N)\leq\lceil\Phi_{2}(\mathcal{S},N)\rceil+1\]

_for \(N\geq 1\)._

Proof. Since otherwise the result is trivial we may assume

\[M(\mathcal{S},N)\geq 2\]

and put \(m=M(\mathcal{S},N)-1\). Then there exist \(i,j\) with

\[0\leq i<j\leq N-1-m \tag{3}\]

and

\[(s_{i},s_{i+1},\ldots,s_{i+m-1})=(s_{j},s_{j+1},\ldots,s_{j+m-1}),\quad s_{i+m}\neq s_{j+m}, \tag{4}\]

by [11, Prop. 1], see also [27, Thm. 2]. Put

\[S(2)=\sum_{n=0}^{N-1}s_{n}2^{n}\]

and for \(0\leq k<N\)

\[S_{k}(2)=\sum_{n=0}^{N-1-k}s_{n+k}2^{n}=2^{-k}\left(S(2)-\sum_{n=0}^{k-1}s_{n}2^{n}\right).\]

Then we have for \(i,j\) with \(0\leq i<j\leq N-1-m\) chosen above to satisfy (4),

\[S_{i}(2)\equiv\sum_{n=0}^{m}s_{n+i}2^{n}\equiv S_{j}(2)+2^{m}\pmod{2^{m+1}}\]

and thus

\[2^{j-i}\left(S(2)-\sum_{n=0}^{i-1}s_{n}2^{n}\right)\equiv 2^{j}S_{i}(2)\equiv 2^{j}S_{j}(2)+2^{m+j}\equiv S(2)-\sum_{n=0}^{j-1}s_{n}2^{n}+2^{m+j}\pmod{2^{m+j+1}}. \tag{5}\]

Note that \(\Phi_{2}(\mathcal{S},N)\geq\Phi_{2}(\mathcal{S},m+j+1)\) by (3) and assume

\[\Phi_{2}(\mathcal{S},m+j+1)=c\]

for some \(c\geq 0\), that is, there are integers \(q\) and \(h\), where \(q\) is odd, with

\[\max\{|h|,|q|\}=2^{c} \tag{6}\]

and

\[qS(2)\equiv h\pmod{2^{m+j+1}}. \tag{7}\]

Multiplying (5) by \(q\) and using (7), we get

\[2^{j-i}\left(h-q\sum_{n=0}^{i-1}s_{n}2^{n}\right)\equiv h-q\sum_{n=0}^{j-1}s_{n}2^{n}+2^{m+j}\pmod{2^{m+j+1}},\]

that is

\[2^{m+j}\equiv(2^{j-i}-1)h+q\left(\sum_{n=0}^{j-1}s_{n}2^{n}-\sum_{n=0}^{i-1}s_{n}2^{n+j-i}\right)\pmod{2^{m+j+1}}.\]

Since

\[\left|\sum_{n=0}^{j-1}s_{n}2^{n}-\sum_{n=0}^{i-1}s_{n}2^{n+j-i}\right|\leq\max\left\{\sum_{n=0}^{j-1}2^{n},\sum_{n=0}^{i-1}2^{n+j-i}\right\}=2^{j}-1,\]

by (6) the right-hand side has absolute value at most \(2^{c+1}(2^{j}-1)\). On the other hand, the congruence forces the right-hand side to have absolute value at least \(2^{m+j}\), while \(2^{c+1}(2^{j}-1)<2^{c+j+1}\leq 2^{m+j}\) whenever \(c\leq m-1\), a contradiction. Hence, we get

\[\left\lceil\Phi_{2}(\mathcal{S},N)\right\rceil\geq\Phi_{2}(\mathcal{S},N)\geq c>m-1=M(\mathcal{S},N)-2\]

and the result follows. Note that we cannot deduce \(\Phi_{2}(\mathcal{S},N)\geq M(\mathcal{S},N)-1\) since \(\Phi_{2}(\mathcal{S},N)\) may not be an integer. According to Theorem 1, a lower bound on \(M(\mathcal{S},N)\) implies a lower bound on \(\Phi_{2}(\mathcal{S},N)\).
Below we state several classes of sequences with known lower bounds on the \(N\)th maximum-order complexity which can now be interpreted as lower bounds on the \(N\)th 2-adic complexity as well.

Pattern sequences (along squares/polynomial values): For a positive integer \(k\), the _pattern sequence_ \(\mathcal{P}_{k}=(p_{n})_{n\geq 0}\) over \(\mathbb{F}_{2}\) is defined by

\[p_{n}=\left\{\begin{array}{cc}p_{\lfloor n/2\rfloor}+1,&\mbox{if }n\equiv-1\pmod{2^{k}},\\ p_{\lfloor n/2\rfloor},&\mbox{otherwise},\end{array}\right.\quad n=1,2,\ldots\]

with initial value \(p_{0}=0\). Equivalently, \(p_{n}\) is the parity of the number of occurrences of the all one pattern of length \(k\) in the binary expansion of \(n\). For \(k=1\) we get the _Thue-Morse sequence_ \(\mathcal{T}=(t_{n})_{n\geq 0}\) and for \(k=2\) the _Rudin-Shapiro sequence_ \(\mathcal{R}=(r_{n})_{n\geq 0}\). We get

\[\lceil\Phi_{2}(\mathcal{T},N)\rceil\geq\frac{N}{5},\quad N\geq 4,\]

and

\[\lceil\Phi_{2}(\mathcal{P}_{k},N)\rceil\geq\frac{N}{6},\quad N\geq 2^{k+3}-7,\quad k\geq 2,\]

from the lower bounds on the maximum-order complexity in [29]. Note that despite a large \(N\)th maximum-order complexity and a large \(N\)th 2-adic complexity, the pattern sequences \(\mathcal{P}_{k}\) are highly predictable, which can be measured in terms of a very small expansion complexity and a very large autocorrelation, see for example [17]. However, subsequences along polynomial values still keep the former desirable features but lose the latter undesirable ones. For the _Thue-Morse sequence along squares_ \(\mathcal{T}^{\prime}=(t_{n^{2}})_{n\geq 0}\) and the _pattern sequence along squares_ \(\mathcal{P}^{\prime}_{k}=(p_{n^{2}})_{n\geq 0}\), we get

\[\lceil\Phi_{2}(\mathcal{T}^{\prime},N)\rceil\geq\sqrt{\frac{2N}{5}}-1,\]

and

\[\lceil\Phi_{2}(\mathcal{P}^{\prime}_{k},N)\rceil\geq\sqrt{\frac{N}{8}}-1,\quad N\geq 2^{2k+2},\quad k\geq 2,\]

from the bounds in [30]. For \(k\geq 1\) and the _pattern sequence along polynomial values of_ \(f(X)\), \(\mathcal{P}^{\prime\prime}_{k}=(p_{f(n)})_{n\geq 0}\), where \(f(X)\in\mathbb{Z}[X]\) is a monic polynomial of degree \(d\geq 2\) with \(f(n)\geq 0\) for \(n\geq 0\), we get \(\Phi_{2}(\mathcal{P}^{\prime\prime}_{k},N)\gg N^{1/d}\), where the implied constants depend on \(f(X)\) and \(k\), see [24].

Sequence of the sum of digits in Zeckendorf base: Let \(F_{0}=0,F_{1}=1\) and \(F_{i+2}=F_{i+1}+F_{i}\) for all \(i\geq 0\), which forms the Fibonacci sequence. Each integer \(n\geq 0\) can be represented uniquely by

\[n=\sum_{i\geq 0}\varepsilon_{i}(n)F_{i+2},\]

with \(\varepsilon_{i}(n)\in\{0,1\}\) and \(\varepsilon_{i}(n)\varepsilon_{i+1}(n)=0\) for all \(i\geq 0\). Then the _Zeckendorf base sum of digits function_ is defined by

\[s_{Z}(n)=\sum_{i\geq 0}\varepsilon_{i}(n),\ n\geq 0.\]

For the binary sequence of the Zeckendorf base sum of digits function \(\mathcal{U}=(u_{n})_{n\geq 0}\) with \(u_{n}=s_{Z}(n)\bmod 2\) and the binary sequence along polynomial values of the Zeckendorf base sum of digits function \(\mathcal{U}^{\prime}=(u_{f(n)})_{n\geq 0}\) with \(u_{f(n)}=s_{Z}(f(n))\bmod 2\), where \(f(x)\in\mathbb{Z}[x]\) is a monic polynomial of degree \(d\geq 2\) with \(f(n)\geq 0\) for \(n\geq 0\), we get \(\Phi_{2}(\mathcal{U},N)\gg N\) and \(\Phi_{2}(\mathcal{U}^{\prime},N)\gg N^{1/(2d)}\), see [9].
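The pattern sequences \(\mathcal{P}_{k}\) defined above are easy to generate via the parity of occurrences of the all-ones pattern; a brief sketch (our own illustration):

```python
def pattern_sequence(k, N):
    """First N terms of P_k; k = 1 gives Thue-Morse, k = 2 Rudin-Shapiro.

    Subsequences such as (p_{n^2}) are obtained by reindexing.
    """
    pat = '1' * k
    out = []
    for n in range(N):
        b = bin(n)[2:]
        cnt = sum(b[i:i + k] == pat for i in range(len(b) - k + 1))
        out.append(cnt % 2)
    return out

# pattern_sequence(1, 8) -> [0, 1, 1, 0, 1, 0, 0, 1]  (Thue-Morse)
```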
For the Thue-Morse sequence, the Rudin-Shapiro sequence (both along the values of \(f(X)\in\{X,X^{2},X^{3},X^{4}\}\)) and the binary sequence of the Zeckendorf base sum of digits function, we calculated the \(N\)th 2-adic complexity up to \(N=1\,000\,000\), which leads in all cases to the conjecture that \(\Phi_{2}(\mathcal{S},N)=\frac{N}{2}+\mathcal{O}(\log N)\). We can also combine Theorem 1 and [1, Theorem 5] to get a lower bound on the \(N\)th 2-adic complexity in terms of the \(N\)th correlation measure \(C_{2}(\mathcal{S},N)\) of order 2 introduced by Mauduit and Sarkozy [15]. More precisely, for \(k\geq 2\) the _\(N\)th correlation measure \(C_{k}(\mathcal{S},N)\) of order \(k\)_ of \(\mathcal{S}=(s_{n})_{n\geq 0}\) is

\[C_{k}(\mathcal{S},N)=\max_{U,D}\left|\sum_{i=0}^{U-1}(-1)^{s_{i+d_{1}}+s_{i+d_{2}}+\ldots+s_{i+d_{k}}}\right|,\]

where the maximum is taken over all \(D=(d_{1},d_{2},\ldots,d_{k})\) and \(U\) such that \(0\leq d_{1}<d_{2}<\cdots<d_{k}\leq N-U\). Then we have

\[C_{2}(\mathcal{S},N)\geq N-2^{M(\mathcal{S},N)}+1\]

and the following result follows.

**Corollary 1**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence. Then we have_

\[\lceil\Phi_{2}(\mathcal{S},N)\rceil\geq\log_{2}\left(N+1-C_{2}(\mathcal{S},N)\right)-1.\]

_In particular, if \(C_{2}(\mathcal{S},N)=o(N)\), then we have_

\[\lceil\Phi_{2}(\mathcal{S},N)\rceil\geq\log_{2}(N)-1+o(1).\]

As an example we apply this relation to the Legendre sequence (along polynomial values): For an odd prime \(p\) and a squarefree polynomial \(f(X)\in\mathbb{F}_{p}[X]\) of degree \(d\), the \(p\)-periodic Legendre sequence \(\mathcal{L}=(\ell_{n})_{n\geq 0}\) along the values of \(f(X)\) is defined by

\[\ell_{n}=\left\{\begin{array}{ll}1,&\mbox{if }\left(\frac{f(n)}{p}\right)=1,\\ 0,&\mbox{otherwise,}\end{array}\right.\quad n\geq 0,\]

where \(\left(\frac{\cdot}{p}\right)\) is the Legendre symbol. By [15] we have

\[C_{2}(\mathcal{L},N)\ll dp^{1/2}\log p,\quad 1\leq N\leq p,\]

and get

\[\Phi_{2}(\mathcal{L},N)\geq\log_{2}(\min\{N,p\})-1+o(1)\quad\text{if }dp^{1/2}\log p=o(N).\]

For the Legendre sequence with \(f(X)=X\), it is conjectured that \(\Phi_{2}(\mathcal{L},N)=\min\{N/2,\Phi_{2}(\mathcal{L})\}+\mathcal{O}(\log N)\), which we tested for all primes \(p<50\,000\), where \(\Phi_{2}(\mathcal{L})=\log_{2}(2^{p}-1)\), see [7, Theorem 2], [37, Theorem 3] and [8].

## 3 A relation between maximum-order complexity and \(2\)-adic complexity for periodic sequences

Now we turn to the case of periodic sequences. First we prove the following result, which is similar to the corresponding result for the maximum-order complexity of a \(T\)-periodic sequence \(\mathcal{S}\), \(M(\mathcal{S})=M(\mathcal{S},2T-1)\).

**Lemma 1**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence of period \(T\). Then the \(2\)-adic complexity \(\Phi_{2}(\mathcal{S})\) satisfies_

\[\Phi_{2}(\mathcal{S})=\Phi_{2}(\mathcal{S},2T+1)=\Phi_{2}(\mathcal{S},N)\]

_for any \(N>2T\)._

Proof. Let \(\mathcal{S}\) correspond to the rational number

\[\sum_{n=0}^{\infty}s_{n}2^{n}=-\frac{\sum_{n=0}^{T-1}s_{n}2^{n}}{2^{T}-1}=-\frac{f}{q},\]

with \(0\leq f\leq q\) and \(\gcd(f,q)=1\). It is clear that \(q<2^{T}\) and \(q\) is odd.
Assume \(N>2T\) and that there are integers \(\widetilde{f}\) and odd \(\widetilde{q}\) with

\[\max\{|\widetilde{f}|,|\widetilde{q}|\}<q<2^{T}\]

and

\[\widetilde{q}\sum_{n=0}^{N-1}s_{n}2^{n}\equiv\widetilde{f}\pmod{2^{N}}.\]

We obtain

\[-\frac{f}{q}\equiv\frac{\widetilde{f}}{\widetilde{q}}\pmod{2^{N}},\text{ that is, }-\widetilde{q}f\equiv q\widetilde{f}\pmod{2^{N}}.\]

Since \(|q\widetilde{f}+\widetilde{q}f|<2^{2T+1}\leq 2^{N}\), we derive \(\widetilde{q}f+q\widetilde{f}=0\). This leads to \(q|\widetilde{q}\), which is impossible due to the assumption \(|\widetilde{q}|<q\). Thus we get \(\Phi_{2}(\mathcal{S},N)=\log_{2}(q)=\Phi_{2}(\mathcal{S})\) for \(N>2T\), which completes the proof.

Note that we may have \(\Phi_{2}(\mathcal{S},2T)<\Phi_{2}(\mathcal{S})\). For example, consider the \(5\)-periodic sequence starting with \((0,1,0,0,1)\), which is the binary expansion of \(18\). Since \(\gcd(2^{5}-1,18)=1\) we get \(\Phi_{2}(\mathcal{S})=\log_{2}(31)\). However, we have

\[19\cdot 18\cdot(1+2^{5})\equiv 22\pmod{2^{10}},\]

may take \(q=19\) and \(f=22\) and thus get

\[\Phi_{2}(\mathcal{S},10)\leq\log_{2}(22)<\Phi_{2}(\mathcal{S}).\]

So Theorem 1 and Lemma 1 indicate

\[M(\mathcal{S})=M(\mathcal{S},2T-1)\leq\lceil\Phi_{2}(\mathcal{S},2T-1)\rceil+1\leq\lceil\Phi_{2}(\mathcal{S},2T+1)\rceil+1=\lceil\Phi_{2}(\mathcal{S})\rceil+1.\]

Below we prove a slightly stricter bound in another way (for the periodic case).

**Theorem 2**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence of period \(T\). Then we have_

\[M(\mathcal{S})\leq\lceil\Phi_{2}(\mathcal{S})\rceil\,.\]

Proof. According to [26, Prop. 2], we will compute the minimum integer \(k\), which equals the maximum-order complexity \(M(\mathcal{S})\), such that all (\(T\) many) subsequences of length \(k\):

\[(s_{0},s_{1},\ldots,s_{k-1}),\ (s_{1},s_{2},\ldots,s_{k}),\ldots,(s_{T-1},s_{0},\ldots,s_{k-2})\]

are distinct. Let

\[\sum_{n=0}^{\infty}s_{n}2^{n}=-\frac{\sum_{n=0}^{T-1}s_{n}2^{n}}{2^{T}-1}=-\frac{f}{q},\]

with \(0\leq f\leq q\) and \(\gcd(f,q)=1\). Then \(\Phi_{2}(\mathcal{S})=\log_{2}(q)\). Below we prove the statement in terms of \(\log_{2}(q)\). If \(T=1\), that is, \(\mathcal{S}\) is constant, we derive \(q=1\) and \(M(\mathcal{S})=0=\log_{2}(1)=\Phi_{2}(\mathcal{S})\). Below we suppose \(T\geq 2\) and in this case we have \(0<f<q\). Now we assume that \(T>\lceil\log_{2}(q)\rceil\), since otherwise the result is trivial by

\[\lceil\log_{2}(T)\rceil\leq M(\mathcal{S})<T,\]

see [11, Prop. 2]. For \(0\leq\tau<T\), suppose that the cyclic (left) \(\tau\)-shift of \(\mathcal{S}\), denoted by \(\mathcal{S}^{(\tau)}\), corresponds to the rational number \(-\frac{h_{\tau}}{q}\) with \(0<h_{\tau}<q\) and \(\gcd(h_{\tau},q)=1\). It is clear that \(h_{0}=f\). Among these \(T\) many shifted sequences, we count the ones with the same first \(N\) terms. If \(\mathcal{S}^{(i)}\) (associated to \(-\frac{h_{i}}{q}\)) and \(\mathcal{S}^{(j)}\) (associated to \(-\frac{h_{j}}{q}\)) have the same first \(N\) terms, we have

\[-\frac{h_{i}}{q}\equiv-\frac{h_{j}}{q}\pmod{2^{N}},\]

which holds if and only if \(h_{i}\equiv h_{j}\pmod{2^{N}}\), since \(q|(2^{T}-1)\). Let \(2^{m-1}<q<2^{m}\) for some positive integer \(m\), so \(m=\lceil\log_{2}(q)\rceil\). If we choose \(N=m\), we will find that \(h_{i}\equiv h_{j}\pmod{2^{m}}\) if and only if \(h_{i}=h_{j}\) since \(0<h_{i},h_{j}<q<2^{m}\). It means that the first \(m\) terms of \(-\frac{h_{i}}{q}\) are different from those of \(-\frac{h_{j}}{q}\) for all \(0\leq i<j<T\).
Then we derive that any subsequences \((s_{i},s_{i+1},\ldots,s_{i+m-1})\) of length \(m\) for \(0\leq i<T\) are distinct, and hence \(M(\mathcal{S})\leq m=\lceil\log_{2}(q)\rceil\). Finally, together with the notion of the \(2\)-adic complexity, we get \(M(\mathcal{S})\leq\lceil\Phi_{2}(\mathcal{S})\rceil\) directly.

As far as we know, these are the first results on the relation between the maximum-order complexity and the \(2\)-adic complexity. It disproves a claim by Goresky and Klapper "If a sequence is generated by an FCSR with nonnegative memory, then its \(N\)-adic span is no greater than its maximum order complexity" in [5, p.329]. For the requirement of nonnegative memory, see [5, Prop.4.7.1]. We remark that it is also important to consider the _symmetric \(2\)-adic complexity_ of \(\mathcal{S}\), which is the minimum of the \(2\)-adic complexities of \(\mathcal{S}\) and \(\mathcal{S}^{rev}\), where \(\mathcal{S}^{rev}\) is the sequence formed by reversing each period of \(\mathcal{S}\), see [5, Sect. 16.2], since \(\Phi_{2}(\mathcal{S}^{rev})\) may be substantially smaller than \(\Phi_{2}(\mathcal{S})\). By Theorem 2 we have \(M(\mathcal{S}^{rev})\leq\lceil\Phi_{2}(\mathcal{S}^{rev})\rceil\). By [26, Prop.2] it is clear that \(M(\mathcal{S}^{rev})=M(\mathcal{S})\) and so

\[M(\mathcal{S})\leq\min(\lceil\Phi_{2}(\mathcal{S})\rceil\,,\lceil\Phi_{2}(\mathcal{S}^{rev})\rceil).\]

## 4 Maximum-order complexity of periodic sequences

### A characterization of the maximum-order complexity

The idea in the proof of Theorem 2 helps us to characterize the maximum-order complexity of periodic sequences.

**Theorem 3**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence of period \(T(\geq 2)\). If_

\[\sum_{n=0}^{\infty}s_{n}2^{n}=-A/q\quad\text{with }0<A<q\text{ and }\gcd(A,q)=1\]

_and_

\[D_{A}=\{0\leq u<q:u\equiv A\cdot 2^{n}\pmod{q},0\leq n<T\},\]

_then \(M(\mathcal{S})=N\) if and only if \(N\) is the least integer such that_

\[u\not\equiv v\pmod{2^{N}}\]

_for any different \(u,v\in D_{A}\)._

Proof. Suppose that \(\mathcal{S}^{(\tau)}\), the (left) \(\tau\)-shift of \(\mathcal{S}\) for \(0\leq\tau<T\), corresponds to the rational number \(-\frac{A^{(\tau)}}{q}\). We see that \(A^{(\tau)}\equiv A2^{T-\tau}\pmod{q}\) and hence \(D_{A}=\{A^{(\tau)}:0\leq\tau<T\}\). For \(N\geq 1\), the first \(N\) elements of \(\mathcal{S}^{(i)}\) are the same as the ones of \(\mathcal{S}^{(j)}\) for \(0\leq i<j<T\) if and only if

\[-\frac{A^{(i)}}{q}\equiv-\frac{A^{(j)}}{q}\pmod{2^{N}},\]

which holds if and only if \(A^{(i)}\equiv A^{(j)}\pmod{2^{N}}\), since \(q\) is odd. This means that for \(u,v\in D_{A}\), if \(u\not\equiv v\pmod{2^{N}}\), we derive that the first \(N\) elements of \(-u/q\) are different from the ones of \(-v/q\), which completes the proof.

The set \(\langle 2\rangle=\{0\leq u<q:u\equiv 2^{n}\pmod{q},0\leq n<T\}\) generated by 2 modulo \(q\) is a subgroup of \(\mathbb{Z}_{q}^{*}=\{0<u<q:\gcd(u,q)=1\}\) under integer multiplication modulo \(q\). Thus, according to Theorem 3, to analyze the maximum-order complexity of periodic sequences, one only needs to consider the partition

\[\mathbb{Z}_{q}^{*}=g_{1}\langle 2\rangle\cup g_{2}\langle 2\rangle\cup\cdots\cup g_{\varphi(q)/T}\langle 2\rangle,\]

a union of cosets of \(\langle 2\rangle\), where \(g_{i}\langle 2\rangle=\{g_{i}u\pmod{q}:u\in\langle 2\rangle\}\subseteq\mathbb{Z}_{q}^{*}\) for \(1\leq i\leq\varphi(q)/T\). We note that \(D_{A}\) in Theorem 3 is a coset of \(\langle 2\rangle\).
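Theorem 3 yields a simple algorithm for the maximum-order complexity of a periodic sequence given \(A\) and \(q\); the following sketch (our own illustration) can be checked against Table 1 below:

```python
def moc_from_rational(A, q, T):
    """Theorem 3: least N such that the elements of D_A are pairwise distinct mod 2^N."""
    D = {A * pow(2, n, q) % q for n in range(T)}
    N = 1
    while len({u % (1 << N) for u in D}) < len(D):
        N += 1
    return N

# For an l-sequence with q = 19 one has T = 18 and D_A = Z_19^*, and
# moc_from_rational(1, 19, 18) returns 5, matching Table 1.
```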
### Maximum-order complexity of \(\ell\)-sequences

As a consequence, we consider the case when \(D_{A}=\mathbb{Z}_{q}^{*}\). We find that 2 modulo \(q\) is primitive and hence \(\mathcal{S}\) is an \(\ell\)-sequence in this case. In particular, \(q\) has to be a power of an odd prime. The following theorem will indicate that the bound in Theorem 2 is sharp.

**Lemma 2**.: _Suppose that \(q=p^{r}\) is the power of an odd prime for \(r\geq 1\) and \(2\) modulo \(q\) is primitive. If \(q\) is of the form \(2^{k}+1\) for some integer \(k\geq 1\), then the possible value \(q\) is in \(\{3,5,9\}\)._

Proof. It is clear that

\[q=\left\{\begin{array}{cl}3,&\mbox{if }k=1,\\ 5,&\mbox{if }k=2,\\ 3^{2},&\mbox{if }k=3,\end{array}\right.\]

which satisfies all other assumptions in the lemma. Now we consider the case \(k\geq 4\) (and hence \(q\geq 17\)). If \(r=1\), that is, \(q\) is an odd prime, we get

\[\left(\frac{2}{q}\right)=-1=(-1)^{(q^{2}-1)/8},\]

since \(2\) modulo \(q\) is primitive, where \(\left(\frac{\cdot}{q}\right)\) is the Legendre symbol. We derive \(q\equiv\pm 3\pmod{8}\), which contradicts \(q=2^{k}+1\equiv 1\pmod{8}\). If \(r\geq 2\), we see that

\[2^{k}\equiv-1\pmod{q}\ \text{ and }2^{\varphi(q)/2}\equiv-1\pmod{q},\]

where the latter holds due to \(2\) modulo \(q\) being primitive. Hence \(k=c\varphi(q)/2\) for some odd positive integer \(c\geq 1\). We see that \(2\) modulo \(p^{\ell}\) is primitive for all \(1\leq\ell\leq r\), again due to \(2\) modulo \(q\) being primitive. So we have \(2^{\varphi(p^{r-1})/2}\equiv-1\pmod{p^{r-1}}\) and write

\[2^{p^{r-2}(p-1)/2}=wp^{r-1}-1\]

for some positive integer \(w\geq 1\). We compute

\[q=2^{k}+1=2^{c\varphi(q)/2}+1=(2^{p^{r-2}(p-1)/2})^{cp}+1=(wp^{r-1}-1)^{cp}+1.\]

Together with \(wp^{r-1}-1=wp^{1/3}p^{r-4/3}-1>p^{r-4/3}\) and \(cp(r-4/3)\geq 3(r-4/3)=r+(2r-4)\geq r\) (since \(q\geq 17\) and \(r\geq 2\)), we derive

\[(wp^{r-1}-1)^{cp}+1>(p^{r-4/3})^{cp}+1>p^{r}=q,\]

a contradiction. Putting everything together, we see that only \(q\in\{3,5,9\}\) satisfies the requirements.

**Theorem 4**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary \(\ell\)-sequence with (minimal) connection integer \(q(\geq 3)\), which is an odd prime power. Then the maximum-order complexity \(M(\mathcal{S})\) of \(\mathcal{S}\) satisfies_

\[M(\mathcal{S})=\left\{\begin{array}{ll}\left\lfloor\log_{2}(q)\right\rfloor,&\text{if }q\in\{3,5,9\},\\ \left\lceil\log_{2}(q)\right\rceil,&\text{otherwise}.\end{array}\right.\]

Proof. We assume that \(\mathcal{S}\) is an \(\ell\)-sequence defined by Eq.(8) with some integer \(A\). Since \(2\) modulo \(q\) is primitive, we see that \(D_{A}=\mathbb{Z}_{q}^{*}\) as defined in Theorem 3.

* If \(q=3\), we see that the \(\ell\)-sequence \(\mathcal{S}=(10)\) or \((01)\), whose period is \(2\). We check that \(M(\mathcal{S})=1=\left\lfloor\log_{2}(q)\right\rfloor\) by [26, Prop. 2]. Below we consider the case of \(q\geq 5\). Let \(2^{m-1}<q<2^{m}\) for some positive integer \(m\geq 3\).
* If \(q=2^{m-1}+1\), we find that \(x\leq q-1=2^{m-1}\) for any \(x\in D_{A}\) and hence all \(x\in D_{A}\) modulo \(2^{m-1}\) are distinct. However, we have \(1+2^{m-2}\neq 1\) since \(m\geq 3\) and

\[1+2^{m-2}\equiv 1\pmod{2^{m-2}}.\]

We remark that both \(1\) and \(1+2^{m-2}\) are in \(\mathbb{Z}_{q}^{*}(=D_{A})\). So by Theorem 3 we derive that \(M(\mathcal{S})=m-1=\lfloor\log_{2}(q)\rfloor\). Furthermore, we have \(q=5\) if \(m=3\) and \(q=9\) if \(m=4\). But if \(m\geq 5\), no such \(q\) exists by Lemma 2.
* If \(q>2^{m-1}+1\), we find that \(q-1>2^{m-1}\) and all \(x\in D_{A}\) modulo \(2^{m}\) are distinct. However, we have \(1+2^{m-1}\neq 1\) and \(2+2^{m-1}\neq 2\) since \(m\geq 3\) and

\[1+2^{m-1}\equiv 1\pmod{2^{m-1}}\text{ and }2+2^{m-1}\equiv 2\pmod{2^{m-1}}.\]

We remark that either both \(1\) and \(1+2^{m-1}\) or both \(2\) and \(2+2^{m-1}\) are in \(\mathbb{Z}_{q}^{*}(=D_{A})\)1. So by Theorem 3 we derive that \(M(\mathcal{S})=m=\lceil\log_{2}(q)\rceil\).

Footnote 1: We need to prove that either \(\gcd(1+2^{m-1},q)=1\) or \(\gcd(2+2^{m-1},q)=1\). Let \(q=p^{r}\), an odd prime-power. If \(p|(1+2^{m-1})\), then \(p\nmid(2+2^{m-1})\).

We list some examples of \(\ell\)-sequences in Table 1.

| \(2^{m-1}<q<2^{m}\) | \(T=\varphi(q)\) | \(\lceil\log_{2}(q)\rceil\) | \(M(\mathcal{S})\) | Remarks |
| --- | --- | --- | --- | --- |
| \(2<q=3<2^{2}\) | 2 | 2 | 1 | \(M(\mathcal{S})=\lfloor\log_{2}(q)\rfloor\) |
| \(2^{3}<q=3^{2}<2^{4}\) | 6 | 4 | 3 | \(M(\mathcal{S})=\lfloor\log_{2}(q)\rfloor\) |
| \(2^{4}<q=3^{3}<2^{5}\) | 18 | 5 | 5 |  |
| \(2^{2}<q=5<2^{3}\) | 4 | 3 | 2 | \(M(\mathcal{S})=\lfloor\log_{2}(q)\rfloor\) |
| \(2^{9}<q=5^{4}<2^{10}\) | 500 | 10 | 10 |  |
| \(2^{4}<q=19<2^{5}\) | 18 | 5 | 5 |  |
| \(2^{8}<q=19^{2}<2^{9}\) | 342 | 9 | 9 |  |
| \(2^{12}<q=19^{3}<2^{13}\) | 6498 | 13 | 13 |  |

Table 1: \(M(\mathcal{S})\) of binary \(\ell\)-sequences \(\mathcal{S}\) with connection integer \(q\)

For non-\(\ell\)-sequences, we have a different phenomenon, see examples in Table 2.

### Binary sequences \(\mathcal{S}\) of period \(T\) with \(M(\mathcal{S})=T-1\)

Binary sequences \(\mathcal{S}\) of period \(T\) with \(M(\mathcal{S})=T-1\) were considered before in [23, 25, 26, 31]. Now we look at them in another way. Suppose that \(\mathcal{S}\) corresponds to the rational number \(\frac{A}{q}\) with \(-q<A<0\) and \(\gcd(A,q)=1\). Then by [5, Thm. 4.5.2], \(\mathcal{S}\) can be defined by

\[s_{n}=(A2^{-n}\bmod q)\bmod 2,\ n\geq 0. \tag{8}\]

Below we list some examples of \(\mathcal{S}\) with \(M(\mathcal{S})=T-1\).

* If we choose \(q=31(=2^{5}-1)\) in Eq.(8), we produce \(\mathcal{S}=(11000)\) with period \(T=5\) if \(A=3\) and check \(M(\mathcal{S})=3(<T-1)\). While if \(A=5\) we produce \(\mathcal{S}=(10100)\) and check \(M(\mathcal{S})=4(=T-1)\).
* If we choose \(q=127(=2^{7}-1)\) and \(A=37\) in Eq.(8), we produce \(\mathcal{S}=(1010010)\) with period \(T=7\). We check that \(M(\mathcal{S})=6(=T-1)\).
* If we choose \(q=255(=2^{8}-1)\) and \(A=173\) in Eq.(8), we produce \(\mathcal{S}=(10110101)\) with period \(T=8\). We check that \(M(\mathcal{S})=7(=T-1)\).

For such \(\mathcal{S}\), by Theorem 2 we see that \(\lceil\Phi_{2}(\mathcal{S})\rceil\geq T-1\) and so

\[\Phi_{2}(\mathcal{S})\in\left\{\log_{2}\left(\frac{2^{T}-1}{3}\right),\ \log_{2}\left(2^{T}-1\right)\right\}.\]

However, we prove that their \(2\)-adic complexity is maximal.

**Theorem 5**.: _Let \(\mathcal{S}=(s_{n})_{n\geq 0}\) be a binary sequence of period \(T\geq 2\). If \(M(\mathcal{S})=T-1\), then the \(2\)-adic complexity of \(\mathcal{S}\) is also maximal, that is, \(\Phi_{2}(\mathcal{S})=\log_{2}(2^{T}-1)\)._

Proof. Since the maximum-order complexity is the same for every shift of \(\mathcal{S}\), we may assume that there exists an integer \(d\in\{1,\ldots,T-1\}\) such that

\[s_{i}=s_{i+d}\quad\text{for }0\leq i\leq T-3,\]

\[s_{i}=1-s_{i+d}\quad\text{for }i\in\{T-2,T-1\},\]

see Equation (2) in [31, Sect.III, A, p.6190]. We have \(\gcd(d,T)=1\) by [31, Prop.
1] and hence \(e=d^{-1}\pmod{T}\) exists. Using the two equations above, we can check that

\[s_{id-1}=1-s_{T-1}\text{ for }1\leq i\leq T-e,\ s_{jd-2}=1-s_{T-2}\text{ for }1\leq j\leq e.\]

We also have

\[s_{(T-e)d-1}=s_{T-2},\quad s_{ed-2}=s_{T-1}\quad\text{and}\quad s_{T-1}=1-s_{T-2}.\]

We note that

\[\{id-1\pmod{T}:1\leq i\leq T-e\}\cup\{jd-2\pmod{T}:1\leq j\leq e\}=\{0,1,\ldots,T-1\}.\]

In the case \((s_{T-2},s_{T-1})=(0,1)\) we get

\[S(2)\equiv\sum_{j=1}^{e}2^{jd-2}\equiv 2^{d-2}(2^{d}-1)^{-1}(2^{ed}-1)\equiv 2^{d-2}(2^{d}-1)^{-1}\pmod{2^{T}-1},\]

where we used

\[ed\equiv 1\pmod{T}\quad\text{and thus}\quad 2^{ed}\equiv 2\pmod{2^{T}-1}\]

in the last step, from which we derive

\[\gcd(S(2),2^{T}-1)=1.\]

The case \((s_{T-2},s_{T-1})=(1,0)\) can be treated in a similar way. Hence the connection integer of \(\mathcal{S}\) is \(2^{T}-1\) and the \(2\)-adic complexity of \(\mathcal{S}\) is maximal, which completes the proof.

For sequences \(\mathcal{S}\) with \(M(\mathcal{S})=T-2\), which are characterized in [36], the \(2\)-adic complexity is however not necessarily maximal anymore. For example, for the sequence \(\mathcal{S}\) starting with \((0,0,1,0,0,1,0,0)\) of period \(T=8\), we have \(M(\mathcal{S})=T-2\) but \(\Phi_{2}(\mathcal{S})=\log_{2}(\frac{2^{T}-1}{3})\).

## 5 Final remarks and conclusions

We have discussed the relationship between the maximum-order complexity and the \(2\)-adic complexity for any binary sequence. More precisely, the \(2\)-adic complexity is at least of the order of magnitude of the maximum-order complexity. If the order of magnitude of the maximum-order complexity is maximal, then our bound is essentially tight. However, for a typical sequence the maximum-order complexity is much smaller than the \(2\)-adic complexity. There is another complexity measure called the expansion complexity introduced by Diem [2]. Let \(G(x)=\sum_{i\geq 0}s_{i}x^{i}\) be the _generating function_ of \(\mathcal{S}=(s_{n})_{n\geq 0}\), which is viewed as a formal power series over \(\mathbb{F}_{2}\). The _\(N\)th expansion complexity_ \(E(\mathcal{S},N)\) is \(0\) if \(s_{0}=\ldots=s_{N-1}=0\) and otherwise the least total degree of a nonzero polynomial \(h(x,y)\in\mathbb{F}_{2}[x,y]\) with

\[h(x,G(x))\equiv 0\bmod x^{N}.\]

Note that \(E(\mathcal{S},N)\) depends only on the first \(N\) terms of \(\mathcal{S}\) and it has an expected value of order of magnitude \(N^{1/2}\), see [3, Theorem 2]. The sequence \((E(\mathcal{S},N))_{N\geq 1}\) is referred to as the _expansion complexity profile_ of \(\mathcal{S}\). The value

\[E(\mathcal{S})=\sup_{N\geq 1}E(\mathcal{S},N)\]

is the _expansion complexity_ of \(\mathcal{S}\), see [3, 4, 6] for more details on the expansion complexity of sequences. Another measure of pseudorandomness, the _rational complexity_ \(R(\mathcal{S},N)\) (resp. \(R(\mathcal{S})\) in the periodic case), was introduced and studied in [33] dealing with so-called \(\mathbb{F}\mathbb{Q}\)SRs instead of LFSRs (linear complexity) or FCSRs (2-adic complexity). Finally, we give a list of the relationships between five complexities. For periodic sequences \(\mathcal{S}\), we have

* \(M(\mathcal{S})\leq L(\mathcal{S})=E(\mathcal{S})-1\).
* \(M(\mathcal{S})\leq\lceil\Phi_{2}(\mathcal{S})\rceil\) (our result).
* The linear complexity and the 2-adic complexity complement each other.
For example, for a binary \(m\)-sequence \(\mathcal{M}=(m_{n})_{n\geq 0}\) of period \(T=2^{r}-1\), that is,

\[m_{n}=\operatorname{Tr}(g^{n})=\sum_{j=0}^{r-1}g^{2^{j}n},\quad n\geq 0,\]

where \(g\) is a primitive element of \(\mathbb{F}_{2^{r}}\) and \(\operatorname{Tr}\) is the absolute trace of \(\mathbb{F}_{2^{r}}\), we see that \(L(\mathcal{M})=r\), which is minimal for any sequence of least period \(2^{r}-1\), but \(\Phi_{2}(\mathcal{M})=\log_{2}(2^{T}-1)\), which is the maximum, see [32]. For a binary \(\ell\)-sequence \(\mathcal{S}\) with minimal connection integer \(q\) of period \(T=\varphi(q)\), where \(\varphi\) is Euler's totient function, we see that \(L(\mathcal{S})\leq(q+1)/2\), and this bound is sharp: for example, if \(p\) and \(q=2p+1\) are primes with 2 being primitive modulo \(p\) and modulo \(q\), respectively, then the linear complexity is \(L(\mathcal{S})=(q+1)/2\) (\(=p+1\)) [28]. But \(\Phi_{2}(\mathcal{S})=\log_{2}(q)\), which is small.

* \(R(\mathcal{S})=\Phi_{2}(\mathcal{S}^{rev})\) by [33, Theorem 13].
* \(M(\mathcal{S})\leq\lceil\Phi_{2}(\mathcal{S}^{rev})\rceil\).

In the aperiodic case, we have the following relationships between the \(N\)th complexities for \(\mathcal{S}\):

* \(M(\mathcal{S},N)\leq L(\mathcal{S},N)\). Hence a large \(N\)th maximum-order complexity implies a large \(N\)th linear complexity. However, \(M(\mathcal{S},N)\) should not be too large, since otherwise the correlation measure of order 2 is large, see [17]. In particular, the expected value of \(M(\mathcal{S},N)\) is of order of magnitude \(\log(N)\) [11] and the expected value of \(L(\mathcal{S},N)\) is \(\frac{N}{2}+\mathcal{O}(1)\) [19].
* \(E(\mathcal{S},N)\leq L(\mathcal{S},N)+1\). In fact it was proved in [18] that

\[E(\mathcal{S},N)\leq\min\{L(\mathcal{S},N)+1,N+2-L(\mathcal{S},N)\}.\]

* \(M(\mathcal{S},N)\leq\lceil\Phi_{2}(\mathcal{S},N)\rceil+1\) (our result).
* The \(N\)th expansion complexity and the \(N\)th maximum-order complexity complement each other. We note that the expected value of \(M(\mathcal{S},N)\) is of order of magnitude \(\log(N)\) and \(E(\mathcal{S},N)\) has an expected value of order of magnitude \(N^{1/2}\). However, the pattern sequences \(\mathcal{P}_{k}\) of pattern length \(k\), including the Thue-Morse sequence \(\mathcal{T}\) (see the notions in Section 2), have bounded expansion complexity, that is \(E(\mathcal{P}_{k},N)\leq 2^{k}+3\) for \(k\geq 1\), see [17], while the maximum-order complexity is of the largest possible order of magnitude \(N\), that is \(M(\mathcal{T},N)>N/5\) for \(N>5\) and \(M(\mathcal{P}_{k},N)>N/6\) for \(N\geq 2^{k+3}-7\) if \(k\geq 2\), see [29].
* [33] provides examples showing that the rational complexity, the 2-adic complexity and the linear complexity complement each other. More precisely, in these examples each two of these measures are close to the expected value, whereas the third one is much smaller, and only one of these three measures detects the non-randomness.
2309.05215
A discrete uniformization theorem for decorated piecewise Euclidean metrics on surfaces
In this paper, we introduce a new discretization of the Gaussian curvature on surfaces, which is defined as the quotient of the angle defect and the area of some dual cell of a weighted triangulation at the conic singularity. A discrete uniformization theorem for this discrete Gaussian curvature is established on surfaces with non-positive Euler number. The main tools are Bobenko-Lutz's discrete conformal theory for decorated piecewise Euclidean metrics on surfaces and variational principles with constraints.
Xu Xu, Chao Zheng
2023-09-11T03:28:22Z
http://arxiv.org/abs/2309.05215v1
# A Discrete Uniformization Theorem for Decorated Piecewise Euclidean Metrics on Surfaces

###### Abstract.

In this paper, we introduce a new discretization of the Gaussian curvature on surfaces, which is defined as the quotient of the angle defect and the area of some dual cell of a weighted triangulation at the conic singularity. A discrete uniformization theorem for this discrete Gaussian curvature is established on surfaces with non-positive Euler number. The main tools are Bobenko-Lutz's discrete conformal theory for decorated piecewise Euclidean metrics on surfaces and variational principles with constraints.

Key words and phrases: Discrete uniformization theorem; Decorated piecewise Euclidean metrics; Discrete Gaussian curvature; Variational principle

MSC (2020): 52C26

## 1. Introduction

Bobenko-Lutz [2] recently introduced the decorated piecewise Euclidean metrics on surfaces. Suppose \(S\) is a connected closed surface and \(V\) is a finite non-empty subset of \(S\); we call \((S,V)\) a marked surface. A piecewise Euclidean metric (PE metric) \(dist_{S}\) on the marked surface \((S,V)\) is a flat cone metric with the conic singularities contained in \(V\). A decoration on a PE surface \((S,V,dist_{S})\) is a choice of circle of radius \(r_{i}\geq 0\) at each point \(i\in V\). These circles in the decoration are called vertex-circles. We denote a decorated PE surface by \((S,V,dist_{S},r)\) and call the pair \((dist_{S},r)\) a decorated PE metric. In this paper, we focus on the case that \(r_{i}>0\) for all \(i\in V\) and each pair of vertex-circles is separated. For a PE surface \((S,V,dist_{S})\), denote \(\theta_{i}\) as the cone angle at \(i\in V\). The angle defect

\[W:V\to(-\infty,2\pi),\quad W_{i}=2\pi-\theta_{i}, \tag{1}\]

is used to describe the conic singularities of the PE metric. Let \(\mathcal{T}=(V,E,F)\) be a triangulation of \((S,V)\), where \(V,E,F\) are the sets of vertices, edges and faces respectively. The triangulation \(\mathcal{T}\) for a PE surface \((S,V,dist_{S})\) is a geodesic triangulation if the edges are geodesics in the PE metric \(dist_{S}\). We use one index to denote a vertex (such as \(i\)), two indices to denote an edge (such as \(\{ij\}\)) and three indices to denote a face (such as \(\{ijk\}\)) in the triangulation \(\mathcal{T}\). For any decorated geodesic triangle \(\{ijk\}\in F\), there is a unique circle \(C_{ijk}\) simultaneously orthogonal to the three vertex-circles at the vertices \(i,j,k\) [10]. We call this circle \(C_{ijk}\) the face-circle of the decorated geodesic triangle \(\{ijk\}\) and denote its center by \(c_{ijk}\) and its radius by \(r_{ijk}\). The center \(c_{ijk}\) of the face-circle \(C_{ijk}\) of the decorated geodesic triangle \(\{ijk\}\) is the geometric center introduced by Glickenstein [9] and Glickenstein-Thomas [11] for general discrete conformal structures on surfaces. Denote \(\alpha_{ij}^{k}\) as the interior intersection angle of the face-circle \(C_{ijk}\) and the edge \(\{ij\}\). Please refer to Figure 1 (left) for the angle \(\alpha_{ij}^{k}\). The edge \(\{ij\}\), shared by two adjacent decorated triangles \(\{ijk\}\) and \(\{ijl\}\), is called weighted Delaunay if

\[\alpha^{k}_{ij}+\alpha^{l}_{ij}\leq\pi. \tag{2}\]

The triangulation \(\mathcal{T}\) is called weighted Delaunay in the decorated PE metric \((dist_{S},r)\) if every edge in the triangulation is weighted Delaunay.
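As a small illustration (our own, not from the paper), the angle defect (1) at a vertex is simply the deviation of the total cone angle from \(2\pi\):

```python
import math

def angle_defect(incident_angles):
    """W_i = 2*pi - theta_i, where theta_i is the total cone angle at i,
    i.e. the sum of the corner angles of the decorated triangles incident to i."""
    return 2.0 * math.pi - sum(incident_angles)

# Six equilateral corners give a flat vertex:
# angle_defect([math.pi / 3] * 6) == 0.0
```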
Connecting the center \(c_{ijk}\) with the vertices \(i,j,k\) by geodesics produces a cellular decomposition of the decorated triangle \(\{ijk\}\). Denote by \(A^{jk}_{i}\) the sum of the signed areas of the two triangles adjacent to \(i\) in the cellular decomposition of the decorated triangle \(\{ijk\}\). The area of the triangle with the vertices \(i\), \(j\), \(c_{ijk}\) is positive if it is on the same side of the edge \(\{ij\}\) as the decorated triangle \(\{ijk\}\), otherwise it is negative (or zero if \(c_{ijk}\) lies on \(\{ij\}\)). Please refer to the shaded domain in Figure 1 (left) for \(A^{jk}_{i}\). Gluing these cells of all decorated triangles isometrically along edges in pairs leads to a cellular decomposition of the decorated PE surface \((S,V,dist_{S},r)\). Set \[A_{i}=\sum_{\{ijk\}\in F}A^{jk}_{i}.\] Please refer to Figure 1 (right) for \(A_{i}\).

Figure 1. Domain of the signed area \(A^{jk}_{i}\) in a decorated triangle \(\{ijk\}\) (left) and domain of the area \(A_{i}\) in a decorated PE surface (right)

**Definition 1.1**.: Suppose \((S,V,dist_{S},r)\) is a decorated PE surface and \(\mathcal{T}\) is a weighted Delaunay triangulation of \((S,V,dist_{S},r)\). The discrete Gaussian curvature \(K_{i}\) at the vertex \(i\in V\) is the quotient of the angle defect \(W_{i}\) and the area \(A_{i}\) of the dual cell at the vertex \(i\in V\), i.e., \[K_{i}=\frac{W_{i}}{A_{i}}. \tag{3}\] **Remark 1.2**.: In the literature, the discrete curvature is usually defined by the angle defect \(W\) in (1). However, the angle defect \(W\) is scaling invariant and does not approximate the smooth Gaussian curvature pointwise on smooth surfaces as the triangulations of the surface become finer and finer. This is supported by the discussions in [3, 7]. The discrete Gaussian curvature \(K\) in (3), by contrast, scales by a factor \(\frac{1}{u^{2}}\) upon a global rescaling of the decorated PE metric by a factor \(u\). This property parallels that of the smooth Gaussian curvature on surfaces. On the other hand, the definition of the discrete Gaussian curvature \(K_{i}\) coincides with the original definition of the Gaussian curvature on smooth surfaces. This implies that the discrete Gaussian curvature \(K_{i}\) is a good candidate as a discretization of the smooth Gaussian curvature on surfaces. **Remark 1.3**.: According to Definition 1.1, the discrete Gaussian curvature \(K_{i}\) defined by (3) seems to depend on the choice of weighted Delaunay triangulations of the decorated PE surface \((S,V,dist_{S},r)\). We will show that \(K_{i}\) is an intrinsic geometric invariant of the decorated PE surface \((S,V,dist_{S},r)\) in the sense that it is independent of the weighted Delaunay triangulations of \((S,V,dist_{S},r)\). Since the angle defect \(W_{i}\) defined by (1) is an intrinsic geometric invariant of a decorated PE surface, we just need to prove that \(A_{i}\) is independent of the choice of weighted Delaunay triangulations. This is true by Lemma 2.8. **Remark 1.4**.: The weighted Delaunay triangulation is a natural generalization of the classical Delaunay triangulation. When the weighted Delaunay triangulation is reduced to the classical Delaunay triangulation, i.e., \(r_{i}=0\) for all \(i\in V\), the area \(A_{i}\) is exactly twice the area of the Voronoi cell at the vertex \(i\). Thus the area \(A_{i}\) is a generalization of the area of the Voronoi cell at the vertex \(i\).
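As a quick sanity check of the scaling property in Remark 1.2 (our computation, not part of the original text): under a global rescaling of the decorated PE metric by a factor \(u>0\), all lengths are multiplied by \(u\), so cone angles, and hence the angle defect, are unchanged, while the dual areas scale quadratically. Therefore \[\widetilde{W}_{i}=W_{i},\qquad\widetilde{A}_{i}=u^{2}A_{i}\quad\Longrightarrow\quad\widetilde{K}_{i}=\frac{\widetilde{W}_{i}}{\widetilde{A}_{i}}=\frac{1}{u^{2}}K_{i},\] which is the discrete analogue of how the smooth Gaussian curvature transforms under the rescaling \(g\mapsto u^{2}g\) of a Riemannian metric.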
As a result of Remark 1.4, the discrete Gaussian curvature in Definition 1.1 generalizes Kourimska's definition of discrete Gaussian curvature in [16, 17]. The discrete Yamabe problem for a decorated PE metric \((dist_{S},r)\) on \((S,V)\) asks if there exists a discrete conformal equivalent decorated PE metric on \((S,V)\) with constant discrete Gaussian curvature. The following discrete uniformization theorem solves this problem affirmatively for the discrete Gaussian curvature \(K\) in Definition 1.1. **Theorem 1.5**.: For any decorated PE metric \((dist_{S},r)\) on a marked surface \((S,V)\) with Euler number \(\chi(S)\leq 0\), there is a discrete conformal equivalent decorated PE metric with constant discrete Gaussian curvature \(K\). By the relationships of the discrete Gaussian curvature \(K\) and the classical discrete Gaussian curvature \(W\), the case \(\chi(S)=0\) in Theorem 1.5 is covered by Bobenko-Lutz's work [2]. Therefore, we just need to prove the case \(\chi(S)<0\) in Theorem 1.5. **Remark 1.6**.: The discrete Yamabe problem on surfaces for different types of discrete conformal structures with respect to the classical discrete Gaussian curvature \(W\) has been extensively studied in the literature. For Thurston's circle packings on surfaces, the solution of the discrete Yamabe problem gives rise to the famous Koebe-Andreev-Thurston Theorem. See also the work of Beardon-Stephenson [1] for the discrete uniformization theorems for circle packings on surfaces. For the vertex scalings introduced by Luo [18] on surfaces, Gu-Luo-Sun-Wu [13], Gu-Guo-Luo-Sun-Wu [12], Springborn [23] and Izmestiev-Prosanov-Wu [15] give nice answers to this problem in different background geometries. Recently, Bobenko-Lutz [2] established the discrete conformal theory for decorated PE metrics and proved the corresponding discrete uniformization theorem. Since Bobenko-Lutz's discrete conformal theory of decorated PE metrics also applies to the Euclidean vertex scalings and thus generalizes Gu-Luo-Sun-Wu's result [13] and Springborn's result [23], Theorem 1.5 also generalizes Kourimska's results in [16, 17]. It should be mentioned that Kourimska [16, 17] constructed counterexamples to the uniqueness of PE metrics with constant discrete Gaussian curvatures. We conjecture that the decorated PE metric with constant discrete Gaussian curvature \(K\) in Theorem 1.5 is not unique. The main tools for the proof of Theorem 1.5 are Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces [2] and variational principles with constraints. The main ideas of the paper are drawn from Bobenko-Lutz [2] and Kourimska [16, 17]. The paper is organized as follows. In Section 2, we briefly recall Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces. Then we show that \(A_{i}\) is independent of the choice of weighted Delaunay triangulations, i.e., Lemma 2.8. We also give some notations and a variational characterization of the area \(A_{i}^{jk}\). In this section, we also extend the energy function \(\mathcal{E}\) and the area function \(A_{tot}\). In Section 3, we translate Theorem 1.5 into an optimization problem with constraints, i.e., Lemma 3.2. Using a classical result from calculus, i.e., Theorem 3.3, we reduce Lemma 3.2 to Theorem 3.4. By analysing the limit behaviour of sequences of discrete conformal factors, we get an asymptotic expression of the function \(\mathcal{E}\), i.e., Lemma 3.12. In the end, we prove Theorem 3.4.
### Acknowledgements The first author thanks Professor Feng Luo for his invitation to the workshop "Discrete and Computational Geometry, Shape Analysis, and Applications" taking place at Rutgers University, New Brunswick from May 19th to May 21st, 2023. The first author also thanks Carl O. R. Lutz for helpful communications during the workshop. ## 2. Preliminaries on decorated PE surfaces ### Discrete conformal equivalence and Bobenko-Lutz's discrete conformal theory In this subsection, we briefly recall Bobenko-Lutz's discrete conformal theory for decorated PE metrics on surfaces. Please refer to Bobenko-Lutz's original work [2] for more details on this. The PE metric \(dist_{S}\) on a PE surface with a geodesic triangulation defines a length map \(l:E\to\mathbb{R}_{>0}\) such that \(l_{ij},l_{ik},l_{jk}\) satisfy the triangle inequalities for any triangle \(\{ijk\}\in F\). Conversely, given a function \(l:E\to\mathbb{R}_{>0}\) satisfying the triangle inequalities for any face \(\{ijk\}\in F\), one can construct a PE metric on a triangulated surface by isometrically gluing Euclidean triangles along edges in pairs. Therefore, we use \(l:E\to\mathbb{R}_{>0}\) to denote a PE metric and use \((l,r)\) to denote a decorated PE metric on a triangulated surface \((S,V,\mathcal{T})\). **Definition 2.1** ([2], Proposition 2.2).: Let \(\mathcal{T}\) be a triangulation of a marked surface \((S,V)\). Two decorated PE metrics \((l,r)\) and \((\widetilde{l},\widetilde{r})\) on \((S,V,\mathcal{T})\) are discrete conformal equivalent if and only if there exists a discrete conformal factor \(u\in\mathbb{R}^{V}\) such that \[\widetilde{r}_{i}=e^{u_{i}}r_{i}, \tag{4}\] \[\widetilde{l}_{ij}^{2}=(e^{2u_{i}}-e^{u_{i}+u_{j}})r_{i}^{2}+(e^{2u_{j}}-e^{u_{i}+u_{j}})r_{j}^{2}+e^{u_{i}+u_{j}}l_{ij}^{2} \tag{5}\] for all \(\{ij\}\in E\). **Remark 2.2**.: Note that the inversive distance \[I_{ij}=\frac{l_{ij}^{2}-r_{i}^{2}-r_{j}^{2}}{2r_{i}r_{j}} \tag{6}\] between two vertex-circles is invariant under Möbius transformations [6]. Combining (4) and (5) gives \(I=\widetilde{I}\). Since each pair of vertex-circles is required to be separated, we have \(I>1\). Therefore, Definition 2.1 can be regarded as a special case of the inversive distance circle packings introduced by Bowers-Stephenson [4]. One can refer to [5, 14, 19, 24, 25] for more properties of the inversive distance circle packings on triangulated surfaces. In general, the existence of decorated PE metrics with constant discrete Gaussian curvatures on triangulated surfaces cannot be guaranteed if the triangulation is fixed. In the following, we work with a generalization of the discrete conformal equivalence in Definition 2.1, introduced by Bobenko-Lutz [2], which allows the triangulation of the marked surface to be changed under the weighted Delaunay condition.
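The invariance \(I=\widetilde{I}\) in Remark 2.2 follows directly from (4) and (5): by (5), \(\widetilde{l}_{ij}^{2}-\widetilde{r}_{i}^{2}-\widetilde{r}_{j}^{2}=e^{u_{i}+u_{j}}(l_{ij}^{2}-r_{i}^{2}-r_{j}^{2})\), and the factor \(e^{u_{i}+u_{j}}\) cancels against \(2\widetilde{r}_{i}\widetilde{r}_{j}=2e^{u_{i}+u_{j}}r_{i}r_{j}\). Below is a minimal numerical sketch of this check (ours, not part of the original text; the radii, edge length and conformal factors are arbitrary sample values chosen so that \(I_{ij}>1\)):

```python
import math

def conformal_edge_length(l, r_i, r_j, u_i, u_j):
    # Equation (5): squared edge length after a discrete conformal change.
    l2 = ((math.exp(2 * u_i) - math.exp(u_i + u_j)) * r_i**2
          + (math.exp(2 * u_j) - math.exp(u_i + u_j)) * r_j**2
          + math.exp(u_i + u_j) * l**2)
    return math.sqrt(l2)

def inversive_distance(l, r_i, r_j):
    # Equation (6).
    return (l**2 - r_i**2 - r_j**2) / (2 * r_i * r_j)

r_i, r_j = 1.0, 2.0
l = 4.0                       # l > r_i + r_j, so the vertex-circles are separated
u_i, u_j = 0.3, -0.7          # arbitrary discrete conformal factors

l_new = conformal_edge_length(l, r_i, r_j, u_i, u_j)
r_i_new, r_j_new = math.exp(u_i) * r_i, math.exp(u_j) * r_j   # equation (4)

print(inversive_distance(l, r_i, r_j))              # 2.75
print(inversive_distance(l_new, r_i_new, r_j_new))  # 2.75 again
```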
**Definition 2.3** ([2], Definition 4.11).: Two decorated PE metrics \((dist_{S},r)\) and \((\widetilde{dist}_{S},\widetilde{r})\) on the marked surface \((S,V)\) are discrete conformal equivalent if there is a sequence of triangulated decorated PE surfaces \((\mathcal{T}^{0},l^{0},r^{0}),...,(\mathcal{T}^{N},l^{N},r^{N})\) such that **(i):**: the decorated PE metric of \((\mathcal{T}^{0},l^{0},r^{0})\) is \((dist_{S},r)\) and the decorated PE metric of \((\mathcal{T}^{N},l^{N},r^{N})\) is \((\widetilde{dist}_{S},\widetilde{r})\), **(ii):**: each \(\mathcal{T}^{n}\) is a weighted Delaunay triangulation of the decorated PE surface \((\mathcal{T}^{n},l^{n},r^{n})\), **(iii):**: if \(\mathcal{T}^{n}=\mathcal{T}^{n+1}\), then there is a discrete conformal factor \(u\in\mathbb{R}^{V}\) such that \((\mathcal{T}^{n},l^{n},r^{n})\) and \((\mathcal{T}^{n+1},l^{n+1},r^{n+1})\) are related by (4) and (5), **(iv):**: if \(\mathcal{T}^{n}\neq\mathcal{T}^{n+1}\), then \(\mathcal{T}^{n}\) and \(\mathcal{T}^{n+1}\) are two different weighted Delaunay triangulations of the same decorated PE surface. Definition 2.3 defines an equivalence relationship for decorated PE metrics on a marked surface. The equivalence class of a decorated PE metric \((dist_{S},r)\) on \((S,V)\) is also called the discrete conformal class of \((dist_{S},r)\) and denoted by \(\mathcal{D}(dist_{S},r)\). **Lemma 2.4** ([2]).: The discrete conformal class \(\mathcal{D}(dist_{S},r)\) of a decorated PE metric \((dist_{S},r)\) on the marked surface \((S,V)\) is parameterized by \(\mathbb{R}^{V}=\{u:V\rightarrow\mathbb{R}\}\). For simplicity, for any \((\widetilde{dist}_{S},\widetilde{r})\in\mathcal{D}(dist_{S},r)\), we denote it by \((dist_{S}(u),r(u))\) for some \(u\in\mathbb{R}^{V}\). Set \[\mathcal{C}_{\mathcal{T}}(dist_{S},r)=\{u\in\mathbb{R}^{V}|\ \mathcal{T}\ \text{is a weighted Delaunay triangulation of}\ (S,V,dist_{S}(u),r(u))\}.\] For any decorated PE surface, there exists a unique complete hyperbolic surface \(\Sigma_{g}\), i.e., the hyperbolic surface induced by any triangular refinement of its unique weighted Delaunay tessellation. It is homeomorphic to \(S\backslash V\) and is called the fundamental discrete conformal invariant of the decorated PE metric \((dist_{S},r)\). The decoration of \(\Sigma_{g}\) is denoted by \(\omega:=e^{h}\), where the height \(h\) is related to \(u\) by \(dh_{i}=-du_{i}\). The canonical weighted Delaunay tessellation of \(\Sigma_{g}\) is denoted by \(\mathcal{T}_{\Sigma_{g}}^{\omega}\). Bobenko-Lutz [2] defined the following set \[\mathcal{D}_{\mathcal{T}}(\Sigma_{g})=\{\omega\in\mathbb{R}^{V}_{>0}|\mathcal{T}\ \text{refines}\ \mathcal{T}_{\Sigma_{g}}^{\omega}\}\] and proved the following proposition. **Proposition 2.5** ([2], Proposition 4.3).: Given a complete hyperbolic surface with ends \(\Sigma_{g}\). **(1):**: Each \(\mathcal{D}_{\mathcal{T}_{n}}(\Sigma_{g})\) is either empty or the intersection of \(\mathbb{R}^{V}_{>0}\) with a closed polyhedral cone. **(2):**: There is only a finite number of geodesic tessellations \(\mathcal{T}_{1},...,\mathcal{T}_{N}\) of \(\Sigma_{g}\) such that \(\mathcal{D}_{\mathcal{T}_{n}}(\Sigma_{g})\)\((n=1,...,N)\) is non-empty. In particular, \(\mathbb{R}^{V}_{>0}=\bigcup_{n=1}^{N}\mathcal{D}_{\mathcal{T}_{n}}(\Sigma_{g})\). Let \(P\) be the polyhedral cusp corresponding to the triangulated surface \((S,V,\mathcal{T})\) with fundamental discrete conformal invariant \(\Sigma_{g}\).
The polyhedral cusp is convex if and only if \(\mathcal{T}\) is a weighted Delaunay triangulation. The set of all heights \(h\) of convex polyhedral cusps over the triangulated hyperbolic surface \((\Sigma_{g},\mathcal{T})\) is denoted by \(\mathcal{P}_{\mathcal{T}}(\Sigma_{g})\subseteq\mathbb{R}^{V}\). **Proposition 2.6** ([2], Proposition 4.9).: Given a decorated PE metric \((dist_{S},r)\) on the marked surface \((S,V)\). Then \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\), \(\mathcal{P}_{\mathcal{T}}(\Sigma_{g})\) and \(\mathcal{D}_{\mathcal{T}}(\Sigma_{g})\) are homeomorphic. Combining Proposition 2.5 and Proposition 2.6 gives the following result. **Lemma 2.7** ([2]).: The set \[\mathcal{J}=\{\mathcal{T}|\mathcal{C}_{\mathcal{T}}(dist_{S},r)\text{ has non-empty interior in }\mathbb{R}^{V}\}\] is a finite set, \(\mathbb{R}^{V}=\cup_{\mathcal{T}_{i}\in\mathcal{J}}\mathcal{C}_{\mathcal{T}_{i}}(dist_{S},r)\), and each \(\mathcal{C}_{\mathcal{T}_{i}}(dist_{S},r)\) is homeomorphic to a polyhedral cone (with its apex removed) and its interior is homeomorphic to \(\mathbb{R}^{V}\). ### A decorated triangle Denote by \(r_{ij}\) half of the distance between the two intersection points of the face-circle \(C_{ijk}\) and the edge \(\{ij\}\). Denote by \(h^{k}_{ij}\) the signed distance of the center \(c_{ijk}\) to the edge \(\{ij\}\), which is defined to be positive if the center is on the same side of the line determined by \(\{ij\}\) as the triangle \(\{ijk\}\) and negative otherwise (or zero if the center is on the line). Note that \(h^{k}_{ij}\) is symmetric in the indices \(i\) and \(j\). By Figure 3, we have \[h^{k}_{ij}=r_{ij}\cot\alpha^{k}_{ij}. \tag{7}\] Since \(r_{ij}>0\) and \(\alpha^{k}_{ij}\in(0,\pi)\), if \(h^{k}_{ij}<0\), then \(\alpha^{k}_{ij}\in(\frac{\pi}{2},\pi)\). The equality (7) implies that (2) is equivalent to \[h^{k}_{ij}+h^{l}_{ij}\geq 0 \tag{8}\] for any adjacent triangles \(\{ijk\}\) and \(\{ijl\}\) sharing a common edge \(\{ij\}\). Therefore, the inequality (8) also characterizes a weighted Delaunay triangulation \(\mathcal{T}\) for a decorated PE metric \((l,r)\) on \((S,V)\). Due to this fact, the inequality (8) is usually used to define the weighted Delaunay triangulations of decorated PE surfaces. See [5, 8] and others for example. Then \(A^{jk}_{i}\) can be written as \[A^{jk}_{i}=\frac{1}{2}l_{ij}h^{k}_{ij}+\frac{1}{2}l_{ki}h^{j}_{ki}. \tag{9}\] Since \(h^{k}_{ij},h^{j}_{ki}\) are signed distances, \(A^{jk}_{i}\) is an algebraic sum of triangle areas, i.e., a signed area. **Lemma 2.8**.: The area \(A_{i}\) is independent of the choice of weighted Delaunay triangulations of a decorated PE surface. Proof.: Suppose a decorated quadrilateral \(\{ijlk\}\) is a face of the weighted Delaunay tessellation of a decorated PE surface. Then there exist two weighted Delaunay triangulations \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) of the decorated PE surface with an edge \(\{jk\}\) in \(\mathcal{T}_{1}\) flipped to another edge \(\{il\}\) in \(\mathcal{T}_{2}\). Please refer to Figure 2. We just need to prove that the signed area \(A^{jk}_{i}\) in \(\mathcal{T}_{1}\) is equal to the signed area \(A^{kl}_{i}+A^{jl}_{i}\) in \(\mathcal{T}_{2}\). In \(\mathcal{T}_{1}\), the signed area at the vertex \(i\) in \(\{ijlk\}\) is \(A^{jk}_{i}=\frac{1}{2}l_{ki}h^{j}_{ki}+\frac{1}{2}l_{ij}h^{k}_{ij}\).
In \(\mathcal{T}_{2}\), the signed area at the vertex \(i\) in \(\{ijlk\}\) is \[A^{kl}_{i}+A^{jl}_{i} =\frac{1}{2}l_{ki}h^{l}_{ki}+\frac{1}{2}l_{il}h^{k}_{il}+\frac{1}{2}l_{ij}h^{l}_{ij}+\frac{1}{2}l_{il}h^{j}_{il}\] \[=\frac{1}{2}l_{ki}h^{l}_{ki}+\frac{1}{2}l_{ij}h^{l}_{ij}+\frac{1}{2}l_{il}(h^{k}_{il}+h^{j}_{il}).\] Since \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) are two weighted Delaunay triangulations of the same decorated PE metric on \((S,V)\), we have \(h^{k}_{il}+h^{j}_{il}=0\) by (8). One can also refer to [2] (Proposition 3.4) for this. Moreover, \(h^{l}_{ki}=h^{j}_{ki}\) and \(h^{l}_{ij}=h^{k}_{ij}\), since the triangles of \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) subdividing \(\{ijlk\}\) share the face-circle of the quadrilateral. Then \(A^{kl}_{i}+A^{jl}_{i}=A^{jk}_{i}\). Q.E.D.

Figure 2. Weighted Delaunay triangulation \(\mathcal{T}_{1}\) (left) and weighted Delaunay triangulation \(\mathcal{T}_{2}\) (right)

Denote by \(c_{ij}\) the center of the edge \(\{ij\}\), which is obtained by projecting the center \(c_{ijk}\) to the line determined by \(\{ij\}\). Denote by \(d_{ij}\) the signed distance of \(c_{ij}\) to the vertex \(i\), which is positive if \(c_{ij}\) is on the same side of \(i\) as \(j\) along the line determined by \(\{ij\}\) and negative otherwise (or zero if \(c_{ij}\) is the same as \(i\)). In general, \(d_{ij}\neq d_{ji}\). Since the face-circle \(C_{ijk}\) is orthogonal to the vertex-circle at the vertex \(j\), we have \[r_{ijk}^{2}+r_{j}^{2}=d_{jk}^{2}+(h_{jk}^{i})^{2}=d_{ji}^{2}+(h_{ij}^{k})^{2}. \tag{10}\] Please refer to Figure 3 for this. Moreover, we have the following explicit expressions of \(d_{ij}\) and \(h_{ij}^{k}\) due to Glickenstein [9], i.e., \[d_{ij}=\frac{r_{i}^{2}+r_{i}r_{j}I_{ij}}{l_{ij}}, \tag{11}\] and \[h_{ij}^{k}=\frac{d_{ik}-d_{ij}\cos\theta_{jk}^{i}}{\sin\theta_{jk}^{i}}, \tag{12}\] where \(\theta_{jk}^{i}\) is the inner angle of the triangle \(\{ijk\}\) at the vertex \(i\). The equality (11) shows that \(d_{ij}\in\mathbb{R}\) is well-defined even if the center \(c_{ijk}\) does not exist. Since each pair of vertex-circles is required to be separated, we have \(I>1\). This implies \[d_{rs}>0,\ \ \forall\{r,s\}\subseteq\{i,j,k\}.\] The following lemma gives some useful formulas. **Lemma 2.9** ([14, 24, 25]).: Let \(\{ijk\}\) be a decorated triangle with the edge lengths \(l_{ij},l_{jk},l_{ki}\) defined by (5). If the decorated triangle \(\{ijk\}\) is non-degenerate, then \[\frac{\partial\theta_{jk}^{i}}{\partial u_{j}}=\frac{\partial\theta_{ki}^{j}}{\partial u_{i}}=\frac{h_{ij}^{k}}{l_{ij}},\ \ \frac{\partial\theta_{jk}^{i}}{\partial u_{i}}=-\frac{\partial\theta_{jk}^{i}}{\partial u_{j}}-\frac{\partial\theta_{jk}^{i}}{\partial u_{k}}, \tag{13}\] where \[h_{ij}^{k}=\frac{r_{i}^{2}r_{j}^{2}r_{k}^{2}}{2A_{ijk}l_{ij}}[\kappa_{k}^{2}(1-I_{ij}^{2})+\kappa_{j}\kappa_{k}\gamma_{i}+\kappa_{i}\kappa_{k}\gamma_{j}]=\frac{r_{i}^{2}r_{j}^{2}r_{k}^{2}}{2A_{ijk}l_{ij}}\kappa_{k}h_{k} \tag{14}\] with \(A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin\theta_{ki}^{j}\), \(\gamma_{i}=I_{jk}+I_{ij}I_{ki}\), \(\kappa_{i}:=r_{i}^{-1}\) and \[h_{i}=\kappa_{i}(1-I_{jk}^{2})+\kappa_{j}\gamma_{k}+\kappa_{k}\gamma_{j}. \tag{15}\] As a direct application of Lemma 2.9, we have the following result. **Lemma 2.10**.: The area \(A_{ijk}(u)\) of each decorated triangle \(\{ijk\}\in F\) is an analytic function with \[\frac{\partial A_{ijk}}{\partial u_{i}}=A_{i}^{jk}. \tag{16}\]
Proof.: By (12), we have \[h_{ij}^{k}=\frac{d_{ik}-d_{ij}\cos\theta_{jk}^{i}}{\sin\theta_{jk}^{i}},\quad h_{ki}^{j}=\frac{d_{ij}-d_{ik}\cos\theta_{jk}^{i}}{\sin\theta_{jk}^{i}}.\] Direct calculations give \[h_{ki}^{j}=d_{ij}\sin\theta_{jk}^{i}-h_{ij}^{k}\cos\theta_{jk}^{i}. \tag{17}\] Combining (5), (6) and (11), it is easy to check that \[\frac{\partial l_{ij}}{\partial u_{i}}=d_{ij}. \tag{18}\] Differentiating \(A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin\theta_{ki}^{j}\) with respect to \(u_{i}\) gives \[\frac{\partial A_{ijk}}{\partial u_{i}} =\frac{1}{2}\frac{\partial l_{ij}}{\partial u_{i}}l_{jk}\sin\theta_{ki}^{j}+\frac{1}{2}l_{ij}l_{jk}\cos\theta_{ki}^{j}\frac{\partial\theta_{ki}^{j}}{\partial u_{i}}\] \[=\frac{1}{2}d_{ij}l_{jk}\sin\theta_{ki}^{j}+\frac{1}{2}l_{ij}l_{jk}\cos\theta_{ki}^{j}\frac{h_{ij}^{k}}{l_{ij}}\] \[=\frac{1}{2}d_{ij}l_{ki}\sin\theta_{jk}^{i}+\frac{1}{2}l_{jk}\cos\theta_{ki}^{j}h_{ij}^{k}\] \[=\frac{1}{2}d_{ij}l_{ki}\sin\theta_{jk}^{i}+\frac{1}{2}(l_{ij}-l_{ki}\cos\theta_{jk}^{i})h_{ij}^{k}\] \[=\frac{1}{2}l_{ki}(d_{ij}\sin\theta_{jk}^{i}-h_{ij}^{k}\cos\theta_{jk}^{i})+\frac{1}{2}l_{ij}h_{ij}^{k}\] \[=\frac{1}{2}l_{ki}h_{ki}^{j}+\frac{1}{2}l_{ij}h_{ij}^{k}\] \[=A_{i}^{jk},\] where the second equality uses (18) and (13), the third equality uses the sine law, the fourth equality uses the projection formula \(l_{ij}=l_{ki}\cos\theta_{jk}^{i}+l_{jk}\cos\theta_{ki}^{j}\), and the penultimate line uses (17). Q.E.D.

Figure 3. Data for a decorated triangle \(\{ijk\}\in F\)

**Remark 2.11**.: One can refer to Glickenstein [9] for a nice geometric explanation of the result in Lemma 2.10. ### The extended energy function and the extended area function There exists a geometric relationship between the decorated triangle \(\{ijk\}\) and the geometry of hyperbolic polyhedra in \(3\)-dimensional hyperbolic space. Specifically, there is a generalized hyperbolic tetrahedron in \(\mathbb{H}^{3}\) with one ideal vertex and three hyper-ideal vertices corresponding to a decorated triangle \(\{ijk\}\). Please refer to [2] for more details on this fact. Springborn [22] found the following explicit formula for the truncated volume \(\mathrm{Vol}(ijk)\) of this generalized hyperbolic tetrahedron \[2\mathrm{Vol}(ijk)= \mathbb{L}(\theta_{jk}^{i})+\mathbb{L}(\theta_{ki}^{j})+\mathbb{L}(\theta_{ij}^{k}) \tag{19}\] \[+\mathbb{L}(\frac{\pi+\alpha_{ki}^{j}+\alpha_{ij}^{k}-\theta_{jk}^{i}}{2})+\mathbb{L}(\frac{\pi+\alpha_{ki}^{j}-\alpha_{ij}^{k}-\theta_{jk}^{i}}{2})\] \[+\mathbb{L}(\frac{\pi-\alpha_{ki}^{j}+\alpha_{ij}^{k}-\theta_{jk}^{i}}{2})+\mathbb{L}(\frac{\pi-\alpha_{ki}^{j}-\alpha_{ij}^{k}-\theta_{jk}^{i}}{2})\] \[+\mathbb{L}(\frac{\pi+\alpha_{jk}^{i}+\alpha_{ij}^{k}-\theta_{ki}^{j}}{2})+\mathbb{L}(\frac{\pi+\alpha_{jk}^{i}-\alpha_{ij}^{k}-\theta_{ki}^{j}}{2})\] \[+\mathbb{L}(\frac{\pi-\alpha_{jk}^{i}+\alpha_{ij}^{k}-\theta_{ki}^{j}}{2})+\mathbb{L}(\frac{\pi-\alpha_{jk}^{i}-\alpha_{ij}^{k}-\theta_{ki}^{j}}{2})\] \[+\mathbb{L}(\frac{\pi+\alpha_{jk}^{i}+\alpha_{ki}^{j}-\theta_{ij}^{k}}{2})+\mathbb{L}(\frac{\pi+\alpha_{jk}^{i}-\alpha_{ki}^{j}-\theta_{ij}^{k}}{2})\] \[+\mathbb{L}(\frac{\pi-\alpha_{jk}^{i}+\alpha_{ki}^{j}-\theta_{ij}^{k}}{2})+\mathbb{L}(\frac{\pi-\alpha_{jk}^{i}-\alpha_{ki}^{j}-\theta_{ij}^{k}}{2}),\] where \[\mathbb{L}(x)=-\int_{0}^{x}\log|2\sin(t)|dt \tag{20}\] is Milnor's Lobachevsky function. Milnor's Lobachevsky function is bounded, odd, \(\pi\)-periodic and smooth except at integer multiples of \(\pi\). Please refer to [20, 21] for more information on Milnor's Lobachevsky function \(\mathbb{L}(x)\).
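Before introducing the variational machinery, here is a small numerical sketch (ours, not part of the original text; the radii and inversive distances are arbitrary sample values with \(I>1\)) of the quantities from Section 2.2: it computes \(d_{ij}\) from (11), \(h^{k}_{ij}\) from (12) and the signed areas \(A^{jk}_{i}\) from (9), and checks that the three corner areas of a decorated triangle sum to \(2A_{ijk}\). The factor of two occurs because each of the three cells of the decomposition is adjacent to exactly two vertices; it is also consistent with differentiating \(A_{ijk}(u+c\mathbf{1})=e^{2c}A_{ijk}(u)\) at \(c=0\) using Lemma 2.10.

```python
import math

# A sample decorated triangle: radii and inversive distances (I > 1).
r = {'i': 0.5, 'j': 0.8, 'k': 1.1}
I = {('i', 'j'): 2.0, ('j', 'k'): 3.0, ('k', 'i'): 2.5}

def inv(a, b):
    return I.get((a, b)) or I.get((b, a))

def length(a, b):
    # Edge length from equation (6): l^2 = r_a^2 + r_b^2 + 2 I_ab r_a r_b.
    return math.sqrt(r[a]**2 + r[b]**2 + 2 * inv(a, b) * r[a] * r[b])

def angle(a, b, c):
    # Inner angle theta^a_{bc} at the vertex a, by the law of cosines.
    lab, lac, lbc = length(a, b), length(a, c), length(b, c)
    return math.acos((lab**2 + lac**2 - lbc**2) / (2 * lab * lac))

def d(a, b):
    # Equation (11): signed distance from a to the edge center c_ab.
    return (r[a]**2 + r[a] * r[b] * inv(a, b)) / length(a, b)

def h(a, b, c):
    # Equation (12): signed distance h^c_{ab} from c_ijk to the edge {ab}.
    th = angle(a, b, c)
    return (d(a, c) - d(a, b) * math.cos(th)) / math.sin(th)

def A_corner(a, b, c):
    # Equation (9): signed area A_a^{bc} of the two cells adjacent to a.
    return 0.5 * length(a, b) * h(a, b, c) + 0.5 * length(c, a) * h(c, a, b)

A_ijk = 0.5 * length('i', 'j') * length('j', 'k') * math.sin(angle('j', 'k', 'i'))
corners = A_corner('i', 'j', 'k') + A_corner('j', 'k', 'i') + A_corner('k', 'i', 'j')
print(corners, 2 * A_ijk)   # the two numbers agree
```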
Set \[F_{ijk}(u_{i},u_{j},u_{k})= -2\mathrm{Vol}(ijk)+\theta^{i}_{jk}u_{i}+\theta^{j}_{ki}u_{j}+\theta^{k}_{ij}u_{k} \tag{21}\] \[+(\frac{\pi}{2}-\alpha^{k}_{ij})\lambda_{ij}+(\frac{\pi}{2}-\alpha^{j}_{ki})\lambda_{ki}+(\frac{\pi}{2}-\alpha^{i}_{jk})\lambda_{jk},\] where \(\cosh\lambda_{ij}=I_{ij}\). Then \(\nabla F_{ijk}=(\theta^{i}_{jk},\theta^{j}_{ki},\theta^{k}_{ij})\) and \[F_{ijk}((u_{i},u_{j},u_{k})+c(1,1,1))=F_{ijk}(u_{i},u_{j},u_{k})+c\pi \tag{22}\] for \(c\in\mathbb{R}\). Furthermore, on a decorated PE surface \((S,V,l,r)\) with a weighted Delaunay triangulation \(\mathcal{T}\), Bobenko-Lutz [2] defined the following function \[\mathcal{H}_{\mathcal{T}}(u)=\sum_{\{ijk\}\in F}F_{ijk}(u_{i},u_{j},u_{k})=-2\mathrm{Vol}(P_{h})+\sum_{i\in V}\theta_{i}u_{i}+\sum_{\{ij\}\in E_{\mathcal{T}}}(\pi-\alpha_{ij})\lambda_{ij}, \tag{23}\] where \(P_{h}\) is the convex polyhedral cusp defined by the heights \(h\in\mathbb{R}^{V}\), \(\theta_{i}=\sum_{\{ijk\}\in F_{\mathcal{T}}}\theta^{i}_{jk}\) and \(\alpha_{ij}=\alpha^{k}_{ij}+\alpha^{l}_{ij}\). Note that the function \(\mathcal{H}_{\mathcal{T}}(u)\) defined by (23) differs from its original definition in [2] (Equation 4-9) by some constant. By (22), for \(c\in\mathbb{R}\), we have \[\mathcal{H}_{\mathcal{T}}(u+c\mathbf{1})=\mathcal{H}_{\mathcal{T}}(u)+c|F|\pi.\] Using the function \(\mathcal{H}_{\mathcal{T}}\), we define the following energy function \[\mathcal{E}_{\mathcal{T}}(u)=-\mathcal{H}_{\mathcal{T}}(u)+2\pi\sum_{i\in V}u_{i},\] which is well-defined on \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) with \(\nabla_{u_{i}}\mathcal{E}_{\mathcal{T}}=2\pi-\sum_{\{ijk\}\in F_{\mathcal{T}}}\theta^{i}_{jk}=W_{i}\). Moreover, for \(c\in\mathbb{R}\), we have \[\mathcal{E}_{\mathcal{T}}(u+c\mathbf{1})= -\mathcal{H}_{\mathcal{T}}(u+c\mathbf{1})+2\pi\sum_{i\in V}(u_{i}+c) \tag{24}\] \[= -\mathcal{H}_{\mathcal{T}}(u)-c|F|\pi+2\pi\sum_{i\in V}u_{i}+2c|V|\pi\] \[= \mathcal{E}_{\mathcal{T}}(u)+2c\pi\chi(S),\] where \(2|V|-|F|=2\chi(S)\) is used in the last line. **Theorem 2.12** ([2], Proposition 4.13).: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). The map \[\mathcal{H}:\ \mathbb{R}^{V} \to\mathbb{R}, \tag{25}\] \[u \mapsto\mathcal{H}_{\mathcal{T}}(u)\] is well-defined, concave, and twice continuously differentiable over \(\mathbb{R}^{V}\). Therefore, the function \(\mathcal{E}_{\mathcal{T}}(u)\) defined on \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) can be extended to be \[\mathcal{E}(u)=-\mathcal{H}(u)+2\pi\sum_{i\in V}u_{i}=-\sum_{\{ijk\}\in F}F_{ijk}(u_{i},u_{j},u_{k})+2\pi\sum_{i\in V}u_{i} \tag{26}\] defined on \(\mathbb{R}^{V}\). **Definition 2.13**.: Suppose \((S,V,\mathcal{T})\) is a triangulated surface with a decorated PE metric \((l,r)\). The area function \(A^{\mathcal{T}}_{tot}\) on \((S,V,\mathcal{T})\) is defined to be \[A^{\mathcal{T}}_{tot}:\;\mathcal{C}_{\mathcal{T}}(dist_{S},r)\to\mathbb{R},\hskip 14.226378ptA^{\mathcal{T}}_{tot}(u)=\sum_{\{ijk\}\in F}A_{ijk}(u).\] By Lemma 2.10, we have the following result. **Corollary 2.14**.: The function \(A^{\mathcal{T}}_{tot}\) is an analytic function with \[\frac{\partial A^{\mathcal{T}}_{tot}}{\partial u_{i}}=A_{i}. \tag{27}\] Lemma 2.8 and Corollary 2.14 imply the following result, which shows that the function \(A^{\mathcal{T}}_{tot}\) defined on \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) can be extended.
**Theorem 2.15**.: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). The map \[A_{tot}:\;\mathbb{R}^{V} \to\mathbb{R},\] \[u \mapsto A^{\mathcal{T}}_{tot}(u) \tag{28}\] is well-defined and once differentiable. Proof.: By Corollary 2.14, the function \(A_{tot}\) is once differentiable in the interior of any \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\). At the boundary of \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\), the weighted triangulations induce the same weighted Delaunay tessellation. The conclusion follows from Lemma 2.8. Q.E.D. ## 3. The proof of Theorem 1.5 ### Variational principles with constraints In this subsection, we translate Theorem 1.5 into an optimization problem with inequality constraints by variational principles, which involves the function \(\mathcal{E}\) defined by (26). **Proposition 3.1**.: The set \[\mathcal{A}=\{u\in\mathbb{R}^{V}|A_{tot}(u)\leq 1\}\] is an unbounded closed subset of \(\mathbb{R}^{V}\). Proof.: By Theorem 2.15, the set \(\mathcal{A}\) is a closed subset of \(\mathbb{R}^{V}\). Since \(A_{ijk}((u_{i},u_{j},u_{k})+(c,c,c))=e^{2c}A_{ijk}(u_{i},u_{j},u_{k})\), we have \(A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)\). Then \(A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)\leq 1\) is equivalent to \(c\leq-\frac{1}{2}\log A_{tot}(u)\). This implies that the ray \(\{u+c\mathbf{1}|c\leq-\frac{1}{2}\log A_{tot}(u)\}\) stays in the set \(\mathcal{A}\). Hence the set \(\mathcal{A}\) is unbounded. Q.E.D. According to Proposition 3.1, we have the following result. **Lemma 3.2**.: If \(\chi(S)<0\) and the function \(\mathcal{E}(u)\) attains a minimum in the set \(\mathcal{A}\), then the minimum value point of \(\mathcal{E}(u)\) lies at the boundary of \(\mathcal{A}\), i.e., \[\partial\mathcal{A}=\{u\in\mathbb{R}^{V}|A_{tot}(u)=1\}.\] Furthermore, there exists a decorated PE metric with constant discrete Gaussian curvature \(K\) in the discrete conformal class. Proof.: Suppose the function \(\mathcal{E}(u)\) attains a minimum at \(u\in\mathcal{A}\). Take \(c_{0}=-\frac{1}{2}\log A_{tot}(u)\); then \(c_{0}\geq 0\) since \(A_{tot}(u)\leq 1\). By the proof of Proposition 3.1, \(u+c_{0}\mathbf{1}\in\mathcal{A}\). Hence, by the additive property of the function \(\mathcal{E}\) in (24), we have \[\mathcal{E}(u)\leq\mathcal{E}(u+c_{0}\mathbf{1})=\mathcal{E}(u)+2c_{0}\pi\chi(S).\] This implies \(c_{0}\leq 0\) by \(\chi(S)<0\). Then \(c_{0}=0\) and \(A_{tot}(u)=1\). Therefore, the minimum value point of \(\mathcal{E}(u)\) lies in the set \(\partial\mathcal{A}=\{u\in\mathbb{R}^{V}|A_{tot}(u)=1\}\). The conclusion follows from the following claim. **Claim :** Up to scaling, the decorated PE metrics with constant discrete Gaussian curvature \(K\) in the discrete conformal class are in one-to-one correspondence with the critical points of the function \(\mathcal{E}(u)\) under the constraint \(A_{tot}(u)=1\). We use the method of Lagrange multipliers to prove this claim. Set \[G(u,\mu)=\mathcal{E}(u)-\mu(A_{tot}(u)-1),\] where \(\mu\in\mathbb{R}\) is a Lagrange multiplier.
If \(u\) is a critical point of the function \(\mathcal{E}\) under the constraint \(A_{tot}(u)=1\), then by (27) and the fact \(\nabla_{u_{i}}\mathcal{E}=W_{i}\), we have \[0=\frac{\partial G(u,\mu)}{\partial u_{i}}=\frac{\partial\mathcal{E}(u)}{\partial u_{i}}-\mu\frac{\partial A_{tot}(u)}{\partial u_{i}}=W_{i}-\mu A_{i}.\] This implies \[W_{i}=\mu A_{i}.\] The angle defect \(W\) defined by (1) satisfies the following discrete Gauss-Bonnet formula \[\sum_{i\in V}W_{i}=2\pi\chi(S).\] Moreover, \(\sum_{i\in V}A_{i}=2A_{tot}\), which follows from Corollary 2.14 and differentiating the scaling relation \(A_{tot}(u+c\mathbf{1})=e^{2c}A_{tot}(u)\) at \(c=0\). Hence we have \[2\pi\chi(S)=\sum_{i\in V}W_{i}=\mu\sum_{i\in V}A_{i}=2\mu A_{tot}=2\mu\] under the constraint \(A_{tot}(u)=1\). Therefore, the discrete Gaussian curvature \[K_{i}=\frac{W_{i}}{A_{i}}=\mu=\pi\chi(S)\] for any \(i\in V\). Q.E.D. ### Reduction to Theorem 3.4 By Lemma 3.2, we just need to prove that the function \(\mathcal{E}(u)\) attains its minimum in the set \(\mathcal{A}\). Recall the following classical result from calculus. **Theorem 3.3**.: Let \(A\subseteq\mathbb{R}^{m}\) be a closed set and \(f:A\to\mathbb{R}\) be a continuous function. If every unbounded sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) in \(A\) has a subsequence \(\{u_{n_{k}}\}_{k\in\mathbb{N}}\) such that \(\lim_{k\to+\infty}f(u_{n_{k}})=+\infty\), then \(f\) attains a minimum in \(A\). One can refer to [16] (Section 4.1) for a proof of Theorem 3.3. Most of the conditions in Theorem 3.3 are already verified: the set \(\mathcal{A}\) is a closed subset of \(\mathbb{R}^{V}\) by Proposition 3.1, and the function \(\mathcal{E}\) is continuous by Theorem 2.12. To prove Theorem 1.5, we just need to prove the following theorem. **Theorem 3.4**.: If \(\chi(S)<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded sequence in \(\mathcal{A}\), then there exists a subsequence \(\{u_{n_{k}}\}_{k\in\mathbb{N}}\) of \(\{u_{n}\}_{n\in\mathbb{N}}\) such that \(\lim_{k\to+\infty}\mathcal{E}(u_{n_{k}})=+\infty\). ### Behaviour of sequences of discrete conformal factors Let \(\{u_{n}\}_{n\in\mathbb{N}}\) be an unbounded sequence in \(\mathbb{R}^{V}\). Denote its coordinate sequence at \(j\in V\) by \(\{u_{j,n}\}_{n\in\mathbb{N}}\). Motivated by [17], we call a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) with the following properties a "good" sequence. **(1):**: It lies in one cell \(\mathcal{C}_{\mathcal{T}}(dist_{S},r)\) of \(\mathbb{R}^{V}\); **(2):**: There exists a vertex \(i^{*}\in V\) such that \(u_{i^{*},n}\leq u_{j,n}\) for all \(j\in V\) and \(n\in\mathbb{N}\); **(3):**: Each coordinate sequence \(\{u_{j,n}\}_{n\in\mathbb{N}}\) either converges, diverges properly to \(+\infty\), or diverges properly to \(-\infty\); **(4):**: For any \(j\in V\), the sequence \(\{u_{j,n}-u_{i^{*},n}\}_{n\in\mathbb{N}}\) either converges or diverges properly to \(+\infty\). By Lemma 2.7, it is obvious that every sequence of discrete conformal factors in \(\mathbb{R}^{V}\) possesses a "good" subsequence. Hence, a "good" sequence can be chosen without loss of generality. In the following arguments, we use the following notations \[l^{n}_{ij}=\sqrt{r^{2}_{i,n}+r^{2}_{j,n}+2I_{ij}r_{i,n}r_{j,n}}, \tag{29}\] \[r_{i,n}=e^{u_{i,n}}r_{i}, \tag{30}\] \[(l^{n}_{ij})^{2}=(e^{2u_{i,n}}-e^{u_{i,n}+u_{j,n}})r^{2}_{i}+(e^{2u_{j,n}}-e^{u_{i,n}+u_{j,n}})r^{2}_{j}+e^{u_{i,n}+u_{j,n}}l^{2}_{ij}. \tag{31}\] For a decorated triangle \(\{ijk\}\in F\) in \((S,V,\mathcal{T})\), set \[\mathcal{C}_{ijk}=\{(u_{i},u_{j},u_{k})\in\mathbb{R}^{3}|u\in\mathcal{C}_{\mathcal{T}}(dist_{S},r)\}. \tag{32}\]
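The degeneration mechanism behind Lemma 3.5 below can be seen numerically. In the following minimal sketch (ours, not part of the original text; all numbers are arbitrary sample values), two of the discrete conformal factors of a decorated triangle blow up while the third stays bounded, and the triangle inequality for the lengths (29)-(31) eventually fails, so such a sequence must leave \(\mathcal{C}_{ijk}\):

```python
import math

def edge(r_a, r_b, I_ab, u_a, u_b):
    # Equations (30) and (29): r_{a,n} = e^{u_a} r_a and
    # l^2 = r_{a,n}^2 + r_{b,n}^2 + 2 I_ab r_{a,n} r_{b,n}.
    ra, rb = math.exp(u_a) * r_a, math.exp(u_b) * r_b
    return math.sqrt(ra**2 + rb**2 + 2 * I_ab * ra * rb)

r_i = r_j = r_k = 1.0
I_ij = I_jk = I_ki = 2.0      # separated vertex-circles: I > 1
u_k = 0.0
for u in [0.0, 1.0, 3.0, 6.0]:   # u_i = u_j = u --> +infinity
    l_ij = edge(r_i, r_j, I_ij, u, u)
    l_jk = edge(r_j, r_k, I_jk, u, u_k)
    l_ki = edge(r_k, r_i, I_ki, u_k, u)
    print(u, l_ij > l_jk + l_ki)
# prints False for u = 0.0, 1.0 and True for u = 3.0, 6.0:
# for large u the triangle inequality l_ij < l_jk + l_ki fails
```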
Let \((u_{i,n},u_{j,n},u_{k,n})_{n\in\mathbb{N}}\) be a coordinate sequence in \(\mathcal{C}_{ijk}\). Then the edge lengths \(l^{n}_{ij},l^{n}_{jk},l^{n}_{ki}\) satisfy the triangle inequalities for all \(n\in\mathbb{N}\). **Lemma 3.5**.: There exists no sequence in \(\mathcal{C}_{ijk}\) such that as \(n\to+\infty\), \[u_{r,n}\to+\infty,\quad u_{s,n}\to+\infty,\quad u_{t,n}\leq C,\] where \(\{r,s,t\}=\{i,j,k\}\) and \(C\) is a constant. Proof.: Without loss of generality, we assume \(\lim u_{i,n}=+\infty\), \(\lim u_{j,n}=+\infty\) and \(u_{k,n}\leq C_{1}\) for all \(n\). The equality (30) implies \(\lim r_{i,n}=+\infty\), \(\lim r_{j,n}=+\infty\) and \(r_{k,n}\leq C_{2}\) for all \(n\). Here \(C_{1},C_{2}\) are constants. By (29), we have \[(l^{n}_{jk}+l^{n}_{ki})^{2}= r^{2}_{i,n}+r^{2}_{j,n}+2r^{2}_{k,n}+2I_{jk}r_{j,n}r_{k,n}+2I_{ki}r_{k,n}r_{i,n}\] \[+2\sqrt{(r^{2}_{j,n}+r^{2}_{k,n}+2I_{jk}r_{j,n}r_{k,n})(r^{2}_{k,n}+r^{2}_{i,n}+2I_{ki}r_{k,n}r_{i,n})}.\] Note that \(I_{ij}>1\); then \[\lim\frac{r^{2}_{k,n}+I_{jk}r_{j,n}r_{k,n}+I_{ki}r_{k,n}r_{i,n}+\sqrt{(r^{2}_{j,n}+r^{2}_{k,n}+2I_{jk}r_{j,n}r_{k,n})(r^{2}_{k,n}+r^{2}_{i,n}+2I_{ki}r_{k,n}r_{i,n})}}{I_{ij}r_{i,n}r_{j,n}}=\frac{1}{I_{ij}}<1.\] Therefore, there exists \(n\in\mathbb{N}\) such that \((l^{n}_{ij})^{2}=r^{2}_{i,n}+r^{2}_{j,n}+2I_{ij}r_{i,n}r_{j,n}>(l^{n}_{jk}+l^{n}_{ki})^{2}\), i.e., \(l^{n}_{ij}>l^{n}_{jk}+l^{n}_{ki}\). This contradicts the triangle inequality \(l^{n}_{ij}<l^{n}_{jk}+l^{n}_{ki}\). Q.E.D. Combining Lemma 3.5 and the connectivity of the triangulation \(\mathcal{T}\), we have the following result. **Corollary 3.6**.: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). For any decorated triangle \(\{ijk\}\in F\) in \(\mathcal{T}\), at least two of the three sequences \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\), \((u_{j,n}-u_{i^{*},n})_{n\in\mathbb{N}}\), \((u_{k,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converge. To characterize the function \(F_{ijk}(u_{i},u_{j},u_{k})\) in (21), we need the following lemmas. **Lemma 3.7**.: Assume that the sequence \((u_{i,n})_{n\in\mathbb{N}}\) diverges properly to \(+\infty\) and the sequences \((u_{j,n})_{n\in\mathbb{N}}\) and \((u_{k,n})_{n\in\mathbb{N}}\) converge. Then the sequence \((\theta^{i,n}_{jk})_{n\in\mathbb{N}}\) converges to zero. Furthermore, if the sequences \((\theta^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((\theta^{k,n}_{ij})_{n\in\mathbb{N}}\) converge to non-zero constants, then **(1):**: the sequences \((h^{i,n}_{jk})_{n\in\mathbb{N}}\), \((h^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((h^{k,n}_{ij})_{n\in\mathbb{N}}\) converge; **(2):**: the sequences \((\alpha^{i,n}_{jk})_{n\in\mathbb{N}}\), \((\alpha^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((\alpha^{k,n}_{ij})_{n\in\mathbb{N}}\) converge. Proof.: By the assumption, we have \(\lim r_{i,n}=+\infty\), \(\lim r_{j,n}=c_{1}\) and \(\lim r_{k,n}=c_{2}\), where \(c_{1},c_{2}\) are positive constants. The equality (29) implies \[\lim\frac{l^{n}_{ij}}{r_{i,n}}=1,\;\lim\frac{l^{n}_{ki}}{r_{i,n}}=1,\;\lim l^{n}_{jk}=c_{3}, \tag{33}\] where \(c_{3}\) is a positive constant. By the cosine law, we have \[\lim\cos\theta^{i,n}_{jk}=\lim\frac{-(l^{n}_{jk})^{2}+(l^{n}_{ij})^{2}+(l^{n}_{ki})^{2}}{2l^{n}_{ij}l^{n}_{ki}}=1.\] This implies \(\lim\theta^{i,n}_{jk}=0\). Suppose the sequences \((\theta^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((\theta^{k,n}_{ij})_{n\in\mathbb{N}}\) converge to non-zero constants.
Then \[\lim\frac{A^{n}_{ijk}}{r_{i,n}}=\lim\frac{l^{n}_{ij}l^{n}_{jk}\sin\theta^{j,n}_{ki}}{2r_{i,n}}=c_{4} \tag{34}\] for some constant \(c_{4}>0\). **(1):** Since \(\kappa_{i}=\frac{1}{r_{i}}\), we have \(\lim\kappa_{i,n}=0\), \(\lim\kappa_{j,n}=\frac{1}{c_{1}}\) and \(\lim\kappa_{k,n}=\frac{1}{c_{2}}\). By (15), we have \[\lim h_{i,n}= \lim(\kappa_{i,n}(1-I^{2}_{jk})+\kappa_{j,n}\gamma_{k}+\kappa_{k,n}\gamma_{j})=c_{5}>0,\] \[\lim h_{j,n}= \lim(\kappa_{j,n}(1-I^{2}_{ki})+\kappa_{i,n}\gamma_{k}+\kappa_{k,n}\gamma_{i})=c_{6},\] \[\lim h_{k,n}= \lim(\kappa_{k,n}(1-I^{2}_{ij})+\kappa_{i,n}\gamma_{j}+\kappa_{j,n}\gamma_{i})=c_{7},\] where \(c_{5},c_{6},c_{7}\) are constants. Note that \(c_{6},c_{7}\) may be non-positive. The equalities (14) and (34) imply \[\lim h^{i,n}_{jk}= \lim\frac{r^{2}_{i,n}r^{2}_{j,n}r^{2}_{k,n}}{2A^{n}_{ijk}l^{n}_{jk}}\kappa_{i,n}h_{i,n}=\frac{c^{2}_{1}c^{2}_{2}c_{5}}{2c_{3}c_{4}}>0,\] \[\lim h^{j,n}_{ki}= \lim\frac{r^{2}_{i,n}r^{2}_{j,n}r^{2}_{k,n}}{2A^{n}_{ijk}l^{n}_{ki}}\kappa_{j,n}h_{j,n}=\frac{c_{1}c^{2}_{2}c_{6}}{2c_{4}},\] \[\lim h^{k,n}_{ij}= \lim\frac{r^{2}_{i,n}r^{2}_{j,n}r^{2}_{k,n}}{2A^{n}_{ijk}l^{n}_{ij}}\kappa_{k,n}h_{k,n}=\frac{c^{2}_{1}c_{2}c_{7}}{2c_{4}}.\] Hence the sequences \((h^{i,n}_{jk})_{n\in\mathbb{N}}\), \((h^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((h^{k,n}_{ij})_{n\in\mathbb{N}}\) converge. **(2):** The equality (11) implies \[\lim d_{jk}^{n}=\lim\frac{r_{j,n}^{2}+r_{j,n}r_{k,n}I_{jk}}{l_{jk}^{n}}=\frac{c_{1}^{2}+c_{1}c_{2}I_{jk}}{c_{3}}>0. \tag{35}\] By (10), we have \[\lim(r_{ijk}^{n})^{2}=\lim[(d_{jk}^{n})^{2}+(h_{jk}^{i,n})^{2}-r_{j,n}^{2}]=c_{8},\] where \(c_{8}\) is a constant. Note that \(h_{jk}^{i}=r_{ijk}\cos\alpha_{jk}^{i}\). Hence, \[\lim\cos\alpha_{jk}^{i,n}= \lim\frac{h_{jk}^{i,n}}{r_{ijk}^{n}}=\frac{c_{1}^{2}c_{2}^{2}c_{5}}{2c_{3}c_{4}\sqrt{c_{8}}}>0,\] \[\lim\cos\alpha_{ki}^{j,n}= \lim\frac{h_{ki}^{j,n}}{r_{ijk}^{n}}=\frac{c_{1}c_{2}^{2}c_{6}}{2c_{4}\sqrt{c_{8}}},\] \[\lim\cos\alpha_{ij}^{k,n}= \lim\frac{h_{ij}^{k,n}}{r_{ijk}^{n}}=\frac{c_{1}^{2}c_{2}c_{7}}{2c_{4}\sqrt{c_{8}}}.\] Then the sequences \((\alpha_{jk}^{i,n})_{n\in\mathbb{N}}\), \((\alpha_{ki}^{j,n})_{n\in\mathbb{N}}\) and \((\alpha_{ij}^{k,n})_{n\in\mathbb{N}}\) converge. Q.E.D. **Lemma 3.8**.: Assume that the sequence \((u_{i,n})_{n\in\mathbb{N}}\) diverges properly to \(+\infty\) and the sequences \((u_{j,n})_{n\in\mathbb{N}}\) and \((u_{k,n})_{n\in\mathbb{N}}\) converge. If the sequence \((\theta_{ki}^{j,n})_{n\in\mathbb{N}}\) converges to zero, then \[\lim h_{jk}^{i,n}=+\infty,\;\lim h_{ki}^{j,n}=+\infty,\;\lim h_{ij}^{k,n}=-\infty.\] Proof.: Lemma 3.7 shows that \(\lim\theta_{jk}^{i,n}=0\); thus \(\lim(\theta_{ki}^{j,n}+\theta_{ij}^{k,n})=\pi\). Since \(\lim\theta_{ki}^{j,n}=0\), we get \(\lim\theta_{ij}^{k,n}=\pi\). Then \[\lim\frac{A_{ijk}^{n}}{r_{i,n}}=\lim\frac{l_{ij}^{n}l_{jk}^{n}\sin\theta_{ki}^{j,n}}{2r_{i,n}}=0. \tag{36}\] By the proof of Lemma 3.7, we have \[\lim h_{jk}^{i,n}=\lim\frac{r_{i,n}^{2}r_{j,n}^{2}r_{k,n}^{2}}{2A_{ijk}^{n}l_{jk}^{n}}\kappa_{i,n}h_{i,n}=\lim\frac{r_{i,n}^{2}c_{1}^{2}c_{2}^{2}}{2A_{ijk}^{n}c_{3}}\cdot\frac{1}{r_{i,n}}c_{5}=+\infty,\] where (36) is used and \(c_{1},c_{2},c_{3},c_{5}\) are positive constants. Similar to (35), we have \[\lim d_{ji}^{n}= \lim\frac{r_{j,n}^{2}+r_{i,n}r_{j,n}I_{ij}}{l_{ij}^{n}}=c_{9},\] \[\lim d_{ki}^{n}= \lim\frac{r_{k,n}^{2}+r_{i,n}r_{k,n}I_{ki}}{l_{ki}^{n}}=c_{10}.\] Here \(c_{9},c_{10}\) are positive constants.
By (10), we have \[(r_{ijk}^{n})^{2} =(d_{jk}^{n})^{2}+(h_{jk}^{i,n})^{2}-r_{j,n}^{2}\] \[=(d_{ji}^{n})^{2}+(h_{ij}^{k,n})^{2}-r_{j,n}^{2}\] \[=(d_{ki}^{n})^{2}+(h_{ki}^{j,n})^{2}-r_{k,n}^{2}.\] This implies \(\lim r_{ijk}^{n}=+\infty\), \(\lim(h_{ij}^{k,n})^{2}=+\infty\) and \(\lim(h_{ki}^{j,n})^{2}=+\infty\). Therefore, we have the following four cases: \((i)\)**:**: \(\lim h_{ij}^{k,n}=+\infty\) and \(\lim h_{ki}^{j,n}=+\infty\); \((ii)\)**:**: \(\lim h^{k,n}_{ij}=-\infty\) and \(\lim h^{j,n}_{ki}=-\infty\); \((iii)\)**:**: \(\lim h^{k,n}_{ij}=+\infty\) and \(\lim h^{j,n}_{ki}=-\infty\); \((iv)\)**:**: \(\lim h^{k,n}_{ij}=-\infty\) and \(\lim h^{j,n}_{ki}=+\infty\). In the case \((i)\), we have \(h^{i,n}_{jk}>0\), \(h^{k,n}_{ij}>0\) and \(h^{j,n}_{ki}>0\) for \(n\) large enough. This implies that the center \(c_{ijk}\) of the face-circle \(C_{ijk}\) lies in the interior of the triangle \(\{ijk\}\) by the definition of \(h^{i}_{jk},h^{k}_{ij},h^{j}_{ki}\). However, in this case, the sequences \((h^{i,n}_{jk})_{n\in\mathbb{N}}\), \((h^{k,n}_{ij})_{n\in\mathbb{N}}\) and \((h^{j,n}_{ki})_{n\in\mathbb{N}}\) are bounded. This is a contradiction. Both the cases \((ii)\) and \((iii)\) imply \(d_{kj}<0\). This contradicts the fact that \(d_{rs}>0\) for any \(\{r,s\}\subseteq\{i,j,k\}\). Indeed, the center \(c_{ijk}\) lies in the red region in Figure 4 in the case \((ii)\) and lies in the blue region in Figure 4 in the case \((iii)\). By projecting the center \(c_{ijk}\) to the line determined by \(\{jk\}\), we have \(d_{kj}<0\). This completes the proof. Q.E.D.

Figure 4. Domain of the center \(c_{ijk}\) in the decorated triangle on a surface

**Remark 3.9**.: Similar to the proof of Lemma 3.8, if the sequence \((\theta^{k,n}_{ij})_{n\in\mathbb{N}}\) converges to zero, then \(\lim h^{i,n}_{jk}=+\infty,\ \lim h^{j,n}_{ki}=-\infty,\ \lim h^{k,n}_{ij}=+\infty\). Consider a star-shaped \(s\)-sided polygon in the marked surface with boundary vertices \(1,\cdots,s\) ordered cyclically (vertex \(s+1\) is identified with vertex \(1\)). Please refer to Figure 5. Let \(i\in V\) be a vertex such that the sequence \((u_{i,n})_{n\in\mathbb{N}}\) diverges properly to \(+\infty\) and the sequences \((u_{j,n})_{n\in\mathbb{N}}\) converge for \(j\sim i\).

Figure 5. A star triangulation of a star-shaped \(s\)-sided polygon

**Lemma 3.10**.: The sequences of inner angles at the boundary vertices in the triangles of a star-shaped polygon converge to non-zero constants. Proof.: Recall that \(\lim u_{i,n}=+\infty\) and the sequences \((u_{j,n})_{n\in\mathbb{N}}\) converge for \(j\sim i\). By Lemma 3.7, for any \(j=1,...,s\), we have \(\lim\theta^{i,n}_{j-1,j}=0\) and hence \(\lim(\theta^{j-1,n}_{i,j}+\theta^{j,n}_{i,j-1})=\pi\). We prove the result by contradiction. Without loss of generality, we assume \(\lim\theta^{j-1,n}_{i,j}=\pi\) and \(\lim\theta^{j,n}_{i,j-1}=0\) in the triangle \(\{i,j-1,j\}\). Then for \(n\) large enough, we have \[l^{n}_{i,j-1}<l^{n}_{i,j}.\] By Lemma 3.8, we have \(\lim h^{j,n}_{i,j-1}=+\infty,\ \lim h^{j-1,n}_{i,j}=-\infty,\ \lim h^{i,n}_{j-1,j}=+\infty\). Since the edge \(\{i,j\}\) is weighted Delaunay, (8) gives \[h^{j-1,n}_{i,j}+h^{j+1,n}_{i,j}\geq 0.\] This implies \(\lim h^{j+1,n}_{i,j}=+\infty\). In the triangle \(\{i,j,j+1\}\), suppose the sequences \((\theta^{j,n}_{i,j+1})_{n\in\mathbb{N}}\) and \((\theta^{j+1,n}_{i,j})_{n\in\mathbb{N}}\) converge to non-zero constants. By Lemma 3.7, the sequences \((h^{j+1,n}_{i,j})_{n\in\mathbb{N}}\) and \((h^{j,n}_{i,j+1})_{n\in\mathbb{N}}\) converge. This contradicts \(\lim h^{j+1,n}_{i,j}=+\infty\).
Hence the sequence \((\theta^{j,n}_{i,j+1})_{n\in\mathbb{N}}\) or \((\theta^{j+1,n}_{i,j})_{n\in\mathbb{N}}\) converges to zero. By Lemma 3.8 and Remark 3.9, we have \(\lim\theta^{j,n}_{i,j+1}=\pi\), \(\lim\theta^{j+1,n}_{i,j}=0\) and \(\lim h^{j,n}_{i,j+1}=-\infty\). Then for \(n\) large enough, we have \[l^{n}_{i,j}<l^{n}_{i,j+1}.\] Please refer to Figure 5. By induction, for \(n\) large enough, we have \[l^{n}_{i,j-1}<l^{n}_{i,j}<l^{n}_{i,j+1}<l^{n}_{i,j+2}<...<l^{n}_{i,j-1}.\] This is a contradiction. Q.E.D. Combining (21), Lemma 3.7 and Lemma 3.10, we have the following result. **Corollary 3.11**.: Assume that the sequence \((u_{i,n})_{n\in\mathbb{N}}\) diverges properly to \(+\infty\) and the sequences \((u_{j,n})_{n\in\mathbb{N}}\) and \((u_{k,n})_{n\in\mathbb{N}}\) converge. Then the sequence \((F_{ijk}(u_{i,n},u_{j,n},u_{k,n}))_{n\in\mathbb{N}}\) converges. Proof.: By the definition of \(F_{ijk}(u_{i},u_{j},u_{k})\) in (21), we have \[F_{ijk}(u_{i,n},u_{j,n},u_{k,n})= -2\mathrm{Vol}^{n}(ijk)+\theta^{i,n}_{jk}u_{i,n}+\theta^{j,n}_{ki}u_{j,n}+\theta^{k,n}_{ij}u_{k,n}+(\frac{\pi}{2}-\alpha^{k,n}_{ij})\lambda_{ij}+(\frac{\pi}{2}-\alpha^{j,n}_{ki})\lambda_{ki}+(\frac{\pi}{2}-\alpha^{i,n}_{jk})\lambda_{jk}.\] Combining Lemma 3.7 and Lemma 3.10 gives that \(\lim\theta^{i,n}_{jk}=0\), the sequences \((\theta^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((\theta^{k,n}_{ij})_{n\in\mathbb{N}}\) converge to non-zero constants, and the sequences \((\alpha^{i,n}_{jk})_{n\in\mathbb{N}}\), \((\alpha^{j,n}_{ki})_{n\in\mathbb{N}}\) and \((\alpha^{k,n}_{ij})_{n\in\mathbb{N}}\) converge. Combining the continuity of Milnor's Lobachevsky function defined by (20) and the definition of the truncated volume \(\mathrm{Vol}(ijk)\) defined by (19), we have that the sequence \((\mathrm{Vol}^{n}(ijk))_{n\in\mathbb{N}}\) converges. Note that \(\lambda_{ij}=\operatorname{arccosh}I_{ij}\) is a constant, and similarly for \(\lambda_{ki}\) and \(\lambda_{jk}\); hence the terms involving \(\lambda\) converge. The terms \(\theta^{j,n}_{ki}u_{j,n}\) and \(\theta^{k,n}_{ij}u_{k,n}\) converge since both factors converge. It remains to control the term \(\theta^{i,n}_{jk}u_{i,n}\). Since \(\lim\theta^{i,n}_{jk}=0\) and \(u_{i,n}=\log\frac{r_{i,n}}{r_{i}}\) by (30), we have \[\begin{split}\lim\theta^{i,n}_{jk}u_{i,n}&=\lim\sin\theta^{i,n}_{jk}\cdot u_{i,n}\\ &=\lim\frac{2A^{n}_{ijk}}{l^{n}_{ij}l^{n}_{ki}}\cdot u_{i,n}=\lim\frac{2c_{4}}{r_{i,n}}u_{i,n}\\ &=\lim\frac{2c_{4}}{r_{i,n}}\log\frac{r_{i,n}}{r_{i}}=0,\end{split}\] where the equalities (33) and (34) are used in the second line and \(\lim_{x\rightarrow+\infty}\frac{1}{x}\log x=0\) is used in the third line. Therefore, \(\lim F_{ijk}(u_{i,n},u_{j,n},u_{k,n})=c_{11}\) for some constant \(c_{11}\). Q.E.D. The following lemma gives an asymptotic expression of the function \(\mathcal{E}\).
**Lemma 3.12**.: There exists a convergent sequence \(\{D_{n}\}_{n\in\mathbb{N}}\) such that the function \(\mathcal{E}\) satisfies \[\mathcal{E}(u_{n})=D_{n}+2\pi\left(u_{i^{*},n}\chi(S)+\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right).\] Proof.: By (26), we have \[\mathcal{E}(u_{n})= -\sum_{\{ijk\}\in F}F_{ijk}(u_{i,n},u_{j,n},u_{k,n})+2\pi\sum_{j\in V}u_{j,n}\] \[= -\sum_{\{ijk\}\in F}F_{ijk}((u_{i,n},u_{j,n},u_{k,n})-u_{i^{*},n}(1,1,1))-\pi|F|u_{i^{*},n}+2\pi\sum_{j\in V}u_{j,n}\] \[= D_{n}-\pi(2|V|-2\chi(S))u_{i^{*},n}+2\pi\sum_{j\in V}u_{j,n}\] \[= D_{n}+2\pi\left(u_{i^{*},n}\chi(S)+\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right),\] where \(D_{n}=-\sum_{\{ijk\}\in F}F_{ijk}((u_{i,n},u_{j,n},u_{k,n})-u_{i^{*},n}(1,1,1))\), equation (22) is used in the second line and \(2|V|-|F|=2\chi(S)\) is used in the third line. The sequence \(\{D_{n}\}_{n\in\mathbb{N}}\) converges by Corollary 3.6 and Corollary 3.11. Q.E.D. The following lemma gives the influence of the sequence \((u_{n})_{n\in\mathbb{N}}\) on the area \(A_{ijk}\) of a decorated triangle \(\{ijk\}\). **Lemma 3.13**.: For a discrete conformal factor \(u\in\mathbb{R}^{V}\), let \(\mathcal{T}\) be a weighted Delaunay triangulation of the decorated PE surface \((S,V,dist_{S}(u),r(u))\). Assume the sequences \((u_{j,n}-u_{i^{*},n})_{n\in\mathbb{N}}\), \((u_{k,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converge for \(\{ijk\}\) in \(\mathcal{T}\) with edge lengths \(l_{ij}^{n},l_{jk}^{n},l_{ki}^{n}\). **(a):**: If \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converges, there exists a convergent sequence of real numbers \((C_{n})_{n\in\mathbb{N}}\) such that \[\log A_{ijk}^{n}=C_{n}+2u_{i^{*},n}. \tag{37}\] **(b):**: If \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) diverges to \(+\infty\), there exists a convergent sequence of real numbers \((C_{n})_{n\in\mathbb{N}}\) such that \[\log A_{ijk}^{n}=C_{n}+u_{i,n}+u_{i^{*},n}. \tag{38}\] Proof.: Applying (31) to \(A_{ijk}=\frac{1}{2}l_{ij}l_{jk}\sin\theta_{ki}^{j}\) gives \[A_{ijk}^{n}= \frac{1}{2}l_{ij}^{n}l_{jk}^{n}\sin\theta_{ki}^{j,n}\] \[= \frac{1}{2}\sin\theta_{ki}^{j,n}\sqrt{r_{i}^{2}e^{2u_{i,n}}+r_{j}^{2}e^{2u_{j,n}}+(l_{ij}^{2}-r_{i}^{2}-r_{j}^{2})e^{(u_{i,n}+u_{j,n})}}\] \[\times\sqrt{r_{j}^{2}e^{2u_{j,n}}+r_{k}^{2}e^{2u_{k,n}}+(l_{jk}^{2}-r_{j}^{2}-r_{k}^{2})e^{(u_{j,n}+u_{k,n})}}.\] Then \[\log A^{n}_{ijk}= \log(\frac{1}{2}\sin\theta^{j,n}_{ki})+2u_{i^{*},n}\] \[+\frac{1}{2}\log(r_{i}^{2}e^{2(u_{i,n}-u_{i^{*},n})}+r_{j}^{2}e^{2(u_{j,n}-u_{i^{*},n})}+(l_{ij}^{2}-r_{i}^{2}-r_{j}^{2})e^{(u_{i,n}-u_{i^{*},n})+(u_{j,n}-u_{i^{*},n})})\] \[+\frac{1}{2}\log(r_{j}^{2}e^{2(u_{j,n}-u_{i^{*},n})}+r_{k}^{2}e^{2(u_{k,n}-u_{i^{*},n})}+(l_{jk}^{2}-r_{j}^{2}-r_{k}^{2})e^{(u_{j,n}-u_{i^{*},n})+(u_{k,n}-u_{i^{*},n})})\] \[= \log(\frac{1}{2}\sin\theta^{j,n}_{ki})+u_{i,n}+u_{i^{*},n}\] \[+\frac{1}{2}\log(r_{i}^{2}+r_{j}^{2}e^{2(u_{j,n}-u_{i^{*},n})-2(u_{i,n}-u_{i^{*},n})}+(l_{ij}^{2}-r_{i}^{2}-r_{j}^{2})e^{-(u_{i,n}-u_{i^{*},n})+(u_{j,n}-u_{i^{*},n})})\] \[+\frac{1}{2}\log(r_{j}^{2}e^{2(u_{j,n}-u_{i^{*},n})}+r_{k}^{2}e^{2(u_{k,n}-u_{i^{*},n})}+(l_{jk}^{2}-r_{j}^{2}-r_{k}^{2})e^{(u_{j,n}-u_{i^{*},n})+(u_{k,n}-u_{i^{*},n})}).\] If the sequence \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converges, then \(\log A^{n}_{ijk}=C_{n}+2u_{i^{*},n}\). If the sequence \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) diverges to \(+\infty\), then the sequence \((\theta^{j,n}_{ki})_{n\in\mathbb{N}}\) converges to a non-zero constant in \((0,\pi)\) by Lemma 3.10. This implies \(\log A^{n}_{ijk}=C_{n}+u_{i,n}+u_{i^{*},n}\).
In both cases, the sequence \((C_{n})_{n\in\mathbb{N}}\) converges. Q.E.D. ### Proof of Theorem 3.4 Suppose \(\chi(S)<0\) and \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded sequence in \(\mathcal{A}\). Passing to a subsequence, we may assume that \(\{u_{n}\}_{n\in\mathbb{N}}\) is an unbounded "good" sequence. Combining \(\chi(S)<0\) and Lemma 3.12, we just need to prove that \(\lim_{n\to+\infty}u_{i^{*},n}=-\infty\). By the definition of a "good" sequence, the sequence \(\left(\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right)_{n\in\mathbb{N}}\) converges to a finite number or diverges properly to \(+\infty\). If \(\left(\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right)_{n\in\mathbb{N}}\) converges to a finite number, then the sequence \((u_{j,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converges for all \(j\in V\). Since the sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) lies in \(\mathcal{A}\), the area \(A_{ijk}\) of each triangle is bounded from above. This implies that \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) is bounded from above by (37). Then \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges to a finite number or diverges properly to \(-\infty\). Suppose \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) converges to a finite number. Since \((u_{j,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converges for all \(j\in V\), the sequences \(\{u_{j,n}\}_{n\in\mathbb{N}}\) are bounded for all \(j\in V\), which implies \(\{u_{n}\}_{n\in\mathbb{N}}\) is bounded. This contradicts the assumption that \(\{u_{n}\}_{n\in\mathbb{N}}\) is unbounded. Therefore, the sequence \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). If \(\left(\sum_{j\in V}(u_{j,n}-u_{i^{*},n})\right)_{n\in\mathbb{N}}\) diverges properly to \(+\infty\), then there exists at least one vertex \(i\in V\) such that the sequence \((u_{i,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) diverges properly to \(+\infty\). By Corollary 3.6, the sequences \((u_{j,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) and \((u_{k,n}-u_{i^{*},n})_{n\in\mathbb{N}}\) converge for \(j\sim i\) and \(k\sim i\). Since the area \(A_{ijk}\) of each triangle is bounded from above, we have \(u_{i,n}+u_{i^{*},n}\leq C\) and \(u_{j,n}+u_{i^{*},n}\leq C\) by (38), where \(C\) is a constant. Then \((u_{i,n}-u_{i^{*},n})+2u_{i^{*},n}\leq C\). This implies \(\{u_{i^{*},n}\}_{n\in\mathbb{N}}\) diverges properly to \(-\infty\). Q.E.D. **Remark 3.14**.: For the case \(\chi(S)>0\), Kourimska [16, 17] proved the existence of PE metrics with constant discrete Gaussian curvatures. However, we cannot obtain similar results. The main difference is that the edge length defined by (5) involves square terms of the discrete conformal factors, such as \(e^{2u_{i}}\), while the edge length defined by the vertex scalings only involves mixed products of first order terms, i.e., \(e^{u_{i}+u_{j}}\). Indeed, in this case, we can define the set \(\mathcal{A}_{+}=\{u\in\mathbb{R}^{V}|A_{tot}(u)\geq 1\}\), which is an unbounded closed subset of \(\mathbb{R}^{V}\). Under the conditions that \(\chi(S)>0\) and the function \(\mathcal{E}(u)\) attains a minimum in the set \(\mathcal{A}_{+}\), Lemma 3.2 still holds. Using Theorem 3.3, we just need to prove Theorem 3.4 under the condition \(\chi(S)>0\). However, we cannot get a good asymptotic expression of the area \(A_{ijk}\). The asymptotic expression of the area \(A_{ijk}\) in (38) involves \(u_{i,n}+u_{i^{*},n}\), which is not enough for this case.
2309.15006
Global viscosity solutions to Lorentzian eikonal equation on globally hyperbolic space-times
In this paper, we show that any globally hyperbolic space-time admits at least one globally defined distance-like function, which is a viscosity solution to the Lorentzian eikonal equation. According to whether the time orientation is changed, we divide the set of viscosity solutions into some subclasses. We show that if the time orientation is consistent, then a viscosity solution has a variational representation locally. As a result, such a viscosity solution is locally semiconcave, as in the Riemannian case. Also, if the time orientation of a viscosity solution is non-consistent, we analyse its peculiar properties, which make this kind of viscosity solutions totally different from the ones where the Hamiltonians are convex.
Siyao Zhu, Hongguang Wu, Xiaojun Cui
2023-09-26T15:26:02Z
http://arxiv.org/abs/2309.15006v4
# Global viscosity solutions to Lorentzian eikonal equation on globally hyperbolic space-times ###### Abstract In this paper, we show that any globally hyperbolic space-time admits at least one globally defined distance-like function, which is a viscosity solution to the Lorentzian eikonal equation. According to whether the time orientation is changed, we divide the set of viscosity solutions into some subclasses. We show that if the time orientation is consistent, then a viscosity solution has a variational representation locally. As a result, such a viscosity solution is locally semiconcave, as in the Riemannian case. Also, if the time orientation of a viscosity solution is non-consistent, we analyse its peculiar properties, which make this kind of viscosity solutions totally different from the ones in the Riemannian case. keywords: Global viscosity solution, Lorentzian eikonal equation, weak KAM theory ## 1 Introduction Since the landmark work of Penrose [15, 16], the study of Lorentzian causality theory has attracted many researchers in recent decades. In this theory, the Lorentzian geodesic flow is a central research object. The eikonal equation, which can be regarded as a special Hamilton-Jacobi equation, plays an important role in the study of the Lorentzian geodesic flow. Actually, if the eikonal equation admits a smooth solution, the integral curves of the gradient field are geodesics; see, e.g., [2, Lemma B.1]. Unfortunately, smooth solutions for such partial differential equations are usually absent. In this situation, viscosity solutions, introduced by Crandall and Lions [5], show clear advantages over other weak solutions (e.g., viscosity solutions usually enjoy better well-posedness for the Dirichlet problem of partial differential equations). On the other hand, a field closely related to viscosity solutions to the Hamilton-Jacobi equation induced by a Tonelli Lagrangian is the so-called weak KAM theory. Weak KAM theory, introduced by Fathi [10], establishes a connection between Aubry-Mather theory for a Tonelli Lagrangian and the viscosity solution theory of the associated Hamilton-Jacobi equation. Informally, backward action-minimizing curves, the main object in Aubry-Mather theory, are exactly calibrated curves of some viscosity solutions. In other words, viscosity solutions are nothing but the weak KAM solutions of backward type. This can give an explicit variational representation for global viscosity solutions to the Hamilton-Jacobi equation. For example, by [4], the viscosity solutions of the eikonal equation on a non-compact complete Riemannian manifold are exactly the so-called dl (distance-like) functions. Such a variational representation can provide us with geometric intuition for the gradient dynamics of the viscosity solutions. For some recent works on weak KAM theory, Lorentzian geodesic flows and Lorentzian Aubry-Mather theory, we refer interested readers to [2, 4, 7, 9, 11, 14, 18, 19] and the references therein. Inspired by these works, it is interesting to construct a space-time counterpart of weak KAM theory for viscosity solutions to the eikonal equation. However, classical weak KAM theory depends heavily on the positive-definiteness of the Lagrangian. On space-times, the non-positive-definiteness of the Lorentzian metric makes the problem complicated and challenging. Furthermore, non-completeness is a common phenomenon for space-times, which makes things worse. As far as we know, there are only a few works on this topic.
Cui and Jin [7] study space-times with a regular cosmological time function \(\tau\) and show that \(-\tau\) is a viscosity solution to the Lorentzian eikonal equation. Jin and Cui [14] establish the existence of global viscosity solutions to the Lorentzian eikonal equation on the Abelian cover of a class A Lorentzian 2-torus. In [3], Bernard and Suhr propose a cone field theory for generalized space-times and establish the equivalence between global hyperbolicity and the existence of a steep temporal function, where the steep temporal function is a smooth (hence viscosity) subsolution to the Lorentzian eikonal equation in our setting.

It is not surprising that viscosity solutions on a Riemannian manifold and on a space-time may have different properties. For example, in the Riemannian case, viscosity solutions are always locally semiconcave, but even on the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\), a viscosity solution may not be locally semiconcave. Actually, it is easy to check that \(f(x,y)=|x|\) is a viscosity solution to the eikonal equation in the 2-dimensional Minkowski space-time. Obviously, \(f\) is not locally semiconcave in any neighbourhood of \(x=0\).

In this paper, we will give a much more complete theory of viscosity solutions of the eikonal equation on a globally hyperbolic space-time. More precisely, we classify the set of viscosity solutions to the Lorentzian eikonal equation according to whether the time orientation of the solution is consistent. We show that a viscosity solution has a (local) variational representation when the time orientation is consistent. Such a variational representation ensures that the viscosity solution has similar properties to those in the Riemannian case, for instance, local semiconcavity and some properties of weak KAM type. We also study the properties of viscosity solutions when the time orientation is non-consistent. In this case, viscosity solutions exhibit many peculiar properties, which are totally different from the ones in the Riemannian case.

## 2 Preliminaries and main results

We will use classical terminology as in [2, 4, 7, 11, 14]. Let \((M,g)\) be a space-time (i.e., a connected, time-oriented, smooth Lorentzian manifold with the Lorentzian metric \(g\)), where the signature of the Lorentzian metric is \((-,+,\cdots,+)\). A point \(p\in M\) is usually called an event from the viewpoint of general relativity. For each \(p\in M\), let \(T_{p}M\) be the tangent space at \(p\) and \(TM\) be the tangent bundle of \(M\). A tangent vector \(V\in T_{p}M\) is called timelike, spacelike or lightlike if \(g(V,V)<0\), \(g(V,V)>0\) or \(g(V,V)=0\) (\(V\neq 0\)), respectively. Also, a tangent vector \(V\in T_{p}M\) is called causal if \(V\) is either timelike or lightlike, and is called non-spacelike if \(g(V,V)\leq 0\). The set of null vectors consists of the zero vector and the lightlike vectors. The time-orientation of \((M,g)\) ensures that \(M\) admits a continuous, nowhere vanishing, timelike vector field \(X\), which is used to separate the causal vectors at each base point into two classes, called past-directed and future-directed respectively. A causal tangent vector \(V\in T_{p}M\) is said to be past-directed (respectively, future-directed) if \(g(X(p),V)>0\) (respectively, \(g(X(p),V)<0\)).
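To fix ideas with these sign conventions, consider the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\); the following elementary check is included only for the reader's convenience, with the choice \(X=\partial_{x}\) (any other choice of time orientation merely exchanges the two classes). Since \(g(\partial_{x},\partial_{x})=-1\), the field \(X=\partial_{x}\) is timelike and fixes a time orientation. For \(V=\partial_{x}\),

\[g(X,V)=-1<0,\]

so \(\partial_{x}\) is future-directed, while for \(V=-\partial_{x}\),

\[g(X,V)=1>0,\]

so \(-\partial_{x}\) is past-directed. Likewise, \(V=\partial_{x}\pm\partial_{y}\) satisfies \(g(V,V)=0\) and \(g(X,V)=-1<0\), so these are future-directed lightlike vectors.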
A piecewise \(C^{1}\) curve \(\gamma:I\to M\) (\(I\subset R\) is an interval of the real line) is called causal, timelike, lightlike, and past-directed or future-directed if the tangent vector \(\dot{\gamma}(s)\) is causal, timelike, lightlike, and past-directed or future-directed at every differentiable point \(\gamma(s)\). For two events \(p,q\in M\), if there exists a future-directed timelike (respectively, future-directed causal or constant) curve from \(p\) to \(q\), we say \(p,q\) are chronologically related (respectively, causally related) and write \(p\ll q\) (respectively, \(p\leq q\)). For a given \(p\in M\), define the chronological past \(I^{-}(p)\), chronological future \(I^{+}(p)\), causal past \(J^{-}(p)\) and causal future \(J^{+}(p)\) as follows:

\[I^{-}(p)=\{q\in M:q\ll p\},\]
\[I^{+}(p)=\{q\in M:p\ll q\},\]
\[J^{-}(p)=\{q\in M:q\leq p\},\]
\[J^{+}(p)=\{q\in M:p\leq q\}.\]

Analogously, for subsets \(A\subseteq M\), we can define \(I^{-}[A]\), \(I^{+}[A]\), \(J^{-}[A]\) and \(J^{+}[A]\). For example, \(I^{-}[A]:=\{q\in M|q\ll s\ \ \text{for some}\ \ s\in A\}\). We say a subset \(A\) of \(M\) is achronal (respectively, acausal) if no two points of \(A\) are chronologically related (respectively, causally related).

**Definition 2.1**.: A space-time \((M,g)\) is said to be causal if there are no causal loops. A space-time \((M,g)\) is globally hyperbolic if it is causal and the sets \(J^{+}(p)\cap J^{-}(q)\) are compact for all \(p,q\in M\).

Throughout this paper, we always assume that \((M,g)\) is a globally hyperbolic space-time. We are going to use two metrics on \(M\): one is the Lorentzian metric \(g\), the other is an auxiliary complete Riemannian metric \(h\). The length functionals associated with \(g\) and \(h\) are denoted by \(L(\cdot)\) and \(L_{h}(\cdot)\) respectively. Let \(\Omega_{x,y}\) denote the path space of all future-directed causal curves and constant curves \(\gamma:[0,1]\to M\) with \(\gamma(0)=x\) and \(\gamma(1)=y\). For a piecewise \(C^{1}\) causal curve \(\gamma\in\Omega_{x,y}\), \(L(\gamma)\) is given by

\[L(\gamma):=\int_{0}^{1}\sqrt{-g(\dot{\gamma}(s),\dot{\gamma}(s))}ds.\]

Correspondingly, the distance functions associated with \(g\) and \(h\) are denoted by \(d(\cdot,\cdot)\) and \(d_{h}(\cdot,\cdot)\) respectively. More precisely, given any \(x,y\in M\) with \(x\leq y\), \(d(x,y)\) is given by

\[d(x,y)=\sup\{L(\gamma):\gamma\in\Omega_{x,y}\}.\]

The norms associated with \(g\) and \(h\) are denoted by \(|\cdot|\) and \(|\cdot|_{h}\) respectively. Recall that for a non-spacelike vector \(V\in TM\), \(|V|=\sqrt{-g(V,V)}\). Let \(\nabla\) (respectively, \(\nabla_{h}\)) be the gradient induced by the Lorentzian metric \(g\) (respectively, the Riemannian metric \(h\)). There is a timelike Lorentzian eikonal equation associated with the Lorentzian metric \(g\):

\[g(\nabla u,\nabla u)=-1. \tag{2.1}\]

In this paper, we devote ourselves to studying viscosity solutions of equation (2.1). Before stating our results, we need to introduce some more definitions for convenience. The next two definitions concern generalized gradients and viscosity solutions of the eikonal equation (2.1).

**Definition 2.2**.: [6, Definition 3.1.6] Let \(f:M\to R\) be a continuous function on \((M,g)\).
A vector \(V\in T_{p}M\) is said to be a subgradient (resp., supergradient) of \(f\) at \(p\in M\) if there exist a neighborhood \(O\) of \(p\) and a function \(\phi:O\to R\), differentiable at \(p\), with \(\phi(p)=f(p)\), \(\phi(x)\leq f(x)\) (resp., \(\phi(x)\geq f(x)\)) for every \(x\in O\), and \(\nabla\phi(p)=V\). We denote by \(\nabla^{-}f(p)\) the set of subgradients of \(f\) at \(p\) and by \(\nabla^{+}f(p)\) the set of supergradients of \(f\) at \(p\).

With the help of \(\nabla^{-}f(p)\) and \(\nabla^{+}f(p)\), we can state the definition of viscosity solution as follows:

**Definition 2.3**.: A continuous function \(f\) is called a viscosity subsolution of equation (2.1) if for any \(p\in M\),

\[g(V,V)\leq-1\ \ \text{for every}\ \ V\in\nabla^{+}f(p).\]

Similarly, \(f\) is called a viscosity supersolution of equation (2.1) if for any \(p\in M\),

\[g(V,V)\geq-1\ \ \text{for every}\ \ V\in\nabla^{-}f(p).\]

A continuous function \(f\) is a viscosity solution of equation (2.1) if it is simultaneously a viscosity subsolution and a viscosity supersolution.

Let \(Lip_{loc}(M)\) be the set of locally Lipschitz (with respect to the Riemannian metric \(h\)) functions on \(M\) and

\[\mathcal{S}(M):=\{f\in Lip_{loc}(M)|f\text{ is a global viscosity solution to equation (2.1)}\}.\]

In this paper, \(\mathcal{S}(M)\) is the main object that we want to study systematically. Concretely, we expect to obtain a classification of \(\mathcal{S}(M)\) and to give a variational representation of the elements of \(\mathcal{S}(M)\). For this purpose, we need the following definitions.

**Definition 2.4**.: [7, Definition 2.8] Let \(\phi:M\to R\) be a locally Lipschitz function defined on the space-time \((M,g)\). A vector \(V\in T_{p}M\) is called a limiting gradient if there exists a sequence \(\{p_{k}\}\subset M\setminus\{p\}\) with \(\lim\limits_{k\to\infty}p_{k}=p\) such that \(\phi\) is differentiable at \(p_{k}\) for each \(k\in\mathbb{N}\) and \(\lim\limits_{k\to\infty}\nabla\phi(p_{k})=V\). Here the first limit is taken in the manifold topology on \(M\), and the second limit is taken in any fixed chart that contains \(p\); since \(p_{k}\to p\), the points \(p_{k}\) lie in that chart for all sufficiently large \(k\), and the second limit does not depend on the choice of chart. We denote by \(\nabla^{*}\phi(p)\) the set of all limiting gradients of \(\phi\) at \(p\).

It should be mentioned that for any \(f\in\mathcal{S}(M)\), if there exists a sequence \(\{x_{i}\}_{i}\subseteq M\backslash\{x\}\) such that \(x_{i}\to x\) as \(i\to\infty\) and \(f\) is differentiable at \(x_{i}\) for each \(i\), then the set \(\{\nabla_{h}f(x_{i})\}_{i}\) is bounded, with a bound depending only on \(x\). Furthermore, let \(\phi\) be the map such that \(\phi(\nabla_{h}g(\cdot))=\nabla g(\cdot)\) for any smooth function \(g\) on \(M\). Obviously, \(\phi\) is a linear transformation on each fiber of \(TM\) and \(|\phi|_{h}\) is bounded, where \(|\phi|_{h}=\sup\limits_{V\in TM,V\neq 0}\frac{|\phi(V)|_{h}}{|V|_{h}}\) is the operator norm with respect to \(h\). Thus, the set \(\{\nabla f(x_{i})\}_{i}\) is contained in a compact subset of \(TM\). More precisely, up to a subsequence, \(\lim\limits_{i\to\infty}\nabla f(x_{i})\in\nabla^{*}f(x)\).
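To illustrate Definitions 2.2-2.4, let us verify the claim from the introduction that \(f(x,y)=|x|\) is a viscosity solution on the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\); we spell out this standard computation only for the reader's convenience. At points with \(x\neq 0\), \(f\) is smooth with \(\nabla f=-\mathrm{sgn}(x)\partial_{x}\), so

\[g(\nabla f,\nabla f)=-1\]

and equation (2.1) holds classically. At a point \(p=(0,y_{0})\), any function \(\phi\), differentiable at \(p\), with \(\phi\geq f\) near \(p\) and \(\phi(p)=0\) would need \(\partial_{x}\phi(p)\geq 1\) and \(\partial_{x}\phi(p)\leq-1\) simultaneously; hence \(\nabla^{+}f(p)=\emptyset\) and the subsolution condition holds vacuously. On the other hand, a function \(\phi\leq f\) touching \(f\) at \(p\) must satisfy \(\partial_{y}\phi(p)=0\) and \(|\partial_{x}\phi(p)|\leq 1\), so that

\[\nabla^{-}f(p)=\{-a\partial_{x}:|a|\leq 1\},\qquad g(-a\partial_{x},-a\partial_{x})=-a^{2}\geq-1,\]

and the supersolution condition holds as well. Finally, \(\nabla^{*}f(p)=\{\partial_{x},-\partial_{x}\}\) contains both a future-directed and a past-directed timelike vector, so the time orientation of \(f\) changes exactly on \(\{x=0\}\), in the sense of Definition 2.5 below.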
**Definition 2.5**.: For a Lipschitz continuous function \(f\) on \((M,g)\) whose gradient \(\nabla f\) is timelike wherever it exists:

* We say the time orientation of \(f\) is future-directed at \(x\) if every limiting gradient \(V\in\nabla^{*}f(x)\) is future-directed timelike, and the time orientation of \(f\) is past-directed at \(x\) if every limiting gradient \(V\in\nabla^{*}f(x)\) is past-directed timelike.
* We say the time orientation of \(f\) changes at \(x\) if there exist two timelike vectors \(V,W\) in \(\nabla^{*}f(x)\) such that \(V\) is past-directed and \(W\) is future-directed.
* We say the time orientation of \(f\) is consistent if for any two points \(x,y\in M\), the vectors in both \(\nabla^{*}f(x)\) and \(\nabla^{*}f(y)\) have the same time orientation. Otherwise, we say the time orientation of \(f\) is non-consistent.

**Definition 2.6** (Forward calibrated curve).: For any \(u\in\mathcal{S}(M)\), a future-directed timelike curve \(\alpha:[0,t]\to M\) (\(t>0\)) is said to be a forward calibrated curve of \(u\) if \(u(\alpha(0))=u(\alpha(s))-s\) for any \(s\in[0,t]\).

**Definition 2.7** (Partial Cauchy surface).: Let \(A\subseteq M\) be an achronal set. The set \(\text{edge}(A)\) consists of all points \(p\in\bar{A}\) such that every neighborhood \(U\) of \(p\) contains a timelike curve from \(I^{-}(p,U)\) to \(I^{+}(p,U)\) which does not meet \(A\), where \(I^{-}(p,U)\) (respectively, \(I^{+}(p,U)\)) denotes the chronological past (respectively, future) of \(p\) with respect to the space-time \((U,g|_{U})\). The set \(A\) is said to be edgeless if \(\text{edge}(A)=\emptyset\). A subset \(S\) of \((M,g)\) is said to be a partial Cauchy surface if \(S\) is acausal and edgeless in \(M\).

In what follows, for any \(u\in\mathcal{S}(M)\) and \(s\in R\), we denote the level set of \(u\) by \(u_{s}\), i.e., \(u_{s}=\{x\in M:u(x)=s\}\), and \(\text{Image}(u)\) denotes the image set of \(u\). It is well known that the level sets of a Busemann function are partial Cauchy surfaces on a globally hyperbolic space-time and that any Busemann function is a viscosity solution of equation (2.1) on its domain. This motivates us to consider whether \(u_{s}\) is a partial Cauchy surface for any \(s\in\text{Image}(u)\). In general, for \(s\in\text{Image}(u)\), \(u_{s}\) is not a Cauchy surface even if \(M\) is globally hyperbolic; we give such an example in Section 4.

Our main results of this paper are stated as follows.

**Theorem 1**.: _If \((M,g)\) is a globally hyperbolic space-time, then \(\mathcal{S}(M)\neq\emptyset\). In other words, the eikonal equation admits at least one global locally Lipschitz viscosity solution._

In the following we present some properties of \(u\in\mathcal{S}(M)\) when the time orientation of \(u\) is consistent. Without loss of generality, we only consider the case where the time orientation of \(u\) is always past-directed. More precisely:

**Theorem 2**.: _For any \(u\in\mathcal{S}(M)\) for which the time orientation of \(u\) is always past-directed, we have:_

* _For each_ \(s\in\text{Image}(u)\)_,_ \(u_{s}\) _is a partial Cauchy surface._
* _For each_ \(x\in M\) _there exists a_ \(t_{0}>0\)_, depending on_ \(x\)_, such that for any_ \(0<t<t_{0}\)_,_
\[u(x)=\inf\limits_{x\leq y,d(x,y)=t}u(y)-t.\] (2.2)
* _The function_ \(u\) _is locally semiconcave on_ \((M,g)\)_._

Actually, under the same conditions as in Theorem 2, viscosity solutions possess some properties of weak KAM type, like the ones in the Riemannian case.
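The following elementary verification of formula (2.2) in the model case, with the solution \(u(x,y)=x\) chosen only for illustration, may help orient the reader. On the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\), \(u(x,y)=x\) is a smooth solution of (2.1) with \(\nabla u=-\partial_{x}\) past-directed, so its time orientation is consistent. For \(p=(x_{0},y_{0})\) and \(q=(x,y)\) with \(p\leq q\) we have \(d(p,q)=\sqrt{(x-x_{0})^{2}-(y-y_{0})^{2}}\), so the constraint \(d(p,q)=t\) forces \(x=x_{0}+\sqrt{t^{2}+(y-y_{0})^{2}}\) and

\[\inf\limits_{p\leq q,\,d(p,q)=t}u(q)-t=\inf\limits_{y\in R}\left[x_{0}+\sqrt{t^{2}+(y-y_{0})^{2}}\right]-t=x_{0}=u(p),\]

with the infimum attained exactly at \(y=y_{0}\). The minimizing curve \(t\mapsto(x_{0}+t,y_{0})\) is a forward calibrated curve of \(u\) in the sense of Definition 2.6, with initial vector \(\partial_{x}=-\nabla u(p)\).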
Recall that a future-directed, future inextendible timelike geodesic \(\gamma:[0,a)\to M\) is said to be a future-directed timelike ray if \(d(\gamma(0),\gamma(t))=L(\gamma|_{[0,t]})\) for all \(0\leq t<a\). See [2, Definition 8.8] for more details.

**Theorem 3**.: _For any \(u\in\mathcal{S}(M)\) for which the time orientation is always past-directed, we have:_

* \(u\) _is differentiable at_ \(x\in M\) _if and only if there exists a unique future-directed timelike ray_ \(\gamma_{x}:[0,T)\to M\) _that satisfies_
\[\gamma_{x}(0)=x,\ \ u(\gamma_{x}(t))=u(x)+t\ \ \text{for every}\ t\in[0,T)\] (2.3)
_and_
\[\dot{\gamma}_{x}(0)=-\nabla u(x),\] (2.4)
_where_ \(T\) _is the maximal existence time of_ \(\gamma_{x}\)_._
* _For any_ \(x\in M\)_,_ \(u\) _is differentiable at_ \(\gamma_{x}(t)\) _for every_ \(t\in(0,T)\) _and_ \(\dot{\gamma}_{x}(t)=-\nabla u(\gamma_{x}(t))\)_._

The results of the above theorems are based on the fact that the time orientation of \(u\) is consistent. One may wonder what happens when the time orientation of \(u\) is non-consistent. Because of the distinctive causal structure and the non-positive-definiteness of the Lorentzian metric, the properties of viscosity solutions to equation (2.1) are then greatly different from the ones in the Riemannian case. Although the situation in this case is much more complicated, as the following results show, we can still obtain some results under suitable assumptions. In order to classify the elements of \(\mathcal{S}(M)\) concretely, for any \(u\in\mathcal{S}(M)\) we introduce the peculiar set \(P_{u}\) by

\[P_{u}:=\{x\in M|\nabla^{+}u(x)=\emptyset\text{ and }\nabla^{-}u(x)=\emptyset\}.\]

**Theorem 4**.: _For any \(u\in\mathcal{S}(M)\) with \(P_{u}=\emptyset\), we have:_

* _If the time orientation of the viscosity solution_ \(u\) _changes at a point_ \(\bar{x}\)_, then_ \(\nabla^{-}u(\bar{x})\neq\emptyset\)_._
* _The time orientation of_ \(u\) _changes at most once along any inextendible causal curve._

In view of Theorem 4, let \(C_{u}\) denote the time-orientation-changing set of \(u\),

\[C_{u}:=\{x\in M|\text{ there are }V,W\in\nabla^{*}u(x)\text{ such that }V\text{ is a past-directed causal vector and }W\text{ is a future-directed causal vector}\}.\]

The following theorem characterizes the set \(C_{u}\).

**Theorem 5**.: _For any \(u\in\mathcal{S}(M)\), if \(P_{u}=\emptyset\) and the time orientation of \(u\) is non-consistent, then \(C_{u}\) is a partial Cauchy surface._

**Remark 2.1**.: _Recall that \(f(x,y)=|x|\) is a viscosity solution to the eikonal equation on the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\). In this case, \(P_{f}=\emptyset\) and \(C_{f}=\{(x,y)\in R^{2}|x=0\}\). In Section 7, we give an example showing that the peculiar set of a viscosity solution of (2.1) can be nonempty._

The remainder of this paper is organized as follows. In Section 3, we construct a function \(u^{+}\) and prove \(u^{+}\in\mathcal{S}(M)\), which completes the proof of Theorem 1. In Section 4, we study the properties of the level sets of \(u\) when the time orientation of \(u\) is consistent. In Section 5, we prove that \(u\) has a variational representation when the time orientation of \(u\) is consistent and, as a consequence, that \(u\) is locally semiconcave. Based on the results of these two sections, Theorem 2 is proved. In Section 6, we establish the weak KAM properties of \(u\) when the time orientation of \(u\) is consistent.
This completes the proof of Theorem 3. In the last two sections, we study functions \(u\in\mathcal{S}(M)\) whose time orientation is non-consistent. Theorem 4 is proved in Section 7, and Theorem 5 is proved in Section 8.

## 3 Construction of global viscosity solutions to equation (2.1)

In this section, we first construct a globally defined function \(u^{+}\) from a steep temporal function \(\tau\) introduced in [3]. Second, we prove the well-posedness and the semiconcavity of \(u^{+}\) by the method of support functions. Finally, we show that \(u^{+}\) is a viscosity solution to equation (2.1), which completes the proof of Theorem 1.

### The construction of \(u^{\pm}\)

Since \((M,g)\) is globally hyperbolic, \(d(\cdot,\cdot)\) is continuous in both variables and satisfies the reverse triangle inequality:

\[d(p,q)+d(q,r)\leq d(p,r)\ \ \ \text{for}\ \ p,q,r\in M\ \ \text{and}\ \ p\leq q\leq r.\]

Let \(\gamma\colon[a,b)\to M\) be a past-directed (respectively, future-directed) causal curve. The point \(p\in M\) is said to be a past (respectively, future) endpoint of \(\gamma\) corresponding to \(t=b\) if

\[\lim_{t\to b^{-}}\gamma(t)=p.\]

A causal curve is said to be past (respectively, future) inextendible if it has no past (respectively, future) endpoint. A causal curve \(\gamma\colon(a,b)\to M\) is said to be inextendible if it is both past and future inextendible.

**Definition 3.1** (Cauchy surface): A Cauchy surface \(S\) is a subset of \(M\) which every inextendible causal curve intersects exactly once.

**Definition 3.2** (Cauchy development): Given a closed subset \(S\) of \((M,g)\), the past Cauchy development or domain of dependence of \(S\), \(D^{-}(S)\), is defined as the set of all points \(q\) such that every future inextendible causal curve from \(q\) intersects \(S\). The future domain of dependence of \(S\), \(D^{+}(S)\), is defined in a similar way by exchanging the roles of the future and the past. The Cauchy development or the domain of dependence is the set \(D(S)=D^{-}(S)\cup D^{+}(S)\).

By [3, Theorem 3], there exists a steep temporal function \(\tau:M\to R\) such that \(\tau\) is strictly increasing along any future-directed causal curve and

\[|d\tau(V)|\geq\max\{\sqrt{-g(V,V)},|V|_{h}\} \tag{3.1}\]

for all timelike vectors \(V\in TM\). [3, Corollary 1.8] implies that there exists a diffeomorphism \(M\cong R\times N\) such that

\[\tau:M\cong R\times N\to R,\ \ p\cong(s,x)\mapsto s.\]

For \(s\in R\), let \(\tau_{s}\) be the level set of \(\tau\), i.e., \(\tau_{s}:=\{p\in M|\tau(p)=s\}\).

**Remark 3.1**: _For an inextendible causal curve \(\gamma\colon(a,b)\to M\), by the completeness of \(h\), we have \(\lim_{t\to a^{+}}\tau(\gamma(t))=-\infty\) and \(\lim_{t\to b^{-}}\tau(\gamma(t))=\infty\)._

For each \(s\in R\), \(\tau_{s}\) is a Cauchy surface. For any \(p\in I^{-}[\tau_{s}]\), let \(d(p,\tau_{s}):=\sup_{z\in\tau_{s}}d(p,z)\). Similarly, let \(d(\tau_{s},p):=\sup_{z\in\tau_{s}}d(z,p)\) for any \(p\in I^{+}[\tau_{s}]\).

**Remark 3.2**: _For any \(s\in R\) and any \(p\in I^{-}[\tau_{s}]\), \(d(p,\tau_{s})\) is finite. Indeed, by (3.1), for any future-directed causal curve \(\gamma\) connecting \(p\) to \(\tau_{s}\), \(\sqrt{-g(\dot{\gamma},\dot{\gamma})}\leq d\tau(\dot{\gamma})\)._
_Integrating both sides, we obtain \(L(\gamma)\leq s-\tau(p)<\infty\)._

**Definition 3.3**.: For fixed \(s>0\), \(x_{0}\in\tau_{0}\) and for any \(p\in I^{-}[\tau_{s}]\), define:

* \(u^{+}_{\tau_{s}}(p):=d(x_{0},\tau_{s})-d(p,\tau_{s})=\sup_{z\in\tau_{s}}d(x_{0},z)-\sup_{z\in\tau_{s}}d(p,z)\),
* \(u^{+}(p):=\lim_{s\to+\infty}u^{+}_{\tau_{s}}(p)\).

Similarly, for fixed \(s<0\), \(x_{1}\in\tau_{0}\) and for any \(p\in I^{+}[\tau_{s}]\), define:

* \(u^{-}_{\tau_{s}}(p):=d(\tau_{s},x_{1})-d(\tau_{s},p)=\sup_{z\in\tau_{s}}d(z,x_{1})-\sup_{z\in\tau_{s}}d(z,p)\),
* \(u^{-}(p):=\lim_{s\to-\infty}u^{-}_{\tau_{s}}(p)\).

By the definition of \(u^{+}_{\tau_{s}}(p)\), the continuity of \(u^{+}_{\tau_{s}}(p)\) can be derived from the continuity of \(d(p,\tau_{s})\). Obviously, \(d(p,\tau_{s})\) is a lower semicontinuous function on \(I^{-}[\tau_{s}]\), since it is the supremum of a family of continuous functions; so to get the continuity of \(d(p,\tau_{s})\), we only need to prove the upper semicontinuity. For this purpose, we introduce the following two lemmas.

**Lemma 3.1**.: _For a partial Cauchy surface \(\Sigma\) and any \(x\in D^{-}[\Sigma]\), we set \(I^{+}_{\Sigma}(x)=\Sigma\cap I^{+}(x)\), \(J^{+}_{\Sigma}(x)=\Sigma\cap J^{+}(x)\). Then \(J^{+}_{\Sigma}(x)=\overline{I^{+}_{\Sigma}(x)}\) is compact._

Proof.: Recall that for \(x\in D^{-}[\Sigma]\), any future inextendible causal curve \(\gamma\) with \(\gamma(0)=x\) must meet \(\Sigma\). Reparametrize all inextendible future-directed causal geodesics by \(h\)-arc length; then these geodesics satisfy a system of second order ordinary differential equations. Since \(\Sigma\) is a partial Cauchy surface, we can define a map \(T_{\Sigma}\) sending any \(\dot{\gamma}(0)\in U_{h}T_{x}M\cap\mathcal{C}_{x}\) to the unique intersection point of \(\gamma\) with \(\Sigma\), where \(U_{h}T_{x}M\) is the unit tangent sphere at \(x\) with respect to the Riemannian metric \(h\) and \(\mathcal{C}_{x}\) is the future-pointing light cone at \(x\). Applying the Picard-Lindelof theorem [13, p.8], it is easy to see that \(T_{\Sigma}\) is continuous. Note that \(U_{h}T_{x}M\cap\mathrm{int}\,\mathcal{C}_{x}\) is diffeomorphic to a bounded open ball in \(R^{\mathrm{Dim}(M)-1}\); here and in the following, \(\mathrm{Dim}(M)\) denotes the dimension of \(M\). Then

\[J^{+}_{\Sigma}(x)=T_{\Sigma}(U_{h}T_{x}M\cap\mathcal{C}_{x})=\overline{T_{\Sigma}(U_{h}T_{x}M\cap\mathrm{int}\,\mathcal{C}_{x})}=\overline{I^{+}_{\Sigma}(x)}\]

is compact.

**Remark 3.3**: _By Lemma 3.1, it is easy to see that if \(\Sigma\) is a Cauchy surface, then for any \(x\in I^{-}[\Sigma]\), \(J^{+}_{\Sigma}(x)\) is a compact subset of \(M\). Moreover, there exists a future-directed maximal timelike geodesic segment \(\gamma_{x,\Sigma}:[0,d(x,\Sigma)]\to M\) connecting \(x\) to \(\Sigma\) such that \(\gamma_{x,\Sigma}(0)=x\) and \(d(x,\Sigma)=L(\gamma_{x,\Sigma})\)._

**Lemma 3.2**.: _[_8_, Lemma 3.2]_ _Let \(U\) be a causally convex neighborhood of \(p\). If \(U\) is small enough, there are constants \(C>0\), \(T>0\) with the following property: if \(p\in U\) and \(s>2T\), then every maximal unit speed geodesic segment \(\alpha\) from \(p\) to \(\tau_{s}\) satisfies \(\|\alpha^{\prime}(0)\|_{h}\leq C\)._

Lemma 3.2 shows that the set of initial tangent vectors of these maximal geodesic segments stays in a compact subset of the tangent bundle \(TU\). Now suppose \(d(p,\tau_{s})\) is not upper semicontinuous at some \(p\) in \(U\).
Then there exists a sequence \(p_{n}\in U\) with \(p_{n}\to p\) but \(\lim\limits_{n\to\infty}d(p_{n},\tau_{s})>d(p,\tau_{s})\). Thanks to Remark 3.3, let \(\alpha_{n}\) be a maximal timelike geodesic segment from \(p_{n}\) to \(\tau_{s}\), reparametrized by \(h\)-arc length. By the Limit Curve Lemma [2, Lemma 14.2] and Lemma 3.2, there exists a subsequence \(\alpha_{n_{k}}\) which converges to a timelike curve \(\alpha\) from \(p\) to \(\tau_{s}\), so we have

\[d(p_{n_{k}},\tau_{s})=L(\alpha_{n_{k}})\to L(\alpha)\leq d(p,\tau_{s}).\]

This contradiction implies the continuity of \(u^{+}_{\tau_{s}}(p)\) on \(U\).

### A priori estimate of \(u^{+}\)

In this subsection we show that both \(u^{+}\) and \(u^{-}\) are well-defined. Actually, we only need to prove that \(u^{+}(p)\) is well-defined, since \(u^{-}(p)\) can be treated analogously. By the Arzela-Ascoli theorem and \(u^{+}_{\tau_{s}}(x_{0})=0\), it is sufficient to show that the \(u^{+}_{\tau_{s}}\) are locally equi-Lipschitz.

**Lemma 3.3**.: _For \(p,q\in I^{-}[\tau_{s}]\) with \(p\leq q\), the function \(u^{+}_{\tau_{s}}\) satisfies_

\[u^{+}_{\tau_{s}}(q)-u^{+}_{\tau_{s}}(p)\geq d(p,q).\]

Proof.: By the reverse triangle inequality for the Lorentzian distance function \(d\), for any \(q\leq z\in\tau_{s}\), we have

\[d(p,q)+d(q,z)\leq d(p,z).\]

Then by the definition of \(u^{+}_{\tau_{s}}\), it is easy to see that

\[u^{+}_{\tau_{s}}(q)-d(p,q)=d(x_{0},\tau_{s})-d(q,\tau_{s})-d(p,q)=d(x_{0},\tau_{s})-\sup_{z\in\tau_{s}}d(q,z)-d(p,q)\geq d(x_{0},\tau_{s})-\sup_{z\in\tau_{s}}d(p,z)=u^{+}_{\tau_{s}}(p).\]

**Proposition 3.1**.: _The functions \(u^{+}_{\tau_{s}}(x)\) are locally equi-Lipschitz, and there exist a subsequence \(s_{n}\) and a locally Lipschitz function \(u^{+}(x)\) such that_

\[\lim_{s_{n}\to+\infty}u^{+}_{\tau_{s_{n}}}(x)=u^{+}(x)\ \ \text{for any}\ \ x\in M.\]

_Moreover, \(u^{+}(x)\) is a locally semiconcave function on \(M\). In particular, when \(s_{n}\to\infty\), \(I^{-}[\tau_{s_{n}}]\to M\)._

In order to prove Proposition 3.1, we need to introduce the concept of support function.

**Definition 3.4**.: For each \(s>0\) and \(p\in I^{-}[\tau_{s}]\), there exist \(q_{s}\in\tau_{s}\) and \(\gamma\in\Omega_{p,q_{s}}\) such that \(d(p,q_{s})=d(p,\tau_{s})=L(\gamma)\). Choose \(p_{1,s}\in\gamma\) and a neighborhood \(O\) of \(p\) satisfying \(O\subseteq I^{-}(p_{1,s})\), and define

\[u^{+}_{p,s}:O\to[-\infty,+\infty]\]

by

\[u^{+}_{p,s}(x)=d(x_{0},\tau_{s})-d(x,p_{1,s})-d(p_{1,s},q_{s})\ \ \text{for any}\ \ x\in O.\]

Readers may wonder how \(p_{1,s}\) affects \(u^{+}_{p,s}\); by the following lemma, we can always choose a suitable \(p_{1,s}\) such that this choice does not affect the properties of \(u^{+}_{p,s}\).

**Lemma 3.4**.: _As \(s\to\infty\), there exists a subsequence \(s_{n}\) such that for each \(s_{n}\) the point \(p_{1,s_{n}}\) in Definition 3.4 can be chosen in a fixed compact subset of \(M\), and the neighborhood \(O\) in Definition 3.4 does not depend on \(s_{n}\) for all \(s_{n}\) large enough._

Proof.: For \(s\) large enough and any \(x\in I^{-}[\tau_{s}]\), we use \(\gamma_{x,s}\) to denote a future-directed maximal geodesic segment from \(x\) to \(\tau_{s}\). For each \(s\), we extend \(\gamma_{x,s}\) to a future-directed inextendible timelike curve and reparametrize it by \(h\)-arc length. These reparametrized curves are denoted by \(\tilde{\gamma}_{x,s}\).
By the fact that \(\tilde{\gamma}_{x,s}(0)=\gamma_{x,s}(0)=x\) and the Limit Curve Lemma [2, Lemma 14.2], there exist a subsequence \(s_{n}\) and a future-directed inextendible causal curve \(\tilde{\gamma}_{x}:[0,\infty)\to M\) such that the \(\tilde{\gamma}_{x,s_{n}}\) converge to \(\tilde{\gamma}_{x}\) uniformly on any compact subset of \([0,\infty)\). We take a constant \(T>0\) and a neighborhood \(O\) of \(x\) such that \(\bar{O}\subseteq I^{-}(\tilde{\gamma}_{x}(T))\). Since \(I^{-}\) is inner continuous [2, p. 59], there exists a neighborhood \(U\) of \(\tilde{\gamma}_{x}(T)\) such that \(\bar{O}\subseteq I^{-}(q)\) for any \(q\in U\). We set \(p_{1,s_{n}}:=\tilde{\gamma}_{x,s_{n}}(T)\). Recall that the \(\tilde{\gamma}_{x,s_{n}}(T)\) converge to \(\tilde{\gamma}_{x}(T)\). Without loss of generality, the sequence \(p_{1,s_{n}}\) always stays in a compact neighborhood of \(\tilde{\gamma}_{x}(T)\) and \(O\subseteq I^{-}(\tilde{\gamma}_{x,s_{n}}(T))\).

Due to the reverse triangle inequality, it is easy to see that for any \(s>0\), \(u^{+}_{p,s}\) given by Definition 3.4 is a continuous upper support function for \(u^{+}_{\tau_{s}}(x)\) at \(p\), i.e., \(u^{+}_{p,s}(x)\geq u^{+}_{\tau_{s}}(x)\) for all \(x\) near \(p\), with equality when \(x=p\). With the help of \(u^{+}_{p,s}(x)\), we can prove that the \(u^{+}_{\tau_{s}}(x)\) are locally equi-Lipschitz. Put \(d_{s}(x)=d(x,\tau_{s})\); it is sufficient to show that the functions \(u^{+}_{\tau_{s}}\) are Lipschitz continuous with a uniform Lipschitz constant for \(x\in O\) and \(s\) large enough. Since

\[u^{+}_{\tau_{s}}(x)-u^{+}_{\tau_{s}}(y)=d_{s}(y)-d_{s}(x),\]

we only need to show that the \(d_{s}\) are equi-Lipschitz continuous functions. To prove this result, the following lemma is useful.

**Lemma 3.5**.: _[_2_, Lemma 14.20]_ _Let \(U\) be an open convex domain in \(R^{n}\) and \(f:U\to R\) a continuous function. Suppose that for each \(p\) in \(U\) there is a smooth lower support function \(f_{p}\) defined in a neighborhood of \(p\) such that \(\|d(f_{p})_{p}\|_{h}<L\). Then \(f\) is Lipschitz continuous with Lipschitz constant \(L\), i.e., for all \(x,y\in U\) we have_

\[|f(x)-f(y)|\leq L|x-y|_{h}. \tag{3.2}\]

Note that \(-u_{p,s}^{+}(x)\) defined in Definition 3.4 is a lower support function for \(-u_{\tau_{s}}^{+}\). To apply Lemma 3.5, we need to establish an estimate for the lower support functions \(-u_{p,s}^{+}(x)\). Thanks to Lemma 3.3 in [8], there exists a constant \(L\) independent of \(s\) such that \(u_{p,s}^{+}(x)\) is locally Lipschitz for each \(s\). What is more, the constant \(L\) can be written as \(L=GC\), where \(G=\sup\{g_{i,j}(x):x\in U,1\leq i,j\leq n\}\) and \(C\) is as in Lemma 3.2. Then by the Arzela-Ascoli theorem and \(u_{\tau_{s}}^{+}(x_{0})=0\) (for example, see [11, Lemma 4.4]), there exist a subsequence \(s_{n}\) and a locally Lipschitz function \(u^{+}(x)\) such that

\[\lim_{s_{n}\to+\infty}u_{\tau_{s_{n}}}^{+}(x)=u^{+}(x)\ \ \text{for any}\ \ x\in M.\]

So far, we have obtained the local Lipschitzness of \(u^{+}\). In the following we show that \(u^{+}\) is a locally semiconcave function. Local semiconcavity (semiconvexity) is defined as follows.

**Definition 3.5**.: [7, Definition 2.6] Let \(O\) be an open subset of \(M\).
A function \(\psi:O\to R\) is said to be semiconcave if there exists a \(c>0\) such that for any constant-speed geodesic (with respect to \(h\)) path \(\gamma(t)\), \(t\in[0,1]\), \(\gamma(t)\in O\),

\[\psi(\gamma(t))\geq(1-t)\psi(\gamma(0))+t\psi(\gamma(1))-c\frac{t(1-t)}{2}d_{h}^{2}(\gamma(0),\gamma(1)). \tag{3.3}\]

A function \(\psi:M\to R\) is said to be locally semiconcave if for each \(p\in M\) there is a neighborhood \(O\) of \(p\) in \(M\) such that (3.3) holds as soon as \(\gamma(t)\in O\) (\(t\in[0,1]\)); or equivalently, if (3.3) holds for some fixed positive number \(c\) as long as \(\gamma\) stays in a compact subset \(K\) of \(O\). Similar definitions for semiconvexity and local semiconvexity are obtained in an obvious way by reversing the sign of the inequality in (3.3).

To show that \(u^{+}(x)\) is locally semiconcave, we need the following lemma. Although the lemma is stated in \(R^{n}\), it is still valid in our case, since we only need to study \(u^{+}(x)\) on \(O\).

**Lemma 3.6**.: _[_1_, Lemma 3.2]_ _Let \(U\) be an open convex domain in \(R^{n}\) and \(f:U\to R\) a continuous function. Assume for some constant \(c\) and for all \(p\in U\) that \(f\) has a smooth upper support function \(f_{p}\) at \(p\), i.e. \(f_{p}(x)\geq f(x)\) for all \(x\) near \(p\) with equality holding when \(x=p\), such that \(D^{2}f_{p}(x)\leq cI\) near \(p\). Then \(f-\frac{c}{2}\|x\|_{E}^{2}\) is concave in \(U\); thus \(f\) is semiconcave and twice differentiable almost everywhere in \(U\). Here \(\|\cdot\|_{E}\) denotes the Euclidean norm on \(R^{n}\)._

_Proof of Proposition 3.1._ Without loss of generality, we can assume that \(\bar{O}\) in Definition 3.4 is a compact subset of \(M\). For any \(x\in O\), let \(\gamma_{x,s}\) be the segment from \(x\) to \(p_{1,s}\). Then by Lemma 3.6 and the definition of \(u_{p,s}^{+}(x)\), we only need to use comparison theory to estimate the Hessian of the distance function \(d(x,p_{1,s})\) (\(x\in O\)) in terms of upper and lower bounds of the timelike sectional curvatures of 2-planes containing \(\dot{\gamma}_{x,s}(0)\) and the length of \(\gamma_{x,s}\), where the Hessian of \(d(x,p_{1,s})\) is defined in terms of the Levi-Civita connection with respect to \(g\). Since \(\gamma\) (see Definition 3.4) is a maximal segment, the interior of \(\gamma\) is free of cut points. By [9, Proposition 3.2] and [2, Propositions 9.7, 9.29], we can choose \(O\) such that the distance function \(d(x,p_{1,s})\) is smooth with respect to \(x\in O\). When \(p_{1,s}\) is close to \(x\), there is a compact subset \(K_{1}\subseteq TM\) that contains all the tangent vectors \(\dot{\gamma}_{x,s}|_{x\in O}\). This, together with Lemma 3.4, implies that both the timelike sectional curvatures of 2-planes containing \(\dot{\gamma}_{x,s}(0)\) and the length of \(\gamma_{x,s}\) are bounded from above. By the method in [14, Theorem 5.10], there exists a \(c(O)>0\) depending only on \(O\) such that \(D^{2}d(x,p_{1,s})\geq-c(O)I\) for \(x\in O\). Then, passing to a subsequence if necessary, we can finally conclude that \(u^{+}(x)\) is locally semiconcave by the Arzela-Ascoli theorem. \(\square\)

### \(u^{+}\) is a global viscosity solution to equation (2.1)

In this subsection, we prove that the function \(u^{+}\) defined in Definition 3.3 is a viscosity solution of the Lorentzian eikonal equation (2.1). The proof is slightly modified from the one in [7, Proposition 2.1].
**Lemma 3.7**.: _If \(u^{+}\) is differentiable at \(p\), then \(g(\nabla u^{+}(p),\nabla u^{+}(p))=-1\) and \(\nabla u^{+}(p)\) is a past-directed timelike vector._

Proof.: Assume that \(u^{+}\) is differentiable at \(p\). By the definition of \(u^{+}\), there exists a subsequence \(s_{n}\to\infty\) such that

\[u^{+}=\lim_{s_{n}\to\infty}u^{+}_{\tau_{s_{n}}}.\]

Choosing any smooth future-directed causal curve \(\gamma:[0,T)\to M\) (\(T>0\)) with \(\gamma(0)=p\), we have

\[u^{+}(\gamma(t))-u^{+}(p)=u^{+}(\gamma(t))-u^{+}(\gamma(0))=\lim_{s_{n}\to\infty}\big{(}d(\gamma(0),\tau_{s_{n}})-d(\gamma(t),\tau_{s_{n}})\big{)}\geq d(\gamma(0),\gamma(t))\geq\int_{0}^{t}\sqrt{-g(\dot{\gamma}(l),\dot{\gamma}(l))}dl\]

for every \(t\in[0,T)\). Dividing by \(t\) on both sides, we get

\[\frac{u^{+}(\gamma(t))-u^{+}(p)}{t}\geq\frac{1}{t}\int_{0}^{t}\sqrt{-g(\dot{\gamma}(l),\dot{\gamma}(l))}dl. \tag{3.4}\]

Letting \(t\to 0_{+}\), the differentiability of \(u^{+}\) at \(p\), together with inequality (3.4), gives

\[du^{+}(p)(\dot{\gamma}(0))\geq\sqrt{-g(\dot{\gamma}(0),\dot{\gamma}(0))}.\]

Note that \(du^{+}(p)(\dot{\gamma}(0))=g(\nabla u^{+}(p),\dot{\gamma}(0))\); this means that

\[g(\nabla u^{+}(p),\dot{\gamma}(0))\geq\sqrt{-g(\dot{\gamma}(0),\dot{\gamma}(0))} \tag{3.5}\]

for any future-directed causal vector \(\dot{\gamma}(0)\). To continue the proof of Lemma 3.7, we need the following two claims.

_Claim 1._\(\nabla u^{+}(p)\) is a past-directed timelike vector.

First of all, by (3.5) we have \(\nabla u^{+}(p)\neq 0\). Suppose \(\nabla u^{+}(p)\) is spacelike; then there exists a smooth future-directed timelike curve \(\gamma:[0,\varepsilon)\to M\) such that \(\gamma(0)=p\) and \(g(\dot{\gamma}(0),\nabla u^{+}(p))=0\), i.e., \(\nabla u^{+}(p)\) and \(\dot{\gamma}(0)\) are orthogonal with respect to the Lorentzian metric \(g\). On the other hand, from (3.5) we get \(g(\nabla u^{+}(p),\dot{\gamma}(0))>0\), which contradicts the orthogonality. Thus, \(\nabla u^{+}(p)\) is a causal vector. Furthermore, \(g(\nabla u^{+}(p),V)\geq 0\) for any future-directed causal vector \(V\in T_{p}M\) implies that \(\nabla u^{+}(p)\) is past-directed. To show that \(\nabla u^{+}(p)\) is indeed a timelike vector, we need the following lemma.

**Lemma 3.8**.: _Suppose \(V\in T_{p}M\) is a nonzero past-directed non-spacelike vector and \(g(V,W)\geq\sqrt{-g(W,W)}\) for any future-directed causal vector \(W\); then \(V\) is not lightlike._

The proof of Lemma 3.8 is omitted; we refer to [7, Lemma 2.5].

_Claim 2._\(|\nabla u^{+}(p)|=1\).

First we choose a future-directed smooth causal curve \(\delta:[0,\varepsilon)\to M\) with \(\delta(0)=p\), \(\dot{\delta}(0)=-\nabla u^{+}(p)\). Then by inequality (3.5),

\[|\nabla u^{+}(p)|^{2}=-g(\nabla u^{+}(p),\nabla u^{+}(p))=g(\nabla u^{+}(p),\dot{\delta}(0))\geq\sqrt{-g(\dot{\delta}(0),\dot{\delta}(0))}=|\nabla u^{+}(p)|.\]

Hence, either \(|\nabla u^{+}(p)|\geq 1\) or \(|\nabla u^{+}(p)|=0\). By _Claim 1_, we obtain

\[|\nabla u^{+}(p)|\geq 1. \tag{3.6}\]

For the rest of the proof, we recall the following fact.
By Remark 3.3, there exists a maximal geodesic segment between \(p\) and \(\tau_{s}\), denoted by \(\gamma_{p,s}\); i.e., \(\gamma_{p,s}:[0,d(p,\tau_{s})]\to M\) is a future-directed timelike (unit-speed) geodesic with \(\gamma_{p,s}(0)=p\) such that \(d(p,\tau_{s})=L(\gamma_{p,s})\) and, for \(0\leq t_{1}<t_{2}\leq d(p,\tau_{s})\),

\[d(\gamma_{p,s}(t_{1}),\gamma_{p,s}(t_{2}))=t_{2}-t_{1}.\]

Then we get

\[\frac{u^{+}(\gamma_{p,s}(t))-u^{+}(\gamma_{p,s}(0))}{t}=\lim_{s_{n}\to\infty}\frac{d(\gamma_{p,s_{n}}(0),\tau_{s_{n}})-d(\gamma_{p,s_{n}}(t),\tau_{s_{n}})}{t}=1\]

for every \(t\in(0,d(p,\tau_{s}))\). Letting \(t\to 0^{+}\), by the reverse Cauchy-Schwarz inequality for causal vectors ([17, 2.2.1]) and the fact that \(\gamma_{p,s}\) is a timelike (unit-speed) geodesic, we have

\[1=du^{+}(p)(\dot{\gamma}_{p,s}(0))=g(\nabla u^{+}(p),\dot{\gamma}_{p,s}(0))\geq|\nabla u^{+}(p)||\dot{\gamma}_{p,s}(0)|=|\nabla u^{+}(p)|. \tag{3.7}\]

Combining inequalities (3.6) and (3.7), _Claim 2_ follows. This finishes the proof of Lemma 3.7.

For a set \(B\) in a vector space, the convex hull of \(B\), \(coB\), is the smallest convex set containing \(B\). From convex analysis, we have the following lemma:

**Lemma 3.9**.: _If \(\psi\) is a locally semiconvex (resp., semiconcave) function on a manifold \(M\), then it is locally Lipschitz (under any reasonable metric), and \(\nabla^{-}\psi(p)\) (resp., \(\nabla^{+}\psi(p)\)) is nonempty for any \(p\in M\). In this case, \(\nabla^{-}\psi(p)\) (resp., \(\nabla^{+}\psi(p)\)) \(=co\nabla^{*}\psi(p)\subset T_{p}M\)._

For a proof of Lemma 3.9, we refer to [6, Theorem 3.3.6], where the limiting gradient is called the reachable gradient.

**Lemma 3.10**.: _Let \((M,g)\) be a globally hyperbolic space-time; then the function \(u^{+}\) defined in Definition 3.3 is a viscosity solution to the Lorentzian eikonal equation (2.1)._

Proof.: By the definition of viscosity solution, we need to show that \(u^{+}\) is both a subsolution and a supersolution of equation (2.1). Firstly, for any \(V\in\nabla^{+}u^{+}(p)\), by Lemma 3.9,

\[V\in\nabla^{+}u^{+}(p)=co\nabla^{*}u^{+}(p).\]

By Lemma 3.7 and the convexity of the set \(\{W\in T_{p}M|\ W\ \text{is a past-directed timelike vector and}\ g(W,W)\leq-1\}\), we conclude that

\[g(V,V)\leq-1.\]

This means that \(u^{+}\) is a subsolution of equation (2.1). On the other hand, since \(u^{+}\) is locally semiconcave, \(u^{+}\) is differentiable at \(p\in M\) whenever \(\nabla^{-}u^{+}(p)\neq\emptyset\). Thus, Lemma 3.7 implies that \(u^{+}\) is a supersolution. Therefore \(u^{+}\) is indeed a viscosity solution of equation (2.1), and this completes the proof of Theorem 1.

**Remark 3.4**.: _Using the same method, we can conclude that \(u^{-}\) defined in Definition 3.3 is also a global viscosity solution to the Lorentzian eikonal equation (2.1)._

## 4 The level sets of viscosity solutions of the Lorentzian eikonal equation

In this section, for any \(u\in\mathcal{S}(M)\), we study the level sets of \(u\) when the time orientation of \(u\) is consistent. Firstly, we give an equivalent characterization of the time orientation of \(u\). Secondly, we show that \(u_{s}\) is a partial Cauchy surface for each \(s\in\text{Image}(u)\). Finally, we give an example which shows that in general the level set \(u_{s}\) is not a Cauchy surface. Recall that the time orientation of \(u\) being consistent means that for any \(x,y\in M\), \(V\in\nabla^{*}u(x)\) and \(W\in\nabla^{*}u(y)\), the vectors \(V\) and \(W\) have the same time orientation.
Without loss of generality, we always assume in this section that the time orientation of \(u\) is past-directed and \(s\in\text{Image}(u)\).

### Equivalent characterization of the time orientation of viscosity solutions

**Lemma 4.1**.: _Let \(u\in\mathcal{S}(M)\); then the time orientation of \(u\) is always past-directed if and only if \(u\) is monotonically increasing along any inextendible future-directed causal curve._

Proof.: Assume that the time orientation of \(u\) is always past-directed and let \(\zeta\) be an inextendible future-directed causal curve. We first consider the case where \(u\) is differentiable almost everywhere on \(\zeta\). For any \(a<b\in\mathrm{dom}(\zeta)\), we have

\[u(\zeta(b))-u(\zeta(a))=\int_{a}^{b}d_{\zeta(s)}u(\dot{\zeta}(s))ds=\int_{a}^{b}g(\nabla u,\dot{\zeta})|_{\zeta(s)}ds. \tag{4.1}\]

Since \(u\) is locally Lipschitz with respect to \(h\), for any \(x\in M\) there exists a compact neighborhood \(U_{x}\) such that

\[\inf_{z\in U_{x},V\in\partial\mathcal{C}_{z}^{-}}\left\{\bigg{|}\frac{\nabla u(z)}{|\nabla u(z)|_{h}}-V\bigg{|}_{h}\right\}>0,\]

where \(\partial\mathcal{C}_{z}^{-}:=\{V\in T_{z}M|\ V\) is a past-directed lightlike vector and \(|V|_{h}=1\}\). Thus, we have

\[\inf_{z\in U_{x},W\in\mathcal{C}_{z}}g(\nabla u(z),W)>0,\]

where \(\mathcal{C}_{z}:=\{V\in T_{z}M|\ V\) is a future-directed causal vector and \(|V|_{h}=1\}\). Without loss of generality, we assume that \(\zeta(a)\) and \(\zeta(b)\) are both in \(U_{x}\). We parametrize \(\zeta|_{[a,b]}\) with respect to \(h\)-arc length and denote the resulting curve by \(\tilde{\zeta}:[\tilde{a},\tilde{b}]\to M\). By equality (4.1), we have

\[u(\zeta(b))-u(\zeta(a))=\int_{\tilde{a}}^{\tilde{b}}g(\nabla u,\dot{\tilde{\zeta}})|_{\tilde{\zeta}(s)}ds\geq d_{h}(\zeta(a),\zeta(b))\inf_{z\in U_{x},W\in\mathcal{C}_{z}}g(\nabla u(z),W). \tag{4.2}\]

If \(u\) is not differentiable almost everywhere on \(\zeta\), we can find an open subset \(U\subseteq M\) such that \(\zeta|_{[a,b]}\subseteq U\). Since the time orientation of \(u\) is past-directed, we can find a sequence of future-directed causal curves \(\zeta_{i}\) satisfying:

* \(\zeta_{i}\subseteq U\) for each \(i\);
* \(\zeta_{i}\rightarrow\zeta\) in the \(C^{0}\)-topology;
* \(u\) is differentiable almost everywhere on \(\zeta_{i}\).

We parametrize these curves \(\zeta_{i}|_{[a_{i},b_{i}]}\) with respect to \(h\)-arc length and use \(\tilde{\zeta}_{i}:[\tilde{a}_{i},\tilde{b}_{i}]\to M\) to denote them. Using the same argument as above,

\[u(\zeta(b))-u(\zeta(a))=\lim_{i\rightarrow\infty}\int_{\tilde{a}_{i}}^{\tilde{b}_{i}}d_{\tilde{\zeta}_{i}(s)}u(\dot{\tilde{\zeta}}_{i}(s))ds\geq\lim_{i\rightarrow\infty}d_{h}(\tilde{\zeta}_{i}(\tilde{a}_{i}),\tilde{\zeta}_{i}(\tilde{b}_{i}))\inf_{z\in U_{x},W\in\mathcal{C}_{z}}g(\nabla u(z),W)=d_{h}(\zeta(a),\zeta(b))\inf_{z\in U_{x},W\in\mathcal{C}_{z}}g(\nabla u(z),W)>0.\]

By the arbitrariness of \(a\), \(b\) and \(\zeta\), \(u\) is monotonically increasing along any inextendible future-directed causal curve.

To prove the other direction of the lemma, we propose the following claim.

_Claim:_ Let \(x\) be a differentiable point of \(u\); then there exists a neighborhood \(U\) of \(x\) such that for any differentiable point \(y\in U\), the time orientation of \(u\) at \(y\) is consistent with the time orientation of \(u\) at \(x\).
_Proof of Claim._ Indeed, if the claim is not true, we can find a sequence \(\{x_{i}\}\) with \(x_{i}\to x\) such that the time orientation of \(u\) at \(x_{i}\) is different from the one at \(x\). Then there exists a timelike vector in \(\nabla^{*}u(x)\) whose time orientation is opposite to that of \(\nabla u(x)\). This contradicts the hypothesis that \(x\) is a differentiable point of \(u\).

Furthermore, by the analysis above, \(u\) increases locally monotonically along any future-directed causal curve passing through \(x\) if \(u\) is differentiable at \(x\) and the time orientation of \(u\) at \(x\) is past-directed. If there exists a future-directed timelike vector \(V\in\nabla^{*}u(x)\), by the definition of \(\nabla^{*}\), we can get a sequence of differentiable points \(x_{i}\) such that \(\nabla u(x_{i})\) is future-directed timelike for each \(x_{i}\). By the _Claim_, we can easily find a future-directed causal curve \(\gamma\) in a neighborhood of \(x_{i}\) on which \(u\) is monotonically decreasing. One can extend \(\gamma\) to an inextendible causal curve. This contradicts the fact that \(u\) is monotonically increasing along any inextendible future-directed causal curve. This completes the proof of Lemma 4.1. \(\square\)

### The level sets of \(u\) are partial Cauchy surfaces

**Lemma 4.2**.: _Fix \(s\in\text{Image}(u)\); then \(u_{s}\) is a closed acausal set._

Proof.: Indeed, \(u_{s}\) is a closed subset by the continuity of \(u\). We only need to show that \(u_{s}\) is an acausal set. For \(x_{1},x_{2}\in u_{s}\), if there exists a future-directed causal curve \(\theta\) connecting \(x_{1}\) and \(x_{2}\), without loss of generality we can assume that \(\theta(0)=x_{1}\), \(\theta(1)=x_{2}\) and \(u\) is differentiable almost everywhere on \(\theta\). By Lemma 4.1, we have

\[u(x_{2})>u(x_{1}).\]

This contradicts the fact that \(x_{1},x_{2}\in u_{s}\). \(\square\)

**Lemma 4.3**.: _For each \(s\in\text{Image}(u)\), \(\text{edge}(u_{s})=\emptyset\)._

Proof.: On the contrary, for an arbitrarily fixed \(s\in\text{Image}(u)\) and some \(p\in\text{edge}(u_{s})\), by the definition of \(\text{edge}(u_{s})\), there exists a future-directed timelike curve \(\beta:[a,b]\to M\) with \(\beta(a)\ll p\ll\beta(b)\) and \(\beta(t)\notin u_{s}\) for any \(t\in[a,b]\). By Lemma 4.1, \(u\) is strictly monotonically increasing along any future-directed causal curve. Thus we must have \(u(\beta(a))<u(p)<u(\beta(b))\); this inequality, together with the continuity of \(u\), implies that there exists a \(t_{0}\in[a,b]\) such that \(u(\beta(t_{0}))=u(p)=s\). This is impossible, since \(\beta\) is chosen such that \(\beta(t)\notin u_{s}\) for any \(t\in[a,b]\). \(\square\)

Figure 1: A space-time admitting a viscosity solution \(u(x,y)\) whose level sets are not Cauchy surfaces in general.

Up to now, by Definition 2.7, \(u_{s}\) is a partial Cauchy surface for each \(s\in\mathrm{Image}(u)\). To conclude this section, we provide an example which shows that in general the level sets \(u_{s}\) are not Cauchy surfaces.

_Example_. In the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\), let \(M=I^{+}((-1,0))\cap I^{-}((1,0))\) be the induced space-time. Clearly, \(M\) is globally hyperbolic. It is easy to see that \(u(x,y)=x\) is a globally defined viscosity solution of equation (2.1) on \(M\), but the level sets \(u_{t}\) are only partial Cauchy surfaces, except for \(t=0\). See Figure 1.
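For the reader's convenience, we record the computation behind this example; the explicit curve below is only one possible choice. On \(M\), the function \(u(x,y)=x\) is smooth with \(\nabla u=-\partial_{x}\) and \(g(\nabla u,\nabla u)=-1\), so \(u\) solves (2.1) classically and hence in the viscosity sense, with past-directed time orientation. Now fix \(t\in(-1,0)\) and set \(\varepsilon=-t\). The curve

\[\gamma(s)=\big{(}s,(1-\varepsilon)(1-s)\big{)},\qquad s\in\Big{(}-\frac{\varepsilon}{2-\varepsilon},1\Big{)},\]

satisfies \(g(\dot{\gamma},\dot{\gamma})=-1+(1-\varepsilon)^{2}<0\), so it is a future-directed timelike curve, and it is inextendible in \(M\): its endpoints lie on the lightlike boundary \(\{y=x+1\}\) and at the corner \((1,0)\), neither of which belongs to \(M\). Since \(\gamma\) meets only the level sets \(u_{t^{\prime}}\) with \(t^{\prime}>-\varepsilon/(2-\varepsilon)=t/(2+t)>t\), it never meets \(u_{t}\), so \(u_{t}\) is not a Cauchy surface. The case \(t\in(0,1)\) follows from the isometry \((x,y)\mapsto(-x,-y)\) of \(M\). By contrast, one can check that every inextendible causal curve in \(M\) leaves \(M\) in the past through the boundary portion with \(x<0\) and in the future through the portion with \(x>0\), hence crosses \(\{x=0\}\); so \(u_{0}\) is a Cauchy surface.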
## 5 A variational representation of \(u\)

In this section, we give a variational representation of \(u\) when the time orientation of \(u\) is always past-directed. First, we propose a variational problem and prove the existence of minimizers. Second, we prove equality (2.2) and the existence of forward calibrated curves of \(u\). Finally, we show that \(u\) is a locally semiconcave function. Together, these results complete the proof of Theorem 2. Our proof is motivated by the weak KAM theory for positive definite Lagrangian systems [10]. To go into the details, we first fix a notation: in this section, \(\Omega^{t}_{x}\) denotes the set of all past-directed piecewise smooth timelike curves \(\gamma:[0,t]\to M\) with \(\gamma(t)=x\).

### The existence of minimizers for the variational problem

Define

\[\tilde{u}(t,x):=\inf_{\gamma\in\Omega^{t}_{x}}\left[u(\gamma(0))-\int_{0}^{t}\sqrt{-g(\dot{\gamma}(s),\dot{\gamma}(s))}ds\right]. \tag{5.1}\]

First, we prove the existence of minimizers for this variational problem.

**Lemma 5.1**: _For any \(x\in M\), there exist a \(t_{0}>0\) depending on \(x\) and a past-directed timelike curve \(\gamma_{x}\in\Omega^{t}_{x}\) such that_

\[\tilde{u}(t,\gamma_{x}(t))=u(\gamma_{x}(0))-\int_{0}^{t}\sqrt{-g(\dot{\gamma}_{x}(s),\dot{\gamma}_{x}(s))}ds \tag{5.2}\]

_for any \(t\leq t_{0}\)._

Proof.: For any \(x\in M\) and \(\gamma\in\Omega^{t}_{x}\), \(L(\gamma)\leq u(\gamma(0))-u(x)\), and consequently \(\tilde{u}(t,x)\geq u(x)\). By Lemma 4.1 and inequality (4.2), for any future-directed causal curve \(\gamma:[0,t]\to M\) with \(\gamma(0)=x\), there exists a positive constant \(\delta\), depending only on \(d_{h}(\gamma(0),\gamma(t))\) and the local Lipschitz constant of \(u\), such that \(u(\gamma(t))-u(\gamma(0))\geq\delta>0\). Therefore, there exists an \(s\in\mathrm{Image}(u)\) such that \(x\in D^{-}(u_{s})\setminus u_{s}\). For any \(z\in J^{+}_{u_{s}}(x)\), we set \(t_{x,z}=\frac{1}{2}d(x,z)\) and \(t_{0}=\max\limits_{z\in J^{+}_{u_{s}}(x)}t_{x,z}\). For the set \(\{p\in I^{+}(x)|d(x,p)=t_{0}\}\), there exists a point \(y\in u_{s}\) such that \(d(x,y)=t_{0}\). Recall that \(J^{+}_{u_{s}}(x)\) is a compact set by Lemma 3.1. Then for any \(V_{i}\in U_{h}T_{x}M\cap\mathcal{C}_{x}\), there exists a \(T_{V_{i}}>0\) (parametrized by \(h\)) such that \(\gamma_{V_{i}}(T_{V_{i}})\in J^{+}_{u_{s}}(x)\) and

\[J^{+}(x)\cap J^{-}[u_{s}]=\bigcup_{i}\gamma_{V_{i}}[0,T_{V_{i}}].\]

Furthermore, there exists a \(T>0\) independent of \(i\) such that \(\bigcup_{i}\gamma_{V_{i}}[0,T_{V_{i}}]\subseteq\bigcup_{i}\gamma_{V_{i}}[0,T]\). Since \(U_{h}T_{x}M\cap\mathrm{int}\,\mathcal{C}_{x}\) is diffeomorphic to an open ball in \(R^{\mathrm{Dim}(M)-1}\), we conclude that

\[J^{+}(x)\cap J^{-}[u_{s}]\subseteq\bigcup_{i}\gamma_{V_{i}}[0,T]\]

is contained in a compact subset of \(M\). Since the Lorentzian length functional \(L(\gamma)\) does not depend on the parameterization of \(\gamma\), for the minimizer \(\gamma\) of (5.1) we can always assume that \(\gamma\) is a unit speed maximal geodesic; then it is easy to check that

\[\tilde{u}(t_{0},x)=\inf_{x\leq y,d(x,y)=t_{0}}u(y)-t_{0}. \tag{5.3}\]

Then for any \(t\leq t_{0}\), the minimizer of (5.3) must be contained in \(\{y|d(x,y)=t_{0}\}\cap J^{-}[u_{s}]\), which is a compact subset of \(M\). Because of the upper semicontinuity of \(L(\cdot)\) and the existence of a maximal geodesic between any two causally related points of \(M\), we can finally find a \(\gamma_{x}\) satisfying equation (5.2).
\(\square\)

In the rest of this section, for any \(x\in M\), we use \(\gamma^{t}_{x}\) to denote one of the minimizers in equality (5.1).

### Variational representation and forward calibrated curves

In this subsection, we show that the function \(u\) has a local variational representation and a forward calibrated curve. Based on Lemma 5.1, we have the following lemma.

**Lemma 5.2**: _For any \(x\in M\) and \(0<t\leq t_{0}\), we have_

\[u(x)=\tilde{u}(t,x)=\inf_{x\leq y,d(x,y)=t}u(y)-t. \tag{5.4}\]

Proof.: Indeed, the second equality has been proved in Lemma 5.1. We only need to show that \(u(x)=\tilde{u}(t,x)\) for any \(t\in(0,t_{0}]\). For \(0<t_{1}<t_{2}<t_{0}\), by Lemma 5.1, we have two past-directed timelike curves \(\gamma_{x}^{t_{1}}\) and \(\gamma_{x}^{t_{2}}\). We define a past-directed timelike curve \(\gamma_{x}^{t_{1},t_{2}}:[0,t_{2}]\to M\) by \(\gamma_{x}^{t_{1},t_{2}}(s)=\gamma_{x}^{t_{1}}(\frac{t_{1}}{t_{2}}s)\) for any \(s\in[0,t_{2}]\). Since the Lorentzian length functional \(L(\gamma)\) does not depend on the parameterization of \(\gamma\), it can be concluded that

\[\tilde{u}(t_{2},x)-\tilde{u}(t_{1},x)\leq\big{[}u(\gamma_{x}^{t_{1},t_{2}}(0))-L(\gamma_{x}^{t_{1},t_{2}})\big{]}-\big{[}u(\gamma_{x}^{t_{1}}(0))-L(\gamma_{x}^{t_{1}})\big{]}=0.\]

Similarly, it can be inferred that

\[\tilde{u}(t_{2},x)-\tilde{u}(t_{1},x)\geq 0.\]

Thus, \(\tilde{u}(t,x)\) is constant with respect to \(t\). Letting \(t\to 0\), we finally conclude that

\[\tilde{u}(t,x)=u(x).\]

\(\square\)

Up to now, we have shown that \(u(x)=\inf_{x\leq y,d(x,y)=t}u(y)-t\) for any \(t\in(0,t_{0}]\). Without loss of generality, in the rest of this paper, we always assume that \(\gamma_{x}^{t_{0}}\) is a unit speed maximal geodesic segment. For any \(s\in[0,t_{0}]\), we define

\[\tilde{\gamma}_{x}^{t_{0}}(s)=\gamma_{x}^{t_{0}}(t_{0}-s).\]

Then we have a future-directed unit speed timelike curve \(\tilde{\gamma}_{x}^{t_{0}}:[0,t_{0}]\to M\) satisfying

\[u(\tilde{\gamma}_{x}^{t_{0}}(s_{2}))-u(\tilde{\gamma}_{x}^{t_{0}}(s_{1}))=s_{2}-s_{1} \tag{5.5}\]

for any \(0\leq s_{1}\leq s_{2}\leq t_{0}\). For the endpoint \(\tilde{\gamma}_{x}^{t_{0}}(t_{0})\), using Lemma 5.2 again, we can extend \(\tilde{\gamma}_{x}^{t_{0}}:[0,t_{0}]\to M\) to a future-directed timelike curve \(\tilde{\gamma}_{x}^{t_{0}+t_{1}}:[0,t_{0}+t_{1}]\to M\). Obviously, \(\tilde{\gamma}_{x}^{t_{0}+t_{1}}\) satisfies equality (5.5) for any \(0\leq s_{1}\leq s_{2}\leq t_{0}+t_{1}\). Inductively, one can get a future-directed inextendible timelike curve, denoted by \(\gamma_{x}\). Indeed, if \(\gamma_{x}\) had a future endpoint \(p\), then, since \(u\) is locally Lipschitz, there would exist a \(t_{p}>0\) such that \(\gamma_{x}\) could be extended further. In the rest of this paper, we always use \(\gamma_{x}:[0,T)\to M\) to denote this curve, where \(T\) is the maximal existence time of \(\gamma_{x}\). By Definition 2.6, \(\gamma_{x}\) is a forward calibrated curve of \(u\) at \(x\).

In the following, we show that \(\gamma_{x}\) is a ray. Fix arbitrary \(t_{1}<t_{2}\in[0,T)\). For any timelike curve \(\eta\) connecting \(\gamma_{x}(t_{1})\) and \(\gamma_{x}(t_{2})\), we first assume that \(u\) is differentiable almost everywhere on \(\eta\). Then by the reverse Cauchy-Schwarz inequality, we have

\[L(\gamma_{x})|_{(t_{1},t_{2})}=u(\gamma_{x}(t_{2}))-u(\gamma_{x}(t_{1}))\geq\int g(\nabla u(\eta),\dot{\eta})\geq L(\eta).\]

If \(u\) is not differentiable almost everywhere on \(\eta\), using the method in Lemma 4.1, we still have the above inequality.
This means that the inextendible timelike curve \(\gamma_{x}\) is maximal on each segment. In a word, \(\gamma_{x}\) is an inextendible forward calibrated ray.

### Local semiconcavity of \(u\)

In this subsection, we show that \(u\) is locally semiconcave when the time orientation of \(u\) is consistent. Actually, with the help of the variational representation of \(u\), one can directly obtain the semiconcavity of \(u\) in a local chart. Here we use the method of Proposition 3.1.

**Proposition 5.1**.: _Let \(u\in\mathcal{S}(M)\) and suppose that the time orientation of \(u\) is always past-directed. Then for any \(x\in M\), there exists an \(s\in\text{Image}(u)\) such that, up to a constant,_

\[u(x)=-d(x,u_{s}).\]

Proposition 5.1 can be seen as a space-time counterpart of the results obtained in [4], where the author showed that, in the Riemannian case, any viscosity solution is a kind of distance-like function. Distance-like functions, introduced by Gromov, play an important role in studying the geometry and topology of certain non-compact manifolds. For more details, readers may refer to [4, 12]. The proof of Proposition 5.1 is based on the following lemmas.

**Lemma 5.3**.: _For any unit vector \(V\in\nabla^{*}u(x)\), there exist a constant \(c>0\) and a unique future-directed timelike geodesic \(\gamma_{x,V}:[0,c]\to M\) satisfying_

\[\gamma_{x,V}(0)=x,\ \ \ u(\gamma_{x,V}(t))=u(\gamma_{x,V}(0))+t\ \ \text{for every}\ \ t\in[0,c]\]

_and_

\[\dot{\gamma}_{x,V}(0)=-V.\]

Proof.: For any \(V\in\nabla^{*}u(x)\), by the definition of \(\nabla^{*}u(x)\), there exists a sequence of events \(x_{i}\in M\) with \(x_{i}\to x\) such that \(u\) is differentiable at \(x_{i}\) and \(\nabla u(x_{i})\to V\). For each \(x_{i}\in M\), by the last subsection, there exists a future inextendible unit speed timelike geodesic \(\gamma_{i}:[0,t_{i})\to M\) such that

\[u(\gamma_{i}(t))-u(\gamma_{i}(0))=t\]

for every \(t\in[0,t_{i})\). Dividing by \(t\) and letting \(t\to 0\), we have

\[g(\nabla u(x_{i}),\dot{\gamma}_{i}(0))=1.\]

On the other hand, by the reverse Cauchy-Schwarz inequality,

\[g(\nabla u(x_{i}),\dot{\gamma}_{i}(0))\geq|\nabla u(x_{i})||\dot{\gamma}_{i}(0)|=1.\]

Thus,

\[\dot{\gamma}_{i}(0)=-\nabla u(x_{i}).\]

For such a sequence of timelike geodesics \(\gamma_{i}\), by the smooth dependence of solutions of ODEs on the initial conditions, there exists a future-directed timelike geodesic \(\gamma_{\infty}:[0,T)\to M\) with \(\gamma_{\infty}(0)=x\), \(\dot{\gamma}_{\infty}(0)=-V\), where \(T\) is the maximal existence time of \(\gamma_{\infty}\), such that the \(\gamma_{i}\) converge to \(\gamma_{\infty}\) on any compact subset of \([0,T)\) with respect to the \(C^{1}\) topology. Moreover, since the maximal existence time of an ODE solution is lower semicontinuous with respect to the initial value, we can find a uniform \(c>0\) such that \(\{\gamma_{i}|_{[0,c]}\}_{i}\) and \(\gamma_{\infty}|_{[0,c]}\) are well-defined. By the continuity of the Lorentzian distance function \(d\), the maximality of the \(\gamma_{i}\) and the upper semicontinuity of the length functional with respect to the \(C^{0}\)-topology, we have, for each \(t\in[0,c]\),

\[d(\gamma_{\infty}(0),\gamma_{\infty}(t))=\lim_{i\to\infty}d(\gamma_{i}(0),\gamma_{i}(t))=\lim_{i\to\infty}L(\gamma_{i}|_{[0,t]})\leq L(\gamma_{\infty}|_{[0,t]})\leq d(\gamma_{\infty}(0),\gamma_{\infty}(t))=d(x,\gamma_{\infty}(t)).\]

Thus \(\gamma_{\infty}\) is maximal on each segment.
By the continuity of \(u\) and the convergence of \(\gamma_{x_{i}}\),

\[u(\gamma_{\infty}(t))=\lim_{i\to\infty}u(\gamma_{x_{i}}(t))=\lim_{i\to\infty}u(\gamma_{x_{i}}(0))+t=u(x)+t.\]

Finally, define \(\gamma_{x,V}:=\gamma_{\infty}\); then it is easily seen that \(\gamma_{x,V}\) satisfies all the properties we need. So the proof of Lemma 5.3 is complete.

**Lemma 5.4**: _Let \(u\in\mathcal{S}(M)\) and suppose that the time orientation of \(u\) is always past-directed. Then for any \(x\in M\), there exists an \(s\in R\), sufficiently close to \(u(x)\), such that \(d(x,u_{s})=s-u(x)\)._

Proof.: Choose a reachable unit vector \(V\in\nabla^{*}u(x)\). By Lemma 5.3, there exists a unique future-directed maximal geodesic segment \(\beta:[0,c]\to M\) such that \(\beta(0)=x\), \(\dot{\beta}(0)=-V\) and \(u(\beta(t))-u(x)=t\). Since \(s\) is sufficiently close to \(u(x)\), there exists a unique \(t_{1}\) such that \(u(\beta(t_{1}))=s\), \(t_{1}=s-u(x)\) and

\[d(x,\beta(t_{1}))=\operatorname{length}(\beta|_{[0,t_{1}]})=t_{1}=s-u(x).\]

Since \(\beta(t_{1})\in u_{s}\), we have \(d(x,u_{s})\geq s-u(x)\). If \(s_{1}:=d(x,u_{s})>s-u(x)\), then, applying Fubini's theorem, we can find an absolutely continuous causal curve (parameterized by arc-length) \(\gamma:[0,s_{2}]\to M\) with \(\gamma(0)=x\in u_{u(x)}\), \(\gamma(s_{2})\in u_{s}\), such that \(0\leq s_{1}-s_{2}\leq\frac{1}{2}(s_{1}-s+u(x))\) and \(u\) is differentiable almost everywhere along \(\gamma\) (with respect to the \(1\)-dimensional Lebesgue measure). By the reverse Cauchy-Schwarz inequality for causal vectors, we have

\[s-u(x) =u(\gamma(s_{2}))-u(\gamma(0))\] \[=\int_{0}^{s_{2}}g(\nabla u,\dot{\gamma})|_{\gamma(t)}dt\] \[\geq\operatorname{length}(\gamma|_{[0,s_{2}]})\] \[=s_{2}\] \[\geq s_{1}-\frac{1}{2}(s_{1}-s+u(x))\] \[>s-u(x).\]

This contradiction proves \(d(x,u_{s})=s-u(x)\). \(\square\)

_Proof of Proposition 5.1._ For any \(x\in M\), by Lemma 5.1, there exists \(s\in(a,b)\) such that \(x\in D^{-}(u_{s})\) and \(u(x)\) has the variational representation (2.2). Then, by Lemma 5.4, for another \(x_{0}\in D^{-}(u_{s})\), we have

\[d(x,u_{s})=s-u(x), \tag{5.6}\]

and

\[d(x_{0},u_{s})=s-u(x_{0}). \tag{5.7}\]

Subtracting equality (5.6) from equality (5.7), we obtain

\[d(x_{0},u_{s})-d(x,u_{s})=u(x)-u(x_{0}).\]

This means that

\[u(x)=-d(x,u_{s})+d(x_{0},u_{s})+u(x_{0}).\]

_Proof of Theorem 2._ The first two conclusions of Theorem 2 can be obtained from Lemmas 4.2, 4.3 and 5.2. For the last conclusion of Theorem 2, we can construct a support function by Propositions 5.1 and 3.1, and Lemma 3.6 guarantees the local semiconcavity of \(u\).

**Remark 5.1**: _Our results show that when the time orientation of a viscosity solution is consistent, the viscosity solution is locally semiconcave. Actually, it is not difficult to show that when the time orientation of a function \(u\in Lip_{loc}(M)\) is consistent, \(u\in\mathcal{S}(M)\) if and only if \(u\) is a locally semiconcave function satisfying (2.1) almost everywhere._

## 6 Weak KAM properties of \(u\)

In this section, we show that \(u\) satisfies some particular properties, the so-called weak KAM properties [10], when the time orientation of \(u\) is consistent. Similar properties for Busemann functions and regular cosmological time functions have been obtained on space-times [7]. In this section, we always assume that the time orientation of \(u\) is past-directed.
Recall that if \(u\) is differentiable at \(x\in M\), then by Lemma 5.3 and subsection 5.2, there exists a forward calibrated ray \(\gamma_{x}:[0,T)\to M\) satisfying

\[\gamma_{x}(0)=x,\quad u(\gamma_{x}(t))=u(\gamma_{x}(0))+t\ \ \ \mbox{for every}\ \ \ t\in[0,T)\]

and

\[\dot{\gamma}_{x}(0)=-\nabla u(x).\]

**Proposition 6.1**.: _Suppose there is a unique future-directed timelike curve \(\gamma_{x}:[0,T)\to M\) such that_

\[\gamma_{x}(0)=x,\ \ u(\gamma_{x}(t))=u(\gamma_{x}(0))+t\ \ \mbox{for every}\ \ \ t\in[0,T). \tag{6.1}\]

_Then \(u\) is differentiable at \(x\)._

Proof.: By Proposition 5.1 and Theorem 2, for any \(x\in M\) there exists some \(s\in R\) such that \(u(\cdot)=-d(\cdot,u_{s})\) (up to a constant) is a locally semiconcave function. Furthermore, \(\nabla^{+}u(x)\neq\emptyset\) for every \(x\in M\). Suppose \(u\) is non-differentiable at \(x\); then \(\nabla^{+}u(x)\) is not a singleton. By Lemma 3.9, \(\nabla^{*}u(x)\) is not a singleton either. Finally, by Lemma 5.3, the curves satisfying condition (6.1) cannot be unique. This completes the proof. \(\square\)

**Proposition 6.2**.: _For any \(x\in M\), let \(\gamma_{x}:[0,T)\to M\) be a future-directed timelike curve with \(\gamma_{x}(0)=x\) and_

\[u(\gamma_{x}(t))=u(\gamma_{x}(0))+t\ \ \mbox{for every}\ \ t\in[0,T), \tag{6.2}\]

_then \(u\) is differentiable at \(\gamma_{x}(t)\) and \(\dot{\gamma}_{x}(t)=-\nabla u(\gamma_{x}(t))\) for any \(t\in(0,T)\)._

Proof.: If there is a \(T>s_{0}>0\) such that \(u\) is non-differentiable at \(\gamma_{x}(s_{0})\), then by Lemma 5.3 there exists a future-directed timelike geodesic \(\tilde{\gamma}_{\gamma_{x}(s_{0})}:[s_{0},T)\to M\) with \(\tilde{\gamma}_{\gamma_{x}(s_{0})}(s_{0})=\gamma_{x}(s_{0})\) and

\[u(\tilde{\gamma}_{\gamma_{x}(s_{0})}(s))=u(\gamma_{x}(s_{0}))+(s-s_{0})\ \ \mbox{for every}\ \ s\in[s_{0},T). \tag{6.3}\]

For convenience, set \(\hat{\gamma}(t)=\gamma_{x}(t)\) for any \(t\geq 0\) and \(\tilde{\gamma}(s)=\tilde{\gamma}_{\gamma_{x}(s_{0})}(s)\) for any \(s\geq s_{0}\). Since \(u\) is non-differentiable at \(\hat{\gamma}(s_{0})\), we may choose \(\tilde{\gamma}\) with \(\dot{\tilde{\gamma}}(s_{0})\neq\dot{\hat{\gamma}}(s_{0})\); then the curve \(\tilde{\gamma}*\hat{\gamma}|_{[0,s_{0})}:[0,T)\to M\) has a corner at \(\hat{\gamma}(s_{0})\). Here and in the following, \(*\) denotes the conjunction of curves. By the transitivity of the chronological relation \(\ll\) [2, p.55], for \(0<s_{0}<s_{1}\), we have \(x=\hat{\gamma}(0)\ll\hat{\gamma}(s_{0})\ll\tilde{\gamma}(s_{1})\). Since \(M\) is globally hyperbolic, there is a maximal future-directed timelike geodesic \(\sigma:[0,s_{1}]\to M\) such that \(\sigma(0)=x\), \(\sigma(s_{1})=\tilde{\gamma}(s_{1})\). By the smoothness of geodesics and the reverse triangle inequality,

\[d(\sigma(0),\sigma(s_{1}))=d(\hat{\gamma}(0),\tilde{\gamma}(s_{1}))>d(\tilde{\gamma}(s_{0}),\tilde{\gamma}(s_{1}))+d(\hat{\gamma}(0),\hat{\gamma}(s_{0}))=s_{1}. \tag{6.4}\]

By Lemma 3.3 and inequality (6.4), we obtain

\[u(\tilde{\gamma}(s_{1}))\geq u(x)+d(x,\tilde{\gamma}(s_{1}))>u(x)+s_{1}. \tag{6.5}\]

On the other hand, by (6.2) and (6.3), we have

\[u(\tilde{\gamma}(s_{1}))=u(\tilde{\gamma}(s_{0}))+(s_{1}-s_{0})=u(x)+s_{0}+(s_{1}-s_{0})=u(x)+s_{1}. \tag{6.6}\]

This contradiction proves Proposition 6.2. \(\square\)

_Proof of Theorem 3._ The last conclusion of Theorem 3 follows from Proposition 6.2. If \(u\) is differentiable at \(x\), then by Proposition 6.2, \(u\) is differentiable everywhere along \(\gamma_{x}\), and by subsection 5.2 we have

\[\nabla u(\gamma_{x}(t))=-\dot{\gamma}_{x}(t)\]

for any \(t\in[0,T)\). That is to say, if \(u\) is differentiable at \(x\), there is exactly one future-directed timelike ray \(\gamma_{x}:[0,T)\to M\) with \(\gamma_{x}(0)=x\), \(\dot{\gamma}_{x}(0)=-\nabla u(x)\) and

\[u(\gamma_{x}(0))=u(\gamma_{x}(s))-s\ \ \mbox{for any}\ \ s\in[0,T).\]

This, together with Proposition 6.1, establishes the first conclusion of Theorem 3.
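As a simple illustration of the weak KAM properties above (this worked example is ours and is only meant as a sanity check), consider the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\) and the smooth function \(u(x,y)=x\). Then

\[g(\nabla u,\nabla u)=-(\partial_{x}u)^{2}+(\partial_{y}u)^{2}=-1,\]

so \(u\in\mathcal{S}(M)\), with \(\nabla u=(-1,0)\) past-directed. For every point \((x_{0},y_{0})\), the curve \(\gamma(t)=(x_{0}+t,y_{0})\) satisfies \(\dot{\gamma}(t)=-\nabla u(\gamma(t))\) and \(u(\gamma(t))=u(\gamma(0))+t\); it is the unique forward calibrated ray through \((x_{0},y_{0})\), in accordance with Propositions 6.1 and 6.2.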
## 7 Classification of global viscosity solutions of equation (2.1)

Up to now, we have discussed the case where the time orientation of the viscosity solution is consistent. In this section, we study the situation where the time orientation of the viscosity solution is non-consistent.

**Lemma 7.1**: _Let \(u\in\mathcal{S}(M)\). For \(x\in M\), if the time orientation of \(u\) does not change at \(x\), then there exists a neighborhood \(U\) of \(x\) such that for any \(y\in U\), the time orientation of \(u\) at \(y\) is consistent with the time orientation of \(u\) at \(x\)._

Proof.: Without loss of generality, we assume that the time orientation of \(u\) at \(x\) is past-directed. If the conclusion were not true, there would exist a sequence \(\{x_{i}\}\) converging to \(x\) and future-directed timelike vectors \(V_{i}\in\nabla^{*}u(x_{i})\) for each \(i\). By the definition of \(\nabla^{*}\), for each \(i\) there exists a sequence \(x_{i}^{n}\to x_{i}\) as \(n\to\infty\) such that \(u\) is differentiable at every \(x_{i}^{n}\) and \(\nabla u(x_{i}^{n})\to V_{i}\) as \(n\to\infty\). By the diagonal method, we can select a subsequence \(x_{n_{i}}\) satisfying \(x_{n_{i}}\to x\) as \(i\to\infty\); moreover, \(u\) is differentiable at \(x_{n_{i}}\) and \(\nabla u(x_{n_{i}})\) is future-directed. Then the future-directed timelike vector \(\lim\limits_{i\to\infty}\nabla u(x_{n_{i}})\) must belong to \(\nabla^{*}u(x)\). This contradicts the assumption that the time orientation of \(u\) at \(x\) is past-directed. \(\Box\)

Lemma 7.1, together with Lemma 4.1, implies that if the time orientation of \(u\) at \(x\) does not change, then \(u\) is strictly monotone along any future-directed causal curve in a neighborhood of \(x\).

**Lemma 7.2**: _Assume that \(u\in\mathcal{S}(M)\) and \(P_{u}=\emptyset\). Then for any inextendible future-directed causal curve \(\gamma\), \(u\) cannot achieve its maximum at \(\gamma(t)\) for any \(t\in dom(\gamma)\)._

Proof.: Assume that there exist a future-directed causal curve \(\gamma\colon(a,b)\to M\) and a \(t_{0}\in(a,b)\) such that \(u(\gamma(t_{0}))\) is a local maximum of \(u\) along \(\gamma\). For convenience, set \(\bar{x}=\gamma(t_{0})\). By Lemmas 4.1 and 7.1, it is easy to obtain that \(\bar{x}\in C_{u}\). Recalling that \(P_{u}=\emptyset\), we only need to consider the following two cases: \(u\) is locally semiconvex at \(\bar{x}\), or \(u\) is locally semiconcave at \(\bar{x}\).

If \(u\) is locally semiconvex at \(\bar{x}\), then for any \(V\in\nabla^{-}u(\bar{x})\), we can find a neighborhood \(O\) of \(\bar{x}\) and a function \(\phi\in C^{1}(O)\) with \(u(x)\geq\phi(x)\) in \(O\), \(u(\bar{x})=\phi(\bar{x})\) and \(\nabla\phi(\bar{x})=V\). Consequently, for any point \(x\) of \(\gamma\) in \(O\),

\[\phi(x)\leq u(x)\leq u(\bar{x})=\phi(\bar{x}),\]

so \(\phi\) attains a local maximum along \(\gamma\) at \(\bar{x}\). Since \(\phi\) is a \(C^{1}\) function, it follows that \(g(\nabla\phi(\bar{x}),\dot{\gamma}(t_{0}))=0\), which means that \(V\) is orthogonal to \(\dot{\gamma}(t_{0})\). On the other hand, \(\nabla^{-}u(\bar{x})\) must contain timelike vectors, and a timelike vector cannot be orthogonal to the causal vector \(\dot{\gamma}(t_{0})\). This contradiction shows that \(u\) is not locally semiconvex at \(\bar{x}\).
If \(u\) is locally semiconcave in a neighborhood \(O\) of \(\bar{x}\), then by Lemma 3.9,

\[\nabla^{+}u(\bar{x})=co\nabla^{*}u(\bar{x}).\]

Since \(\bar{x}\in C_{u}\), \(\nabla^{*}u(\bar{x})\) contains both future-directed and past-directed vectors. Thus \(co\nabla^{*}u(\bar{x})\) must contain either spacelike vectors or the zero vector. Let \(\xi\) be a spacelike vector or the zero vector in \(co\nabla^{*}u(\bar{x})\); then

\[g(\xi,\xi)\geq 0. \tag{7.1}\]

On the other hand, by the definition of viscosity subsolution, for any \(V\in\nabla^{+}u(\bar{x})\),

\[g(V,V)\leq-1. \tag{7.2}\]

This contradicts inequality (7.1), and thus Lemma 7.2 is proved. \(\Box\)

Proof of Theorem 4.: Recall that \(u\in\mathcal{S}(M)\) and \(P_{u}=\emptyset\). Let \(\gamma\) be an inextendible future-directed causal curve. By Lemmas 4.1 and 7.2, we get the following three conclusions:

* \(u\) is strictly monotonically increasing along \(\gamma(t)\) when the time orientation of \(u\) is always past-directed.
* \(u\) is strictly monotonically decreasing along \(\gamma(t)\) when the time orientation of \(u\) is always future-directed.
* When the time orientation of \(u\) changes at \(\gamma(t_{0})\), \(u\) decreases strictly monotonically along \(\gamma(t)\) for \(t<t_{0}\) and increases strictly monotonically along \(\gamma(t)\) for \(t>t_{0}\).

In the last conclusion, we know that \(\gamma(t_{0})\in C_{u}\). Conversely, if the time orientation of \(u\) changes at a point \(\bar{x}\), by the analysis above and Lemma 7.2, we know that \(u\) is locally semiconvex in a neighborhood of \(\bar{x}\). Meanwhile, if there exists a \(T\in\operatorname{dom}(\gamma)\) such that \(u\) is past-directed at \(\gamma(T)\), then for any \(t>T\), the time orientation of \(u\) is past-directed at \(\gamma(t)\). Thus, the time orientation of \(u\) changes at most once along any inextendible future-directed causal curve. It is important to note that this last conclusion does not depend on the choice of \(\gamma\): if the time orientation of \(u\) changes at a point \(x\), then the conclusion holds true for all inextendible future-directed causal curves passing through \(x\).

Up to now, our results show that when the time orientation of a viscosity solution is consistent, its properties are essentially not much different from the Riemannian case. But when the time orientation of a viscosity solution is non-consistent, viscosity solutions exhibit many peculiar properties. According to Theorem 4, we can draw the following conclusion.

**Proposition 7.1**: _Let \(\phi\), \(\varphi\in\mathcal{S}(M)\), each with consistent time orientation, and suppose the time orientations of \(\phi\) and \(\varphi\) are identical. Then \(\min\{\phi,\varphi\}\in\mathcal{S}(M)\)._

Proof.: Proposition 7.1 is a straightforward consequence of Theorem 4. Firstly, by Theorem 2, both \(\phi\) and \(\varphi\) are locally semiconcave. By [6, Proposition 1.1.3], \(\min\{\phi,\varphi\}\) is locally semiconcave on \(M\). Thus, by Remark 5.1, we only need to show that \(\min\{\phi,\varphi\}\) satisfies equation (2.1) at its differentiable points. Let \(M_{1}:=\{x:\phi(x)\neq\varphi(x)\}\) and \(M_{2}:=\{x:\phi(x)=\varphi(x)\}\), and denote the interior of \(M_{2}\) by \(\operatorname{int}(M_{2})\). Let \(U:=M_{1}\cup\operatorname{int}(M_{2})\); then \(\min\{\phi,\varphi\}\) satisfies equation (2.1) at its differentiable points in \(U\).
Obviously, \(U\) is an open and dense subset; by [6, Proposition 3.3.4], \(\min\{\phi,\varphi\}\) satisfies equation (2.1) at any differentiable point. So \(\min\{\phi,\varphi\}\) is indeed a viscosity solution to equation (2.1). \(\square\)

At the end of this section, we provide an example illustrating the differences between viscosity solutions in the Riemannian case and in space-time.

_Example_. Let \((M,g)\) be the 2-dimensional Minkowski space-time, and \(f(x,y)=\sqrt{2}|x|-|y|\). Clearly, \(f\) is Lipschitz on \(M\). By a simple calculation,

\[\nabla f=\left\{\begin{array}{ll}(\sqrt{2},1)&x>0,y>0,\\ (\sqrt{2},-1)&x>0,y<0,\\ (-\sqrt{2},1)&x<0,y>0,\\ (-\sqrt{2},-1)&x<0,y<0,\end{array}\right.\]

so that \(g(\nabla f,\nabla f)=1-2=-1\) in each of the four quadrants, and it is easy to check that \(f\) is indeed a globally defined viscosity solution to equation (2.1). Note that \(f\) is neither semiconcave nor semiconvex on any neighborhood of \((0,0)\), i.e., \((0,0)\in P_{f}\). See Figure 2.

## 8 Characterization of \(C_{u}\)

In this section we study the structure of \(C_{u}\) more precisely. Let \(\Gamma(M)\) be the collection of all inextendible future-directed causal curves on \(M\). In the following, for any \(\gamma\in\Gamma(M)\), by the proof of Lemma 4.1, we always assume that \(u\) is differentiable almost everywhere on \(\gamma\).

By the definition of \(C_{u}\) and Theorem 4, for any \(x\in C_{u}\) one easily obtains an inextendible future-directed causal curve \(\gamma(t)\) with \(\gamma(0)=x\) such that \(u\) is differentiable almost everywhere on \(\gamma\). Moreover, \(u\) is strictly decreasing on \(\gamma(t)\) for any \(t<0\) and strictly increasing on \(\gamma(t)\) for any \(t>0\); the time orientation of \(u\) is future-directed at \(\gamma(t)\) for any \(t<0\) and past-directed at \(\gamma(t)\) for any \(t>0\). Fix \(t>0\) and define

\[J_{t,x,\gamma}:=J^{+}(\gamma(-t))\cap J^{-}(\gamma(t)),\]

\[C_{t,x,\gamma}:=J^{+}(\gamma(-t))\cap J^{-}(\gamma(t))\cap C_{u},\]

and

\[C_{\eta,t,x,\gamma}:=\{\eta\in\Gamma(M)\,|\,\eta(\cdot)\subset J_{t,x,\gamma}\cup J^{+}(x)\cup J^{-}(x)\};\]

then we have the following conclusion.

**Lemma 8.1**: _For \(u\in\mathcal{S}(M)\) with \(P_{u}=\emptyset\), for any \(x\in C_{u}\) and \(\eta\in C_{\eta,t,x,\gamma}\), there exists a \(t_{0}\in R\) such that either \(\eta(t_{0})=x\) or \(\eta(t_{0})\in J_{t,x,\gamma}\setminus J^{\pm}(x)\). Furthermore, \(u\) is strictly decreasing on \(\eta(t)\) for any \(t<t_{0}\) and strictly increasing on \(\eta(t)\) for any \(t>t_{0}\)._

The proof of the lemma is based on the following claim.

_Claim_: For any \(\eta\in C_{\eta,t,x,\gamma}\) with \(\eta(s_{1})=\gamma(-t)\), \(\eta(s_{2})=\gamma(t)\) for some \(s_{1}<s_{2}\), there exists a unique \(s_{1}<s_{0}<s_{2}\) such that \(u\) changes its time orientation at \(\eta(s_{0})\).

_Proof of Claim_. If \(\eta\) passes through \(x\), the claim is clearly valid. Otherwise, assume that there exists an inextendible future-directed causal curve \(\tilde{\eta}\) with \(\tilde{\eta}(s_{1})=\gamma(-t)\), \(\tilde{\eta}(s_{2})=\gamma(t)\) for some \(s_{1}<s_{2}\), along which the time orientation of \(u\) is consistent. By the transitivity of the chronological relation \(\ll\), the time orientation of \(u\) is future-directed at \(\tilde{\eta}(s_{1})\) and past-directed at \(\tilde{\eta}(s_{2})\). This is a contradiction.
It is not difficult to see that there exists an \(s_{0}\in R\) with \(s_{1}<s_{0}<s_{2}\) such that \(u\) changes its time orientation at \(\eta(s_{0})\) and \(\eta(s_{0})\in J_{t,x,\gamma}\setminus J^{\pm}(x)\).

Now we continue the proof of Lemma 8.1. For any \(\eta\in C_{\eta,t,x,\gamma}\), if there exist \(s_{1},s_{2}\) such that \(\eta(s_{1})=\gamma(-t)\), \(\eta(s_{2})=\gamma(t)\), one can use the _Claim_ directly. Otherwise, there must be real numbers \(s_{3}<s_{4}\) such that \(\eta(s_{3})\in J^{+}(\gamma(-t))\setminus I^{+}(\gamma(-t))\) and \(\eta(s_{4})\in J^{-}(\gamma(t))\setminus I^{-}(\gamma(t))\). Without loss of generality, we always assume that \(-t<s_{3}<s_{4}<t\). In the following, we use \(\tilde{\eta}_{1}:[-t,s_{3}]\to M\) to denote the future-directed lightlike curve connecting \(\gamma(-t)\) and \(\eta(s_{3})\), and \(\tilde{\eta}_{2}:[s_{4},t]\to M\) to denote the future-directed lightlike curve connecting \(\eta(s_{4})\) and \(\gamma(t)\). Then we define an inextendible future-directed causal curve \(\zeta_{\eta}\) as follows:

\[\zeta_{\eta}(s)=\left\{\begin{array}{ll}\gamma(s)&s\leq-t,\\ \tilde{\eta}_{1}(s)&-t<s\leq s_{3},\\ \eta(s)&s_{3}<s\leq s_{4},\\ \tilde{\eta}_{2}(s)&s_{4}<s\leq t,\\ \gamma(s)&t<s.\end{array}\right.\]

Applying the _Claim_ to \(\zeta_{\eta}(s)\), it can be inferred that there exists an \(s_{0}\in R\) such that either \(\zeta_{\eta}(s_{0})=x\), or \(u\) changes its time orientation at \(\zeta_{\eta}(s_{0})\) and \(\zeta_{\eta}(s_{0})\in J_{t,x,\gamma}\setminus J^{\pm}(x)\). By the definition of \(\zeta_{\eta}\), we obtain that \(s_{0}\) must belong to \((s_{3},s_{4})\) and that the time orientation of \(u\) changes at \(\eta(s_{0})\). Lemma 8.1 is thus proved.

Figure 2: A globally defined viscosity solution to equation (2.1) on the 2-dimensional Minkowski space-time.

_Proof of Theorem 5._ By Theorem 4, it is easy to see that \(C_{u}\) is an acausal set. Indeed, if \(x,y\in C_{u}\) with \(x\leq y\), we can find a future-directed causal curve connecting \(x\) and \(y\); by the definition of \(C_{u}\), \(u\) changes its time orientation at both \(x\) and \(y\), which contradicts Theorem 4.

Let \(\{x_{i}\}_{i}\subset C_{u}\) with \(x_{i}\to x\) as \(i\to\infty\). By the definition of \(C_{u}\), for each \(i\) there are two timelike vectors \(V_{i},W_{i}\in\nabla^{*}u(x_{i})\), where \(V_{i}\) is past-directed and \(W_{i}\) is future-directed. Then, by the definition of \(\nabla^{*}\), there exists a sequence of differentiable points \(x_{i}^{n}\) of \(u\) such that \(x_{i}^{n}\to x_{i}\) and \(\nabla u(x_{i}^{n})\to V_{i}\) as \(n\to\infty\). By the diagonal method, we can select a subsequence \(x_{n_{i}}\) satisfying \(x_{n_{i}}\to x\) as \(i\to\infty\); moreover, \(u\) is differentiable at \(x_{n_{i}}\) and \(\nabla u(x_{n_{i}})\) is past-directed for each \(i\). Then the past-directed timelike vector \(\lim\limits_{i\to\infty}\nabla u(x_{n_{i}})\) must belong to \(\nabla^{*}u(x)\). In the same way, \(\nabla^{*}u(x)\) must also contain some future-directed timelike vectors. This means that \(x\in C_{u}\); thus \(C_{u}\) is a closed set.

Suppose that \(\operatorname{edge}(C_{u})\neq\emptyset\). Then for each \(x\in\operatorname{edge}(C_{u})\), we can construct a closed subset \(C_{\eta,t,x,\gamma}\). By Lemma 8.1, any \(\eta\in C_{\eta,t,x,\gamma}\) must intersect \(C_{u}\) exactly once. This contradicts the fact that \(x\in\operatorname{edge}(C_{u})\).
Then \(C_{u}\) is a closed acausal set and \(\operatorname{edge}(C_{u})=\emptyset\), i.e., \(C_{u}\) is a partial Cauchy surface. The proof of Theorem 5 is complete. The following example illustrates that \(C_{u}\) is not a Cauchy surface in general. _Example_. In the 2-dimensional Minkowski space-time \((R^{2},dy^{2}-dx^{2})\), let \(M=I^{+}((-1,0))\cap I^{-}((1,0))\) be the induced space-time. For any \(a\in(-1,0)\cup(0,1)\), it is easy to see that \(u(x,y)=|x+a|\) is a viscosity solution to equation (2.1) on \(M\) and \(C_{u}=\{(x,y)\in M|x=-a\}\) is only a partial Cauchy surface. ## Acknowledgments Xiaojun Cui and Siyao Zhu are supported by the National Natural Science Foundation of China (Grant No. 12171234). Hongguang Wu is supported by the National Natural Science Foundation of China (Grant no. 12201073).
2309.03664
Alzheimer Disease Detection from Raman Spectroscopy of the Cerebrospinal Fluid via Topological Machine Learning
The cerebrospinal fluid (CSF) of 19 subjects who received a clinical diagnosis of Alzheimer's disease (AD) as well as of 5 pathological controls have been collected and analysed by Raman spectroscopy (RS). We investigated whether the raw and preprocessed Raman spectra could be used to distinguish AD from controls. First, we applied standard Machine Learning (ML) methods obtaining unsatisfactory results. Then, we applied ML to a set of topological descriptors extracted from raw spectra, achieving a very good classification accuracy (>87%). Although our results are preliminary, they indicate that RS and topological analysis together may provide an effective combination to confirm or disprove a clinical diagnosis of AD. The next steps will include enlarging the dataset of CSF samples to validate the proposed method better and, possibly, to understand if topological data analysis could support the characterization of AD subtypes.
Francesco Conti, Martina Banchelli, Valentina Bessi, Cristina Cecchi, Fabrizio Chiti, Sara Colantonio, Cristiano D'Andrea, Marella de Angelis, Davide Moroni, Benedetta Nacmias, Maria Antonietta Pascali, Sandro Sorbi, Paolo Matteini
2023-09-07T12:01:01Z
http://arxiv.org/abs/2309.03664v1
Alzheimer Disease Detection from Raman Spectroscopy of the Cerebrospinal Fluid via Topological Machine Learning

###### Abstract

The cerebrospinal fluid (CSF) of 19 subjects who received a clinical diagnosis of Alzheimer's disease (AD) as well as of 5 pathological controls have been collected and analysed by Raman spectroscopy (RS). We investigated whether the raw and preprocessed Raman spectra could be used to distinguish AD from controls. First, we applied standard Machine Learning (ML) methods obtaining unsatisfactory results. Then, we applied ML to a set of topological descriptors extracted from raw spectra, achieving a very good classification accuracy (\(>87\%\)). Although our results are preliminary, they indicate that RS and topological analysis together may provide an effective combination to confirm or disprove a clinical diagnosis of AD. The next steps will include enlarging the dataset of CSF samples to validate the proposed method better and, possibly, to understand if topological data analysis could support the characterization of AD subtypes.

Keywords: Ensembling, bagging, machine learning, deep learning, image classification, convolutional neural networks.

## 1 Introduction

Alzheimer's disease (AD) affects tens of millions of people worldwide, being the most common neurodegenerative disease. Due to population aging, the number of people affected by AD and other forms of dementia is expected to reach about 152 million by 2050 (World Alzheimer Report 2021 provided by Alzheimer's Disease International, McGill University [https://www.alzint.org/resource/world-alzheimer-report-2021/](https://www.alzint.org/resource/world-alzheimer-report-2021/)). At present, the clinical diagnosis of AD requires a series of neurological examinations (National Institute of Aging - Alzheimer's Association criteria), while the definitive diagnosis is possible only after the patient's death and brain tissue analysis. Therefore, there is a need to improve the accuracy of clinical diagnosis with innovative, cost-effective and specific approaches.

Raman spectroscopy (RS) represents a fast, efficient, non-invasive diagnostic tool [9], and the high-precision detection of RS is expected to reduce or replace other AD diagnostic tests. Recently, Raman-based techniques demonstrated significant potential in identifying AD by detecting specific biomarkers in body fluids [13]. Given the increasing number of RS studies, a systematic evaluation of the accuracy of RS in the diagnosis of AD has already been performed, showing that RS is an effective and accurate tool for diagnosing AD, though it still cannot rule out the possibility of misdiagnosis [19]. Recently, Raman spectroscopy of tissue samples has been coupled with topological machine learning to support the grading of bone cancer [6], showing the feasibility of a topological approach for multi-label classification.

The detection of CSF biomarkers is one of the diagnostic criteria for AD [2], because CSF is more sensitive than blood or other biofluids in the diagnosis of AD. Therefore, RS can be used as an effective tool to analyze CSF samples, as shown previously [14, 12]. Here we propose a novel method based on the collection of the vibrational Raman fingerprint of the proteomic content of cerebrospinal fluid (CSF) and on the topological machine learning analysis of the Raman spectra in order to support the AD diagnosis.
The achieved results encourage further investigation of topological machine learning tools, not only to consolidate the proposed methodology by enlarging the experimentation, but also to understand whether looking at Raman spectra of CSF through a topological lens could help characterize AD subtypes.

## 2 Population study and Data Acquisition

The study population consists of 24 patients, enrolled in the framework of the Bando Salute 2018 PRAMA project ("Proteomics, RAdiomics & Machine learning-integrated strategy for precision medicine for Alzheimer's"), co-funded by the Tuscany Region, with the approval of the Institutional Ethics Committee of the Careggi University Hospital Area Vasta Centro (ref. number 17918_bio). All of them showed pathological symptoms: the majority, 19 subjects, were diagnosed with AD, while the remaining 5 were considered controls (noAD), although diagnosed with other neurological conditions: one with vascular dementia, three with hydrocephalus and one with multiple sclerosis.

The CSF samples were collected by lumbar puncture, then immediately centrifuged at \(200g\) for 1 min at 20 °C and stored at \(-80\) °C until analysis [15, 17]. On the day of analysis, CSF samples were thawed and centrifuged again at \(4000g\) for 10 min at 4 °C. The pellet was separated from the supernatant and further used for the analyses. A 2 µl drop of the pellet was deposited onto a gold mirror support (ME1S-M01; Thorlabs, Inc., Newton, NJ), followed by air drying for 30 minutes and acquisition of Raman spectra from the outer ring of the dried drop. A set of five Raman spectra was collected for each drop-casted sample by using a micro-Raman spectrometer (Horiba, France) in back-scattering configuration, equipped with a laser excitation source tuned at 785 nm (40 mW power, 20 s integration time, 10 accumulations) and a Peltier-cooled CCD detector. In some cases, the same procedure was replicated two or three times; this resulted in a dataset of 30 RS acquisitions: 22 belonging to the AD class and 8 to the noAD class.

## 3 Methods

After the Raman spectra are acquired, the data enter the following pipeline, which returns the final predictive model together with its classification accuracy. For each patient, the average of the five acquisitions of the raw Raman spectrum is computed. Next, the following transformations are applied to the RS: Fourier transform, Welch transform and autocorrelation. We applied the pipeline separately to the original spectra and to each of the transformations listed above. These computations were performed using the Python package SciPy [18].

The spectra then enter the Topological Machine Learning (TML) pipeline; for more detailed information on the pipeline, refer to [7]. The pipeline performs a lower-star filtration to extract the Persistence Diagrams (PDs). Since the data are 1D spectra, the only non-trivial homology group is \(H_{0}\). The PD is vectorized using the following vectorization methods: Persistence Image [1] with parameters \(\sigma\in\{0.1,1,10\}\), \(n\in\{5,10,25\}\), Persistence Landscape [4], Persistence Silhouette [5] and Betti curve [16], all with parameters \(n\in\{25,50,75,100\}\). Finally, these vectors enter one of the following Machine Learning (ML) classifiers: Support Vector Classifier [8], Random Forest Classifier [3] and Ridge Classifier [11].
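To make the first stages of the pipeline concrete, the following is a minimal sketch — our own illustration, not the authors' implementation — of the spectral transforms computed with SciPy and of a plain union-find computation of the \(H_{0}\) persistence diagram of the lower-star (sublevel-set) filtration of a 1D spectrum; the function names and the toy `spectrum` array are ours.

```python
import numpy as np
from scipy.fft import rfft
from scipy.signal import welch

def transforms(spectrum, fs=1.0):
    """The three transformations applied to the averaged raw spectrum."""
    fourier = np.abs(rfft(spectrum))                    # Fourier transform (magnitude)
    _, psd = welch(spectrum, fs=fs)                     # Welch power spectral density
    ac = np.correlate(spectrum, spectrum, mode="full")  # autocorrelation
    return fourier, psd, ac[ac.size // 2:]

def h0_lower_star(signal):
    """(birth, death) pairs of the H0 sublevel-set filtration of a 1D signal."""
    order = np.argsort(signal)   # sweep the vertices by increasing value
    parent = {}                  # union-find; each root is its component's minimum
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for idx in order:
        parent[idx] = idx                 # a new connected component is born
        for nb in (idx - 1, idx + 1):     # neighbours already in the sublevel set
            if nb in parent:
                r1, r2 = find(idx), find(nb)
                if r1 != r2:              # elder rule: the younger component dies
                    if signal[r1] > signal[r2]:
                        r1, r2 = r2, r1
                    if signal[r2] < signal[idx]:
                        pairs.append((float(signal[r2]), float(signal[idx])))
                    parent[r2] = r1
    pairs.append((float(signal[order[0]]), np.inf))  # the global minimum never dies
    return np.array(pairs)

spectrum = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(0).normal(size=500)
print(h0_lower_star(spectrum))
```

The resulting diagrams would then be vectorized (e.g., into persistence images or silhouettes) and fed to the classifiers listed above.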
The validation scheme of the pipeline is the Leave One Patient Out cross-validation (LOPO). This scheme is a generalization of the classic leave-one-out cross-validation [10], with the difference that, at each round, all the data from one patient are held out in the validation set, instead of a single data point. This avoids artificially high accuracy due to the similarity of data coming from the same patient, which may otherwise appear in both the training and validation sets.

In Figure 1, we report the entire dataset of Raman spectra, divided by class, together with the corresponding average and standard deviation.

Figure 1: (**a**) The entirety of the dataset of Raman spectra coloured by respective class. (**b**) The corresponding average with standard deviation.

Figure 2 shows eight persistence images, two for each combination of class (AD/noAD) and input (RS/FT). It is interesting to note that in the PIs coming from the Raman spectra the pattern seems more chaotic between the two classes, while in the PIs coming from the Fourier transform there is a clearer division. In more detail, the lit pixels in the PIs of class noAD have a more elongated shape than those of the AD class. This corroborates the results achieved in Section 4.

Figure 2: First row: Two persistence images (PI) of the Raman spectra for class AD and two for class noAD. Second row: two PI of the Fourier transform for class AD and for class noAD. It appears that the PI obtained from Raman spectra are more chaotic between the two classes. In the PI obtained from the Fourier transform, a clearer division between AD and noAD is observed, with the latter having a more elongated dot.

Figure 3 shows eight persistence silhouettes, in the same fashion as Figure 2. Again, there is a clearer division for the PSs coming from the Fourier transform, with a peak at the tail end of the signal for the noAD class.

Figure 3: First row: Two persistence silhouettes (PS) of the Raman spectra for class AD and for class noAD. Second row: two PS of the Fourier transform for class AD and for class noAD. Again, the PS coming from the RS are more chaotic, while in the PS coming from the Fourier transform there is a clearer division between AD and noAD, with the latter having a clear peak at the tail end of the signal.

## 4 Results

The results obtained from the pipeline on each of the transformations are shown in Table 1; the Fourier transform clearly achieves the best results. We used as baseline accuracy the value of 0.733, due to the imbalance of the classes (i.e., the accuracy achieved by a classifier that always assigns the most frequent label to any sample). It is worth pointing out that even standard preprocessing applied to Raman spectra could lead to a classification accuracy below the baseline accuracy; this is probably due to the fact that, in our dataset, the signal-to-noise ratio is quite low. On the other hand, the accuracy value (\(>83\%\)) achieved by extracting \(H_{0}\) features from raw spectra is in line with the results of [14], while the results achieved by extracting topological features after performing the Fourier transform are even better (\(87.5\%\)).

| **Method** | **Accuracy** | **Vectorization and Classifier** |
| --- | --- | --- |
| \(H_{0}\) | 0.833 | PI and Ridge |
| Fourier transform | 0.875 | PS and SVC |
| Welch transform | 0.763 | PI and SVC |
| Autocorrelation | 0.667 | PI and Ridge |

Table 1: Accuracy results of the TML pipeline for different inputs.
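In practice, LOPO corresponds to grouped cross-validation with one group per patient. A minimal sketch with scikit-learn's `LeaveOneGroupOut` follows, where the arrays are synthetic placeholders standing in for the vectorized topological descriptors, the noAD/AD labels and the patient identifiers.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 100))         # placeholder persistence-based feature vectors
y = np.tile([0, 1, 1], 10)             # placeholder noAD/AD labels
groups = np.repeat(np.arange(10), 3)   # one patient id per acquisition

scores = []
for train_idx, val_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = SVC().fit(X[train_idx], y[train_idx])       # fit on all other patients
    scores.append(clf.score(X[val_idx], y[val_idx]))  # validate on the held-out patient
print("LOPO accuracy:", np.mean(scores))
```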
## 5 Discussion

The results described above strongly support the claim that RS and topological analysis together may provide an effective combination to confirm or disprove a clinical diagnosis of AD. Moreover, training the classification ML model on the topological features extracted from the Raman spectra acquired on CSF samples does not require choosing or tuning any parameters by hand; hence, the proposed methodology may evolve into an automatic support for AD diagnosis, which could be easily embedded in a commercial Raman spectroscopy platform.

The above considerations are preliminary and require further confirmation from the statistical viewpoint. From this perspective, the next steps will include enlarging the dataset of CSF samples to validate the proposed method better and, possibly, to understand if topological machine learning could support the characterization of AD subtypes.
2303.17892
Interval Logic Tensor Networks
In this paper, we introduce Interval Real Logic (IRL), a two-sorted logic that interprets knowledge such as sequential properties (traces) and event properties using sequences of real-featured data. We interpret connectives using fuzzy logic, event durations using trapezoidal fuzzy intervals, and fuzzy temporal relations using relationships between the intervals' areas. We propose Interval Logic Tensor Networks (ILTN), a neuro-symbolic system that learns by propagating gradients through IRL. In order to support effective learning, ILTN defines smoothened versions of the fuzzy intervals and temporal relations of IRL using softplus activations. We show that ILTN can successfully leverage knowledge expressed in IRL in synthetic tasks that require reasoning about events to predict their fuzzy durations. Our results show that the system is capable of making events compliant with background temporal knowledge.
Samy Badreddine, Gianluca Apriceno, Andrea Passerini, Luciano Serafini
2023-03-31T08:51:44Z
http://arxiv.org/abs/2303.17892v1
# Interval Logic Tensor Networks ###### Abstract In this paper, we introduce Interval Real Logic (IRL), a two-sorted logic that interprets knowledge such as sequential properties (traces) and event properties using sequences of real-featured data. We interpret connectives using fuzzy logic, event durations using trapezoidal fuzzy intervals, and fuzzy temporal relations using relationships between the intervals' areas. We propose Interval Logic Tensor Networks (ILTN), a neuro-symbolic system that learns by propagating gradients through IRL. In order to support effective learning, ILTN defines smoothened versions of the fuzzy intervals and temporal relations of IRL using softplus activations. We show that ILTN can successfully leverage knowledge expressed in IRL in synthetic tasks that require reasoning about events to predict their fuzzy durations. Our results show that the system is capable of making events compliant with background temporal knowledge. ## 1 Introduction Event detection (ED) from sequences of data is a critical challenge in various fields, including surveillance (Clavel et al., 2005), multimedia processing (Xiang and Wang, 2019; Lai, 2022), and social network analysis (Cordeiro and Gama, 2016). Neural network-based architectures have been developed for ED, leveraging various data types such as text, images, social media data, and audio. Integrating commonsense and structural knowledge about events and their relationships can significantly enhance machine learning methods for ED. For example, in analyzing a soccer match video, the knowledge that a red card shown to a player is typically followed by the player leaving the field can aid in event detection. Additionally, knowledge about how simple events compose complex events is also useful for complex event detection. Background knowledge has been shown to improve the detection of complex events especially when training data is limited (Yin et al., 2020). Some approaches show how knowledge expressed in first-order logic (Vilamala et al., 2023; Apriceno et al., 2021, 2022) can be exploited for complex event detection. Other approaches use temporal logic, such as LTLf, to embed temporal properties in deep-learning architectures processing image sequences (Umili et al., 2022). To the best of our knowledge, all existing methods that incorporate background temporal knowledge in event detection adopt a point-wise approach, defining events based on properties that hold (or do not hold) at specific time points during the event's duration. However, the knowledge representation and formal ontology literature advocates for event-centric representations, where events are treated as "first-class citizens" with properties that cannot be expressed solely in terms of time-point properties (Kowalski and Sergot, 1986; Allen, 1983; Mueller, 2008). The traditional perspective of event representation characterizes events as crisp entities and represents the duration of an event, which is the time span during which it occurs, as a convex subset of integers or real numbers. However, this approach does not account for events that have smooth beginnings or endings, such as a snowfall. Furthermore, even crisp events can benefit from fuzzy semantics in representing relations between them. For example, the statement "Darwin (1809-1882) lived before Einstein (1879-1955)" is not as true as "March 13 comes before March 14," but it is also not entirely false. 
To address this limitation, knowledge representation formalisms have been proposed for fuzzy intervals and fuzzy relations between them (Ohlbach, 2004; Schockaert et al., 2008). This paper introduces a novel logical framework that enables the specification of dynamically changing propositions, as well as properties of and relations between events. We refer to this framework as Interval Real Logic (IRL), an extension of Real Logic (Badreddine et al., 2022). This logic is designed to capture properties of and relations between objects that evolve over time, including properties of and relations between events. Interval Real Logic is interpreted in the domain of real-data sequences, where objects are associated with trajectories, and events are associated with the objects that participate in the event, as well as with the temporal interval during which the event occurs.

In addition, the paper introduces a differentiable implementation of Interval Real Logic in a neuro-symbolic architecture, Interval Logic Tensor Networks (ILTN), to detect events from data sequences using background knowledge expressed in Interval Real Logic. To effectively propagate gradients through the logic, we propose modified trapezoidal fuzzy membership functions and temporal relations for fuzzy intervals that overcome vanishing gradient issues. We present a prototype implementation of ILTN and conduct basic experiments that yield promising and positive results.

The rest of the paper is organised as follows: Section 2 presents related work on fuzzy temporal knowledge and neuro-symbolic approaches for event detection. Section 3 defines the language and the semantics of IRL. Section 4 defines fuzzy trapezoidal intervals and their temporal relations. In Section 5, the neural architecture used to predict fuzzy events is described. In Section 6, the results of artificial experiments are discussed. Finally, in Section 7, conclusions are drawn and directions for future work are briefly outlined.

## 2 Related Work

Modeling and reasoning about temporal knowledge is a well-studied problem (Kahn and Gorry, 1977; Allen, 1983; Allen and Hayes, 1985; Jong et al., 1999). Temporal logics like Linear Temporal Logic (LTL) (Pnueli, 1977) and Computational Tree Logic (CTL) (Clarke and Emerson, 1982) assume that the underlying (temporal) information is crisp, and do not consider that the knowledge may be characterized by vagueness and uncertainty. Following the seminal work of Zadeh (1965) on fuzzy sets, different works have been proposed to model both vagueness and uncertainty of temporal knowledge when this is expressed in terms of events and their relations via a fuzzy interval-based temporal model (Dubois and Prade, 1989; Nagypal and Motik, 2003; Ohlbach, 2004; Schockaert and Cock, 2008). These works, however, are not capable of processing low-level information efficiently, and do not consider any learning. Indeed, fuzzy event recognition applications (Kapitanova et al., 2012; Dima et al., 2012; Muduli et al., 2018) simply rely on a (fuzzy) rule-based decision system.

Recently, neuro-symbolic approaches (Hitzler and Sarker, 2022), which integrate sub-symbolic and symbolic reasoning and allow learning and reasoning to be combined effectively, have been applied in the context of event recognition. A common solution consists in introducing a symbolic layer refining the output of a pre-trained neural network (Khan et al., 2019; Xing et al., 2019; Vilamala et al., 2019; Gomez et al., 2020).
In (Xing et al., 2020), the symbolic layer is replaced by a neural network trained via knowledge distillation to emulate symbolic reasoning. The drawback is that this "neuro-symbolic" layer has to be re-trained from scratch even for a slight change in the knowledge. More recently, fully end-to-end differentiable neuro-symbolic architectures have been proposed, by encoding temporal reasoning primitives into existing frameworks like DeepProbLog (Vilamala et al., 2021; Apriceno et al., 2021) or Learning Modulo Theories (Apriceno et al., 2022). However, all these approaches reason in terms of time points, making them incapable of fully expressing the properties of temporal events. The solution we propose here aims to overcome these limitations by directly focusing on temporal intervals.

LTN (Badreddine et al., 2022) is an end-to-end neuro-symbolic approach based on fuzzy logic where prior domain knowledge is expressed in terms of Real Logic formulas and interpreted using fuzzy logic semantics. LTN has been applied successfully to solve structured tasks like semantic image interpretation (Donadello et al., 2017) and to improve state-of-the-art object classifiers (Manigrasso et al., 2021). A first temporal extension of LTN has been proposed by Umili et al. (2022), where Linear Temporal Logic over finite traces (LTLf) formulas are translated into fuzzy deterministic automata and applied to solve a sequence classification task. However, as for the other previously mentioned neuro-symbolic approaches, LTLf reasons in terms of time points and thus shares their limitations. By extending LTN with (fuzzy) interval logic primitives, we aim to make it capable of effectively and efficiently processing temporal sequences towards complex event recognition.

## 3 Interval Real Logic

Let \(\mathcal{L}_{t}\) be a first-order language that includes terms referring to the _trajectories_ of objects over time. The syntax for terms and formulas in \(\mathcal{L}_{t}\) follows the standard syntax of first-order logic. Similarly, let \(\mathcal{L}_{e}\) be a first-order language, referred to as the _language of events_, which includes a set of symbols \(e_{1},e_{2},\ldots\), each associated with an arity \(m\geq 0\). The terms of \(\mathcal{L}_{e}\) are expressed in the form \(e(t_{1},\ldots,t_{m})\), where \(e\) has arity \(m\) and the \(t_{i}\)'s are terms in \(\mathcal{L}_{t}\). Intuitively, \(e(t_{1},\ldots,t_{m})\) denotes an event that involves \(t_{1},\ldots,t_{m}\) as participants. Additionally, we assume that \(\mathcal{L}_{e}\) contains a set of binary predicates that correspond to binary relations between events.

**Example 1**.: Suppose that we want to describe the events that happen when two particles move in a 2D space as shown in Figure 1. \(\mathcal{L}_{t}\) and \(\mathcal{L}_{e}\) are used conjointly to describe Figure 1. In \(\mathcal{L}_{t}\), the two particles are denoted by constants \(\mathsf{a}\) and \(\mathsf{b}\). Unary predicates such as \(\mathsf{blue}\), \(\mathsf{red}\), and \(\mathsf{violet}\) are included to describe the particles' colors over time. The atomic formula \(\mathsf{blue}(\mathsf{a})\) expresses that \(\mathsf{a}\) is blue, with its truth value being time-dependent. To describe the proximity of the particles, \(\mathcal{L}_{t}\) uses the binary predicate \(\mathsf{close}\), and \(\mathsf{close}(\mathsf{a},\mathsf{b})\) is true around time \(5^{\prime\prime}\) and false otherwise. In \(\mathcal{L}_{e}\), event symbols are used to describe the events in the figure.
For example, \(\mathsf{e0}(\mathsf{a})\) can denote the jump of particle \(\mathsf{a}\), \(\mathsf{e1}(\mathsf{a})\) can denote the color change of \(\mathsf{a}\), and \(\mathsf{e2}(\mathsf{a},\mathsf{b})\) can denote the event of \(\mathsf{a}\) and \(\mathsf{b}\) intersecting. Predicates and functions on events are also included in \(\mathcal{L}_{e}\). For example, unary predicates on events can be used to specify their types, as in the formula \(\mathsf{Jump}(\mathsf{e0}(\mathsf{a}))\), which states that \(\mathsf{e0}(\mathsf{a})\) is of type jump, and \(\mathsf{ChangeOfColor}(\mathsf{e1}(\mathsf{a}))\), which states that \(\mathsf{e1}(\mathsf{a})\) is of type color change. We require that \(\mathcal{L}_{e}\) contains the unary functions on events and the binary relations of events shown in Table 1. In the table and in the rest of the paper we use \(\varepsilon\) (possibly with indices) to denote an event term \(e(t_{1},\ldots,t_{m})\). Finally, \(\mathcal{L}_{t}\) contains a unary predicate Active that takes as input an event term. Intuitively, \(\operatorname{Active}(\epsilon)\) returns, for every time step of the sequence, whether the event is running or not.

**Example 2**.: The following are some examples of formulas in \(\mathcal{L}_{t}\) and \(\mathcal{L}_{e}\). The formula

\[\mathsf{Sunny}(\mathsf{weather})\rightarrow\mathsf{Happy}(\mathsf{John})\]

is an example of an \(\mathcal{L}_{t}\) formula that states that John is happy whenever it is sunny. This formula is evaluated along two traces, one for the weather and one for John, and can take different values at different time points. The following \(\mathcal{L}_{e}\) formula

\[\mathsf{H}(\mathsf{e1}(\mathsf{John},\mathsf{Mary}))\wedge\mathsf{Meeting}(\mathsf{e1}(\mathsf{John},\mathsf{Mary}))\]

states that a meeting between John and Mary happened. The \(\mathcal{L}_{t}\) formula

\[\operatorname{Active}(\mathsf{e1}(\mathsf{John},\mathsf{Mary}))\rightarrow\mathsf{Happy}(\mathsf{John})\wedge\mathsf{Happy}(\mathsf{Mary})\]

expresses that during the meeting between John and Mary, they were both happy. The \(\mathcal{L}_{e}\) formula

\[\forall_{t}x,y.\,\mathsf{Meeting}(\mathsf{e1}(x,y))\rightarrow\mathsf{e1}(x,y)=\mathsf{e1}(y,x)\]

Figure 1: Trajectories of two particles in a 2D space. Several events happen over time. For example, around time \(5^{\prime\prime}\), the two particles intersect. From time \(0^{\prime\prime}\) to \(10^{\prime\prime}\), particle \(a\) rises whereas particle \(b\) accelerates from left to right. Additionally, over the whole trajectory, particle \(a\) is doing a jump while changing color.

states that in a meeting event, the roles of the participants are symmetric. Notice that the quantification is on trace variables (not on the events). This is highlighted by the index \(t\) of the universal quantifier. Finally, the \(\mathcal{L}_{e}\) formula

\[\forall_{e}x.\,\mathsf{Meeting}(x)\to\exists_{e}y.\,\mathsf{PrepareAgenda}(y)\wedge x\text{ bf }y\]

expresses that before every meeting there should be an event that is the preparation of the agenda. In this case, the quantification is on event variables, indicated by the index \(e\) of the quantifier.

The semantics of the trace logic \(\mathcal{L}_{t}\) and the event-based logic \(\mathcal{L}_{e}\) are defined in the context of a linear discrete structure, which models the progression of time. We use the natural numbers \(\mathbb{N}\) with the standard order \(<\) as the reference structure for time.
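Before turning to the formal semantics, the following toy NumPy sketch — ours, with illustrative operator choices such as the Reichenbach implication — shows what the point-wise fuzzy evaluation of the \(\mathcal{L}_{t}\) formula \(\mathsf{Sunny}(\mathsf{weather})\rightarrow\mathsf{Happy}(\mathsf{John})\) from Example 2 looks like on finite traces.

```python
import numpy as np

# Toy truth-degree traces in [0, 1], one value per time step.
sunny_weather = np.array([0.9, 0.8, 0.2, 0.1, 0.7])
happy_john    = np.array([1.0, 0.9, 0.6, 0.9, 0.3])

def t_and(a, b):        # product t-norm, applied point-wise
    return a * b

def t_implies(a, b):    # Reichenbach fuzzy implication (one possible choice)
    return 1.0 - a + a * b

# Point-wise evaluation of  Sunny(weather) -> Happy(John):
trace_truth = t_implies(sunny_weather, happy_john)
print(trace_truth)      # one truth degree per time step
```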
### Trace Semantics

In \(\mathcal{L}_{t}\), terms are interpreted as (possibly infinite) sequences of data, called _trajectories_. For each time point \(i\in\mathbb{N}\), an \(\mathcal{L}_{t}\) term corresponds to a feature vector in \(\mathbb{R}^{n}\). Specifically, a trajectory is a function \(\mathbf{t}:\mathbb{N}\to\mathbb{R}^{n}\) that assigns a feature vector in \(\mathbb{R}^{n}\) to every time point. We denote the set of trajectories with features in \(\mathbb{R}^{n}\) as \(\mathbb{T}^{n}\). Trace variables in \(\mathcal{L}_{t}\) refer to variables of individuals and are associated with batches of traces. Constants and closed terms (i.e., terms without variables) in \(\mathcal{L}_{t}\) are interpreted as single traces.

Formulas in \(\mathcal{L}_{t}\) are evaluated at all time instants. For every time \(i\in\mathbb{N}\), an \(\mathcal{L}_{t}\) formula is associated with a truth value in the range \([0,1]\) that represents the level of truth of the formula at that time. As a result, an \(\mathcal{L}_{t}\) formula is interpreted as a sequence of truth values, that is, a function from \(\mathbb{N}\) to \([0,1]\). The set of such functions is denoted as \(\mathbb{B}\).

The formal definition of the semantics for \(\mathcal{L}_{t}\) is based on a _grounding_ function \(\mathcal{G}\) that must satisfy the following conditions:

* for every variable \(x\) in \(\mathcal{L}_{t}\), \(\mathcal{G}(x)\in(\mathbb{T}^{n})^{b}\) is a batch of trajectories, where the integer \(b\geq 1\) is the batch size,
* for every constant \(c\in\mathcal{L}_{t}\), \(\mathcal{G}(c)\in\mathbb{T}^{n}\) is a single trajectory,
* for every function \(f\in\mathcal{L}_{t}\) with arity \(m\), \(\mathcal{G}(f):\mathbb{T}^{n_{1}}\times\cdots\times\mathbb{T}^{n_{m}}\to\mathbb{T}^{n}\), that is, \(\mathcal{G}(f)\) is a function that takes \(m\) input trajectories and returns a trajectory,
* for every predicate \(p\in\mathcal{L}_{t}\) with arity \(m\), \(\mathcal{G}(p):\mathbb{T}^{n_{1}}\times\cdots\times\mathbb{T}^{n_{m}}\to\mathbb{B}\), that is, \(\mathcal{G}(p)\) is a function that takes \(m\) input trajectories and outputs a function from time points to truth values in \([0,1]\).

Propositional connectives are interpreted according to fuzzy logic semantics, applied point-wise.
For example, if \(\phi\) and \(\psi\) are \(\mathcal{L}_{t}\)-formulas, then \(\mathcal{G}(\phi\wedge\psi)=T(\mathcal{G}(\phi),\mathcal{G}(\psi))=\{T(\mathcal{G}_{i}(\phi),\mathcal{G}_{i}(\psi))\}_{i\in\mathbb{N}}\), where \(T\) is a t-norm such as the product t-norm. Universal and existential quantifiers are interpreted as aggregation operators. For example, \(\mathcal{G}(\forall x\phi(x))=\{\prod_{1\leq j\leq b}\mathcal{G}_{i}(\phi(\mathcal{G}_{j}(x)))\}_{i\in\mathbb{N}}\). Finally, we allow a special predicate that maps from events to \(\mathcal{L}_{t}\):

* for every event \(\epsilon\), \(\mathcal{G}(\operatorname{Active}(\epsilon))\in\mathbb{B}\) is given by \(i\mapsto T(\mathcal{I}(\epsilon)(i),\mathsf{H}(\epsilon))\), where \(T\) is a t-norm.

The functions \(\mathcal{I}\) and \(\mathsf{H}\), as well as the notation \(\mathbb{E}^{\mathbf{n}}\), are defined in Section 3.2. Intuitively, \(\operatorname{Active}(\epsilon)\) maps an event to a boolean trajectory that states _when_ and _if_ the event happens at each time point of the trajectory.

### Event Semantics

An event is seen as a potentially infinite sequence of data (i.e., a trajectory) together with a mask that indicates the duration of the event. Formally, an event \(\epsilon\in\mathbb{T}^{n_{1}}\times\cdots\times\mathbb{T}^{n_{m}}\times\mathbb{B}\) consists of \(m\) traces, which are the traces of the objects involved in the event \(\epsilon\), and a boolean trace that indicates when the event is active. Specifically, let \(\mathcal{I}(\epsilon)\in\mathbb{B}\) denote the activation sequence of \(\epsilon\).
If \(\mathbf{n}=(n_{1},\ldots,n_{m})\), we denote \(\mathbb{T}^{n_{1}}\times\cdots\times\mathbb{T}^{n_{m}}\times\mathbb{B}\) by \(\mathbb{E}^{\mathbf{n}}\), which represents the space of events involving \(m\) objects, each with features in \(\mathbb{R}^{n_{i}}\). The formal semantics of \(\mathcal{L}_{e}\) is defined in reference to the definition of an event provided in Guarino et al. (2022) and is given in terms of a function \(\mathcal{G}\) that satisfies the following restrictions.

* For every event term \(e(t_{1},\ldots,t_{m})\), \(\mathcal{G}(e(t_{1},\ldots,t_{m}))\in\mathbb{E}^{\mathbf{n}}\) where \(\mathbf{n}=(n_{1},\ldots,n_{m})\) and \(\mathcal{G}(t_{i})\in\mathbb{R}^{n_{i}}\) for \(1\leq i\leq m\),
* for every \([i,j]\) with \(i\leq j\in\mathbb{N}\), \(\mathcal{G}([i,j])\) is the crisp event whose activation sequence is the indicator function \(\{1_{[i,j]}(n)\}_{n\in\mathbb{N}}\) of the time interval \([i,j]\).

| Symbol | Intuitive meaning |
| --- | --- |
| **Function symbols of \(\mathcal{L}_{e}\)** | |
| \(\mathsf{before}(\varepsilon)\) | what happens before the starting of \(\varepsilon\) |
| \(\mathsf{after}(\varepsilon)\) | what happens after the end of \(\varepsilon\) |
| \(\mathsf{start}(\varepsilon)\) | the starting of \(\varepsilon\) |
| \(\mathsf{end}(\varepsilon)\) | the end of \(\varepsilon\) |
| \([i,j]\) | the crisp interval event, for \(i\leq j\in\mathbb{N}\) |
| **Allen's predicate symbols of \(\mathcal{L}_{e}\)** | |
| \(\varepsilon_{1}\) bf \(\varepsilon_{2}\) | \(\varepsilon_{1}\) happens before \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) af \(\varepsilon_{2}\) | \(\varepsilon_{1}\) happens after \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) mt \(\varepsilon_{2}\) | \(\varepsilon_{2}\) happens immediately after \(\varepsilon_{1}\) |
| \(\varepsilon_{1}\) ol \(\varepsilon_{2}\) | the end of \(\varepsilon_{1}\) overlaps the start of \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) st \(\varepsilon_{2}\) | \(\varepsilon_{1}\) is a starting part of \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) dt \(\varepsilon_{2}\) | \(\varepsilon_{1}\) happens during \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) fin \(\varepsilon_{2}\) | \(\varepsilon_{1}\) is an ending part of \(\varepsilon_{2}\) |
| \(\varepsilon_{1}\) eq \(\varepsilon_{2}\) | \(\varepsilon_{1}\) is equal to \(\varepsilon_{2}\) |
| **Other predicate symbols of \(\mathcal{L}_{e}\)** | |
| \(\mathsf{H}(\varepsilon)\) | the event \(\varepsilon\) actually happened |
| \(\varepsilon_{1}\) in \(\varepsilon_{2}\) | \(\varepsilon_{1}\) is contained in \(\varepsilon_{2}\) |

Table 1: Basic functions and relations on events.

## 4 Trapezoidal Fuzzy Intervals and their Relations

Event durations are represented with trapezoidal fuzzy intervals. A trapezoidal fuzzy interval \(I=\{x\mid a,b,c,d\}\), with \(a\leq b\leq c\leq d\), has the membership function

\[I(x)=\left\{\begin{array}{ll}\frac{x-a}{b-a}&a\leq x<b,\\ 1&b\leq x\leq c,\\ \frac{d-x}{d-c}&c<x\leq d,\\ 0&\text{otherwise}.\end{array}\right. \tag{1}\]

We also allow special cases of semi-infinite intervals, which we use to define the before and after operators in Section 4.1.

**Left-infinity**: A left-infinite fuzzy interval is characterized by the parameters \(I=\{x\mid-\infty,-\infty,c,d\}\).

**Right-infinity**: A right-infinite fuzzy interval is characterized by the parameters \(I=\{x\mid a,b,+\infty,+\infty\}\).

**Example 4.** With this restriction, we impose that the activation function of every event \(\mathcal{I}(\varepsilon)\) is such that there is a trapezoidal fuzzy interval \(I=\{x\mid a,b,c,d\}\) such that \(\mathcal{I}_{n}(\varepsilon)=I(n)\) for every \(n\in\mathbb{N}\). For ease of notation, in the rest of the paper, we will commonly denote an interval simply by its four parameters \(I=(a,b,c,d)\).

### Basic Operations on Fuzzy Intervals

To provide the semantics for the functions and relations of \(\mathcal{L}_{e}\), we first define a set of basic operations on fuzzy intervals. Our operations are inspired by Ohlbach (2004), who defines such operations for arbitrary convex and non-convex intervals; we simply specialize them to trapezoidal fuzzy intervals.

#### 4.1.1 Duration

The duration of a trapezoidal fuzzy interval \(A=(a,b,c,d)\), denoted by \(|A|\), is equal to \(\int_{-\infty}^{+\infty}A(x)dx\). If \(A\) is finite, then \(|A|=\frac{(c-b)+(d-a)}{2}\); otherwise \(|A|=\infty\).

#### 4.1.2 Before and After

If \(A=(a,b,c,d)\), then \(\mathsf{before}(A)=(-\infty,-\infty,a,b)\) and \(\mathsf{after}(A)=(c,d,+\infty,+\infty)\).

#### 4.1.3 Start and End

If \(A=(a,b,c,d)\) is left-finite, then \(\mathsf{start}(A)\) is defined as \((\chi-\frac{\delta}{2},\chi,\chi,\chi+\frac{\delta}{2})\), where \(\chi=\frac{a+b}{2}\) and \(\delta=\max(\frac{b-a}{2},\delta_{\min})\), with \(\delta_{\min}\) a small positive value to account for the crisp case \(a=b\). Similarly, if \(A\) is right-finite, \(\mathsf{end}(A)=(\chi-\frac{\delta}{2},\chi,\chi,\chi+\frac{\delta}{2})\) with \(\chi=\frac{c+d}{2}\) and \(\delta=\max(\frac{d-c}{2},\delta_{\min})\).

### Relations between Fuzzy Intervals

Ohlbach (2004) defines interval-interval relations by computing the integral of point-interval relations over the points in a set. To avoid the complexity associated with the integrals, and to be more compliant with Allen's definitions in the crisp case, we define new relations based on simplified containment ratios.
\[\begin{aligned}
A\text{ in }B&\coloneqq\frac{|A\cap B|}{|A|}\\
A\text{ eq }B&\coloneqq A\text{ in }B\wedge B\text{ in }A\\
A\text{ bf }B&\coloneqq A\text{ in }\text{before}(B)\\
A\text{ af }B&\coloneqq B\text{ in }\text{after}(A)\\
A\text{ mt }B&\coloneqq\text{end}(A)\text{ eq }\text{start}(B)\\
A\text{ st }B&\coloneqq\text{start}(A)\text{ eq }\text{start}(B)\wedge\text{end}(A)\text{ bf }\text{end}(B)\\
A\text{ dt }B&\coloneqq\text{start}(A)\text{ af }\text{start}(B)\wedge\text{end}(A)\text{ bf }\text{end}(B)\\
A\text{ fin }B&\coloneqq\text{start}(A)\text{ af }\text{start}(B)\wedge\text{end}(A)\text{ eq }\text{end}(B)\\
A\text{ ol }B&\coloneqq\text{start}(A)\text{ bf }\text{start}(B)\wedge\text{start}(B)\text{ bf }\text{end}(A)\\
&\phantom{\coloneqq}\wedge\text{end}(A)\text{ bf }\text{end}(B)
\end{aligned}\]

In these definitions, the temporal relations take precedence over the fuzzy conjunction \(\wedge\). For general fuzzy intervals, \(|A\cap B|\) can be hard to compute. However, with trapezoidal intervals, \(|A\cap B|\) can be derived analytically by solving simple systems of linear constraints. We show how this is done in the following subsection.

### Area Intersection

Let us calculate \(\text{Area}(A\cap B)\) for any two finite intervals \(A=(a,b,c,d)\) and \(B=(a^{\prime},b^{\prime},c^{\prime},d^{\prime})\). Without loss of generality, suppose that \(a\leq a^{\prime}\). Developing an explicit formula to compute \(\text{Area}(A\cap B)\) is not immediate, as the shape of \(A\cap B\) can be a polygon with a varying number of edges (at most \(6\)). We propose to first determine the vertices of the shape \(A\cap B\) and then compute the area of the shape using the shoelace formula.

Empty intersection. First, we dismiss the case \(d\leq a^{\prime}\), in which the two intervals do not intersect and \(\text{Area}(A\cap B)=0\). In the rest of the section, we assume that an intersection always exists.

Bottom vertices. We call bottom vertices of the shape \(A\cap B\) the ones on the line \(y=0\). There are always two. As \(a\leq a^{\prime}\), \((a^{\prime},0)\) is always a vertex of the shape. The second vertex is \((\min(d,d^{\prime}),0)\).

Top vertices. We call top vertices the ones on the line \(y=1\). There can be zero, one, or two top vertices that delimit \(A\cap B\). If \(c<b^{\prime}\) or \(b>c^{\prime}\), there are zero top vertices. If \(b^{\prime}=c\), the only top vertex is \((c,1)\). If \(b=c^{\prime}\), the only top vertex is \((b,1)\). In the other cases, there are always two top vertices, \((\max(b,b^{\prime}),1)\) and \((\min(c,c^{\prime}),1)\).

Side vertices. To determine the side vertices that delimit \(A\cap B\), we compute the intersections of the lines drawn by the edges of each trapezoid over the whole \(xy\) plane. Then, we keep the intersections where \(y\in[0,1]\). Let us denote by \(L_{A}\equiv y=\frac{x-a}{b-a}\) the line drawn by the left side of \(A\), and by \(R_{A}\equiv y=\frac{x-d}{c-d}\) the line drawn by the right side of \(A\). Similarly, we have \(L_{B}\equiv y=\frac{x-a^{\prime}}{b^{\prime}-a^{\prime}}\) and \(R_{B}\equiv y=\frac{x-d^{\prime}}{c^{\prime}-d^{\prime}}\) defined on \(B\). We are interested in finding the four intersections \(L_{A}\cap L_{B}\), \(L_{A}\cap R_{B}\), \(R_{A}\cap L_{B}\), and \(R_{A}\cap R_{B}\). Each is easy to determine by solving the system of two equations associated with the pair of lines.
For example, \(L_{A}\cap L_{B}\) is the point \(\left(\frac{ab^{\prime}-a^{\prime}b}{a-b+b^{\prime}-a^{\prime}},\frac{a-a^{\prime}}{a-b+b^{\prime}-a^{\prime}}\right)\). Once we have determined the intersections, we keep the ones where \(y\in[0,1]\) to define the vertices of \(A\cap B\).

Let us cover some of the edge cases concerning these intersections. Firstly, any of the edge lines can be vertical if the trapezoid is crisp on that edge. For example, if \(a=b\), then \(L_{A}\) is defined by the equation \(x=a\). Regardless, the method is the same: we simply use this vertical equation in the system of two equations. Secondly, it is possible that there are no side vertices if some lines are parallel. For example, \(L_{A}\cap L_{B}\) has no solution if \(a-b=a^{\prime}-b^{\prime}\) (or infinitely many solutions if the lines are the same). In such cases, we ignore the pair of parallel lines. Finally, it is also possible that a side vertex coincides with a top or bottom vertex if the lines intersect on \(y=0\) or \(y=1\).

Area calculation. Once we have determined all the vertices \((x_{i},y_{i})\) of \(A\cap B\), arranged in a counter-clockwise sequence (with indices taken cyclically, so that \((x_{n+1},y_{n+1})=(x_{1},y_{1})\)), we can calculate the area using the shoelace formula:

\[\text{Area}(A\cap B)=\frac{1}{2}\sum_{i=1}^{n}(y_{i}+y_{i+1})(x_{i}-x_{i+1}) \tag{2}\]

Semi-infinite intervals. We sometimes have to compute the area intersection in cases where \(A\) is left-infinite or \(B\) is right-infinite (for example, when using the operators bf or af). However, we can turn these semi-infinite intervals into finite intervals such that the area calculation is unchanged. If \(A\) is left-infinite, we can replace the infinite parameters with any \(a\leq a^{\prime}\) and \(b\leq b^{\prime}\). Similarly, if \(B\) is right-infinite, we can replace the infinite parameters with any \(c^{\prime}\geq c\) and \(d^{\prime}\geq d\). Doing so, we can reuse the method highlighted above.

## 5 Architecture

The main objective of introducing Interval Real Logic (IRL) is to use it to impose temporal constraints in a neural architecture for Event Detection. Given a temporal data sequence \(\mathbf{u}=\{u_{i}\}_{i=0}^{T}\), we define a neural architecture, called _Interval Logic Tensor Networks (ILTN)_, that is capable of recognizing _if_ and _when_ a set of events \(\mathbf{\varepsilon}_{1},\dots,\mathbf{\varepsilon}_{k}\) happens in the sequence, under the hypothesis that certain constraints expressed in IRL are (softly) satisfied. We implemented a first simple prototype of ILTN in TensorFlow as a wrapper of LTN (Badreddine et al., 2022). The code is available at [https://github.com/sbadredd/interval-ltn](https://github.com/sbadredd/interval-ltn). The present section describes the important design choices that enable the architecture. In the description, we concentrate only on the temporal prediction and not on the classification of the events.

### Neural Architecture

Figure 2(b) illustrates how neural networks are used to ground any event \(\epsilon\). An event is characterized by two elements: a truth degree \(\mathsf{H}(\epsilon)\) indicating if the event happens, and a trapezoidal interval whose membership function defines when it happens. Let us call _logits_ the vector of raw (non-normalized) predictions output by the neural model, as is common in the machine learning literature. The truth degree \(\mathsf{H}(\epsilon)\) is easily implemented using a single logit node, which is then passed through a sigmoid normalization function and thus constrained to the interval \([0,1]\).
Directly defining the parameters \((a,b,c,d)\) of the interval is difficult, as the semantic constraint \(a\leq b\leq c\leq d\) is hard to implement in a neural architecture. Instead, our neural architecture predicts the four values \((a,b-a,c-b,d-c)\). The only semantic constraint on these values is that each is positive. This is easily implemented using four logit nodes, which are then passed through softplus activations.

### Smooth Membership Functions

We notice an important vanishing-gradient issue with trapezoidal interval functions. If \(x\) is in the flat regions where \(I(x)=0\) or \(I(x)=1\), then \(\frac{\partial I(x)}{\partial x}=0\). To account for this, we define \(I_{\sim}\), a smooth version of the membership function (1):

\[I_{\sim}(x)=\begin{cases}\mathrm{s}_{+}(x-a)&\text{if }x\leq a,\\ \mathrm{s}_{+}(\max(b-x,x-c))&\text{if }b<x\leq c,\\ \mathrm{s}_{+}(d-x)&\text{if }d<x,\\ I(x)&\text{otherwise,}\end{cases} \tag{3}\]

where \(\mathrm{s}_{+}\) is the softplus function defined by:

\[\mathrm{s}_{+}(x\mid\beta)=\frac{1}{\beta}\log\left(1+e^{\beta x}\right) \tag{4}\]

\[\frac{\partial\mathrm{s}_{+}(x\mid\beta)}{\partial x}=\frac{1}{1+\exp(-\beta x)} \tag{5}\]

Notice that, in (3), the inputs to the \(\mathrm{s}_{+}\) function are all negative values. Intuitively, looking at the graph of softplus in Figure 4, \(\mathrm{s}_{+}\) applied to negative values outputs a value that tends to zero with non-negative gradients.

Figure 3: Implementation of a temporal predicate symbol from \(\mathcal{L}_{t}\) (left) and of an event symbol (fuzzy interval and happening predicate) from \(\mathcal{L}_{e}\) (right). Examples of sequential neural architectures are Recurrent Neural Networks or Transformers.

We use \(I\) and \(I_{\sim}\) to define an artificial operator with distinct properties in the forward pass and the backward pass of the computational graph.1 Let \(\epsilon=e(t_{1},\ldots,t_{m})\) be an event term associated with an interval \(I\) and a corresponding smooth version \(I_{\sim}\). We use:

Footnote 1: See also [https://www.tensorflow.org/api_docs/python/tf/custom_gradient](https://www.tensorflow.org/api_docs/python/tf/custom_gradient).

\[\epsilon(x)=I(x) \tag{6}\]

\[\frac{\partial\epsilon(x)}{\partial x}=\frac{\partial I_{\sim}(x)}{\partial x} \tag{7}\]

The motivation is demonstrated in Figure 5. The backward pass \(\frac{\partial I_{\sim}(x)}{\partial x}\) has non-zero gradients everywhere, which push \(x\) to fit in the center of the interval. The forward pass remains the accurate evaluation \(I(x)\). Finally, we use the parameter \(\beta\) to ensure the accuracy of the operator. For example, for large negative differences \(x-a\), the output of \(\mathrm{s}_{+}(x-a)\) gets very small and can become zero because of the way computers approximate real numbers. In float32 precision, this happens when \(a-x>90\) with \(\beta=1\). In such cases, the gradients still vanish. We avoid this issue by setting \(\beta=\frac{1}{T}\), where \(T\) is the largest time difference occurring in our data; in other words, \(T\) is the length of the trace in the experiment.

### Smooth Relations

Let \(A=(a,b,c,d)\) and \(B=(a^{\prime},b^{\prime},c^{\prime},d^{\prime})\) be two trapezoids. Without loss of generality, suppose that \(a\leq a^{\prime}\). Similarly to how the membership functions have zero gradients on parts of their domain, the relations of Section 4.2 have vanishing gradients in two situations. The first is when \(A\text{ in }B=0\) and the trapezoids do not intersect.
In other words, when \(d<a^{\prime}\). The second is when \(A\text{ in }B=1\) and \(A\) is fully contained in \(B\); in other words, when \(a>a^{\prime}\), \(b>b^{\prime}\), \(c<c^{\prime}\), and \(d<d^{\prime}\).

Figure 4: The softplus function.

Figure 5: Smooth membership function. The forward pass uses \(I(x)\) (top left). The backward pass uses \(\frac{\partial I_{\sim}(x)}{\partial x}\) (bottom right).

Again, we solve this by defining a smooth operator for the backward pass, relying on the softplus operator:

\[(A\text{ in }B)_{\sim}=\begin{cases}\mathrm{s}_{+}(d-a^{\prime})&\text{if }d<a^{\prime},\\ \mathrm{s}_{+}(a^{\prime}-a+d-d^{\prime})&\text{if }A\text{ fully in }B,\\ A\text{ in }B&\text{otherwise},\end{cases} \tag{8}\]

which has non-vanishing derivatives with respect to the trapezoid edges of \(A\) and \(B\). We use \(A\text{ in }B\) in the forward pass of the computational graph and \(\frac{\partial(A\text{ in }B)_{\sim}}{\partial x}\) in the backward pass, where \(x\) is any parameter defining \(A\) or \(B\). Finally, we still set \(\beta=\frac{1}{T}\) for the softplus function.

## 6 Experiments

We test the system on synthetic tasks that require a combination of learning and reasoning about temporal relations between fuzzy intervals. Let \(\phi_{1},\ldots,\phi_{n}\) be \(n\) constraints written in Interval Real Logic defining a knowledge base \(\mathcal{K}\). As in LTN (Badreddine et al., 2022), the grounding of the knowledge base defines a satisfaction level to maximise. Optimising by gradient descent, we have the following loss function:

\[L(\mathcal{K},\theta)=-\left(\mathcal{G}(\phi_{1},\theta)\wedge\cdots\wedge\mathcal{G}(\phi_{n},\theta)\right) \tag{9}\]

where the \(\phi_{i}\) are \(\mathcal{L}_{e}\) formulas and \(\theta\) is the set of trainable parameters used to define the grounding. We focus the experimental study on the training of events with constraints written using \(\mathcal{L}_{e}\), which is the main innovation of this paper. Specifically, we focus on learning the parameters that define the fuzzy trapezoidal intervals of events. Table 2 displays a list of training experiments in which ILTN maximizes the satisfaction of temporal constraints. All tasks are trained using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.1. In T1, T2, T3, and T4, the results are obtained after training for 50, 500, 5000, and 200 training steps, respectively. For the logical operators, we use the product t-norm \(u\wedge v=uv\) and the standard negation \(\neg u=1-u\).
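To make these design choices concrete, the following is a minimal TensorFlow sketch of the trapezoid head of Section 5.1 and of the forward/backward split of Eqs. (6)-(7). It is our illustration under stated assumptions (eager mode, strictly increasing parameters \(a<b<c<d\)), not the authors' released implementation; only the use of `tf.custom_gradient` comes from the paper's own footnote.

```python
import tensorflow as tf

def trapezoid_from_logits(logits):
    """Map four unconstrained logits to (a, b, c, d) with a <= b <= c <= d
    by predicting (a, b-a, c-b, d-c) through softplus (Sec. 5.1)."""
    deltas = tf.nn.softplus(logits)            # four positive values
    a = deltas[..., 0]
    b = a + deltas[..., 1]
    c = b + deltas[..., 2]
    d = c + deltas[..., 3]
    return a, b, c, d

def smooth_membership(x, a, b, c, d, beta=1.0):
    """Smooth surrogate I~ of Eq. (3); only its gradients are used."""
    sp = lambda z: tf.nn.softplus(beta * z) / beta
    exact = tf.clip_by_value(
        tf.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)
    y = tf.where(x <= a, sp(x - a), exact)
    y = tf.where((x > b) & (x <= c), sp(tf.maximum(b - x, x - c)), y)
    y = tf.where(x > d, sp(d - x), y)
    return y

@tf.custom_gradient
def membership(x, a, b, c, d):
    """Exact trapezoidal I(x) in the forward pass; gradients taken from
    the smooth surrogate I~ in the backward pass (Eqs. 6-7).
    Assumes strict a < b < c < d to avoid division by zero."""
    fwd = tf.clip_by_value(
        tf.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

    def grad(dy):
        with tf.GradientTape() as tape:
            tape.watch([x, a, b, c, d])
            smooth = smooth_membership(x, a, b, c, d)
        g = tape.gradient(smooth, [x, a, b, c, d])
        return [dy * gi for gi in g]

    return fwd, grad
```

Calling `membership(x, *trapezoid_from_logits(logits))` then returns exact truth values whose gradients, computed from the smooth surrogate, do not vanish on the flat regions of the trapezoid (cf. Figure 5).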
We highlight the following features:

* In T1, T2, and T3, the system learns fuzzy intervals,
* In T4, the system learns a time point value \(x\),
* T1, T2, and T3 display constraints using Allen's relational symbols \(\mathsf{af}\), \(\mathsf{bf}\), \(\mathsf{st}\), and \(\mathsf{ol}\),
* T3 and T4 display constraints using membership functions,
* T4 displays a constraint using the functional symbol \(\mathsf{end}\),
* In T1 and T2, \(u\approx v\) is a smooth equality predicate implemented as \(\exp(-|u-v|)\in[0,1]\).

Table 2: Training experiments T1-T4 in which ILTN maximizes the satisfaction of temporal constraints (columns: Task, Initial Conditions, Setting, Constraints, Result).

### Challenges for Future Work

In all tasks, the system learns to update the event groundings to satisfy the knowledge base.
The experiments demonstrate that ILTN can successfully backpropagate gradients through Interval Real Logic. Nevertheless, we highlight three limitations of our experiments that future work should explore.

Firstly, early stopping was an important factor in our experiments. Continuing training after reaching the maximal satisfaction level could sometimes worsen the results. This is likely due to the smoothing of operators used to obtain non-vanishing gradients. This feature is important early in training; however, once a constraint is satisfied, having vanishing gradients is acceptable. Future work could explore reducing or stopping the smoothing of constraints when their satisfaction levels are high.

Secondly, the Adam optimizer is traditionally used with learning rates on the order of 0.001. In comparison, the learning rate of 0.1 used to train our synthetic tasks is unusually high. A lower learning rate led to experiments not converging fast enough. There is likely a scaling issue in the gradients of some operations. This could also explain why T3 required more training steps than the other tasks to reach convergence: it is the only task that mixes relational operators (ol) and membership functions. The two have gradients that scale differently, which can challenge the training. Future work should further analyse the gradient properties of each temporal operator.

Thirdly, the present experiments do not yet showcase the power of learning events that depend on input features. For example, in Figure 2(b), the present tasks only learn trapezoid logits that directly define a trapezoid. There is no sequential input data, nor a neural architecture that builds on top of it. Future work should explore more elaborate tasks employing such architectures.

## 7 Conclusions

In this paper, we introduce Interval Real Logic (IRL), a two-sorted logic that enables the prediction of properties that evolve within a set of data sequences (traces), as well as properties of events that occur within the sequences. IRL semantics are defined in terms of sequences of real feature vectors, and connectives and quantifiers are interpreted using fuzzy logic. We represent event durations through trapezoidal fuzzy intervals, and fuzzy temporal relations are defined based on the relationships between the intervals' areas and their intersections. We also present Interval Logic Tensor Networks (ILTN), a neuro-symbolic system that leverages background knowledge expressed in IRL to predict the fuzzy duration of events. To prevent vanishing gradients during learning, we use softplus functions to smooth both events and their relations. We evaluate ILTN's performance on four tasks with different temporal constraints and show that it is capable of making events compliant with background knowledge in all four tasks. In Section 6.1, we suggest several directions for future research. One promising avenue would be to test ILTN on more realistic and complex scenarios, such as those involving real-world data. Scalability is a significant challenge for neuro-symbolic frameworks for event recognition, which we hope to address by representing events as a whole rather than using a point-wise approach.
2301.00229
Metallicity estimation of MW, SMC and LMC classical Cepheids from the shape of the $V$- and $I$-band light curves
Estimating the metallicity of classical Cepheids is of prime importance for studying the metallicity effect on stellar evolution and the chemical evolution of galaxies, and ultimately its impact on the period-luminosity relation used in the extragalactic distance scale. We aim at establishing new empirical relations for estimating the iron content of classical Cepheids for short and long periods based on Fourier parameters from the $V$-band light curves. We calibrate new interrelations of Fourier parameters to convert the $V$-band empirical relations into the $I$ band. We then apply these relations in the $V$ and $I$ bands to Cepheids from the Milky Way (MW) and the Small and Large Magellanic Clouds (SMC and LMC) available in the literature. Last, we map the metallicity distribution in these galaxies to investigate potential applications in galactic archeology. These empirical relations in the $V$ and $I$ bands are able to derive the mean metallicity of a sample of MW, SMC, and LMC Cepheids in agreement with literature values within 1$\sigma$. We also show that these relations are precise enough to reconstruct the radial metallicity gradients within the MW from OGLE data. The empirical relations in the $V$ and $I$ bands calibrated in this paper for short- and long-period Cepheids provide a useful new tool to estimate the metallicity of Cepheids that are not accessible by spectroscopy. The calibration can be improved with further high-resolution spectroscopic observations of metal-poor Cepheids and homogeneous photometries in the $V$ and $I$ bands.
V. Hocdé, R. Smolec, P. Moskalik, O. Ziółkowska, R. Singh Rathour
2022-12-31T15:47:43Z
http://arxiv.org/abs/2301.00229v2
Metallicity estimations of MW, SMC, and LMC classical Cepheids from the shape of the \(V\)- and \(I\)-band light curves+ ###### Abstract Context:Estimating the metallicity of classical Cepheids is of prime importance for studying metallicity effects on stellar evolution and the chemical evolution of galaxies, as well as on the period-luminosity relation used on the extragalactic distance scale. Aims:Our first aim is to establish new empirical relations for estimating the iron content of classical Cepheids for short and long periods based on Fourier parameters from the \(V\)- and \(I\)-band light curves. We go on to apply these relations to Cepheids from data on the Milky Way (MW) as well as the Small and Large Magellanic Clouds (SMC and LMC) from the literature. Methods:We retrieved the metallicities of 586 fundamental-mode Cepheids from spectroscopic determinations in the literature and we found well-sampled light curves for 545 of them in different \(V\)-band catalogs. We then described the shape of these light curves by applying a Fourier decomposition and we fit the empirical relations between the Fourier parameters and the spectroscopic metallicities individually, for short-period (\(2.5<P<6.3\) days) and long-period Cepheids (\(12<P<40\) days). We verified the accuracy of these relations by applying them to \(V\)-band light curves of Cepheids from the Small and Large Magellanic Clouds and comparing these derived metallicities to literature values. We calibrated new interrelations of Fourier parameters to convert these empirical relations into the \(I\) band. We then used these \(I\)-band relations to derive the metallicity of fundamental-mode Cepheids from OGLE-IV for MW, SMC, and LMC (486, 695, and 1697 stars, respectively). Finally, we mapped the metallicity distribution in these galaxies for the purpose of investigating potential applications in galactic archeology. Results:For short-period Cepheids, our best fit is given for a relation based on explicit amplitude terms \(A_{1}\) and \(A_{2}\) of the first and second harmonic, respectively. In the \(V\) and \(I\) bands, these empirical relations are found with an intrinsic scatter (rms) of 0.12 dex. This relation performs well for estimations of [Fe/H] between about \(-0.5\) and 0.1 dex, but it remains uncertain outside this range because of the lack of a spectroscopic metallicity required for the calibration. For long-period Cepheids, we found a metallicity dependence on the Fourier parameters \(A_{1}\), \(\phi_{21}\), and \(R_{14}\). We found an intrinsic scatter of 0.25 dex when using this relation. The empirical relations in the \(V\) and \(I\) bands allow us to derive the mean metallicity of a sample of MW, SMC, and LMC Cepheids that is in agreement with literature values within 1\(\sigma\). We also show that these relations are precise enough to reconstruct the radial metallicity gradients within the MW from OGLE data. Conclusions:The empirical relations in the \(V\) and \(I\) bands that are calibrated in this work for short- and long-period Cepheids provide a useful new tool for estimating the metallicity of Cepheids that are not accessible via spectroscopy. The calibration can be improved with further high-resolution spectroscopic observations of metal-poor Cepheids and homogeneous photometries in the \(V\) and \(I\) bands. 
## 1 Introduction

Cepheids are yellow supergiant variable stars that are essential for distance-scale determinations as a result of the correlation between their pulsation period and their luminosity (hereafter, the PL relation) (Leavitt, 1908; Leavitt & Pickering, 1912). However, the PL relation is still affected by uncertainties on both the zero point and the slope, which represent one of the main error contributions to the extragalactic distance scale (Riess et al., 2022). Chemical composition plays an important role in the evolution and the brightness of Cepheids. As a result, metallicity introduces a bias on the zero point when the PL relation is calibrated in different galaxies. The sign of the metallicity term in the period-luminosity-metallicity (PLZ) relation, which also depends on the given passband, is still under debate from theoretical and observational standpoints, as, for example, in Caputo et al. (2000); Bono et al. (2008); Fiorentino et al. (2013); Gieren et al. (2018); Ripepi et al. (2021); Breuval et al. (2021); Wielgorski et al. (2022); Breuval et al. (2022); De Somma et al. (2022).

Determining the metallicity for a large number of Cepheids is also crucial for constraining pulsation and evolution models. Accurate predictions of Cepheid evolution, as well as of its pulsation properties, are necessary to answer questions related to the mass discrepancy among Cepheids (see, e.g., Neilson et al., 2011). The derived stellar mass can be up to 20% lower in pulsation models as compared to stellar evolution models; this discrepancy is still unresolved after 50 years of research (see, e.g., Stobie, 1969; Bono et al., 2006; Keller, 2008).

Until now, high-resolution spectroscopic observations have led to determinations of the iron-to-hydrogen ratio [Fe/H] for several hundred Cepheids, mostly in the vicinity of the Sun (within about 5 kpc). Indeed, these observations are difficult to apply to stars that are faint because they are distant or located along the line of sight of a significantly reddened environment such as the Galactic Center. However, this limitation can be overcome thanks to medium- and high-resolution near-infrared (NIR) spectroscopy (Inno et al., 2019; Kovtyukh et al., 2019, 2022c).

An interesting alternative to spectroscopy is to estimate the metallicity of Cepheids from the shape of the light curve, since it contains information on physical properties such as the pulsation period, temperature, and chemical composition. Thus, it is possible to infer these parameters using photometry. In the case of RR Lyrae stars, Jurcsik & Kovacs (1996) demonstrated the possibility of estimating [Fe/H] from the shape of the \(V\)-band light curve described by Fourier parameters. Several authors have since extended their work to various other photometric wavelengths, such as the \(I_{c}\) band (Smolec, 2005; Dekany et al., 2021) or the optical and infrared (Mullen et al., 2021). Other recent approaches have made use of machine learning algorithms to infer the physical parameters of Cepheids and RR Lyrae stars from the shape of the light curve (Miller et al., 2015; Hajdu et al., 2018; Bellinger et al., 2020). Among the available photometric methods used to estimate the metallicity, we also note the possibility of inverting the PLZ relation (when the metallicity term is characterized). In addition, Bono et al.
(2010) suggested that differences in distance moduli inferred from different period-Wesenheit relations can also be used to estimate individual metallicities.

In the case of Cepheids, several studies have shown the impact of the chemical abundance on the amplitudes (Klagyivik & Szabados, 2007; Bono et al., 2000; Szabados & Klagyivik, 2012; Majaess et al., 2013). An attempt was made by Zsoldos (1995) to calibrate a relation with Fourier parameters. The first reliable empirical relation for short-period Cepheids (\(P<6.3\) days), based on the Fourier parameters \(R_{21}\) and \(R_{31}\), was proposed by Klagyivik et al. (2013). However, these relations are based on a small sample of stars and do not cover the more metal-poor regime typical of Magellanic Cloud Cepheids. These relations were used by Clementini et al. (2019) to estimate the metallicity of Cepheids in _Gaia_ DR2. These authors found metallicity distributions for Cepheids residing in the Small and Large Magellanic Clouds (SMC and LMC) and the Milky Way (MW) that are shifted by about \(+0.2\) dex compared to the literature values, which might hint at a calibration issue. On the other hand, there is still no empirical relation available for estimating the chemical abundance of intermediate-period (\(6<P<10\) days) Cepheids. This is due to the resonance between the fundamental and the second-overtone mode, which strongly affects the shape of the light curve (Simon & Schmidt, 1976; Buchler et al., 1990). In the case of long-period (\(P>10\) days) Cepheids, Scowcroft et al. (2016) established a relation based on _Spitzer_ infrared colors at 3.6 and 4.5 \(\mu\)m. A complementary relation based on a Fourier decomposition of the light curve in common photometric bands would, however, be useful and convenient for directly measuring the metallicity of extragalactic Cepheids. Thus, there is a need for a new photometric metallicity formula that offers a proper calibration and that is applicable over a wide range of pulsation periods.

The estimation of [Fe/H] for a large number of Galactic and extragalactic Cepheids will be valuable in the context of carrying out thorough comparisons with theoretical evolution and pulsation models. Combining these estimations with the accurate trigonometric parallaxes of these stars provided by _Gaia_ can also help to recalibrate the metallicity dependence of the PL relation and to refine the cosmic distance scale.

In this paper, we first present a set of new empirical relations for estimating the metallicity of Cepheids based on the Fourier parameters of the \(V\)-band light curve. We improve (qualitatively and quantitatively) the calibration for the short-period Cepheids (\(2.5<P<6.3\) days), and we provide, for the first time, an empirical relation for estimating [Fe/H] of the long-period Cepheids (\(12<P<40\) days). Then, we convert these relations to the \(I\) band and use them to estimate the metallicities of Cepheids in the MW, SMC, and LMC. In Section 2, we present the data set used for the spectroscopic metallicities and for the \(V\)-band light curves. In Section 3, we apply a Fourier decomposition to the light curves, and we then establish empirical relations for short- and long-period Cepheids in Sections 4 and 5, respectively. In Section 6, we calibrate the interrelations of Fourier parameters and use them to convert the \(V\)-band relations into the \(I\) band. Finally, we apply these results to estimate the metallicity of Cepheids in the MW, SMC, and LMC. We then discuss our conclusions in Sections 7 and 8.
## 2 Metallicity and V-band light-curve data sets

In this section, we first gather the spectroscopic metallicities of Cepheids from the literature; we then cross-match these stars with existing photometric catalogs in the \(V\) band.

### Metallicity

#### 2.1.1 Milky Way Cepheids

The two largest data sets of metallicities for Galactic Cepheids are given by Luck (2018) and Groenewegen (2018), with 435 and 452 Cepheids, respectively (all pulsation modes included). While Luck (2018) homogeneously determined the metallicities, Groenewegen (2018) compiled different data from the literature (mostly from Genovali et al. (2014, 2015)) and translated them onto the same scale. In this work, we first re-scaled the [Fe/H] abundances of Luck (2018) to the solar abundance value of A(Fe)\({}_{\odot}\)=7.50 given by Asplund et al. (2009). The iron-to-hydrogen ratio is given as:

\[\mathrm{[Fe/H]=A(Fe)_{\star}-A(Fe)_{\odot}}, \tag{1}\]

where A(Fe) = log(N\({}_{\rm Fe}\)/N\({}_{\rm H}\)) + 12 is the logarithmic iron abundance with respect to hydrogen. In the following, all the metallicity data sets discussed in this paper are consistently re-scaled to this solar reference when needed. We then cross-matched these two data sets to obtain a sample of 478 Milky Way Cepheids. There are 409 stars in common between these two catalogs, from which we derived a mean offset of \(-0.07\pm 0.06\) dex that is used to correct the iron abundances of Groenewegen (2018). We rejected 17 stars from our final sample because they are discrepant at more than \(2\sigma\) between these two data sets and, thus, the measurements may not be reliable. When possible, metallicity values from Luck (2018) are preferred, since they have been determined from a single homogeneous approach. When abundances are not available from this source, we drew abundance values from Groenewegen (2018). We also followed the preference order among the different references as defined by this author. This merged data set constitutes our main metallicity sample.

Additional spectroscopic determinations were found in Trentin et al. (2023) (44 fundamental-mode Cepheids), Kovtyukh et al. (2022a) (54 fundamental-mode Cepheids), and Ripepi et al. (2021) (19 fundamental-mode Cepheids). From Kovtyukh et al. (2022a), we found 13 stars in common with our main metallicity sample. We derived a mean offset of \(-0.06\pm 0.11\) dex that is used to correct the iron abundances of Kovtyukh et al. (2022a). Finally, the fundamental-mode classification is verified directly from a Fourier decomposition of the \(V\)-band light curves presented in the next section. To this end, stars that are discrepant with respect to the Hertzsprung progression of fundamental-mode Cepheids are discarded, since they can belong to other pulsation modes (Antonello & Poretti 1986; Antonello et al. 1990). The final metallicity sample consists of 472 fundamental-mode MW Cepheids with spectroscopic metallicities. An overview of the calibration sample construction is given in Table 1. Although our sample might be inhomogeneous because of the different sources for the metallicity, we emphasize that about 90% of the metallicities for the MW Cepheids in this sample are taken from a single source (specifically, from Luck (2018); Table A.1).

#### 2.1.2 LMC and SMC Cepheids

We retrieved the iron abundances of 89 stars from the recent work of Romaniello et al. (2022). However, we note that Romaniello et al. (2022) re-analyzed 21 LMC Cepheids from Romaniello et al.
(2008) and found systematic errors in this latter study, yielding an additional offset of \(-0.11\) dex to obtain the new values. Therefore, we assumed that the 14 SMC Cepheids from Romaniello et al. (2008) are affected by the same error, so we retrieved these stars and corrected them by applying this offset. We also found 4 additional Cepheids from the SMC and 5 stars from the LMC cluster NGC 1866 given by Lemasle et al. (2017). From NGC 1866, 2 more Cepheids were retrieved from Molinaro et al. (2012).

The final metallicity sample combining MW, LMC, and SMC Cepheids consists of 586 Cepheids, which span a [Fe/H] range from about \(-1.0\) to \(+0.4\) dex. It is, however, difficult to attribute consistent uncertainties to each measurement. Indeed, a range of observations and methods have been used to measure the iron abundance, and the uncertainties may not be homogeneous between studies. For example, da Silva et al. (2022) studied the impact of different systematics on metallicity measurements. In addition, we cannot derive any systematic difference in [Fe/H] between the two largest data sets, namely, the MW sample of Luck (2018) and the LMC sample of Romaniello et al. (2022), since they have no stars in common. Therefore, we associated a conservative uncertainty of \(\pm 0.15\) dex with every abundance in our sample. This value represents an upper limit for most of the uncertainties and is also consistent with the mean uncertainty of the sample of Luck (2018).

### V-band light curves

The first objective of this work is to increase the sample of Cepheids with both spectroscopic metallicities and well-sampled \(V\)-band light curves. Thus, we did not limit our sample to a single catalog, as was done by Klagyivik et al. (2013), who used the catalog from Berdnikov (2008); instead, we chose to cross-match the metallicity sample with several catalogs in the \(V\) band. For MW Cepheids, we retrieved 169 stars from the All-Sky Automated Survey (ASAS) catalog (Pojmanski 2002) and 438 stars from the All-Sky Automated Survey for Supernovae (ASAS-SN) catalog (Jayasinghe et al. 2018). We also extracted 349 stars from Berdnikov (2008). Among the 472 fundamental-mode Cepheids with spectroscopic metallicities, we found 438 stars that also have a light curve in the \(V\) band.

For SMC and LMC stars, we first collected the light curves from the publicly available1 Optical Gravitational Lensing Experiment (OGLE) data (Soszynski et al. 2008, 2010). The SMC and LMC light curves were retrieved from OGLE III and OGLE IV. For 35 stars, the pulsation cycle is poorly covered or light curves are not available. For 29 of them, we retrieved the light curves from ASAS-SN. For the remaining 6 stars, we could not find any \(V\)-band light curves in the literature. Our LMC and SMC sample thus consists of 107 fundamental-mode Cepheids with both spectroscopic metallicities and \(V\)-band light curves with excellent phase coverage. In the next section, we apply a Fourier decomposition to every \(V\)-band light curve from the different catalogs, and we select the best fit for MW Cepheids among the different \(V\)-band catalogs.

Footnote 1: [https://ogledb.astrouw.edu.pl/](https://ogledb.astrouw.edu.pl/)\(\sim\)ogle/OCVS/ceph_query.php

## 3 Fourier decomposition of the V-band light-curve

Fourier decomposition is a very useful technique for studying the structure of light curves. In particular, the Fourier parameters efficiently describe the bump progression (Hertzsprung 1926).
The favored hypothesis to explain the bump progression is a resonance between the fundamental and the second-overtone pulsation modes (Simon & Schmidt 1976), which are characterized by pulsation periods of \(P_{0}\) and \(P_{2}\), respectively. This resonance model is reproduced qualitatively by hydrodynamical models (Buchler et al. 1990) and quantitatively using the amplitude equation (Kovacs & Buchler 1989). Alternatively, the echo model proposes that a pressure wave travels inward from the He ionization zone, echoes on the stellar core, then travels back to the surface and produces the observed bump on the light curves (Whitney 1956; Christy 1968, 1975). For arguments against the echo hypothesis, we refer to Klapp et al. (1985).

For each light curve, we applied a Fourier series of the form:

\[m(t)=A_{0}+\sum_{k=1}^{n}A_{k}\cos(k\omega t+\phi_{k}), \tag{2}\]

where \(m(t)\) is the magnitude observed at the time \(t\), \(\omega=2\pi/P\), and \(A_{k}\) and \(\phi_{k}\) are the amplitude and the phase of the \(k\)-th harmonic. We used the dimensionless Fourier parameters introduced by Simon & Lee (1981):

\[R_{i1}=\frac{A_{i}}{A_{1}}, \tag{3}\]

\[\phi_{i1}=\phi_{i}-i\phi_{1}, \tag{4}\]

where the \(\phi_{i1}\) are then adjusted to lie between 0 and \(2\pi\). For each curve, the order of the fit, \(n\), is iterated until \(A_{n}/\sigma_{A_{n}}>4\). The uncertainty of the fit is given by:

\[\sigma^{2}=\frac{\chi^{2}}{N-2n-2}, \tag{5}\]

where \(N\) is the number of data points, \(n\) is the order of the fit, and \(\chi^{2}\) is the sum of the squared residuals. This uncertainty is used to apply an iterative 3-\(\sigma\) clipping during the fitting process. For each light curve, the period is estimated using the Lomb-Scargle method (Lomb, 1976; Scargle, 1982). The uncertainties on the Fourier parameters were derived following Petersen (1986).

For every light curve from the different catalogs in the \(V\) band, we selected the fit having the smallest uncertainty on the Fourier parameter \(\phi_{21}\). This criterion also ensures that the Fourier parameters are accurate enough for establishing empirical relations. A selection based on the fit uncertainty as defined by Eq. 5 is also possible, but this criterion is less accurate for selecting the best fits. We also visually inspected the light curves to ensure the overall quality of the fits, in particular for the period ranges studied in this paper in Sects. 4 and 5. The final sample of Cepheids with well-sampled \(V\)-band light curves and spectroscopic metallicities consists of 545 stars, as presented in Table 1. The metallicity distribution of this sample is presented in Fig. 1, the distribution of the fit uncertainties in our sample is shown in Fig. 2, and the Fourier parameters are presented in Fig. 3.

Figure 1: Histogram of the iron abundance of the calibration sample (545 fundamental-mode Cepheids with \(V\)-band light curves) gathered and re-scaled from the literature.

We note that a significant fraction of Cepheids are binary or multiple systems (Szabados, 2003; Kervella et al., 2019). Following Klagyivik et al. (2013), we also included binary stars in our final sample since, in most cases, the companions do not affect the amplitude ratios and phases. However, the presence of a companion typically leads to smaller peak-to-peak amplitudes and Fourier amplitudes on a magnitude scale.
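As a concrete illustration of the procedure above, the following minimal sketch fits Eq. (2) by linear least squares in a cosine/sine basis and derives the parameters of Eqs. (3)-(5). It is our simplified reconstruction (fixed period, no iterative sigma-clipping), and the \(\sigma_{A_{n}}\) approximation is an assumption rather than the exact Petersen (1986) prescription.

```python
import numpy as np

def fourier_fit(t, mag, period, max_order=15):
    """Fit m(t) = A0 + sum_k A_k cos(k*omega*t + phi_k)  (Eq. 2),
    increasing the order n while the highest harmonic remains
    significant, i.e., A_n / sigma_{A_n} > 4."""
    omega = 2.0 * np.pi / period
    best = None
    for n in range(1, max_order + 1):
        # design matrix: [1, cos(k w t), sin(k w t)] for k = 1..n
        cols = [np.ones_like(t)]
        for k in range(1, n + 1):
            cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
        X = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
        a, b = coef[1::2], coef[2::2]          # a_k cos + b_k sin
        A = np.hypot(a, b)                     # amplitudes A_k
        phi = np.arctan2(-b, a) % (2 * np.pi)  # phases phi_k
        chi2 = np.sum((mag - X @ coef) ** 2)
        sigma2 = chi2 / (len(t) - 2 * n - 2)   # Eq. (5)
        # sigma_{A_n} ~ sqrt(2 sigma^2 / N) for well-sampled curves
        # (our simplification of the Petersen 1986 formulae)
        sigma_An = np.sqrt(2.0 * sigma2 / len(t))
        if best is None or A[-1] / sigma_An > 4.0:
            best = (n, A, phi)
        else:
            break
    n, A, phi = best
    R_i1 = A[1:] / A[0]                                        # Eq. (3)
    phi_i1 = (phi[1:] - np.arange(2, n + 1) * phi[0]) % (2 * np.pi)  # Eq. (4)
    return A, phi, R_i1, phi_i1
```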
Although most of the companions are main-sequence stars with a negligible contribution to the total luminosity (Karczmarek et al., 2022), some Cepheids can have a red giant companion that can significantly affect the luminosity and, thus, the amplitudes (Pilecki et al., 2021; Karczmarek et al., 2022). Similarly, several studies have shown the occurrence of circumstellar envelopes (CSE) around Cepheids (Kervella et al., 2006; Merand et al., 2006; Gallenne et al., 2013; Nardetto et al., 2016; Gallenne et al., 2021). These envelopes could decrease the amplitude in the case of a constant emission in the infrared (Hocde et al., 2020, 2021; Kovtyukh et al., 2022). In the following, we also work with Fourier amplitudes; therefore, we must keep in mind that binarity and CSEs can be a source of scatter in our results.

## 4 Empirical metallicity relation for short-period Cepheids (2.5 < P < 6.3 days)

### Choice of period range and Fourier parameters

In this section, we extend (quantitatively and qualitatively) the metallicity relation for short-period Cepheids in the \(V\) band, following the work of Klagyivik et al. (2013). In their study, the authors calibrated linear metallicity relations depending on the amplitude ratios \(R_{21}\) and \(R_{31}\) between 2.5 and 6.3 days. The lower boundary of these relations, defined as \(\log P=0.4\) (\(\approx 2.5\) days), corresponds to a local maximum of the amplitude ratio \(R_{21}\). This can be observed for SMC and LMC Cepheids in the \(I\) band in Soszynski et al. (2008, 2010). On the other hand, above \(P=6.3\) days, the amplitude ratios are increasingly affected by the pulsation period as they approach the \(P_{2}/P_{0}=0.5\) resonance at 10 days. Thus, Klagyivik et al. (2013) focused on Cepheids with pulsation periods between 2.5 and 6.3 days, where the effect of the period is mitigated, in particular for \(R_{21}\).

\begin{table}
\begin{tabular}{l|l|l|l|l}
\hline \hline
 & MW & LMC & SMC & \\
\hline
\multirow{5}{*}{Metallicity (Sect. 2.1)} & Luck (2018) (435) & Romaniello et al. (2022) (89) & Romaniello et al. (2008) (14) & \\
 & Groenewegen (2018) (452) & Lemasle et al. (2017) (5) & Lemasle et al. (2017) (4) & \\
 & Ripepi et al. (2021) (19) & Molinaro et al. (2012) (2) & & \\
 & Trentin et al. (2023) (44) & & & \\
 & Kovtyukh et al. (2022a) (54) & & & \\
\hline
Total cross-match [Fe/H] & fundamental only: 472 & 96 & 18 & 586 \\
\hline
\multirow{3}{*}{\(V\) band (Sect. 2.2)} & ASAS (169) & OGLE IV (72) & OGLE III (4) & \\
 & ASAS-SN (438) & OGLE III (3) & ASAS-SN (14) & \\
 & Berdnikov (2008) (349) & ASAS-SN (15) & & \\
\hline
Total ([Fe/H] + \(V\) band) & 438 & 89 & 18 & 545 \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of the data sets used to compose the calibration sample for the empirical metallicity relations from the \(V\) band (see Sects. 2 and 3).

Figure 2: Histogram of the uncertainty from the Fourier fitting of the 545 Cepheids (see Eq. 5).

Figure 3: Fourier parameters of the final sample of the 545 fundamental-mode Cepheids in the \(V\) band with spectroscopic metallicities. The different colors refer to metallicities above and below the median of the sample, as indicated in panel (a). The peculiar appearance of \(\phi_{41}\), with a break around seven days, was noted by Simon & Moffett (1985); Kovacs et al. (1990). The latter proposed an unknown atmospheric effect, since these authors did not observe this break in the radial velocity curves.
This influence of the period can be better observed by displaying the Cepheids of the SMC, the LMC, and our star sample in Figs. 4a and 4b. The Fourier parameters for the SMC and LMC were derived by applying the Fourier decomposition of Sect. 3 to the \(V\)-band light curves of classical Cepheids of the SMC and LMC from OGLE-IV (Udalski et al., 2015, 2018; Soszynski et al., 2017). From these figures, we observe that \(R_{21}\) and \(R_{31}\) have a smooth linear dependence on the pulsation period. In the following, we choose to fit linear relations based on a unique period interval between 2.5 and 6.3 days. We re-analyze relations based on \(R_{21}\) and \(R_{31}\), but we also explore other possible linear combinations of the different Fourier coefficients and the pulsation period.

### Fit of the empirical relation with an orthogonal distance regression

In this part, we fit the linear relations based on \(R_{21}\) and \(R_{31}\), and we also test all possible combinations of Fourier parameters on a two-by-two basis. We considered the following set of Fourier parameters: \(R_{21},R_{31},R_{41},\phi_{21},\phi_{31},\phi_{41}\). In addition, we also took into account the amplitudes \(A_{1}\) and \(A_{2}\) and the pulsation period, \(P\). We selected the stars where \(\sigma_{\phi_{21}}<0.05\) to ensure the overall accuracy of the Fourier parameters. We employed an orthogonal distance regression (ODR) to take into account both the errors on the Fourier parameters and those on the metallicity. We adopted the ODR routine of the SciPy package2. To summarize, we fit linear relations of the following form:

Footnote 2: [https://docs.scipy.org/doc/scipy/reference/odr.html](https://docs.scipy.org/doc/scipy/reference/odr.html)

\[\mathrm{[Fe/H]}=aX+b, \tag{6}\]

for \(R_{21}\) and \(R_{31}\), and then the following:

\[\mathrm{[Fe/H]}=aX+bY+c, \tag{7}\]

for all possible combinations. In order to compare the ability of the different fits to predict the metallicity accurately and precisely, we used a Monte Carlo cross-validation analysis (Xu & Liang, 2001). This technique consists of randomly selecting 80% of the data set to fit a relation and then testing its ability to estimate the metallicity on the remaining 20% of the sample. The bias is evaluated as the average difference from the expected [Fe/H] values, and the precision is estimated from the rms. This procedure is repeated 1000 times, and the final fitted coefficients, rms, and bias are given by the average of all the results (see Table 2). From these results, the best fit is achieved by the combination of parameters \(A_{1}\)-\(A_{2}\) (see Fig. 5a):

\[\mathrm{[Fe/H]}=(6.27\pm 0.53)\,A_{1}+(-11.73\pm 0.93)\,A_{2}+(-0.59\pm 0.07), \tag{8}\]

with \(\mathrm{rms}=0.12\) dex. In Fig. 5, we compare this result with a linear relation based on \(R_{21}\) only, as in Klagyivik et al. (2013). The relation based on \(A_{1}A_{2}\) presents a lower rms than the one based on \(R_{21}\) and is in better agreement at the low- and high-metallicity ends. Thus, the \(A_{1}A_{2}\) relation is better at capturing the light-curve features that are sensitive to metallicity. In order to further assess the capability of the \(A_{1}A_{2}\) relation to estimate the metallicity, we test it in the next section on the SMC and LMC samples and compare the results to the literature.
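The fit and its Monte Carlo cross-validation can be sketched as follows with the SciPy ODR routine cited above; the array names, the starting guess, and the split bookkeeping are ours, and real measurement uncertainties must be supplied.

```python
import numpy as np
from scipy import odr

def plane(beta, x):
    # [Fe/H] = a*X + b*Y + c, the two-parameter form of Eq. (7)
    return beta[0] * x[0] + beta[1] * x[1] + beta[2]

def mc_cross_validation(X, Y, feh, sX, sY, sfeh, n_iter=1000, seed=0):
    """Monte Carlo cross-validation of an ODR fit: 80% train, 20% test,
    repeated n_iter times; returns mean coefficients, rms, and bias."""
    rng = np.random.default_rng(seed)
    n = len(feh)
    betas, rms, bias = [], [], []
    for _ in range(n_iter):
        idx = rng.permutation(n)
        tr, te = idx[: int(0.8 * n)], idx[int(0.8 * n):]
        data = odr.RealData(np.vstack([X[tr], Y[tr]]), feh[tr],
                            sx=np.vstack([sX[tr], sY[tr]]), sy=sfeh[tr])
        out = odr.ODR(data, odr.Model(plane), beta0=[6.0, -12.0, -0.6]).run()
        resid = plane(out.beta, np.vstack([X[te], Y[te]])) - feh[te]
        betas.append(out.beta)
        bias.append(resid.mean())
        rms.append(np.sqrt(np.mean(resid ** 2)))
    return np.mean(betas, axis=0), np.mean(rms), np.mean(bias)
```

For the \(A_{1}\)-\(A_{2}\) relation of Eq. (8), `X` and `Y` would hold the first- and second-harmonic amplitudes of the 545 calibration stars, with `sfeh` set to the adopted \(\pm 0.15\) dex.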
### Testing the empirical relations on the SMC and LMC samples

We retrieved the \(V\)-band light curves of classical Cepheids of the SMC and LMC from OGLE-IV (Udalski et al., 2015, 2018; Soszynski et al., 2017). We first removed the stars that overlap with our calibration sample. We then consistently applied the same Fourier decomposition algorithm as established in Sect. 3. We also considered only the stars having more than 30 photometric measurements and with an uncertainty of \(\sigma_{\phi_{21}}<0.05\). We used the Fourier parameters obtained for the Cepheids with pulsation periods between 2.5 and 6.3 days to estimate the metallicity of these stars using the empirical relation previously determined. The final sample consists of 52 stars from the SMC and 1206 stars from the LMC in the period range of 2.5\(-\)6.3 days. As noted by Soszynski et al. (2008, 2010), there are fewer Cepheids in this period range in the SMC, since the distribution of the pulsation periods is dependent on metallicity (Becker et al., 1977; Bono et al., 2000). Indeed, metal-poor Cepheids in the SMC have a lower mass cut-off to cross the instability strip than the more metal-rich Cepheids from the LMC. As a result, the majority of short-period Cepheids in the SMC have pulsation periods below 2.5 days. The estimated [Fe/H] for the SMC and LMC Cepheids of our sample are presented in Fig. 6a.

From this histogram, we can see that the relation is able to distinguish the populations of the SMC and the LMC. The mean metallicity of the LMC objects is \(-0.19\pm 0.11\) dex, which is within the uncertainties and in agreement with values found in the literature based on supergiant measurements, such as \(-0.30\pm 0.20\) dex (Russell & Bessell, 1989), \(-0.27\pm 0.15\) dex (Hill et al., 1995), \(-0.40\pm 0.15\) dex (Andrievsky et al., 2001), \(-0.34\pm 0.11\) dex (Urbaneja et al., 2017), and \(-0.409\pm 0.076\) (stat.) \(\pm 0.10\) (syst.) dex (Romaniello et al., 2022). Moreover, based on 1206 Cepheids from the LMC, we found a narrow metallicity distribution for the LMC, with \(\sigma\)=0.11 dex (see Fig. 8a). This is in agreement with the findings of Romaniello et al. (2022), who found a dispersion of \(\sigma\)=0.076 dex from a sample of 89 Cepheids, and with the dispersion of 0.069 mag observed in the near-infrared Hubble Space Telescope LMC period-luminosity relation (Riess et al., 2019).

\begin{table}
\begin{tabular}{l|c|c}
\hline \hline
Param. & rms (dex) & bias (dex) \\
\hline
\(A_{1}\)-\(A_{2}\) & 0.123 & -0.001 \\
\(A_{1}\)-\(R_{21}\) & 0.126 & -0.000 \\
\(A_{2}\)-\(R_{21}\) & 0.128 & -0.000 \\
\(A_{1}\)-\(R_{41}\) & 0.129 & -0.001 \\
\(A_{1}\)-\(A_{3}\) & 0.129 & -0.000 \\
\(A_{1}\)-\(R_{31}\) & 0.134 & -0.002 \\
\(R_{21}\)-P & 0.134 & -0.000 \\
\(R_{21}\)-\(\phi_{21}\) & 0.139 & -0.001 \\
\(R_{31}\)-\(R_{21}\) & 0.139 & -0.000 \\
\(A_{2}\)-\(R_{41}\) & 0.140 & -0.001 \\
\(A_{3}\)-\(R_{31}\) & 0.141 & -0.002 \\
\(R_{31}\)-\(\phi_{31}\) & 0.142 & -0.003 \\
\(R_{41}\)-\(R_{21}\) & 0.142 & -0.002 \\
\(R_{31}\) & 0.143 & -0.001 \\
\(A_{2}\)-\(R_{31}\) & 0.143 & -0.000 \\
\(A_{3}\)-\(R_{21}\) & 0.143 & -0.001 \\
\(R_{41}\)-\(\phi_{41}\) & 0.143 & -0.000 \\
\(R_{21}\)-\(\phi_{31}\) & 0.143 & -0.002 \\
\(R_{21}\)-\(\phi_{41}\) & 0.144 & -0.002 \\
\(R_{31}\)-P & 0.144 & -0.000 \\
\(R_{31}\)-\(\phi_{21}\) & 0.145 & -0.001 \\
\(R_{41}\)-\(\phi_{41}\) & 0.145 & -0.002 \\
\hline
\end{tabular}
\end{table}
Table 2: Results of the ODR fitting and Monte Carlo analysis. Only the best results, with a final rms below 0.145 dex, are presented.
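As a sketch, applying Eq. (8) to such a photometric sample with the selection cuts quoted above could look as follows (array names are ours):

```python
import numpy as np

def feh_short_period(A1, A2, sigma_phi21, n_obs, period):
    """Estimate [Fe/H] with the short-period relation of Eq. (8),
    keeping only stars that pass the cuts used in this section:
    >30 measurements, sigma_phi21 < 0.05, 2.5 < P < 6.3 days."""
    sel = ((n_obs > 30) & (sigma_phi21 < 0.05)
           & (period > 2.5) & (period < 6.3))
    feh = 6.27 * A1[sel] - 11.73 * A2[sel] - 0.59
    return feh, sel
```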
However, this relation seems to overestimate the metallicity of the SMC, with a mean [Fe/H] of \(-0.30\pm 0.14\) dex. In the case of the SMC, this is significantly higher than the spectroscopic measurements of 14 long-period SMC Cepheids, which give [Fe/H]=\(-0.75\pm 0.08\) dex (Romaniello et al., 2008), while it is in marginal agreement, within about \(1\sigma\), with other mean metallicity estimations based on giant and supergiant stars (see Fig. 8b): \(-0.65\pm 0.20\) dex (Russell & Bessell, 1989); \(-0.62\pm 0.14\) dex (Trundle et al., 2007); and, for B stars, \(-0.70\pm 0.20\) dex (Korn et al., 2000). We note, however, that these studies are based on only a few stars, whereas we derived the metallicity from a larger sample of 52 Cepheids.

Figure 4: Amplitude ratios \(R_{21}\) and \(R_{31}\) against the pulsation periods for the OGLE SMC and LMC in (a) and (b), with our star sample shown for consistency. The dashed vertical line shows the cut at a period of \(P=2.5\) days.

Figure 5: Comparison of [Fe/H] from the literature and from the fitted empirical relations based on (a) \(A_{1}A_{2}\) and (b) \(R_{21}\) for short-period Cepheids. The number of points used in the fit is indicated (Ndat), as well as the rms of the fit. Dashed lines represent the rms deviation to guide the eye.

This overestimation of [Fe/H] could be attributed to the poor coverage of the low-metallicity regime in our calibration sample (only four stars below \(-0.6\) dex). Moreover, a linear relation might not be valid at low metallicity, as was noted by Klagyivik et al. (2013). It is also possible that the SMC testing sample is (statistically) not large enough to properly determine the mean and the dispersion of the metallicity distribution. A more significant result for the mean metallicity of the SMC Cepheids can be obtained from the \(I\)-band OGLE-IV light curves (explained in Sect. 6), which are available for a much larger sample of stars.

Finally, part of the spread observed in these distributions may be caused by the influence of physical effects other than metallicity, such as the location in the instability strip. Indeed, the amplitudes of Cepheids, as well as their amplitude ratios, depend on the location in the instability strip (Sandage & Tammann, 1971; Sandage et al., 2004). The amplitudes are higher close to the blue edge and decrease toward the red edge as the damping from convection increases. In order to illustrate this effect, we display the LMC Cepheids with pulsation periods between 2.5 and 6.3 days in the instability strip in Fig. 9. We used the Wesenheit index, \(W_{I,VI}\), as a function of the color \(m_{V}-m_{I}\) corrected for extinction (Skowron et al., 2021). From this figure, we clearly see that lower-amplitude stars, with \(A_{1}<0.25\) mag, are concentrated at the red edge of the strip. Hence, the stars that are closer to the blue or the red edge have different amplitudes, thus potentially affecting the empirical metallicity estimation based on \(A_{1}A_{2}\).

Figure 6: Normalized histograms (unit area) of the metallicity estimations of the SMC (blue, 52 stars) and LMC (black, 1206 stars) short-period Cepheids from the relations established in Sect. 4.2. The mean metallicity of these distributions and the standard deviation around the mean are indicated in the legend.

Figure 7: Relations between the first and second harmonics, \(A_{1}\) and \(A_{2}\), used in the empirical relation for short-period Cepheids between 2.5 and 6.3 days, shown in (a) and (b). The vertical strip represents the values of \(A_{1}\) where a cut can be applied (for \(A_{1}\) between 0.20 and 0.25 mag) to remove the stars of low amplitude and mitigate the effect of location inside the instability strip.
The relation between \(A_{1}\) and \(A_{2}\) can be seen in Fig. 7. From this figure, we can observe that the relation between \(A_{1}\) and \(A_{2}\) is no longer linear below about \(A_{1}\)=0.20-0.25 mag. Conversely, above \(A_{1}\)=0.20-0.25 mag, the amplitude ratio \(R_{21}\) saturates at about 0.45, and most of the dispersion can be attributed to a metallicity effect. Thus, we propose to cut all stars with \(A_{1}<0.20\) mag from the sample in order to mitigate the effect of the location inside the instability strip. This limit is defined only approximately, in order to show the influence on the derived metallicity. A more restrictive cut at \(A_{1}\)=0.25 mag can be applied if the objective favors the precision of individual measurements over the statistical size of the sample. By removing the stars for which \(A_{1}<0.20\) mag, we obtain a slightly better distinction between the SMC and LMC, as seen in Fig. 6b. However, the mean metallicity and the deviation from the mean are not significantly affected, because the majority of the stars are above this threshold. Although the mean of the distribution remains unchanged, applying this cut to the empirical relation based on \(A_{1}\) and \(A_{2}\) prevents contamination from stars with intrinsically low pulsation amplitudes and from stars that would be significantly blended by a companion in the visible. Therefore, we recommend that this cut be applied when using the empirical relations based on \(A_{1}\) and \(A_{2}\). We also note that some stars of our calibration sample are slightly below this threshold. However, removing them does not change the result of the fit, and we decided to keep them in the calibration set. We discuss the use of these cuts in Sect. 7.

Finally, we applied the relation obtained by Klagyivik et al. (2013) in the \(V\) band for the same period range (see Fig. 10). As discussed in the previous section, relations based on \(R_{21}\) and \(R_{31}\) result in an overfitting of the metallicity around 0 dex; thus, they do not generalize to metal-poor and metal-rich stars. For that reason, the distributions observed in Fig. 10 are shifted by about +0.2 dex compared to our results, in agreement with the verifications carried out by Clementini et al. (2019).

Figure 8: Comparison of the mean [Fe/H] derived from the literature (grey bars) and from the [Fe/H] estimations presented in this paper (black bars). For each reference, the number of stars used to derive the mean is indicated. The LMC metallicity from Romaniello et al. (2022) is presented taking into account systematics. For details, see Sect. 4.3 for short-period Cepheids in the \(V\) band, Sect. 5.2 for long-period Cepheids in the \(V\) band, and Sect. 6.2 for the \(I\)-band relation.

Figure 9: Luminosity-color diagram of LMC Cepheids between 2.5 and 6.3 days, using \(V\)- and \(I\)-band magnitudes.

## 5 Empirical metallicity relation for long-period Cepheids (12 < \(P\) < 40 days)

### Fit of the empirical relation with the ODR technique

Finding a correlation between the Fourier parameters and the metallicity for long-period Cepheids is a challenge, mainly because there are fewer spectroscopic observations of metal-poor long-period Cepheids. Moreover, these stars are cooler on average, and thus convection affects the photosphere and the resulting light curves. However, the recent spectroscopic metallicities obtained by Romaniello et al. (2022) for 89 metal-poor Cepheids from the LMC allow us, for the first time, to compare them with metal-rich Cepheids from the Milky Way. In the case of long-period Cepheids between 10 and 40 days, a visual inspection of Fig. 3a shows that there is a concentration of metal-poor Cepheids at the top of the \(\phi_{21}\) branch, while metal-rich Cepheids seem to have lower values.

Some evidence in the literature supports this observation. Observationally, this effect was noticed
## 5 Empirical metallicity relation for long-period Cepheids (12 < \(P\) < 40 days)

### Fit of the empirical relation with the ODR technique

Finding a correlation between the Fourier parameters and the metallicity of long-period Cepheids is a challenge, mainly because there are fewer spectroscopic observations of metal-poor long-period Cepheids. Moreover, these stars are colder on average, and thus convection affects the photosphere and the resulting light curves. However, the recent spectroscopic metallicities obtained by Romaniello et al. (2022) for 89 metal-poor Cepheids from the LMC allow us to compare with metal-rich Cepheids from the Milky Way for the first time. In the case of long-period Cepheids between 10 and 40 days, a visual inspection of Fig. 2(a) shows that there is a concentration of metal-poor Cepheids at the top of the \(\phi_{21}\) branch, while metal-rich Cepheids seem to have lower values.

Some evidence in the literature supports this observation. Observationally, this effect was noticed by Pont et al. (2001), who studied metal-poor Cepheids in the outer part of the MW. Theoretically, this phenomenon was reproduced by Buchler (1997, 1998), who used radiative pulsation models; however, it could not be compared to observations at that time. The amplitudes of long-period Cepheids may also be affected by the metallicity. The models computed by Bono et al. (2000) indicate that metal-rich Cepheids with pulsation periods of 10-30 days pulsate with higher peak-to-peak amplitudes than metal-poor ones. This prediction was observationally supported by Majaess et al. (2013), who found that metal-rich, long-period Cepheids display larger \(V\)-band amplitudes than their counterparts in very metal-poor galaxies.

Figure 8: Comparison of mean [Fe/H] derived from the literature (grey bars) and from the [Fe/H] estimations presented in this paper (black bars). For each reference, the number of stars used to derive the mean is indicated. The LMC metallicity from Romaniello et al. (2022) is presented taking systematics into account. For details, see Sect. 4.3 for short-period Cepheids in the \(V\) band, Sect. 5.2 for long-period Cepheids in the \(V\) band, and Sect. 6.2 for the \(I\)-band relation.

Figure 9: Luminosity-color diagram of LMC Cepheids with periods between 2.5 and 6.3 days, using \(V\)- and \(I\)-band magnitudes.

In order to quantitatively assess whether the shape of the \(V\)-band light curves depends on the metallicity, we followed the method of the previous section by fitting several possible combinations of Fourier parameters. We considered the following set of Fourier parameters: \(A_{1},R_{21},R_{31},R_{41},\phi_{21},\phi_{31}\), and \(\phi_{41}\), as well as the pulsation period \(P\) and [Fe/H]. We selected the Cepheids with pulsation periods between 12 and 40 days. Indeed, our data set contains too few stars above \(P\)=40 days, and longer periods could behave differently, as suggested by Fig. 3, or be affected by possible resonance effects (Antonello 1998). We also selected Cepheids with \(\phi_{21}\) between 2.5 and 3.3, to remove the stars that could be affected by the resonance around 10 days, and with \(\sigma_{\phi_{21}}<0.05\). Our final data set consists of 120 long-period Cepheids.

As in the case of short-period Cepheids, we employed the ODR technique to take into account errors on both the Fourier parameters and the metallicity. We tested all combinations of two parameters, as well as all possible combinations of three parameters. We estimated the rms and the bias of each relation following the method used in the previous section: we randomly selected 80% of the sample to fit a relation and tested it on the 20% remaining objects, repeating the procedure 1000 times. We found a significantly better result in the case of the triplet \(A_{1}-R_{41}-\phi_{21}\). The histograms of their distributions are presented in Fig. 11. From these figures, we can see that \(R_{41}\) and \(\phi_{21}\) decrease with increasing metallicity, while the amplitude of the first harmonic, \(A_{1}\), increases with metallicity.

Figure 11: Histograms of \(A_{1}\), \(\phi_{21}\), and \(R_{41}\) for Cepheids with pulsation periods between 12 and 40 days, for metal-rich ([Fe/H]\(>\)\(-\)0.19) and metal-poor ([Fe/H]\(<\)\(-\)0.19) stars. [Fe/H]=\(-\)0.19 dex is the median metallicity of the long-period Cepheid sample.
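The fit and its validation can be sketched with `scipy.odr`, which implements orthogonal distance regression. The array layout (Fourier parameters stacked as rows) and the starting values `beta0`, rough guesses near the final solution, are our assumptions.

```python
import numpy as np
from scipy import odr

def plane(beta, x):
    # x has shape (3, N); its rows are A1, R41, and phi21.
    return beta[0] * x[0] + beta[1] * x[1] + beta[2] * x[2] + beta[3]

def fit_odr(X, X_err, feh, feh_err, beta0=(4.0, -6.0, -1.0, 1.7)):
    """ODR fit accounting for errors on both the Fourier parameters and [Fe/H]."""
    data = odr.RealData(X, feh, sx=X_err, sy=feh_err)
    return odr.ODR(data, odr.Model(plane), beta0=list(beta0)).run().beta

def holdout_rms_bias(X, X_err, feh, feh_err, n_trials=1000, seed=0):
    """Repeated 80/20 split: fit on 80% of the stars, collect residuals
    on the remaining 20%, and return the rms and mean bias."""
    rng = np.random.default_rng(seed)
    n, resid = feh.size, []
    for _ in range(n_trials):
        idx = rng.permutation(n)
        tr, te = idx[: int(0.8 * n)], idx[int(0.8 * n):]
        beta = fit_odr(X[:, tr], X_err[:, tr], feh[tr], feh_err[tr])
        resid.append(plane(beta, X[:, te]) - feh[te])
    resid = np.concatenate(resid)
    return np.sqrt(np.mean(resid ** 2)), np.mean(resid)
```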
The [Fe/H] values from the literature and the [Fe/H] obtained from the fit are compared in Fig. 12. We obtained the following equation:

\[\mathrm{[Fe/H]}=\left(3.94\pm 0.46\right)A_{1}+\left(-5.80\pm 0.85\right)R_{41}+(-0.93\pm 0.16)\,\phi_{21}+\left(1.67\pm 0.39\right). \tag{9}\]

We find rms\(=0.25\) dex for the estimation of individual metallicities, and the relation is accurate, with a bias of 0.001 dex.

Figure 12: Comparison of the fitted [Fe/H] with [Fe/H] from the literature for long-period Cepheids between 12 and 40 days.

### Testing the empirical relations within the SMC and LMC samples

We followed the method employed in Sect. 4.3 to check the validity of this empirical relation on a sample of SMC and LMC stars. The Cepheids of the LMC were retrieved from OGLE-IV. However, the long-period Cepheids of the SMC are poorly sampled in OGLE-IV overall; we therefore compiled the light curves of SMC Cepheids from OGLE-III instead. We applied a Fourier decomposition (see Sect. 3) and removed the stars with \(\sigma_{\phi_{21}}>0.05\), as well as those that were already used for the calibration. We selected Cepheids with \(\phi_{21}\) between 2.5 and 3.3, consistently with the calibration sample. The final sample consists of 44 stars from the SMC and 32 stars from the LMC.

The result is presented in Fig. 13. As can be seen from this figure, the empirical relation for long-period Cepheids is able to clearly distinguish between the SMC and LMC populations of Cepheids. The means of the distributions are \(-0.19\pm 0.16\) and \(-0.47\pm 0.21\) dex for the LMC and SMC, respectively. While the tested sample is relatively small, the mean distributions agree within \(1\sigma\) with the mean values from the literature (presented in Sect. 4.3 and Fig. 8).

Figure 13: Normalized histogram (unit area) of the estimated metallicity of SMC and LMC long-period Cepheids (44 and 32 stars, respectively) based on the empirical relation in the \(V\) band.

In conclusion, the empirical relation based on the Fourier parameters \(A_{1}-R_{41}-\phi_{21}\) performs well in deriving the mean metallicity of a sample of long-period Cepheids between 12 and 40 days. The estimation of individual metallicities, however, has to be associated with an uncertainty of \(\pm 0.25\) dex.
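A minimal sketch of how Eq. (9) can be applied in practice, including its validity cuts, is given below (the function name is ours):

```python
import numpy as np

def feh_long_V(A1, R41, phi21, P):
    """[Fe/H] from Eq. (9); valid for 12 < P < 40 days and
    2.5 < phi21(V) < 3.3, with an individual rms of about 0.25 dex."""
    A1, R41, phi21, P = map(np.asarray, (A1, R41, phi21, P))
    feh = 3.94 * A1 - 5.80 * R41 - 0.93 * phi21 + 1.67
    valid = (P > 12.0) & (P < 40.0) & (phi21 > 2.5) & (phi21 < 3.3)
    return np.where(valid, feh, np.nan)

# Example with made-up Fourier parameters:
print(feh_long_V(A1=0.35, R41=0.12, phi21=3.0, P=20.0))   # about -0.44
```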
In the next section, we establish empirical relations in the \(I\) band and further test them on a larger sample of stars with well-sampled \(I\)-band light curves (from OGLE-IV).

## 6 Empirical relations in the \(I\)-band using Fourier interrelations

### Conversion using interrelations

The near-infrared \(I\) band is particularly interesting for observing obscured galactic or extragalactic Cepheids, since it is less affected by extinction than the visible. Empirical calibrations in the \(I\) band could therefore be useful for studying the effect of metallicity on the PL relation, as well as for galactic archeology. However, it is more difficult to calibrate such relations in the \(I\) band, because fewer well-sampled \(I\)-band light curves are available in the MW for short- and long-period Cepheids. Although the empirical relations are yet to be calibrated directly in the \(I\) band, we can use interrelations between Fourier parameters in the \(V\) and \(I\) bands to transform the \(V\)-band relations into the \(I\) band. Interrelations are linear relations between the same Fourier parameters in different bands, and they can be used to convert the \(V\)-band Fourier parameters into the \(I\) band and vice versa (Hendry et al. 1999; Kanbur et al. 2000; Ngeow et al. 2003). Moreover, Ngeow et al. (2003) emphasized that a change in metallicity does not significantly affect the interrelations between the Fourier coefficients in different bands. Hence, applying these interrelations is convenient in the case of Cepheids; it is less so in the case of RR Lyrae stars, where the interrelations are affected by metallicity (Skowron et al. 2016). Thus, our empirical relations (Eqs. 8 and 9) can be converted into the \(I\) band.

We chose to re-calibrate the interrelations between the \(V\) and \(I\) bands using the sample of SMC and LMC OGLE data. In particular, we focused on the Fourier parameters needed for the conversion, within the specific period ranges adopted by the empirical relations. For the short-period Cepheids between 2.5 and 6.3 days, we cross-matched the stars in common between the OGLE LMC \(V\)- and \(I\)-band samples (1342 stars) and fit linear (ODR) relations between the \(A_{1}(V)\) and \(A_{1}(I)\) as well as the \(A_{2}(V)\) and \(A_{2}(I)\) amplitudes (see Fig. 11). We obtain the following interrelations:

\[A_{1}(V)=(1.7035\pm 0.0066)\,A_{1}(I)+(-0.0023\pm 0.0011), \tag{10}\]
\[A_{2}(V)=(1.7011\pm 0.0046)\,A_{2}(I)+(-0.0002\pm 0.0003). \tag{11}\]

For the long-period Cepheids between 12 and 40 days, we used both SMC and LMC OGLE data to maximize the number of stars for the calibration (120 stars in total; see Fig. 16), obtaining the following interrelations:

\[A_{1}(V)=(1.6332\pm 0.0303)\,A_{1}(I)+(0.0060\pm 0.0070), \tag{12}\]
\[R_{41}(V)=(1.0399\pm 0.0239)\,R_{41}(I)+(-0.0056\pm 0.0034), \tag{13}\]
\[\phi_{21}(V)=(0.8389\pm 0.0184)\,\phi_{21}(I)+(0.0693\pm 0.0633). \tag{14}\]

There are then two equivalent strategies for converting the \(V\)-band metallicity relations into the \(I\) band using interrelations. One possibility is to convert the Fourier parameters of our calibrating sample and then repeat the fit as done in the \(V\) band in the previous sections. The other is to directly convert the empirical \(V\)-band relations. We adopted the latter approach in the following, by substituting the preceding interrelations into the \(V\)-band empirical relations. For the short-period Cepheids we obtain:

\[[\mathrm{Fe/H}]=(10.69\pm 0.90)A_{1}+(-19.96\pm 1.58)A_{2}+(-0.60\pm 0.07). \tag{15}\]

This relation is valid for \(P\) between 2.5 and 6.3 days. The cuts that can be applied to \(A_{1}(I)\) are between 0.12 and 0.16 mag (corresponding to \(A_{1}(V)\) between 0.20 and 0.25 mag) to mitigate the effect of the location of the star inside the instability strip. For the long-period Cepheids, we obtain:

\[[\mathrm{Fe/H}]=(6.43\pm 0.76)A_{1}+(-6.03\pm 0.89)R_{41}+(-0.77\pm 0.13)\phi_{21}+(1.66\pm 0.40), \tag{16}\]

with \(\phi_{21}(I)\) between 2.9 and 3.85 (corresponding to \(\phi_{21}(V)\) between 2.5 and 3.3).
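Equations (15) and (16), together with the interrelation of Eq. (10), translate into a few short helpers; the function names are ours and the validity cuts follow the text above.

```python
import numpy as np

def feh_short_I(A1_I, A2_I):
    """Eq. (15): I-band relation for 2.5 < P < 6.3 days; apply the
    amplitude cut A1(I) >= 0.12-0.16 mag beforehand (see text)."""
    return 10.69 * np.asarray(A1_I) - 19.96 * np.asarray(A2_I) - 0.60

def feh_long_I(A1_I, R41_I, phi21_I):
    """Eq. (16): I-band relation for 12 < P < 40 days, with
    2.9 < phi21(I) < 3.85."""
    return (6.43 * np.asarray(A1_I) - 6.03 * np.asarray(R41_I)
            - 0.77 * np.asarray(phi21_I) + 1.66)

def A1_V_from_I(A1_I):
    """Interrelation of Eq. (10): convert an I-band first-harmonic
    amplitude into its V-band counterpart."""
    return 1.7035 * np.asarray(A1_I) - 0.0023
```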
As a first check, it is important to assess whether these \(I\)-band relations are consistent, within the uncertainties, with the empirical relations in the \(V\) band. In order to compare them, we applied both sets of empirical relations to the stars in common. The result is shown in Fig. 14a. Applying a linear fit to the data, we found that the [Fe/H] estimations in the \(V\) and \(I\) bands are collinear with no offset: [Fe/H]\({}_{V}\)= 1.01(\(\pm\)0.01) [Fe/H]\({}_{I}\) +0.005(\(\pm\)0.002). The resulting [Fe/H] estimation in the \(I\) band is also consistent with the \(V\) band within 0\(\pm\)0.04 dex, that is, below the rms of 0.12 dex of the \(V\)-band empirical relation. We repeated the procedure for the long-period Cepheids in our sample (see Fig. 14b). After applying a linear fit, we obtained [Fe/H]\({}_{V}\)= 1.02(\(\pm\)0.03) [Fe/H]\({}_{I}\) +0.001(\(\pm\)0.014), which is also consistent with collinearity and no offset. The resulting estimation in the \(I\) band agrees with the \(V\) band within 0\(\pm\)0.10 dex, that is, below the \(V\)-band rms of 0.25 dex.

Figure 14: Comparison of metallicities derived with both the \(V\)-band and \(I\)-band empirical relations. The black line indicates a one-to-one correspondence and the thick red line is the fit between the two bands. Blue dashed lines represent the rms of the empirical relation in the \(V\) band.

### Testing the empirical relations on the MW, SMC, and LMC OGLE-IV samples

In order to further test the performance of the \(I\)-band relations, we first verified the metallicity distributions obtained for the MW, SMC, and LMC. As in the previous sections, we applied these empirical relations to OGLE-IV light curves in the \(I\) band to derive the metallicity of SMC, LMC, and MW Cepheids. For the MW Cepheids, we gathered the light curves from the Galactic disk and the bulge. In the case of short-period Cepheids, we removed all stars with \(A_{1}<0.12\) mag (corresponding to \(A_{1}<0.20\) mag in the \(V\) band) from the sample, in order to mitigate the effect of the location within the instability strip. The testing sample is thus composed of 394 stars from the MW, 635 stars from the SMC, and 1649 stars from the LMC. The results are presented in Fig. 15a. We can see that the \(I\)-band relation is able to distinguish the three populations of stars. For each distribution, we derived a mean metallicity, for the SMC: \(-0.38\pm 0.17\) dex, LMC: \(-0.20\pm 0.11\) dex, and MW: \(-0.13\pm 0.18\) dex; these results are in agreement with the mean values from the literature. As a comparison, we also applied the empirical relation from Klagyivik et al. (2013) based on \(R_{21}\) in the \(I\) band over the same period range (see Fig. 15b). As noticed by Clementini et al. (2019), this relation overestimates the metallicities by about +0.2 dex. This problem is mitigated when using the relation calibrated in this paper.

Figure 15: Normalized histograms (unit area) of the metallicity estimations of SMC (blue), LMC (black), and MW (red; GD+bulge) fundamental-mode Cepheids from the shape of the \(I\)-band light curves. For short periods, the SMC, LMC, and MW (GD+bulge) samples contain 635, 1649, and 394 stars, respectively; for long periods, 60, 48, and 92 stars, respectively.

For long-period Cepheids, the testing sample is composed of 92 stars from the MW, 60 stars from the SMC, and 48 stars from the LMC. The empirical relation performs well in recovering the expected metallicity distributions of these galaxies (see Fig. 15c), although the sample is much smaller than for the short periods. For each distribution, we derived a mean metallicity for the SMC of \(-0.40\pm 0.27\) dex and for the LMC of \(-0.22\pm 0.19\) dex; these results agree within \(1\sigma\) with both the mean values from the literature and our previous results for short-period Cepheids. For the MW sample, we derived a mean [Fe/H] of \(+0.22\pm 0.30\) dex, which shows a large spread, in agreement with what is observed for short-period Cepheids. We show in the next section that this effect is due to the metallicity gradient in the Galaxy.
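Deriving the population means and dispersions quoted above is then a one-liner per galaxy. The sketch below uses synthetic values drawn around the SMC result purely for illustration.

```python
import numpy as np

def population_summary(feh, label):
    """Mean [Fe/H], sample standard deviation, and size of a population."""
    feh = np.asarray(feh)
    print(f"{label}: [Fe/H] = {feh.mean():+.2f} +/- {feh.std(ddof=1):.2f} "
          f"dex (N = {feh.size})")

# Synthetic stand-in for the 635-star SMC short-period sample; with the
# real OGLE-IV estimates one expects about -0.38 +/- 0.17 dex.
rng = np.random.default_rng(1)
population_summary(rng.normal(-0.38, 0.17, 635), "SMC (synthetic)")
```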
### Mapping the metallicity distribution in the MW, the SMC, and the LMC

In this section, we use our metallicity estimations for the fundamental-mode Cepheids in the MW, SMC, and LMC, retrieving their coordinates from 3D maps available in the literature. We then test whether our relations are precise enough to yield information on the metallicity gradient in these galaxies, with a view to potential applications in galactic archeology.

For the MW Cepheids with a metallicity estimation, we cross-matched the OGLE-IV \(I\)-band sample with the stars from the Galactic 3D map of Skowron et al. (2019). For the literature sample, we cross-matched the 472 fundamental-mode Cepheids of our metallicity sample (see Table 1) with the maps from Skowron et al. (2019) and Kovtyukh et al. (2022). As a result, we obtained 868 stars in the MW, 460 of which have metallicities from the literature, while the 408 others were estimated with our relations. The galactic coordinates and the distance measurements of these stars (when available) allow us to plot the metallicity distribution in the MW (see Fig. 16a). Adopting a galactocentric distance of the Sun of 8.1\(\pm\)0.1 kpc (Bobylev & Bajkova 2021), we can also display the metallicity as a function of the galactocentric radius \(R_{G}\) (see Fig. 16b). From these distributions, we observe that the spectroscopic observations are mostly confined to the vicinity of the Sun, while the OGLE sample covers larger distances. This is because spectroscopy is limited to the closest (brightest) stars, which in turn saturate the OGLE detectors. We also note that at galactocentric distances below about 7 kpc, almost exclusively long-period Cepheids are observed: the higher level of interstellar extinction at these distances toward the galactic center likely prevents the observation of faint short-period Cepheids.

We fit a straight line for three different samples, taking into account the uncertainties on both the galactocentric distance and the metallicity. For the sample of spectroscopic metallicities gathered from the literature (460 stars), we obtain:

\[[{\rm Fe/H}]=(-0.0598\pm 0.0018)\,R_{G}+(0.5263\pm 0.0185). \tag{17}\]

For the metallicity estimations from the \(I\)-band light curves only (408 stars):

\[[{\rm Fe/H}]=(-0.0562\pm 0.0023)\,R_{G}+(0.5514\pm 0.0280); \tag{18}\]

and for the sample combining literature values and our estimations (868 stars):

\[[{\rm Fe/H}]=(-0.0556\pm 0.0015)\,R_{G}+(0.5156\pm 0.0163). \tag{19}\]

The relation obtained from the literature sample alone is in agreement with previous results for the near and far sides of the disk (Genovali et al. 2014; Luck 2018; Minniti et al. 2020), which give a gradient between \(-0.05\) and \(-0.06\) dex kpc\({}^{-1}\). The empirical relation in the \(I\) band performs well in reconstructing a metallicity gradient in the Galaxy of the same order of magnitude as reported in the literature, namely \(-0.056\) dex kpc\({}^{-1}\). It is difficult to compare the slope obtained from the empirical relations with the literature results in more detail: the OGLE-IV MW sample covers regions that differ from those of the spectroscopic observations, in terms of both galactocentric radius and azimuthal angle (see the small vs. large points in Fig. 16a; a larger version of this figure is presented in the annex, Fig. C.1).
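A sketch of the gradient determination is given below. The in-plane approximation for \(R_{G}\) and the weighted `np.polyfit` call are simplifications of our own: the fits of Eqs. (17)-(19) also propagate the distance uncertainties (e.g., via ODR).

```python
import numpy as np

R0 = 8.1  # kpc, galactocentric distance of the Sun (Bobylev & Bajkova 2021)

def galactocentric_radius(d, l_deg, b_deg, r0=R0):
    """In-plane galactocentric radius from the heliocentric distance d (kpc)
    and galactic coordinates (l, b) in degrees (thin-disk approximation)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d * np.cos(b) * np.cos(l)      # component toward the galactic center
    y = d * np.cos(b) * np.sin(l)
    return np.hypot(r0 - x, y)

def gradient_fit(r_gal, feh, feh_err):
    """Weighted straight-line fit [Fe/H] = slope * R_G + intercept; weights
    follow the np.polyfit convention of 1/sigma for Gaussian errors."""
    slope, intercept = np.polyfit(r_gal, feh, 1, w=1.0 / np.asarray(feh_err))
    return slope, intercept
```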
Our results suggest that the empirical relations in the \(V\) and \(I\) bands are useful tools for obtaining information on the metallicity distribution in the Galaxy, particularly in regions that are not accessible to spectroscopy, up to 20 kpc from the Sun.

We also mapped the SMC and the LMC with our metallicity estimations, using the distances and coordinates of the Cepheids in these galaxies (Jacyszyn-Dobrzeniecka et al., 2016) (see Figs. 16e and 16c). After cross-matching, we obtained 1561 LMC Cepheids and 674 SMC Cepheids with estimated metallicities. Interestingly, the LMC map shows higher metallicities in the LMC bar. We display the metallicity as a function of the galactocentric radius in Figs. 16f and 16d. Since the distribution of the relatively young population of Cepheids is highly non-uniform, in contrast to, for example, older RR Lyrae stars, displaying the metallicity as a function of the galactocentric radius serves here only to test for the absence of correlations. In order to probe the metallicity gradient of the SMC and the LMC, we followed the method described in Skowron et al. (2016) for RR Lyrae stars: in each bin of 1 kpc, we derived the median of the metallicity and the standard deviation around it (see the red crosses and dashed lines in Figs. 16d and 16f). We find results similar to those of Skowron et al. (2016), with an apparent decrease of the median metallicity within 4 kpc of the center in the case of the LMC, while no slope is visible in the SMC. Because of the large scatter of the metallicity gradient observed in these galaxies, as was also seen for the RR Lyrae metallicities (Skowron et al., 2016), our findings on the slope are only qualitative in scope.

In conclusion, our metallicity estimates not only reproduce the peaks of the metallicity distributions of these galaxies (as shown in the previous sections), but they also reproduce the trend of the metallicity gradient in the MW. The empirical relations calibrated in this paper can therefore be applied to further studies in galactic archeology.
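The binned-median profile used above for the Magellanic Clouds can be sketched as follows (the function name and the minimum bin occupancy are ours):

```python
import numpy as np

def binned_median_profile(r_gal, feh, bin_kpc=1.0, min_stars=5):
    """Median [Fe/H] and its standard deviation in 1 kpc bins of
    galactocentric radius (after Skowron et al. 2016)."""
    r_gal, feh = np.asarray(r_gal), np.asarray(feh)
    edges = np.arange(0.0, r_gal.max() + bin_kpc, bin_kpc)
    centers, med, std = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r_gal >= lo) & (r_gal < hi)
        if sel.sum() < min_stars:          # skip sparsely populated bins
            continue
        centers.append(0.5 * (lo + hi))
        med.append(np.median(feh[sel]))
        std.append(np.std(feh[sel], ddof=1))
    return np.array(centers), np.array(med), np.array(std)
```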
### Quantitative test: Investigation of systematics using Cepheid-cluster pairs

A relevant quantitative test can be performed by comparing Cepheid metallicities with open-cluster metallicities in order to investigate possible systematics. A recent paper investigated the open clusters (OCs) hosting MW classical Cepheids (Hao et al., 2022) based on the Gaia DR3 release, identifying 39 probable open cluster-Cepheid pairs. Most of these Cepheid-OC pairs are in the Sun's vicinity and thus are already part of our calibrating sample. For that reason, this MW sample cannot be used as an independent check of the systematics of our relations. Alternatively, we can use OC-Cepheid pairs in the LMC or SMC to perform these tests at lower metallicity. This is also interesting because we expect systematics in the more metal-poor regime of our empirical relations.

The two massive LMC clusters NGC 1866 and NGC 2031 have the largest known Cepheid populations (Welch & Stetson, 1993; Testa et al., 2007; Musella et al., 2016). In NGC 1866, we found light curves for five Cepheids in OGLE. Among these stars, three are already part of our calibration sample (Lemasle et al., 2017) and one has a low amplitude (LMC-CEP-3731, \(A_{1}\)(I)\(<\)0.12 mag). Thus, we only have an estimation for LMC-CEP-3721, with [Fe/H]\(=-0.14\pm 0.12\) dex. Although a single photometric estimation is not enough for a comparison, this measurement seems to be biased by at least +0.2 dex compared to various spectroscopic measurements of NGC 1866 (see Lemasle et al., 2017). We note, however, that light curves for other Cepheids in NGC 1866 might be obtained by combining photometry from different instruments (see Musella et al., 2016).

In NGC 2031, we found five other Cepheids with light curves available from OGLE, and we determined the following metallicities using our empirical relations: OGLE LMC-CEP-2377 (\(-0.08\) dex), LMC-CEP-2371 (\(0.03\) dex), LMC-CEP-2391 (\(-0.17\) dex), LMC-CEP-2385 (\(-0.18\) dex), and LMC-CEP-2375 (\(-0.27\) dex), which yields an average [Fe/H] of \(-0.13\pm 0.10\) dex. To our knowledge, the only available spectroscopic metallicity for NGC 2031 is \(-0.52\pm 0.21\) dex (Dirsch et al., 2000). On the other hand, Chilingarian & Asa'd (2018) obtained [Fe/H]\(=-0.14\pm 0.02\) dex from a fit of integrated optical spectra; they also found that the metallicity of NGC 2031 is in excellent agreement with the age-metallicity relation and with models of the LMC's chemical enrichment history. Our empirical value is discrepant with the former, spectroscopic measurement, but in agreement with the latter. Hence, it is unclear whether or not our estimations are affected by systematics with regard to this cluster.

In the case of the SMC, the cluster NGC 330 is the brightest young cluster that also hosts several Cepheids (Sebo & Wood, 1994). Among the classical Cepheids in NGC 330, our empirical relation is valid (in terms of period and amplitude ranges, light-curve quality, etc.) only for OGLE-SMC-CEP-2634, for which we derived [Fe/H]\(=-0.61\pm 0.12\) dex. Several metallicity estimates obtained with different methods are available for NGC 330, which complicates the comparison. Using high-resolution spectroscopy, Hill (1999) found that NGC 330 is metal-deficient compared to the field stars (\(-0.82\pm 0.11\) dex vs \(-0.69\pm 0.10\) dex). More recently, Narloch et al. (2021) found \(-0.98\pm 0.08\) (stat.) \(\pm 0.10\) (syst.) dex from Strömgren photometry. Based on these values, our empirical estimation might be biased by about +0.2 dex or more at low metallicity. However, we cannot draw a firm conclusion, since our comparison is based on a single empirical estimation.

In conclusion, it is difficult at this stage to check the potential systematics of the empirical relations by comparison with cluster metallicities. Clusters generally host only one Cepheid (see Dinnbier et al. (2022) and references therein); thus, the identification of a significant number of OC-Cepheid pairs in the MW and beyond, together with reliable metallicity measurements, is necessary for further investigation of these systematics.

## 7 Discussion

In this paper, we provide new empirical relations for estimating the metallicity of short- and long-period Cepheids in the \(V\) band, and we used interrelations between the \(V\) and \(I\) bands (re-calibrated on OGLE data) to convert these relations into the \(I\) band. For individual metallicity determinations, uncertainties of \(\sigma\)=0.12 dex and \(\sigma\)=0.25 dex have to be attributed to short- and long-period Cepheids, respectively. As noted by Cacciari et al. (2005) in the case of RR Lyrae stars, photometric formulae are good tools for studying the mean metallicity of a population, but they are less precise for determining the metallicity of individual stars.
Hence, our empirical relations are better suited to characterizing large populations of Cepheids. Finally, we also tested the relations in the \(I\) band by mapping the metallicity distributions of the MW, SMC, and LMC, and we were able to recover a metallicity gradient consistent with the literature.

Figure 16: Metallicity distribution of the Milky Way (868 stars), LMC (1561 stars), and SMC (674 stars) from the empirical metallicity relations in the \(I\) band: (a) small points represent stars with metallicities from the literature, while larger points are new metallicity estimations; the dashed circle encloses the solar neighborhood, which concentrates most of the spectroscopic measurements. Panels (b), (d), and (f): estimated Cepheid metallicities versus the galactocentric distance for the MW, LMC, and SMC. Red dashed lines are the \(1\sigma\) deviation around the median value in each 1 kpc bin. The MW and LMC maps are plotted at a larger size in the annex.

However, our calibration sample lacks spectroscopic measurements of metal-poor Cepheids below about \(-0.5\) dex, especially for the short periods. Our relations therefore have to be used with caution when measuring the metallicity of metal-poor galaxies or of the outermost regions of the Milky Way. On the other hand, the calibration of such empirical relations can still be improved in the future. One way to improve the calibration would be to obtain a homogeneous data set of spectroscopic metallicities, determined with a single method, which would mitigate measurement systematics (da Silva et al., 2022). A homogeneous sample of light curves in both the \(V\) and \(I\) bands would also be helpful.

In this paper, we also assumed simple linear relations between the Fourier coefficients and the metallicity. Klagyivik et al. (2013) suggested that the relation might not be linear at low metallicity in the case of \(R_{21}\), and this is likely also the case for the relation based on \(A_{1}A_{2}\). Further investigations are needed to find a unique relation between the Fourier coefficients that is able to generalize from metal-poor to metal-rich Cepheids.

Apart from the assumption of linearity, several physical sources of scatter can affect the estimation of the metallicity. As discussed in the introduction, the presence of a companion or of constant CSE emission would blend the amplitudes of the harmonics used in our relations and can thus be a source of scatter for the empirical metallicity relations. Another significant source of scatter in these relations comes from physical effects such as the location in the instability strip, which we were not able to correct for. As an alternative, we proposed to use a selection threshold on \(A_{1}\) in the case of short-period Cepheids. Although this cut was defined only approximately, we consider it necessary to mitigate the dependence of the amplitude on the location within the instability strip. It is thus important to better understand the influence of the location within the instability strip on the Fourier parameters in order to correct for this effect.

Theoretically, the impact of the metallicity on the Fourier parameters remains to be explained, just as it does for RR Lyrae stars. The metallicity dependence of the Fourier parameters can help to constrain hydrodynamical models of pulsation (Paxton et al., 2019). In this paper, we also did not analyze the possibility of a metallicity dependence between 6 and 10 days, because of the strong sensitivity of the Fourier parameters to the \(P_{2}/P_{0}=0.5\) resonance.
A thorough understanding of the influence of this resonance on Cepheid light curves is a prerequisite for the calibration of a metallicity relation in this period range.

Using these relations for Cepheids in the MW, SMC, and LMC, we have shown their potential for applications in galactic archeology. Another possible application is the determination of the metallicity term in the PL relation. In order to study the metallicity term, authors generally attribute the average metallicity of the host galaxy (measured from a restricted sample of stars) to the sample of Cepheids used in the PL relation (see, e.g., Breuval et al., 2022). Our empirical relations offer the possibility of either deriving the mean [Fe/H] of the sample used in the PL relation or of correcting the stars individually. Although the uncertainties associated with these photometric [Fe/H] values are of the order of 0.12-0.25 dex, our relations offer the advantage of being applicable to a significant number of stars.

These relations can be difficult to apply to galaxies beyond our Local Group, as observed with HST or JWST. Indeed, our empirical relations are limited to stars for which the photometry provides good coverage of the pulsation cycle, as required for a Fourier decomposition. This is not the case for Cepheids observed by space telescopes, for which the light curves typically contain only a few epochs and templates are used to reconstruct them. A second limitation comes from blending, which affects the extragalactic Cepheids (\(D>10\) Mpc) more significantly, since the resolution of the telescopes cannot separate Cepheids from the stellar background (see, e.g., Mochejska et al., 2000; Riess et al., 2020). Although the Fourier parameters (\(R_{i1}\) and \(\phi_{i1}\)) are only slightly affected by blending (Antonello, 2002), this is not the case for the amplitudes of the harmonics on a magnitude scale. Thus, an estimation of the blending would be necessary to correct the amplitudes when applying the empirical relations.

## 8 Conclusion

In this work, we gathered 586 [Fe/H] abundances of fundamental-mode Cepheids from the literature and transformed them onto a unified scale. We then retrieved well-sampled light curves for 545 of these stars from different \(V\)-band catalogs. Using this data set, we performed a Fourier decomposition of the light curves. We then fit linear relations with all possible combinations of Fourier parameters (up to four at a time) to provide the best empirical relations for estimating the metallicity of short-period (\(2.5<P<6.3\) days) and long-period (\(12<P<40\) days) Cepheids. In the case of short-period Cepheids, we opted for relations based on the explicit amplitudes \(A_{1}\) and \(A_{2}\). Testing these relations on a sample of SMC and LMC Cepheids shows that they are able to distinguish the two populations. In the case of long-period Cepheids, we found that the metallicity can be estimated from the Fourier parameters \(A_{1}\), \(\phi_{21}\), and \(R_{41}\). We also found that this empirical relation can accurately recover the mean metallicity of a sample of SMC and LMC Cepheids. For individual metallicity determinations, precisions of \(\sigma\)=0.12 dex and \(\sigma\)=0.25 dex have to be attributed to short- and long-period Cepheids, respectively. These relations are thus more efficient for estimating the average metallicity of a group of Cepheids than that of isolated stars. We then established new interrelations between the \(V\) and \(I\) bands using OGLE data in order to convert these relations into the \(I\) band.
We have shown that our empirical relations are able to recover the mean metallicities of the OGLE samples of the MW, SMC, and LMC in good agreement with the mean spectroscopic estimates found in the literature. We also tested the relations in the \(I\) band by mapping the metallicity distributions of the MW, SMC, and LMC, and we were able to derive a metallicity gradient consistent with the literature. Finally, we find that the calibration of the empirical relations can still be improved on the basis of further spectroscopic observations and homogeneous photometry in the \(V\) and \(I\) bands.

###### Acknowledgements. The authors would like to thank Dorota Skowron and Anna Jacyszyn-Dobrzeniecka for their valuable comments and discussions. VH, RS, OZ, and RSR are supported by the National Science Center, Poland, Sonata BIS project 2018/30/E/ST9/00598. This research made use of the SIMBAD and VIZIER databases at CDS, Strasbourg (France) and of the electronic bibliography maintained by the NASA/ADS system. This research also made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2018, 2022).